
Physics 611 & 612, 2008-2009

Advanced Quantum Mechanics


Instructors: Dr. Alakabha Datta & Dr. Roger Waxler
Notes Compiled by: Phil Blom
Texts Used:
Modern Quantum Mechanics by J.J. Sakurai,
Quantum Mechanics (Non-Relativistic Theory) by L.D. Landau and E.M. Lifshitz
Last updated: May 26, 2009

Contents

1 Review of Undergraduate Level Quantum Mechanics
  1.1 The Schrodinger Equation
  1.2 Infinite Square Well
  1.3 Quantum Harmonic Oscillator
  1.4 Free Particle
  1.5 Delta Function Potential
  1.6 Finite Square Well
  1.7 The Hydrogen Atom

I Fundamental Concepts

2 Kets, Bras, and Operators
  2.1 Kets and Bras
  2.2 Operators
  2.3 Hermitian Operators - Eigenvalues and Orthonormality
  2.4 Base Kets
  2.5 The Projection Operator
  2.6 Matrix Representation of an Operator
  2.7 Matrix Representation of Bras and Kets

3 Experiments and Measurement
  3.1 Expectation Values of Operators
  3.2 The Stern-Gerlach Experiment and the Failure of Classical Mechanics

4 Constructing Spin 1/2 Operators and Kets
  4.1 Spin Summary

5 Compatibility and Operators
  5.1 Incompatible Operators

6 Measurements of Compatible and Incompatible Operators

7 The Uncertainty Principle

8 Transformations of Base Kets
  8.1 Diagonalization of Transform Operators
  8.2 Unitary Equivalent Observables

9 Operators with Continuous Spectra
  9.1 The Position Operator
  9.2 The Momentum Operator - Translation Method
  9.3 The Wave Function - Some New Ideas
  9.4 Transformations in Position and Momentum Space
  9.5 The Gaussian Wave Packet

II Quantum Dynamics

10 The Time Evolution Operator
  10.1 Infinitesimal Evolutions
  10.2 The Schrodinger Equation
  10.3 Time Independent Hamiltonian
  10.4 Expectation Values of Stationary and Non-Stationary States
  10.5 Spin Interactions - Evolution of Spin States

11 Energy-Time Uncertainty - An Overview

12 Schrodinger and Heisenberg Interpretations of Time Evolution
  12.1 Ehrenfest Theorem

13 Quantum Simple Harmonic Oscillator
  13.1 Matrix Elements of Operators and Expectation Values
  13.2 Deriving the Wave Function
  13.3 Time Evolution
  13.4 Review of Quantum Simple Harmonic Oscillator

14 Stationary States - Bound and Continuous Solutions
  14.1 Dirac Delta Function Potential

15 The Feynman Path Integral & The Propagator

16 Potentials and Gauge Transformations in Quantum Mechanics
  16.1 Gravitational Gauge Transform
  16.2 Electromagnetic Gauge Transforms
  16.3 Alternate Approach to Gauge Transforms

17 The Aharonov-Bohm Effect

III The Theory of Angular Momentum

18 Classical Rotations
  18.1 Finite Rotations
  18.2 Infinitesimal Rotations
  18.3 Overview of Group Theory and Group Representation

19 Evolution of Spin Operators
  19.1 Spin 1/2 System

20 The Rotation Operator D(R)
  20.1 Experiment - Neutron Interferometer

21 Spin 1/2 System - 2 Component Form

22 The Density Matrix
  22.1 Density Matrix For Continuous Variables and Time Evolutions

23 General Representation of Angular Momentum
  23.1 Matrix Representation of Angular Momentum
  23.2 Rotation Operator for Generalized Angular Momentum
  23.3 Orbital Angular Momentum
  23.4 Quantum Mechanical Treatment of the Spherical Harmonics

24 Addition of Angular Momentum
  24.1 Overview - New Notation
  24.2 A System of 2 Spin 1/2 Particles
  24.3 Clebsch-Gordan Coefficients
  24.4 Interaction Energy - Splitting of Energy Levels

25 Tensors in Quantum Mechanics
  25.1 Reducible and Irreducible Tensors
  25.2 Cartesian Vectors in Classical Mechanics
  25.3 Cartesian Vectors in Quantum Mechanics
  25.4 Spherical Tensors
  25.5 Transformation of Spherical Tensors
  25.6 Definition of Tensor Operators
  25.7 Constructing Higher Rank Tensors

26 Wigner-Eckart Theorem

IV Mathematical Review of Quantum Mechanics
  26.1 Review of the Concepts of Quantum Mechanics
  26.2 The Propagator
  26.3 Qualitative Results of the Propagator
  26.4 Examples Using the Propagator

27 Gauge Transforms - An Electron in a Magnetic Field
  27.1 Example - Electron in Uniform Magnetic Field

28 The Simplistic Hydrogen Atom
  28.1 Background - Setting Up The Problem
  28.2 The Hydrogen Atom Wavefunction
  28.3 Continuous Spectrum of Hydrogen

V Perturbation Theory in Quantum Mechanics

29 Bound State Perturbation Theory
  29.1 Normalizing the Solution

30 Degenerate Eigenvalue Perturbation Theory
  30.1 First Order Corrections For Degenerate Eigenvalues
  30.2 Example - Particle in a Symmetric Box in 2 Dimensions
  30.3 More on Degenerate State Perturbations
  30.4 Time Evolution of Degenerate States

31 Time Dependent Perturbations
  31.1 Time Dependent Perturbation Theory
  31.2 Periodic Perturbation Theory

VI The Quasi-Classical Case

32 The WKB Approximation
  32.1 Connecting The Quasi-Classical Solutions Around A Turning Point - Airy Function Asymptotics Analysis
  32.2 Connecting The Quasi-Classical Solutions Around A Turning Point - Complex Analysis
  32.3 Bohr-Sommerfeld Quantization

33 Penetration Through a Potential Barrier - Quantum Tunneling
  33.1 Example - Rigid Wall With A Finite Barrier

34 WKB Approximation for a Finite Potential Barrier

VII Scattering Theory

35 Potential Scattering
  35.1 Constant Energy Wave Packet Scattering

36 Scattering Theory - A More General Approach
  36.1 Scattering from a Localized Potential - The Born Formula
  36.2 Probability Flux Density

VIII Homework Problems

37 Semester I
38 Semester II

IX Semester II Final Exam

39 WKB Problem
40 Scattering Problem
41 Perturbation Problem

1 Review of Undergraduate Level Quantum Mechanics

1.1 The Schrodinger Equation

i\hbar \frac{\partial}{\partial t}\Psi(\vec{r},t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\vec{r},t) + V(\vec{r})\,\Psi(\vec{r},t)

What is the wave function \Psi? Statistically, the wave function defines the probability of finding the particle in a given region of space at time t. In the 1 dimensional case that is:

\int_a^b |\Psi(x,t)|^2 \, dx = probability of finding the particle between a and b.
Expectation Values:

\langle x \rangle = \int \Psi^* \, x \, \Psi \, dx

\langle p \rangle = \int \Psi^* \left( \frac{\hbar}{i}\frac{\partial}{\partial x} \right) \Psi \, dx

The Time Independent Schrodinger Equation:

-\frac{\hbar^2}{2m}\nabla^2 \psi(\vec{r}) = \left(E - V(\vec{r})\right)\psi(\vec{r})

\Psi(\vec{r},t) = \sum_{n=1}^{\infty} c_n \, \psi_n(\vec{r}) \, e^{-\frac{i}{\hbar}E_n t}

where each \psi_n(\vec{r}) is called a stationary state of the wave function with a corresponding energy eigenvalue E_n.

There are several important solutions for simple potentials. These are the infinite square well, the harmonic oscillator, the free particle, the delta function potential, and the finite square well.

1.2 Infinite Square Well

V(x) = 0, \quad 0 \le x \le a; \qquad V(x) = \infty \text{ otherwise}

E_n = \frac{n^2\pi^2\hbar^2}{2ma^2}

\psi_n(x) = \sqrt{\frac{2}{a}}\,\sin\!\left(\frac{n\pi}{a}x\right)
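As a quick numerical sanity check on these formulas, the sketch below (assuming NumPy is available, and taking \hbar = m = a = 1 as arbitrary illustrative units) verifies the n^2 scaling of E_n and the orthonormality of the stationary states:

```python
import numpy as np

# Natural units hbar = m = 1 and well width a = 1: an arbitrary illustrative choice.
hbar, m, a = 1.0, 1.0, 1.0

def E(n):
    # E_n = n^2 pi^2 hbar^2 / (2 m a^2)
    return n**2 * np.pi**2 * hbar**2 / (2 * m * a**2)

def psi(n, x):
    # psi_n(x) = sqrt(2/a) sin(n pi x / a)
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

# Orthonormality <psi_i|psi_j> = delta_ij by a simple Riemann sum
# (the integrand vanishes at both endpoints, so the sum is accurate).
x, dx = np.linspace(0, a, 20001, retstep=True)
def overlap(i, j):
    return np.sum(psi(i, x) * psi(j, x)) * dx

print(E(2) / E(1))                    # 4.0: energies grow as n^2
print(overlap(1, 1), overlap(1, 2))   # ~1.0 and ~0.0
```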

1.3 Quantum Harmonic Oscillator

V(x) = \frac{1}{2}m\omega^2 x^2

a_\pm = \frac{1}{\sqrt{2\hbar m\omega}}\left(\mp ip + m\omega x\right)

E_n = \left(n + \frac{1}{2}\right)\hbar\omega

\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega}{2\hbar}x^2}

\psi_n(x) = \frac{1}{\sqrt{n!}}\,(a_+)^n\,\psi_0(x)
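The ladder-operator structure can be checked directly with truncated matrices in the number basis. This is a sketch (assuming NumPy, with \hbar = \omega = 1 as illustrative units), not part of the original notes:

```python
import numpy as np

# Truncated ladder operator in the number basis: a|n> = sqrt(n)|n-1>.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.T                                    # creation operator (real matrix)

H = adag @ a + 0.5 * np.eye(N)                # H = hbar*omega*(a^dag a + 1/2)
energies = np.diag(H)
print(energies[:4])                           # E_n = n + 1/2: 0.5, 1.5, 2.5, 3.5

# [a, a^dag] = 1 holds everywhere except the last row/column,
# which is an artifact of truncating the infinite matrix.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))
```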

1.4 Free Particle

V(x) = 0

\Psi(x,t) = \frac{1}{\sqrt{2\pi}} \int \phi(k)\, e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)} \, dk

\phi(k) = \frac{1}{\sqrt{2\pi}} \int \Psi(x,0)\, e^{-ikx} \, dx

1.5 Delta Function Potential

V(x) = -\alpha\,\delta(x)

\psi(x) = Ae^{ikx} + Be^{-ikx}, \quad x < 0; \qquad \psi(x) = Fe^{ikx} + Ge^{-ikx}, \quad x > 0; \qquad k = \frac{\sqrt{2mE}}{\hbar}

Continuity:

A + B = F + G

F - G = A\left(1 + 2i\beta\right) - B\left(1 - 2i\beta\right); \qquad \beta = \frac{m\alpha}{\hbar^2 k}

Transmission and Reflection:

R = \frac{|B|^2}{|A|^2} = \frac{\beta^2}{1+\beta^2}; \qquad T = \frac{|F|^2}{|A|^2} = \frac{1}{1+\beta^2}; \qquad R + T = 1
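These closed forms can be spot-checked numerically. The sketch below (assuming NumPy; the values of \alpha, m, and E are arbitrary illustrative choices) confirms that R + T = 1 at every energy and that reflection dies off as the energy grows:

```python
import numpy as np

# R and T for scattering off V(x) = -alpha*delta(x), beta = m*alpha/(hbar^2 k).
hbar, m, alpha = 1.0, 1.0, 2.0

def RT(E):
    k = np.sqrt(2 * m * E) / hbar
    beta = m * alpha / (hbar**2 * k)
    R = beta**2 / (1 + beta**2)
    T = 1 / (1 + beta**2)
    return R, T

for E in (0.5, 2.0, 10.0):
    R, T = RT(E)
    print(E, R, T, R + T)   # R + T = 1 at every energy
```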

1.6 Finite Square Well


V(x) = -V_0, \quad -a \le x \le a; \qquad V(x) = 0, \quad |x| > a

\psi(x) = Ae^{ikx} + Be^{-ikx}, \quad x < -a; \qquad k = \frac{\sqrt{2mE}}{\hbar}

\psi(x) = C\sin(lx) + D\cos(lx), \quad -a < x < a; \qquad l = \frac{\sqrt{2m(E+V_0)}}{\hbar}

\psi(x) = Fe^{ikx}, \quad x > a

Boundary Conditions:

Ae^{-ika} + Be^{ika} = -C\sin(la) + D\cos(la)

ik\left(Ae^{-ika} - Be^{ika}\right) = l\left(C\cos(la) + D\sin(la)\right)

C\sin(la) + D\cos(la) = Fe^{ika}

l\left(C\cos(la) - D\sin(la)\right) = ikFe^{ika}

1.7 The Hydrogen Atom

V(r,\theta,\phi) = -\frac{e^2}{4\pi\epsilon_0 r}

\psi_{nlm} = R_{nl}(r)\,Y_l^m(\theta,\phi)

E_n = -\left[\frac{m}{2\hbar^2}\left(\frac{e^2}{4\pi\epsilon_0}\right)^2\right]\frac{1}{n^2} = \frac{E_1}{n^2}

H\psi = E_n\psi; \qquad L^2\psi = \hbar^2\,l(l+1)\,\psi; \qquad L_z\psi = \hbar m\,\psi

Part I

Fundamental Concepts

2 Kets, Bras, and Operators

Wednesday - 8/27

2.1 Kets and Bras

Any physical state is completely represented by a ket |\alpha\rangle in a vector space. Some properties of kets include:

Addition and Subtraction: |\alpha\rangle \pm |\beta\rangle = |\gamma\rangle
Scaling: c\,|\alpha\rangle for some possibly complex number c
Null Ket: 0\,|\alpha\rangle = 0

Any physical observable is represented by an operator in the vector space. Operation of an operator on a ket produces a new ket:

A|\alpha\rangle = |\beta\rangle

One mathematical relation of interest is that of eigenkets and eigenvalues. Consider some operator A, ket |\alpha\rangle, and complex number a such that:

A|\alpha\rangle = a|\alpha\rangle

In this case, |\alpha\rangle is the eigenket and a is the eigenvalue. Generally, an operator will have sets of eigenkets and corresponding eigenvalues. These sets may be finite (such as spin states) or infinite (energy levels of an infinite square well or the Hydrogen atom). Any ket can be expanded in terms of the complete set of eigenkets of an operator, just as any function can be expanded in terms of a set of eigenfunctions (e.g. a Fourier series).

For every ket |\alpha\rangle there exists a corresponding object called a bra \langle\alpha| in a dual vector space. The relation between bras and kets is called Dual Correspondence and is written as:

|\alpha\rangle \leftrightarrow \langle\alpha|

An important rule of using bras involves taking complex conjugates when scaling them. For example:

c\,|\alpha\rangle \leftrightarrow \langle\alpha|\,c^*

A|\alpha\rangle \leftrightarrow \langle\alpha|A^\dagger

With kets and bras, one can define the inner product and its properties:

\langle\alpha|\beta\rangle = c; \qquad \langle\alpha|\beta\rangle = \langle\beta|\alpha\rangle^*; \qquad \langle\alpha|\alpha\rangle \ge 0

Because the inner product of a bra and ket of dual correspondence produces a non-negative real number, this can be used to normalize kets:

\langle\tilde{\alpha}|\tilde{\alpha}\rangle = 1; \qquad |\tilde{\alpha}\rangle = \frac{|\alpha\rangle}{\sqrt{\langle\alpha|\alpha\rangle}}

Finally, we can make a few statements about operators and their behavior in the vector space. Operators satisfy the following:

If for every ket and bra, X|\alpha\rangle = Y|\alpha\rangle and \langle\alpha|X = \langle\alpha|Y, then X = Y.

If for every ket, X|\alpha\rangle = 0, then X is the null operator.

Commutative and Associative Laws of Addition:

(X + Y)|\alpha\rangle = (Y + X)|\alpha\rangle; \qquad [X + (Y + Z)]|\alpha\rangle = [(X + Y) + Z]|\alpha\rangle

Multiplication is NOT commutative (though in some cases operators do commute, in general assume they do not):

(XY)|\alpha\rangle \ne (YX)|\alpha\rangle

Dual Correspondence reverses the order of operators:

(XY)|\alpha\rangle \leftrightarrow \langle\alpha|Y^\dagger X^\dagger

Most operators are linear:

X\left[c_1|\alpha_1\rangle + c_2|\alpha_2\rangle\right] = c_1 X|\alpha_1\rangle + c_2 X|\alpha_2\rangle

Friday - 8/29

Similarly to the defined inner product \langle a|b\rangle, which results in a number, we can also define an outer product |a\rangle\langle b|, which results in an operator. With this new definition, it is important to note a few illegal structures in the vector space:

An operator can only act on a ket by preceding it.
Incorrect: |a\rangle X \qquad Correct: X|a\rangle

An operator can only act on a bra by following it.
Incorrect: Y\langle a| \qquad Correct: \langle a|Y

We cannot have multiple kets (or bras) describing the state of a system unless there are separate parts of the system, and in such a case the kets are combined into a single new ket.
Incorrect: |x\rangle|+\rangle \qquad Correct: |x,+\rangle

2.2 Operators

Operators are made up of matrix elements which we can find using several properties of bras and kets. For instance, the Associative Axiom states that for an operator defined as X = |b\rangle\langle a| we can state that:

X|\alpha\rangle = (|b\rangle\langle a|)|\alpha\rangle = |b\rangle\langle a|\alpha\rangle

This also gives a useful property of such an operator: if X = |b\rangle\langle a|, then X^\dagger = |a\rangle\langle b|.

The elements of an operator can be written by setting X|a\rangle = |\beta\rangle, so that \langle b|X|a\rangle = \langle b|\beta\rangle. This also means that \langle b|\beta\rangle = \langle\beta|b\rangle^* = \left(\langle a|X^\dagger|b\rangle\right)^*, giving our main result of:

\langle b|X|a\rangle = \langle a|X^\dagger|b\rangle^*

Note: in the case that the operator is Hermitian, X = X^\dagger, the above becomes \langle b|X|a\rangle = \langle a|X|b\rangle^*.

2.3 Hermitian Operators - Eigenvalues and Orthonormality

Consider the Hermitian operator A and eigenkets |a_1\rangle and |a_2\rangle:

\langle a_2|A|a_1\rangle = a_1'\,\langle a_2|a_1\rangle; \qquad \langle a_2|A|a_1\rangle = a_2'^*\,\langle a_2|a_1\rangle

\left(a_1' - a_2'^*\right)\langle a_2|a_1\rangle = 0

This relation actually proves two results. Firstly, if a_1' \ne a_2', then the term (a_1' - a_2'^*) cannot be zero, therefore in this case the inner product has to be zero. Alternately, if a_1' = a_2', then the inner product is not zero and therefore a_1' = a_1'^*, which means the eigenvalues are real. This can be summarized by:

a_1' = a_1'^*; \qquad \langle a_i|a_j\rangle = \delta_{ij}

2.4 Base Kets

Any general ket can be expanded in terms of an orthonormal basis, in direct analogy with expanding a vector \vec{A} = A_x\hat{x} + A_y\hat{y} + A_z\hat{z}. That is:

|\alpha\rangle = \sum_i C_i\,|a_i\rangle \quad \text{or} \quad |\alpha\rangle = \int C_p\,|p\rangle\,dp

The coefficients can be found by:

\langle a_j|\alpha\rangle = \sum_i C_i\,\langle a_j|a_i\rangle = \sum_i C_i\,\delta_{ij} = C_j \quad\Rightarrow\quad C_j = \langle a_j|\alpha\rangle

From this result, we can define an Identity operator as:

|\alpha\rangle = \sum_j \langle a_j|\alpha\rangle\,|a_j\rangle = \sum_j |a_j\rangle\langle a_j|\alpha\rangle \quad\Rightarrow\quad \sum_j |a_j\rangle\langle a_j| = I

Note: Normalizing a general ket |\alpha\rangle = \sum_i C_i|a_i\rangle with C_i = \langle a_i|\alpha\rangle. Find \langle\alpha|\alpha\rangle:

\langle\alpha|I|\alpha\rangle = \langle\alpha|\left(\sum_j |a_j\rangle\langle a_j|\right)|\alpha\rangle = \sum_j \langle\alpha|a_j\rangle\langle a_j|\alpha\rangle = \sum_j |\langle a_j|\alpha\rangle|^2 = \sum_j C_j^* C_j

\sum_j |C_j|^2 = \langle\alpha|\alpha\rangle = 1

2.5 The Projection Operator

We can define the projection operator \Lambda_i:

\Lambda_i = |a_i\rangle\langle a_i| \quad\Rightarrow\quad \Lambda_i|\alpha\rangle = C_i\,|a_i\rangle

This operator returns the state's projection onto the i-th component. This is useful if we are looking for the probability of measuring some result from a given state.
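A minimal numerical sketch of this idea (assuming NumPy; the 3-dimensional basis and the state are invented for illustration) shows \Lambda_i|\alpha\rangle = C_i|a_i\rangle and that the |C_i|^2 sum to one:

```python
import numpy as np

# Projection operator Lambda_i = |a_i><a_i| in a 3-dimensional toy basis.
basis = np.eye(3)                            # |a_1>, |a_2>, |a_3> as columns
alpha = np.array([1, 2j, -1]) / np.sqrt(6)   # an arbitrary normalized state

def projector(i):
    ai = basis[:, i]
    return np.outer(ai, ai.conj())           # |a_i><a_i|

# Lambda_i |alpha> = C_i |a_i>, and |C_i|^2 is the measurement probability.
probs = []
for i in range(3):
    Ci = basis[:, i].conj() @ alpha          # C_i = <a_i|alpha>
    assert np.allclose(projector(i) @ alpha, Ci * basis[:, i])
    probs.append(abs(Ci)**2)
print(probs)   # probabilities, summing to 1
```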

2.6 Matrix Representation of an Operator

We can look at the individual elements of an operator by applying the identity operator on both sides:

IXI = \sum_{i,j} |a_i\rangle\langle a_i|X|a_j\rangle\langle a_j| = \sum_{i,j} |a_i\rangle\left[\langle a_i|X|a_j\rangle\right]\langle a_j|

The object in the bracket is the matrix element with matching indices:

X_{ij} = \langle a_i|X|a_j\rangle

As an example, suppose that X is a 3 x 3 matrix; the matrix elements are then:

X = \begin{pmatrix} \langle a_1|X|a_1\rangle & \langle a_1|X|a_2\rangle & \langle a_1|X|a_3\rangle \\ \langle a_2|X|a_1\rangle & \langle a_2|X|a_2\rangle & \langle a_2|X|a_3\rangle \\ \langle a_3|X|a_1\rangle & \langle a_3|X|a_2\rangle & \langle a_3|X|a_3\rangle \end{pmatrix}

Monday - 9/1 - LABOR DAY - NO CLASS

Wednesday - 9/3

From the definition of a single operator's matrix element, one can define the ij component of a product of operators as:

X_{ij} = \langle a_i|X|a_j\rangle \quad\Rightarrow\quad Z_{ij} = \langle a_i|XY|a_j\rangle = \sum_k \langle a_i|X|a_k\rangle\langle a_k|Y|a_j\rangle \quad\Rightarrow\quad Z_{ij} = \sum_k X_{ik}Y_{kj}

This is simply the matrix resulting from multiplying the two matrix representations together.

2.7 Matrix Representation of Bras and Kets

Consider the statement X|\alpha\rangle = |\beta\rangle. We can write this out instead as \langle a_i|\beta\rangle = \sum_j X_{ij}\langle a_j|\alpha\rangle. This is the same form as \vec{x}' = M\vec{x}, which is a simple matrix-vector equation. As the operators are matrices, the bras and kets should be representable by vectors. Consider writing the relation |\beta\rangle = X|\alpha\rangle as:

\begin{pmatrix} \langle a_1|\beta\rangle \\ \vdots \\ \langle a_n|\beta\rangle \end{pmatrix} = \begin{pmatrix} X_{11} & \cdots & X_{1n} \\ \vdots & \ddots & \vdots \\ X_{n1} & \cdots & X_{nn} \end{pmatrix} \begin{pmatrix} \langle a_1|\alpha\rangle \\ \vdots \\ \langle a_n|\alpha\rangle \end{pmatrix}

Further, the bra relation \langle\beta| = \langle\alpha|X can be written as:

\begin{pmatrix} \langle\beta|a_1\rangle & \cdots & \langle\beta|a_n\rangle \end{pmatrix} = \begin{pmatrix} \langle\alpha|a_1\rangle & \cdots & \langle\alpha|a_n\rangle \end{pmatrix} \begin{pmatrix} X_{11} & \cdots & X_{1n} \\ \vdots & \ddots & \vdots \\ X_{n1} & \cdots & X_{nn} \end{pmatrix}

which, since \langle\beta|a_i\rangle = \langle a_i|\beta\rangle^*, we can write in the same form as the above as:

\begin{pmatrix} \langle a_1|\beta\rangle^* & \cdots & \langle a_n|\beta\rangle^* \end{pmatrix} = \begin{pmatrix} \langle a_1|\alpha\rangle^* & \cdots & \langle a_n|\alpha\rangle^* \end{pmatrix} \begin{pmatrix} X_{11} & \cdots & X_{1n} \\ \vdots & \ddots & \vdots \\ X_{n1} & \cdots & X_{nn} \end{pmatrix}

So, we can express a ket as a column vector and a bra as a row vector:

|\alpha\rangle \doteq \begin{pmatrix} \langle a_1|\alpha\rangle \\ \vdots \\ \langle a_n|\alpha\rangle \end{pmatrix}; \qquad \langle\alpha| \doteq \begin{pmatrix} \langle a_1|\alpha\rangle^* & \cdots & \langle a_n|\alpha\rangle^* \end{pmatrix}

Further, the inner and outer products can be expressed in matrix form:

\langle\beta|\alpha\rangle = \sum_i \langle\beta|a_i\rangle\langle a_i|\alpha\rangle = \sum_i \langle a_i|\beta\rangle^*\,\langle a_i|\alpha\rangle

X = |\beta\rangle\langle\alpha| \quad\Rightarrow\quad X_{ij} = \langle a_i|\beta\rangle\langle\alpha|a_j\rangle = \langle a_i|\beta\rangle\langle a_j|\alpha\rangle^*


2.7.1 Example: 2 State System

Consider the two state system describing the spin states of an electron. Using the states |+\rangle and |-\rangle and the spin operator S_z, we have the relations:

S_z|+\rangle = \frac{\hbar}{2}|+\rangle; \qquad S_z|-\rangle = -\frac{\hbar}{2}|-\rangle

We can find the matrix elements of the operator as:

S_z = \begin{pmatrix} \langle +|S_z|+\rangle & \langle +|S_z|-\rangle \\ \langle -|S_z|+\rangle & \langle -|S_z|-\rangle \end{pmatrix} = \begin{pmatrix} \hbar/2 & 0 \\ 0 & -\hbar/2 \end{pmatrix}

Solved in HW:

|+\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}; \qquad |-\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

S_+ = \hbar\,|+\rangle\langle -| = \hbar\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}; \qquad S_- = \hbar\,|-\rangle\langle +| = \hbar\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
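These 2 x 2 representations can be checked in a few lines. A sketch assuming NumPy, with \hbar = 1 as an illustrative unit:

```python
import numpy as np

hbar = 1.0   # illustrative units
up = np.array([1.0, 0.0]); dn = np.array([0.0, 1.0])

Sz = (hbar / 2) * np.array([[1, 0], [0, -1]])
Splus = hbar * np.outer(up, dn)    # S+ = hbar |+><-|
Sminus = hbar * np.outer(dn, up)   # S- = hbar |-><+|

print(Sz @ up, Sz @ dn)            # eigenkets with eigenvalues +/- hbar/2
print(Splus @ dn)                  # S+ |-> = hbar |+>
print(Sminus @ up)                 # S- |+> = hbar |->
print(Splus @ up)                  # S+ |+> = 0
```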

Friday - 9/5

3 Experiments and Measurement

The act of measuring a system causes the state of the system to collapse into a single state. If the state |\alpha\rangle describes the state of a system before measurement, then we can write that:

|\alpha\rangle = \sum_{a'} C_{a'}\,|a'\rangle, \qquad C_{a'} = \langle a'|\alpha\rangle

If a measurement is taken and the result is a_i, then the wave function will collapse like:

A|\alpha\rangle \to a_i\,|a_i\rangle; \qquad |\alpha\rangle \to |a_i\rangle

A second measurement will yield the same result no matter what the case, as |\alpha\rangle has collapsed to only include |a_i\rangle. The probability associated with obtaining a_i is related to the coefficient of that eigenket in the expansion of |\alpha\rangle:

P(a_i) = |C_{a_i}|^2

In the case that the state |\alpha\rangle is normalized, this is simply the square of the coefficient. For an ensemble of states given as \{|\alpha\rangle, |\beta\rangle, |\gamma\rangle, \ldots\}, the probabilities can be written in terms of the number of times a measurement yields |a_i\rangle versus the total number of systems:

P(a_i) = \frac{N_{a_i}}{N}

3.0.2 Example: Spin States

Let |\alpha\rangle represent a state for a spin system. In this case, we can write:

|\alpha\rangle = C_+|+\rangle + C_-|-\rangle

C_+ = \langle +|\alpha\rangle; \qquad |C_+|^2 = P(|+\rangle)

C_- = \langle -|\alpha\rangle; \qquad |C_-|^2 = P(|-\rangle)

3.0.3 Example: Infinite Square Well

Let |\alpha\rangle represent a state for the energy of an infinite square well of length a in 1 dimension. We can write the states as:

|\alpha\rangle = C_1|E_1\rangle + C_2|E_2\rangle + \cdots; \qquad E_n = \frac{n^2\pi^2\hbar^2}{2ma^2}

If a measurement yields that the system is in state |E_2\rangle, then a second measurement will yield the same result. (Note: this analysis involves only stationary states; after some length of time the system will evolve, but for the purposes of this example an immediate second measurement will yield the same state.)

3.1 Expectation Values of Operators

Given an operator A, one can write the expectation value of A in the state |\alpha\rangle:

\langle A\rangle = \frac{\langle\alpha|A|\alpha\rangle}{\langle\alpha|\alpha\rangle}

\langle A\rangle = \sum_{a',a''} \langle\alpha|a''\rangle\langle a''|A|a'\rangle\langle a'|\alpha\rangle = \sum_{a',a''} \langle\alpha|a''\rangle\,a'\,\delta_{a',a''}\,\langle a'|\alpha\rangle = \sum_{a'} \langle\alpha|a'\rangle\,a'\,\langle a'|\alpha\rangle

\langle A\rangle = \sum_{a'} a'\,|C_{a'}|^2
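The two expressions for \langle A\rangle can be compared directly. A sketch assuming NumPy, with A = S_z, \hbar = 1, and a state whose coefficients were chosen purely for illustration:

```python
import numpy as np

# <A> = sum_a' a' |C_a'|^2, checked against <alpha|A|alpha> for A = Sz.
hbar = 1.0
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]])

# An arbitrary normalized spin state: C+ = sqrt(3)/2, C- = 1/2.
alpha = np.array([np.sqrt(3) / 2, 1 / 2])

direct = alpha.conj() @ Sz @ alpha
from_probs = (hbar / 2) * abs(alpha[0])**2 + (-hbar / 2) * abs(alpha[1])**2
print(direct, from_probs)   # both give hbar/4
```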

3.2 The Stern-Gerlach Experiment and the Failure of Classical Mechanics

In this experiment, an oven is heated and emits a beam of silver (Ag) atoms. The outermost electron of silver is a 5s electron, so the atom carries a net spin. The beam is put through a small slit and directed through an inhomogeneous magnetic field. According to the laws of classical mechanics, the vertical component of the spin can have any value and the output from the field should be a continuous distribution along the various angles. Instead, the beam is split into two separate beams, one corresponding to spin up and one to spin down. The force on the particles is equal to:

F_z = -\frac{\partial U}{\partial z} = \mu_z\,\frac{\partial B_z}{\partial z}; \qquad \mu_z = -\frac{|e|}{mc}\,S_z

Clearly the spin must only be able to take on quantized values. The experiment was then extended to different planes of spin. Three experiments were conducted:
3.2.1 Set Up 1

From the oven, the beam is sent into a single Stern-Gerlach \hat{z} device, separating out spin up and down. The spin down beam is blocked and the spin up beam is directed into a second SG \hat{z} device. The output of the second device is only spin up particles.

3.2.2 Set Up 2

From the oven, the beam is sent into a single Stern-Gerlach \hat{z} device, separating out spin up and down. The spin down beam is blocked and the spin up beam is directed into a Stern-Gerlach \hat{x} device. The output of the second device is both spin up and spin down (in the x direction). Once measured, it was found that the probability of the spin being up or down is 1/2.

3.2.3 Set Up 3

From the oven, the beam is sent into a single Stern-Gerlach \hat{z} device, separating out spin up and down. The spin down beam is blocked and the spin up beam is directed into an SG \hat{x} device. The spin down beam from that device is blocked and the spin up beam is sent into a final SG \hat{z} device. The output of this device is both spin up and spin down.

The output in Set Up 3 is expected to be only spin up (since it was filtered to be so in the first device), but instead both up and down are present. This is due to the fact that it is impossible to simultaneously specify both the S_z and S_x values.
Monday - 9/8

4 Constructing Spin 1/2 Operators and Kets

Consider the spin S_x. From the Stern-Gerlach experiment we know that a beam containing only |+\rangle entering an S_x device will result in half spin up and half spin down beams in the \hat{x} direction. That is:

|\langle +|S_x,+\rangle|^2 = |\langle -|S_x,+\rangle|^2 = \frac{1}{2}

From this we can derive, writing |S_x,+\rangle = a|+\rangle + b|-\rangle:

\langle +|S_x,+\rangle = \langle +|\left(a|+\rangle + b|-\rangle\right) = a \quad\Rightarrow\quad |a|^2 = \frac{1}{2}; \quad a = \frac{1}{\sqrt{2}}e^{i\delta}

\langle -|S_x,+\rangle = \langle -|\left(a|+\rangle + b|-\rangle\right) = b \quad\Rightarrow\quad |b|^2 = \frac{1}{2}; \quad b = \frac{1}{\sqrt{2}}e^{i\delta'}

Factoring out an overall phase:

|S_x,+\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle + e^{i\delta_1}|-\rangle\right]; \qquad \delta_1 = \delta' - \delta

From orthonormality and normalization conditions, we can write that:

\langle S_x,-|S_x,-\rangle = 1;

\langle S_x,-|S_x,+\rangle = 0

Writing |S_x,-\rangle = c|+\rangle + d|-\rangle, the orthogonality condition gives:

\frac{1}{\sqrt{2}}\left[\langle +| + e^{-i\delta_1}\langle -|\right]\left[c|+\rangle + d|-\rangle\right] = 0

c + e^{-i\delta_1}d = 0 \quad\Rightarrow\quad c = -e^{-i\delta_1}d

|c|^2 + |d|^2 = |d|^2 + |d|^2 = 1 \quad\Rightarrow\quad |d|^2 = \frac{1}{2}

d = \frac{1}{\sqrt{2}}e^{i\gamma}; \qquad c = -\frac{1}{\sqrt{2}}e^{i\gamma}e^{-i\delta_1}

Factoring out the phase we can write this as:

|S_x,-\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle - e^{i\delta_1}|-\rangle\right]

Finally, choosing \delta_1 = 0 yields the known forms:

|S_x,+\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle + |-\rangle\right]; \qquad |S_x,-\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle - |-\rangle\right]

The operator S_x can be written in matrix form from this information, expanding it in its own eigenbasis:

S_x = \sum_{\pm,\pm'} |S_x,\pm\rangle\langle S_x,\pm|\,S_x\,|S_x,\pm'\rangle\langle S_x,\pm'| = \frac{\hbar}{2}|S_x,+\rangle\langle S_x,+| - \frac{\hbar}{2}|S_x,-\rangle\langle S_x,-|

Inserting the definitions for |S_x,\pm\rangle:

S_x = \frac{\hbar}{2}\left(|+\rangle\langle -| + |-\rangle\langle +|\right) = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \frac{\hbar}{2}\sigma_x
And by similar analysis, the y direction spin kets can be derived to be:

|S_y,+\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle + e^{i\delta_2}|-\rangle\right]; \qquad |S_y,-\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle - e^{i\delta_2}|-\rangle\right]

But we can't set \delta_2 equal to zero this time, therefore we have to find out what values it can take. From the experiment we know that |\langle S_x,+|S_y,+\rangle|^2 = 1/2. So,

\langle S_x,+|S_y,+\rangle = \frac{1}{2}\left(\langle +| + \langle -|\right)\left(|+\rangle + e^{i\delta_2}|-\rangle\right) = \frac{1 + e^{i\delta_2}}{2}

|\langle S_x,+|S_y,+\rangle|^2 = \frac{1}{4}\left(2 + 2\cos\delta_2\right) = \frac{1}{2} \quad\Rightarrow\quad \delta_2 = \pm\frac{\pi}{2}

|S_y,+\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle + i|-\rangle\right]; \qquad |S_y,-\rangle = \frac{1}{\sqrt{2}}\left[|+\rangle - i|-\rangle\right]

So, again we can solve for the operator as well:

S_y = \frac{\hbar}{2}\left(-i|+\rangle\langle -| + i|-\rangle\langle +|\right) = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \frac{\hbar}{2}\sigma_y

4.1 Spin Summary

Given all of the above information, we can summarize the kets and operators for a spin 1/2 system:

|S_x,\pm\rangle = \frac{1}{\sqrt{2}}\left(|+\rangle \pm |-\rangle\right); \qquad S_x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \frac{\hbar}{2}\sigma_x

|S_y,\pm\rangle = \frac{1}{\sqrt{2}}\left(|+\rangle \pm i|-\rangle\right); \qquad S_y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \frac{\hbar}{2}\sigma_y

|S_z,\pm\rangle = |\pm\rangle; \qquad S_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \frac{\hbar}{2}\sigma_z
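The summarized operators satisfy the angular-momentum commutation algebra, which is easy to verify numerically. A sketch assuming NumPy, with \hbar = 1 as an illustrative unit:

```python
import numpy as np

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = (hbar / 2) * sx, (hbar / 2) * sy, (hbar / 2) * sz

# Angular-momentum algebra: [Sx, Sy] = i*hbar*Sz and cyclic permutations.
comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Sx, Sy), 1j * hbar * Sz))   # True
print(np.allclose(comm(Sy, Sz), 1j * hbar * Sx))   # True
print(np.allclose(comm(Sz, Sx), 1j * hbar * Sy))   # True

# The eigenkets |Sx, +-> = (|+> +- |->)/sqrt(2) from the derivation above:
Sx_plus = np.array([1, 1]) / np.sqrt(2)
print(np.allclose(Sx @ Sx_plus, (hbar / 2) * Sx_plus))   # True
```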

Wednesday - 9/10

5 Compatibility and Operators

If two operators A and B are compatible, then the commutator of the two is zero. That is, [A,B] \equiv AB - BA = 0, and we can change the order since AB = BA. If two operators C and D are incompatible, then the commutator of the two will be nonzero: [C,D] = CD - DC \ne 0, and therefore CD \ne DC. If two operators are compatible and one has degenerate states, then the second operator can be used to break the degeneracy. Consider the case that operating A on any of a set of kets |a_n\rangle always returns the same eigenvalue a. In this case, the compatible operator B can break the degeneracy:

\{A|a_1\rangle = a|a_1\rangle,\ A|a_2\rangle = a|a_2\rangle,\ \ldots,\ A|a_n\rangle = a|a_n\rangle\}

\{B|a,b_1\rangle = b_1|a,b_1\rangle,\ B|a,b_2\rangle = b_2|a,b_2\rangle,\ \ldots,\ B|a,b_n\rangle = b_n|a,b_n\rangle\}

An example of this is the set of eigenkets of the Hydrogen atom model. For a state |n,1,m\rangle, the operator L^2 will always give the same result (l = 1) for any allowed value of m (-1, 0, 1). This degeneracy can be broken by using the L_z operator to get the value of m.

THEOREM: If [A,B] = 0 and the |a\rangle are nondegenerate eigenkets of A, then (1) \langle a''|B|a'\rangle = B_{a'}\,\delta_{a'',a'} and (2) |a\rangle is also an eigenket of B.

PROOF:

\langle a''|[A,B]|a'\rangle = 0 \quad\Rightarrow\quad \langle a''|AB - BA|a'\rangle = 0

\langle a''|AB|a'\rangle - \langle a''|BA|a'\rangle = a''\langle a''|B|a'\rangle - a'\langle a''|B|a'\rangle

(a'' - a')\langle a''|B|a'\rangle = 0 \quad\Rightarrow\quad \langle a''|B|a'\rangle = B_{a'}\,\delta_{a'',a'}

This result proves the first statement above, and by simply expanding the result we can show the second:

B = \sum_{a'',a'} |a''\rangle\langle a''|B|a'\rangle\langle a'|

B|a\rangle = \sum_{a'',a'} |a''\rangle\langle a''|B|a'\rangle\langle a'|a\rangle = \sum_{a'',a'} |a''\rangle\langle a''|B|a'\rangle\,\delta_{a,a'} = \sum_{a''} B_{a}\,\delta_{a'',a}\,|a''\rangle = B_a|a\rangle

Thus the second statement is true as well.

So for any system, we want to find the maximal set of compatible operators such that for operators A, B, and C the eigenket can be written |a',b',c'\rangle with:

A|a',b',c'\rangle = a'|a',b',c'\rangle; \qquad B|a',b',c'\rangle = b'|a',b',c'\rangle; \qquad C|a',b',c'\rangle = c'|a',b',c'\rangle

Again, an example of this is the operators H, L^2, and L_z for the state |\psi\rangle = |n,l,m\rangle.
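The degeneracy-breaking idea can be illustrated with a toy pair of commuting matrices (both invented purely for illustration, assuming NumPy): A has a doubly degenerate eigenvalue, while B commutes with A and distinguishes the degenerate kets.

```python
import numpy as np

# Toy 3x3 example: A has a doubly degenerate eigenvalue; B commutes
# with A and assigns distinct labels to the degenerate kets.
A = np.diag([2.0, 2.0, 5.0])     # eigenvalue 2 is degenerate
B = np.diag([1.0, -1.0, 7.0])    # diagonal in the same basis, so [A, B] = 0

print(np.allclose(A @ B, B @ A))   # True: compatible operators

# Both basis kets |a1>, |a2> give A-eigenvalue 2, but different B-eigenvalues:
e1, e2 = np.eye(3)[:, 0], np.eye(3)[:, 1]
print((A @ e1)[0], (A @ e2)[1])   # 2.0, 2.0  -> degenerate under A
print((B @ e1)[0], (B @ e2)[1])   # 1.0, -1.0 -> degeneracy broken by B
```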

5.1 Incompatible Operators

Theorem: There does not exist a simultaneous eigenket of both A and B if [A,B] \ne 0.

Suppose such a ket did exist: A|\alpha\rangle = a'|\alpha\rangle and B|\alpha\rangle = b'|\alpha\rangle. Then:

(AB - BA)|\alpha\rangle = AB|\alpha\rangle - BA|\alpha\rangle = a'b'|\alpha\rangle - b'a'|\alpha\rangle = (a'b' - b'a')|\alpha\rangle = 0

However, [A,B] \ne 0, so such a simultaneous eigenket cannot exist.

6 Measurements of Compatible and Incompatible Operators

Consider an experiment which starts with a quantum state |\psi\rangle and then makes measurements in various orders via operators A, B, and C. If a measurement takes the sequence A, B, C, then the probability of getting the result |c'\rangle is:

Pr_1 = |\langle a'|b'\rangle|^2\,|\langle b'|c'\rangle|^2

However, if the measurement went straight from measuring A to measuring C, then the resultant probability would be:

Pr_2 = |\langle a'|c'\rangle|^2

The results differ if A and B are compatible versus if they are not. If [A,B] = 0 then Pr_1 = Pr_2; however, if [A,B] \ne 0 then Pr_1 \ne Pr_2.

An example we've seen would be going from measuring S_z, taking the spin up beam and measuring S_x, then taking spin up from that and going back to S_z, versus simply measuring S_z twice in a row, taking spin up both times. The first method returns a probability of getting spin up as the final measurement of 25%, while the second gives a probability of 100%. This is because S_z and S_x don't commute.
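The 25% versus 100% comparison falls straight out of the spin-1/2 inner products. A sketch assuming NumPy:

```python
import numpy as np

# Sequential Stern-Gerlach filters: Sz(+) -> Sx(+) -> Sz, versus Sz(+) -> Sz(+) -> Sz.
up_z = np.array([1.0, 0.0]); dn_z = np.array([0.0, 1.0])
up_x = (up_z + dn_z) / np.sqrt(2)       # |Sx, +>

# Route 1: filter Sz = +, then Sx = +, then measure Sz spin-up.
p1 = abs(up_x @ up_z)**2 * abs(up_z @ up_x)**2
print(p1)    # 0.25 -- only 25% come out spin-up

# Route 2: filter Sz = + twice, then measure Sz spin-up.
p2 = abs(up_z @ up_z)**2 * abs(up_z @ up_z)**2
print(p2)    # 1.0 -- 100%
```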

7 The Uncertainty Principle

It can be shown that for any two operators, the relation must hold that:

\left\langle(\Delta A)^2\right\rangle\left\langle(\Delta B)^2\right\rangle \ge \frac{1}{4}\left|\langle[A,B]\rangle\right|^2

where we've defined:

\Delta A = A - \langle A\rangle \quad\Rightarrow\quad (\Delta A)^2 = A^2 + \langle A\rangle^2 - 2A\langle A\rangle

\left\langle(\Delta A)^2\right\rangle = \langle A^2\rangle + \langle A\rangle^2 - 2\langle A\rangle^2 = \langle A^2\rangle - \langle A\rangle^2

Friday - 9/12

In order to prove the above relation between A and B, several other relations need to be derived. First, consider the Schwartz Inequality, \langle\alpha|\alpha\rangle\langle\beta|\beta\rangle \ge |\langle\alpha|\beta\rangle|^2. If we define a new ket |\gamma\rangle as |\alpha\rangle + \lambda|\beta\rangle, then we can write the following relations:

|\gamma\rangle = |\alpha\rangle + \lambda|\beta\rangle \quad\leftrightarrow\quad \langle\gamma| = \langle\alpha| + \lambda^*\langle\beta|

\langle\gamma|\gamma\rangle = \langle\alpha|\alpha\rangle + \lambda\langle\alpha|\beta\rangle + \lambda^*\langle\beta|\alpha\rangle + |\lambda|^2\langle\beta|\beta\rangle

Then, choosing a form for the variable \lambda which simplifies the above, we can set \lambda = -\frac{\langle\beta|\alpha\rangle}{\langle\beta|\beta\rangle}. Plugging in this value of \lambda, the above reduces to:

\langle\gamma|\gamma\rangle = \langle\alpha|\alpha\rangle - \frac{|\langle\alpha|\beta\rangle|^2}{\langle\beta|\beta\rangle}

So, concluding, since \langle\gamma|\gamma\rangle \ge 0:

\langle\alpha|\alpha\rangle\langle\beta|\beta\rangle \ge |\langle\alpha|\beta\rangle|^2

In addition to this statement, if A is a Hermitian operator, then \langle A\rangle is real:

\langle A\rangle = \langle\psi|A|\psi\rangle = \langle\psi|A^\dagger|\psi\rangle^* = \langle\psi|A|\psi\rangle^* = \langle A\rangle^*

Further, if A is anti-Hermitian (that is, A^\dagger = -A), then \langle A\rangle is pure imaginary:

\langle A\rangle = \langle\psi|A|\psi\rangle = \langle\psi|A^\dagger|\psi\rangle^* = -\langle\psi|A|\psi\rangle^* = -\langle A\rangle^*

Returning to the Schwartz Inequality, we can define for two Hermitian operators A and B:

|\alpha\rangle = \Delta A\,|\psi\rangle \quad\leftrightarrow\quad \langle\alpha| = \langle\psi|\,\Delta A

|\beta\rangle = \Delta B\,|\psi\rangle \quad\leftrightarrow\quad \langle\beta| = \langle\psi|\,\Delta B

\langle\psi|(\Delta A)^2|\psi\rangle\,\langle\psi|(\Delta B)^2|\psi\rangle \ge |\langle\psi|\Delta A\,\Delta B|\psi\rangle|^2 \quad\Rightarrow\quad \left\langle(\Delta A)^2\right\rangle\left\langle(\Delta B)^2\right\rangle \ge \left|\langle\Delta A\,\Delta B\rangle\right|^2

We can decompose the right hand side as:

\Delta A\,\Delta B = \frac{\Delta A\,\Delta B - \Delta B\,\Delta A}{2} + \frac{\Delta A\,\Delta B + \Delta B\,\Delta A}{2} = \frac{1}{2}\left([A,B] + \{\Delta A,\Delta B\}\right)

Now, we can define two operators, X = [A,B] and Y = \{\Delta A,\Delta B\}. For Hermitian A and B it is easily seen that X is anti-Hermitian (so \langle X\rangle is pure imaginary) while Y is Hermitian (so \langle Y\rangle is real). Then the right hand side of the above becomes:

\left|\langle\Delta A\,\Delta B\rangle\right|^2 = \left|\frac{\langle X\rangle}{2} + \frac{\langle Y\rangle}{2}\right|^2 = \frac{1}{4}\left|\langle X\rangle\right|^2 + \frac{1}{4}\left|\langle Y\rangle\right|^2 \ge \frac{1}{4}\left|\langle[A,B]\rangle\right|^2

\left\langle(\Delta A)^2\right\rangle\left\langle(\Delta B)^2\right\rangle \ge \frac{1}{4}\left|\langle[A,B]\rangle\right|^2
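The inequality can be checked numerically for the spin operators. A sketch assuming NumPy, with A = S_x, B = S_y in the state |+\rangle and \hbar = 1; this particular state turns out to saturate the bound:

```python
import numpy as np

# Check <(dA)^2><(dB)^2> >= (1/4)|<[A,B]>|^2 for A = Sx, B = Sy in state |+>.
hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]])
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0], dtype=complex)   # |+>, the Sz-up state

def var(A):
    # <(Delta A)^2> = <A^2> - <A>^2
    exp_A = (psi.conj() @ A @ psi).real
    exp_A2 = (psi.conj() @ A @ A @ psi).real
    return exp_A2 - exp_A**2

lhs = var(Sx) * var(Sy)
comm_exp = psi.conj() @ (Sx @ Sy - Sy @ Sx) @ psi   # = i*hbar*<Sz>, pure imaginary
rhs = abs(comm_exp)**2 / 4
print(lhs, rhs)   # hbar^4/16 on both sides: |+> saturates the bound
```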

8 Transformations of Base Kets

Consider the geometric transformation from (x, y) to (x', y'):

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}

The same can be done to kets in a set of base kets. For example, we can expand some state |\alpha\rangle in either of the two given forms below:

|\alpha\rangle = \sum_{a'} C_{a'}\,|a'\rangle; \qquad |\alpha\rangle = \sum_{b'} C_{b'}\,|b'\rangle

For any such orthonormal and complete sets of eigenkets, an operator U can be derived for which U|a_i\rangle = |b_i\rangle and U^\dagger U = 1. We can immediately define this operator by looking at the first of these properties:

U = \sum_k |b_k\rangle\langle a_k|

The matrix representation of such an operator can be found easily as well:

U_{ij} = \langle a_i|U|a_j\rangle = \langle a_i|\left(\sum_k |b_k\rangle\langle a_k|\right)|a_j\rangle = \sum_k \langle a_i|b_k\rangle\langle a_k|a_j\rangle

U_{ij} = \langle a_i|b_j\rangle

Monday - 9/15
As an example of a transformation operator, let's consider a transformation from $S_z$ to $S_x$ eigenkets:
\[ U_{zx} = |S_x;+\rangle\langle +| + |S_x;-\rangle\langle -| = \frac{1}{\sqrt 2}\left[|+\rangle + |-\rangle\right]\langle +| + \frac{1}{\sqrt 2}\left[|+\rangle - |-\rangle\right]\langle -| \]
\[ = \frac{1}{\sqrt 2}\left[\,|+\rangle\langle +| + |-\rangle\langle +| + |+\rangle\langle -| - |-\rangle\langle -|\,\right] \]
In matrix form,
\[ U_{zx} = \frac{1}{\sqrt 2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \]
It can be shown that this gives the expected results:
\[ U_{zx}|+\rangle = |S_x;+\rangle; \qquad U_{zx}|-\rangle = |S_x;-\rangle \]
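These matrix statements are easy to check numerically. A minimal sketch (not part of the notes), taking $|+\rangle = (1,0)^T$ and $|-\rangle = (0,1)^T$:

```python
import numpy as np

# Sx eigenkets written in the Sz basis
sx_plus = np.array([1, 1]) / np.sqrt(2)
sx_minus = np.array([1, -1]) / np.sqrt(2)

# U = |Sx+><+| + |Sx-><-| built from outer products
U = np.outer(sx_plus, [1, 0]) + np.outer(sx_minus, [0, 1])

print(np.allclose(U, np.array([[1, 1], [1, -1]]) / np.sqrt(2)))  # matrix form
print(np.allclose(U @ [1, 0], sx_plus))                          # U|+> = |Sx+>
print(np.allclose(U.conj().T @ U, np.eye(2)))                    # U is unitary
```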

8.1 Diagonalization of Transform Operators

Consider the operators X and X'. We can write the matrix elements of X in terms of some set of eigenkets $|a_i\rangle$ and the elements of X' in terms of a second basis $|b_j\rangle$:
\[ X_{ij} = \langle a_i|\,X\,|a_j\rangle; \qquad X'_{ij} = \langle b_i|\,X\,|b_j\rangle \]
If we assume there exists some transformation between the two sets of base kets,
\[ U|a_i\rangle = |b_i\rangle; \qquad \langle a_i|\,U^\dagger = \langle b_i| \]
we can immediately write the result that:
\[ X'_{ij} = \langle b_i|\,X\,|b_j\rangle = \langle a_i|\,U^\dagger X U\,|a_j\rangle \quad\Longrightarrow\quad X' = U^\dagger X U \]
We can further prove that the traces of X and X' are the same, since $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$. In terms of matrices,
\[ \mathrm{Tr}(X') = \mathrm{Tr}(U^\dagger X U) = \mathrm{Tr}(U U^\dagger X) = \mathrm{Tr}(X) \]
Alternately, in terms of operators, bras, and kets,
\[ \mathrm{Tr}(X) = \sum_{a'} \langle a'|\,X\,|a'\rangle = \sum_{a',b',b''} \langle a'|b'\rangle\langle b'|\,X\,|b''\rangle\langle b''|a'\rangle = \sum_{b',b''} \langle b''|b'\rangle\langle b'|\,X\,|b''\rangle = \sum_{b'} \langle b'|\,X\,|b'\rangle \]

Finally, consider the case that operators A and B are known where $[A, B] \neq 0$. The eigenvalues and eigenkets of A are known as well, and we want to solve for the eigenkets and eigenvalues of B. Inserting a complete set of A eigenkets into the eigenvalue equation for B gives:
\[ \sum_{a'} \langle a''|\,B\,|a'\rangle\langle a'|b'\rangle = b'\,\langle a''|b'\rangle \]
This is merely a matrix eigenvalue equation which is solvable as:
\[ \sum_j B_{ij}\, C^{(k)}_j = b^{(k)}\, C^{(k)}_i \]
where $C^{(k)}$ is the k-th eigenvector and $b^{(k)}$ the corresponding eigenvalue. From this we can write the transformation matrix with the eigenvectors of B as its columns:
\[ U = \begin{pmatrix} C^{(1)}_1 & \cdots & C^{(n)}_1 \\ \vdots & \ddots & \vdots \\ C^{(1)}_n & \cdots & C^{(n)}_n \end{pmatrix} \]
From this result we can see that the transformation matrix can be used to diagonalize a matrix. That is:
\[ U^\dagger B U = \begin{pmatrix} b^{(1)} & & 0 \\ & \ddots & \\ 0 & & b^{(n)} \end{pmatrix} \]
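This is exactly what a numerical eigensolver does. A minimal sketch (not from the notes) using a random Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
B = M + M.T  # a random Hermitian (here real symmetric) operator

vals, U = np.linalg.eigh(B)  # columns of U are the eigenvectors C^(k)

D = U.conj().T @ B @ U       # U† B U
print(np.allclose(D, np.diag(vals)))         # diagonal matrix of eigenvalues
print(np.isclose(np.trace(D), np.trace(B)))  # the trace is invariant
```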

8.2 Unitary Equivalent Observables

Lastly, we'll define a unitary equivalent operator. If A is an operator, then the associated unitary equivalent operator is $UAU^{-1}$. This new object has the following property:
\[ A|a'\rangle = a'|a'\rangle \quad\Longrightarrow\quad UA|a'\rangle = a'\,U|a'\rangle \quad\Longrightarrow\quad \left(UAU^{-1}\right)U|a'\rangle = a'\,U|a'\rangle \]
So $U|a'\rangle$ is an eigenket of $UAU^{-1}$ with eigenvalue $a'$. Recall that $U|a'\rangle = |b'\rangle$. This means that $|b'\rangle$ is simultaneously an eigenket of both B and $UAU^{-1}$, which in many cases are the same operator.
Wednesday - 9/17

9 Operators with Continuous Spectra

Consider a continuous eigenket $|\xi\rangle$. We can make the following changes from our relations for discrete eigenkets to continuous ones:
\[ \langle a'|a''\rangle = \delta_{a',a''} \quad\longrightarrow\quad \langle\xi'|\xi''\rangle = \delta(\xi' - \xi'') \]
\[ \sum_{a'} |a'\rangle\langle a'| = 1 \quad\longrightarrow\quad \int |\xi'\rangle\langle\xi'|\, d\xi' = 1 \]
\[ |\alpha\rangle = \sum_{a'} C_{a'}|a'\rangle = \sum_{a'} \langle a'|\alpha\rangle\,|a'\rangle \quad\longrightarrow\quad |\alpha\rangle = \int C_{\xi'}\,|\xi'\rangle\, d\xi' = \int \langle\xi'|\alpha\rangle\,|\xi'\rangle\, d\xi' \]
\[ \sum_{a'} |\langle a'|\alpha\rangle|^2 = 1 \quad\longrightarrow\quad \int |\langle\xi'|\alpha\rangle|^2\, d\xi' = 1 \]
\[ \langle\beta|\alpha\rangle = \sum_{a'} \langle\beta|a'\rangle\langle a'|\alpha\rangle \quad\longrightarrow\quad \langle\beta|\alpha\rangle = \int \langle\beta|\xi'\rangle\langle\xi'|\alpha\rangle\, d\xi' \]
\[ A|a'\rangle = a'|a'\rangle \quad\longrightarrow\quad \Xi|\xi'\rangle = \xi'|\xi'\rangle \]
\[ \langle a''|\,A\,|a'\rangle = a'\,\delta_{a',a''} \quad\longrightarrow\quad \langle\xi''|\,\Xi\,|\xi'\rangle = \xi'\,\delta(\xi' - \xi'') \]

9.1 The Position Operator

Let the position operator $\hat x$ be defined in one dimension as:
\[ \hat x\,|x'\rangle = x'\,|x'\rangle \]
We can expand the state $|\alpha\rangle$ in terms of the position eigenkets and get:
\[ |\alpha\rangle = \int \langle x'|\alpha\rangle\,|x'\rangle\, dx' \]
One might recognize the term $\langle x'|\alpha\rangle$: it is the wavefunction defined in undergraduate quantum mechanics classes,
\[ \langle x'|\alpha\rangle = \psi_\alpha(x') \]
When a measurement is made, theoretically the state $|\alpha\rangle$ collapses to some position eigenket $|x'\rangle$, but in reality it collapses to some region between $x' - dx'$ and $x' + dx'$. Therefore, the probability of measuring a particle's location and getting a result between these values is:
\[ Pr = |\langle x'|\alpha\rangle|^2\, dx' = |\psi_\alpha(x')|^2\, dx' \]
These results are easily generalized to three dimensions: the integral simply becomes a triple integral and the eigenket $|\vec x\,'\rangle$ now denotes a three-dimensional position. It is easily shown that the different position operators commute:
\[ [X_i, X_j] = 0 \]


9.2 The Momentum Operator - Translation Method

We can define the translation operation as an operator that moves a given position ket $|\vec x\,'\rangle$ to some new position. That is:
\[ T(d\vec x\,')\,|\vec x\,'\rangle = |\vec x\,' + d\vec x\,'\rangle \]
We then define the operator itself in terms of some Hermitian operator $\vec k$:
\[ T(d\vec x\,') = 1 - i\,\vec k\cdot d\vec x\,' + O(dx'^2) \]
From this definition, the following properties can be demonstrated (it's homework to do so):
\[ T^\dagger T = T T^\dagger = 1 \]
\[ T(d\vec x\,')\,T(d\vec x\,'') = T(d\vec x\,'')\,T(d\vec x\,') = T(d\vec x\,' + d\vec x\,'') \]
\[ T(-d\vec x\,') = T(d\vec x\,')^{-1} \]
\[ \lim_{d\vec x\,' \to 0} T(d\vec x\,') = 1 \]
Consider the commutation relation $[\hat x, T(dx')]$. We can expand each term:
\[ \hat x\, T(dx')\,|x'\rangle = \hat x\,|x' + dx'\rangle = (x' + dx')\,|x' + dx'\rangle \]
\[ T(dx')\,\hat x\,|x'\rangle = x'\, T(dx')\,|x'\rangle = x'\,|x' + dx'\rangle \]
So, then,
\[ [\hat x, T(dx')]\,|x'\rangle = dx'\,|x' + dx'\rangle \]
Expanding this out in a complete set of states, we can write:
\[ dx'\,|x' + dx'\rangle = dx' \sum_a |a\rangle\langle a|x' + dx'\rangle = dx' \sum_a |a\rangle\,\psi_a^*(x' + dx') \]
And expanding the wavefunction while keeping only terms of first order in $dx'$:
\[ \psi_a^*(x' + dx') \approx \psi_a^*(x') + dx'\,\frac{\partial}{\partial x'}\psi_a^*(x') + \ldots \]
\[ dx' \sum_a |a\rangle\,\psi_a^*(x' + dx') \approx dx' \sum_a |a\rangle\,\psi_a^*(x') \quad\Longrightarrow\quad dx'\,|x' + dx'\rangle \approx dx'\,|x'\rangle \]
Plugging this into the above (in three dimensions, with $T(d\vec x\,') \approx 1 - i\,\vec k\cdot d\vec x\,'$):
\[ [\hat x_i, T(d\vec x\,')]\,|\vec x\,'\rangle \approx dx'_i\,|\vec x\,'\rangle \quad\Longrightarrow\quad \Big[\hat x_i,\; -i\sum_j \hat k_j\, dx'_j\Big]|\vec x\,'\rangle = dx'_i\,|\vec x\,'\rangle \]
\[ -i\,[\hat x_i, \hat k_j]\, dx'_j\,|\vec x\,'\rangle = dx'_i\,|\vec x\,'\rangle \quad\Longrightarrow\quad [\hat x_i, \hat k_j] = i\,\delta_{ij} \]
Consider the dimensions of the translation operator. In order for the exponent to be unitless, we require the operator $\vec k$ to have units of inverse length, so we define $\vec k = \vec p/\hbar$. This defines the momentum operator $\vec p = \hbar\vec k$ and turns the above relation into the well known commutation relation:
\[ [x_i, p_j] = i\hbar\,\delta_{ij} \]
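In the position representation ($p \to -i\hbar\,\partial/\partial x$, derived in the next section) this commutator can be checked on a grid. A rough numerical sketch (not from the notes, assuming $\hbar = 1$ and a Gaussian test function), where the finite-difference derivative is only accurate away from the grid edges:

```python
import numpy as np

hbar = 1.0
N = 2001
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2)  # a smooth test function, negligible at the edges

def p_op(f):
    # p f = -i hbar df/dx via second-order central differences
    return -1j * hbar * np.gradient(f, dx)

lhs = x * p_op(psi) - p_op(x * psi)  # (x p - p x) psi
rhs = 1j * hbar * psi                # expected: i hbar psi

# agreement in the interior (finite differences are less accurate at the edges)
print(np.allclose(lhs[10:-10], rhs[10:-10], atol=1e-3))
```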
9.2.1 Finite Translations

Consider a finite translation along the x axis defined by:
\[ T(\Delta x'\,\hat{\mathbf x})\,|\vec x\,'\rangle = |\vec x\,' + \Delta x'\,\hat{\mathbf x}\rangle \]
Now, consider defining the finite translation in terms of the infinitesimal translations, $\Delta x' = N\, dx'$, where the value of N is very large. Taking the limit of the translation operator as N goes to infinity:
\[ \lim_{N\to\infty}\left(1 - \frac{i}{\hbar}\,\frac{p_x\,\Delta x'}{N}\right)^N = e^{-\frac{i}{\hbar}\, p_x\,\Delta x'} \]
Translations along different directions always commute, so a general translation operator can be written as:
\[ T(\Delta\vec x) = e^{-\frac{i}{\hbar}\,\vec p\cdot\Delta\vec x} = e^{-\frac{i}{\hbar}\left(p_x\Delta x + p_y\Delta y + p_z\Delta z\right)} \]
Friday - 9/19

9.3 The Wave Function - Some New Ideas

Define a momentum eigenket $|\vec p\,'\rangle = |p'_x, p'_y, p'_z\rangle$ such that
\[ P_i\,|\vec p\,'\rangle = p'_i\,|\vec p\,'\rangle \]
If we operate on this ket with the translation operator we get:
\[ T(d\vec x\,')\,|\vec p\,'\rangle = e^{-\frac{i}{\hbar}\,d\vec x\,'\cdot\vec P}\,|\vec p\,'\rangle = \left[1 - \frac{i}{\hbar}\,d\vec x\,'\cdot\vec P + \frac{1}{2}\left(\frac{-i}{\hbar}\,d\vec x\,'\cdot\vec P\right)^2 + \ldots\right]|\vec p\,'\rangle \]
\[ = \left[1 - \frac{i}{\hbar}\,d\vec x\,'\cdot\vec p\,' + \frac{1}{2}\left(\frac{-i}{\hbar}\,d\vec x\,'\cdot\vec p\,'\right)^2 + \ldots\right]|\vec p\,'\rangle = e^{-\frac{i}{\hbar}\,d\vec x\,'\cdot\vec p\,'}\,|\vec p\,'\rangle \]
So the eigenvalue is a pure phase. This is expected since T is a unitary operator, which can be proven easily: if $T|\alpha\rangle = \lambda|\alpha\rangle$, then $\langle\alpha|T^\dagger = \lambda^*\langle\alpha|$, and
\[ \langle\alpha|\,T^\dagger T\,|\alpha\rangle = |\lambda|^2\,\langle\alpha|\alpha\rangle = \langle\alpha|\alpha\rangle \quad\Longrightarrow\quad |\lambda|^2 = 1 \quad\Longrightarrow\quad \lambda = e^{i\theta} \]
Now, consider the wave function in the context of what we've discussed. We can expand any ket as:
\[ |\alpha\rangle = \int |x'\rangle\langle x'|\alpha\rangle\, dx' = \int |x'\rangle\,\psi_\alpha(x')\, dx' \]
\[ \langle\beta|\alpha\rangle = \int \psi_\beta^*(x')\,\psi_\alpha(x')\, dx' \]
If the $U_{a'}$ are eigenfunctions of some operator, then we can expand any wave function in terms of them:
\[ |\alpha\rangle = \sum_{a'} |a'\rangle\langle a'|\alpha\rangle \quad\Longrightarrow\quad \langle x|\alpha\rangle = \sum_{a'} \langle x|a'\rangle\langle a'|\alpha\rangle \]
\[ \psi_\alpha(x) = \sum_{a'} U_{a'}(x)\, C_{a'}; \qquad U_{a'}(x) = \langle x|a'\rangle \]

Consider the term $\langle\beta|\,A\,|\alpha\rangle$ in wave function notation:
\[ \langle\beta|\,A\,|\alpha\rangle = \int\!\!\int \langle\beta|x'\rangle\langle x'|\,A\,|x''\rangle\langle x''|\alpha\rangle\, dx'\, dx'' \]
Consider the case that A is some function of the position operator; then we can write $\langle x'|\,f(\hat x)\,|x''\rangle = f(x')\,\delta(x' - x'')$, and the above simply becomes
\[ \langle\beta|\,f(\hat x)\,|\alpha\rangle = \int \psi_\beta^*(x')\, f(x')\,\psi_\alpha(x')\, dx' \]

Consider also the case that A is the momentum operator. In order to see what this is, consider the translation operator acting on the ket $|\alpha\rangle$:
\[ T(dx')\,|\alpha\rangle = T(dx')\int |x'\rangle\langle x'|\alpha\rangle\, dx' = \int |x' + dx'\rangle\langle x'|\alpha\rangle\, dx' \]
But we can shift the integration variable and then expand the wave function term:
\[ \int |x' + dx'\rangle\langle x'|\alpha\rangle\, dx' = \int |x'\rangle\langle x' - dx'|\alpha\rangle\, dx' \]
\[ \langle x' - dx'|\alpha\rangle = \psi_\alpha(x' - dx') = \psi_\alpha(x') - dx'\,\frac{\partial}{\partial x'}\psi_\alpha(x') + \ldots \]
Plugging this back into the expression above:
\[ T(dx')\,|\alpha\rangle = \int |x'\rangle\left[\psi_\alpha(x') - dx'\,\frac{\partial}{\partial x'}\psi_\alpha(x') + \ldots\right]dx' \]
\[ \left(1 - \frac{i}{\hbar}\,dx'\, p\right)|\alpha\rangle = |\alpha\rangle - dx'\int |x'\rangle\,\frac{\partial}{\partial x'}\psi_\alpha(x')\, dx' \]
This can be solved for the action of the momentum operator:
\[ p\,|\alpha\rangle = \int |x'\rangle\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi_\alpha(x')\, dx' \]
Further, we have the original result we were interested in:
\[ \langle\beta|\,p\,|\alpha\rangle = \int \psi_\beta^*(x')\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi_\alpha(x')\, dx' \]
One last thing we can calculate is the term $\langle x|\,p\,|\alpha\rangle$. This is going to give:
\[ \langle x|\,p\,|\alpha\rangle = \int \langle x|x'\rangle\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi_\alpha(x')\, dx' = \int \delta(x - x')\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi_\alpha(x')\, dx' \]
\[ \langle x|\,p\,|\alpha\rangle = -i\hbar\,\frac{\partial}{\partial x}\psi_\alpha(x) \]

Monday - 9/22
From this result, we can consider what happens when the momentum operator repeatedly operates on a ket, $p^n|\alpha\rangle$. It's easiest to consider n = 2 and extend that result:
\[ p^2|\alpha\rangle = p\,|\gamma\rangle; \qquad |\gamma\rangle = p\,|\alpha\rangle \]
\[ p\,|\gamma\rangle = \int dx'\,|x'\rangle\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\langle x'|\gamma\rangle; \qquad \langle x'|\gamma\rangle = \langle x'|\,p\,|\alpha\rangle = -i\hbar\,\frac{\partial}{\partial x'}\psi_\alpha(x') \]
\[ p^2|\alpha\rangle = \int dx'\,|x'\rangle\,(-i\hbar)^2\,\frac{\partial^2}{\partial x'^2}\psi_\alpha(x') \]
From this, we can easily see that:
\[ p^n|\alpha\rangle = \int dx'\,|x'\rangle\,(-i\hbar)^n\,\frac{\partial^n}{\partial x'^n}\psi_\alpha(x'); \qquad \langle x|\,p^n\,|\alpha\rangle = (-i\hbar)^n\,\frac{\partial^n}{\partial x^n}\psi_\alpha(x) \]

9.4 Transformations in Position and Momentum Space

Just as we defined position eigenkets $\hat x|x'\rangle = x'|x'\rangle$, we can define momentum eigenkets $\hat p|p'\rangle = p'|p'\rangle$. These momentum eigenkets have properties similar to those of the position eigenkets:
\[ \hat p\,|p'\rangle = p'\,|p'\rangle; \qquad \langle p'|p''\rangle = \delta(p' - p'') \]
\[ |\alpha\rangle = \int dp'\,|p'\rangle\langle p'|\alpha\rangle = \int dp'\,|p'\rangle\,\phi_\alpha(p') \]
Here we have defined the momentum space wavefunction as $\langle p'|\alpha\rangle = \phi_\alpha(p')$. From all of this we can define a transformation $\phi_\alpha(p') \leftrightarrow \psi_\alpha(x')$. In general a transform between bases $|a'\rangle$ and $|b'\rangle$ can be given as $U_{a'b'} = \langle b'|a'\rangle$; therefore we can write:
\[ U_{x'p'} = \langle x'|p'\rangle \]
We need to derive some kind of structure for this transformation. In the discrete case, a summation gave the elements of a matrix. In this case, an integral would give the infinitely many elements of an infinite matrix. That won't work, so let's instead look at some other quantities:
\[ \langle x'|\,\hat p\,|p'\rangle = -i\hbar\,\frac{\partial}{\partial x'}\langle x'|p'\rangle = p'\,\langle x'|p'\rangle \quad\Longrightarrow\quad \langle x'|p'\rangle = N\,e^{\frac{i p' x'}{\hbar}} \]
We can find the coefficient from the completeness relation:
\[ \langle x'|x''\rangle = \delta(x' - x'') = \int dp'\,\langle x'|p'\rangle\langle p'|x''\rangle = |N|^2\int dp'\,e^{\frac{i p'(x' - x'')}{\hbar}} = |N|^2\, 2\pi\hbar\,\delta(x' - x'') \]
\[ \Longrightarrow\quad N = \frac{1}{\sqrt{2\pi\hbar}} \quad\Longrightarrow\quad \langle x'|p'\rangle = \frac{1}{\sqrt{2\pi\hbar}}\,e^{\frac{i p' x'}{\hbar}} \]
From this result we can see that the transform between momentum and position space is exactly the form of a Fourier transform:
\[ \psi_\alpha(x') = \frac{1}{\sqrt{2\pi\hbar}}\int dp'\,e^{\frac{i p' x'}{\hbar}}\,\phi_\alpha(p'); \qquad \phi_\alpha(p') = \frac{1}{\sqrt{2\pi\hbar}}\int dx'\,e^{-\frac{i p' x'}{\hbar}}\,\psi_\alpha(x') \]

9.5 The Gaussian Wave Packet

Consider the Gaussian wave packet described by the wavefunction
\[ \psi(x') = \frac{1}{\pi^{1/4}\sqrt{d}}\,e^{i k x' - \frac{x'^2}{2 d^2}} \]
As homework, it is shown that for the above:
\[ \langle x'\rangle = \int dx'\,\psi^*(x')\,x'\,\psi(x') = 0 \]
\[ \langle x'^2\rangle = \int dx'\,\psi^*(x')\,x'^2\,\psi(x') = \frac{d^2}{2} \quad\Longrightarrow\quad \langle(\Delta x')^2\rangle = \langle x'^2\rangle - \langle x'\rangle^2 = \frac{d^2}{2} \]
\[ \langle p'\rangle = \int dx'\,\psi^*(x')\left(-i\hbar\,\frac{\partial}{\partial x'}\right)\psi(x') = \hbar k \]
\[ \langle p'^2\rangle = \int dx'\,\psi^*(x')\left(-i\hbar\,\frac{\partial}{\partial x'}\right)^2\psi(x') = \frac{\hbar^2}{2 d^2} + \hbar^2 k^2 \quad\Longrightarrow\quad \langle(\Delta p')^2\rangle = \frac{\hbar^2}{2 d^2} \]
so the Gaussian packet saturates the minimum uncertainty product, $\langle(\Delta x')^2\rangle\langle(\Delta p')^2\rangle = \hbar^2/4$. Transforming to momentum space gives a Gaussian as well:
\[ \phi(p') = \sqrt{\frac{d}{\hbar\sqrt{\pi}}}\;e^{-\frac{(p' - \hbar k)^2\, d^2}{2\hbar^2}} \]
Wednesday - 9/24 - CLASS CANCELED
Friday - 9/26 - NO CLASS (DEBATE DAY)
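The quoted moments can be checked by direct numerical integration. A minimal sketch (not from the notes) with hypothetical parameter values $\hbar = 1$, $d = 1.5$, $k = 2$:

```python
import numpy as np

hbar, d, k = 1.0, 1.5, 2.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# the Gaussian packet psi(x')
psi = (1 / (np.pi**0.25 * np.sqrt(d))) * np.exp(1j * k * x - x**2 / (2 * d**2))

def integral(f):
    # composite trapezoid rule on the grid
    return (0.5 * (f[0] + f[-1]) + np.sum(f[1:-1])) * dx

norm = integral(np.abs(psi)**2)
x2 = integral(np.abs(psi)**2 * x**2)                                  # d^2/2
p1 = integral(psi.conj() * (-1j * hbar) * np.gradient(psi, dx)).real  # hbar k

print(np.isclose(norm, 1.0))
print(np.isclose(x2, d**2 / 2))
print(np.isclose(p1, hbar * k, atol=1e-3))
```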

Part II

QUANTUM DYNAMICS
Monday - 9/29

10 The Time Evolution Operator

Consider some operator $U(t, t_0)$ which turns any ket $|\alpha, t_0\rangle$ into its time evolved ket $|\alpha, t_0; t\rangle \equiv |\alpha, t\rangle$:
\[ U(t, t_0)\,|\alpha, t_0\rangle = |\alpha, t\rangle \]
We require this operator to exhibit the properties of conservation of probability and compositionality. The first means that U is unitary:
\[ U^\dagger(t, t_0)\,U(t, t_0) = 1; \qquad \langle\alpha, t|\alpha, t\rangle = \langle\alpha, t_0|\alpha, t_0\rangle \]
This implies that the total probability is conserved in the time evolution:
\[ \sum_{a'} |C_{a'}(t_0)|^2 = \sum_{a'} |C_{a'}(t)|^2 \]
Secondly, any time evolution $U(t_2, t_0)$ can be written as a composition of two successive evolutions. This property is also of great importance:
\[ U(t_2, t_0) = U(t_2, t_1)\,U(t_1, t_0) \]

10.1 Infinitesimal Evolutions

Consider the infinitesimal time evolution given by $U(t_0 + dt, t_0)$. We first require that in the limit $dt \to 0$ this becomes the identity operator. We can define this operator as:
\[ U(t_0 + dt, t_0) = 1 - i\,\Omega\, dt \]
where the condition that $U^\dagger U = 1$ requires the operator $\Omega$ to be Hermitian. The units of $\Omega$ are inverse seconds since $\Omega\, dt$ has to be unitless. From this we can define the Hamiltonian operator as the generator of time evolutions:
\[ H = \hbar\,\Omega \]

This definition leads us to the derivation of the Schrödinger equation.

10.2 The Schrödinger Equation

Consider the evolution from $t_0$ to $t$ and then to $t + dt$. That is:
\[ U(t + dt, t_0) = U(t + dt, t)\,U(t, t_0) = \left[1 - \frac{i}{\hbar} H\, dt\right]U(t, t_0) \]
Rearranging and dividing this result by dt:
\[ \frac{U(t + dt, t_0) - U(t, t_0)}{dt} = -\frac{i}{\hbar} H\,U(t, t_0) \quad\Longrightarrow\quad i\hbar\,\frac{\partial}{\partial t}U(t, t_0) = H\,U(t, t_0) \]
This leads to the most general form of the Schrödinger equation and the foundation of all of quantum mechanics:
\[ i\hbar\,\frac{\partial}{\partial t}U(t, t_0)\,|\alpha, t_0\rangle = H\,U(t, t_0)\,|\alpha, t_0\rangle \quad\Longrightarrow\quad i\hbar\,\frac{\partial}{\partial t}|\alpha, t\rangle = H\,|\alpha, t\rangle \]
If we assume that the Hamiltonian has the form $H = \frac{p^2}{2m} + V(x)$, then projecting onto $\langle x|$ gives the familiar form of the Schrödinger equation from our undergraduate course in quantum mechanics:
\[ \langle x|\, i\hbar\,\frac{\partial}{\partial t}\,|\alpha, t\rangle = \langle x|\,\frac{p^2}{2m}\,|\alpha, t\rangle + \langle x|\,V(x)\,|\alpha, t\rangle \]
\[ i\hbar\,\frac{\partial}{\partial t}\psi(x, t) = -\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2}\psi(x, t) + V(x)\,\psi(x, t) \]

10.3 Time Independent Hamiltonian

If we assume that the Hamiltonian operator is any function which is independent of time, we can solve the relation above as:
\[ i\hbar\,\frac{\partial}{\partial t}U(t, t_0) = H\,U(t, t_0) \quad\Longrightarrow\quad U(t, t_0) = e^{-\frac{i}{\hbar}H(t - t_0)} \]
This result can be checked by expanding both sides. (Check written notes for the expansion.) Next, if we assume that some state $|\alpha, t\rangle$ can be expanded in terms of energy eigenkets as:
\[ |\alpha, t\rangle = \sum_{a'} C_{a'}(t)\,|a'\rangle; \qquad H|a'\rangle = E_{a'}\,|a'\rangle \]
Given this expansion form and some initial state $|\alpha, t_0\rangle = \sum_{a'} C_{a'}(t_0)\,|a'\rangle$ where all values of $C_{a'}(t_0)$ are known, the problem is then to find $|\alpha, t\rangle$ at any time t. From the above result we can see that this is simply:
\[ |\alpha, t\rangle = \sum_{a'} C_{a'}(t)\,|a'\rangle; \qquad C_{a'}(t) = C_{a'}(t_0)\,e^{-\frac{i}{\hbar}E_{a'}(t - t_0)} \]
As an example, consider an infinite square well with initial state $\psi(x, 0)$ given. We can use the above to find the form at some later time t:
\[ \psi(x, 0) = \sqrt{\tfrac{1}{3}}\,\psi_1(x) + \sqrt{\tfrac{2}{3}}\,\psi_2(x) \quad\Longrightarrow\quad \psi(x, t) = \sqrt{\tfrac{1}{3}}\,e^{-\frac{i}{\hbar}E_1 t}\,\psi_1(x) + \sqrt{\tfrac{2}{3}}\,e^{-\frac{i}{\hbar}E_2 t}\,\psi_2(x) \]
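The phase evolution of the coefficients is easy to demonstrate numerically. A minimal sketch (not from the notes, with hypothetical energies $E_1 = 1$, $E_2 = 4$ and $\hbar = 1$) showing that the norm and the individual probabilities $|C_n|^2$ are constant in time:

```python
import numpy as np

hbar = 1.0
E = np.array([1.0, 4.0])                      # energies of the two eigenstates
C0 = np.array([np.sqrt(1/3), np.sqrt(2/3)])   # initial expansion coefficients

for t in (0.0, 0.7, 3.2):
    # C_n(t) = C_n(0) exp(-i E_n t / hbar): evolution is a pure phase
    C_t = C0 * np.exp(-1j * E * t / hbar)
    print(np.isclose(np.sum(np.abs(C_t)**2), 1.0))  # norm conserved
    print(np.allclose(np.abs(C_t)**2, C0**2))       # probabilities constant
```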

10.4 Expectation Values of Stationary and Non-Stationary States

Consider two states described by $\psi(x, 0) = \phi_n(x)$ and $\psi(x, t) = \sum_j C_j(t)\,\phi_j(x)$. The expectation value of some operator B in each of these states can be written as:
\[ \langle\alpha, t|\,B\,|\alpha, t\rangle = \langle\alpha, t_0|\,e^{\frac{i}{\hbar}E_n(t - t_0)}\,B\,e^{-\frac{i}{\hbar}E_n(t - t_0)}\,|\alpha, t_0\rangle = \langle\alpha, t_0|\,B\,|\alpha, t_0\rangle \]
\[ \langle\alpha, t|\,B\,|\alpha, t\rangle = \sum_{j,k} C_j^*(t_0)\,C_k(t_0)\,e^{\frac{i}{\hbar}(E_j - E_k)(t - t_0)}\,\langle a_j, t_0|\,B\,|a_k, t_0\rangle \]
Note the difference. In the case that a state is made up of a single stationary state, the expectation value of that state for some operator is independent of time. However, if the state is made up of a superposition of several states, then the expectation value will vary depending on the energies of the states. Summarizing these results:
\[ \psi(x, 0) = \phi_n(x) \quad\Longrightarrow\quad \langle\alpha, t|\,B\,|\alpha, t\rangle = \langle\alpha, t_0|\,B\,|\alpha, t_0\rangle \]
\[ \psi(x, t) = \sum_j C_j(t)\,\phi_j(x) \quad\Longrightarrow\quad \langle\alpha, t|\,B\,|\alpha, t\rangle = \sum_{j,k} C_j^*(t_0)\,C_k(t_0)\,e^{\frac{i}{\hbar}(E_j - E_k)(t - t_0)}\,\langle a_j, t_0|\,B\,|a_k, t_0\rangle \]

Wednesday - 10/1

10.5 Spin Interactions - Evolution of Spin States

Consider the Hamiltonian for an electron in a magnetic field oriented in the z direction. For such a system:
\[ H = \frac{p^2}{2m} + H_{spin} = \frac{p^2}{2m} - \frac{e}{mc}\,B\,S_z \]
The eigenkets of this operator can be written as $|\vec p\,', \pm\rangle$. This gives:
\[ H\,|\vec p\,', \pm\rangle = \left[\frac{p'^2}{2m} \mp \frac{e\hbar}{2mc}\,B\right]|\vec p\,', \pm\rangle \]
Assuming that $p'$ is very small, we can neglect the momentum of the particle and simplify the above in terms of the new variables $\omega = \frac{|e|B}{mc}$ and $E_\pm = \pm\frac{\hbar\omega}{2}$:
\[ H_s = \omega\,S_z; \qquad H_s\,|\pm\rangle = E_\pm\,|\pm\rangle \]
As seen previously, if some state $|\alpha\rangle$ is a single stationary state, then time evolution simply adds a phase $e^{i\theta}$; however, if $|\alpha\rangle$ is made up of a superposition of several states, then the probability of measuring any one of those states will vary in time. In our case:
\[ |\alpha, t = 0\rangle = c_+|+\rangle + c_-|-\rangle \quad\Longrightarrow\quad |\alpha, t\rangle = c_+\,e^{-\frac{i}{\hbar}E_+ t}\,|+\rangle + c_-\,e^{-\frac{i}{\hbar}E_- t}\,|-\rangle \]
If we let the initial state be $|S_x, +\rangle$, then the initial and evolved states are:
\[ |\alpha, t = 0\rangle = |S_x, +\rangle = \frac{1}{\sqrt 2}|+\rangle + \frac{1}{\sqrt 2}|-\rangle \]
\[ |\alpha, t\rangle = \frac{1}{\sqrt 2}\,e^{-\frac{i}{\hbar}E_+ t}\,|+\rangle + \frac{1}{\sqrt 2}\,e^{-\frac{i}{\hbar}E_- t}\,|-\rangle \]
Now, we can calculate the probability that a measurement of the system at some later time t finds it in the state $|S_x, \pm\rangle$:
\[ |\langle S_x, \pm|\alpha, t\rangle|^2 = \left|\left[\frac{1}{\sqrt 2}\langle +| \pm \frac{1}{\sqrt 2}\langle -|\right]\left[\frac{1}{\sqrt 2}\,e^{-\frac{i}{\hbar}E_+ t}\,|+\rangle + \frac{1}{\sqrt 2}\,e^{-\frac{i}{\hbar}E_- t}\,|-\rangle\right]\right|^2 = \left|\frac{1}{2}\left(e^{-\frac{i}{\hbar}E_+ t} \pm e^{-\frac{i}{\hbar}E_- t}\right)\right|^2 \]
Plugging in our definition $E_\pm = \pm\frac{\hbar\omega}{2}$, expanding each exponential as $\cos \mp i\sin$, and simplifying:
\[ P_+(t) = \cos^2\left(\frac{\omega t}{2}\right); \qquad P_-(t) = \sin^2\left(\frac{\omega t}{2}\right) \]
Next, we can calculate the expectation values of $S_x$, $S_y$, and $S_z$. We'll do the first now; the other two are homework, but the results are given below.
\[ \langle S_x\rangle = \langle\alpha, t|\,S_x\,|\alpha, t\rangle = \langle\alpha, t|\,\frac{\hbar}{2}\left[|S_x, +\rangle\langle S_x, +| - |S_x, -\rangle\langle S_x, -|\right]|\alpha, t\rangle \]
\[ = \frac{\hbar}{2}\left[|\langle S_x, +|\alpha, t\rangle|^2 - |\langle S_x, -|\alpha, t\rangle|^2\right] = \frac{\hbar}{2}\left[\cos^2\left(\frac{\omega t}{2}\right) - \sin^2\left(\frac{\omega t}{2}\right)\right] \]
So, in total:
\[ \langle S_x\rangle = \frac{\hbar}{2}\cos(\omega t); \qquad \langle S_y\rangle = \frac{\hbar}{2}\sin(\omega t); \qquad \langle S_z\rangle = 0 \]
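The precession formulas can be verified by evolving the state directly. A minimal sketch (not from the notes; it assumes $\hbar = 1$ and a hypothetical $\omega = 3$), exploiting the fact that $e^{-iHt/\hbar}$ is diagonal in the $S_z$ basis:

```python
import numpy as np

hbar, omega = 1.0, 3.0
Sz = hbar / 2 * np.diag([1.0, -1.0])
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
H = omega * Sz

psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |Sx,+>

for t in (0.0, 0.4, 1.1):
    # U(t) = exp(-i H t / hbar) is diagonal in this basis
    U = np.diag(np.exp(-1j * np.diag(H) * t / hbar))
    psi = U @ psi0
    P_plus = abs(psi0.conj() @ psi) ** 2
    print(np.isclose(P_plus, np.cos(omega * t / 2) ** 2))       # cos^2(wt/2)
    Sx_t = (psi.conj() @ Sx @ psi).real
    print(np.isclose(Sx_t, hbar / 2 * np.cos(omega * t)))       # (hbar/2)cos(wt)
```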

11 Energy-Time Uncertainty - An Overview

Consider some system in an initial state built from a narrow band of energy eigenkets:
\[ |\alpha, t = 0\rangle = \sum_{E'} C_{E'}\,|E'\rangle = \int_{E'_0 - \Delta E'}^{E'_0 + \Delta E'} dE'\, C_{E'}\,|E'\rangle \]
Let $\Delta t$ be the lifetime of the state. That is, after some amount of time $\Delta t$, the state will no longer be described by the $|\alpha, t = 0\rangle$ given above. The state after some time t can be written as:
\[ |\alpha, t\rangle = \int_{E'_0 - \Delta E'}^{E'_0 + \Delta E'} dE'\, C_{E'}\,e^{-\frac{i}{\hbar}E' t}\,|E'\rangle \]
Now consider the correlation probability amplitude in terms of the lifetime:
\[ \langle\alpha, t = 0|\alpha, t\rangle = \int_{E'_0 - \Delta E'}^{E'_0 + \Delta E'} dE'\, |C_{E'}|^2\,e^{-\frac{i}{\hbar}E' t} \]
Friday - 10/3
From the above, if we define the shift in energy as $\delta E' = E' - E'_0$, then we can write the relation as:
\[ \langle\alpha, t = 0|\alpha, t\rangle = e^{-\frac{i}{\hbar}E'_0 t}\int dE'\, |C_{E'}|^2\,e^{-\frac{i}{\hbar}\delta E'\, t} \]
From this, an important result can be determined. First, though, consider the above for a stationary state; that is, $\delta E' = 0$:
\[ |\langle\alpha, t = 0|\alpha, t\rangle| = \left|e^{-\frac{i}{\hbar}E'_0 t}\int dE'\, |C_{E'}|^2\right| = 1 \]
Thus, if $|\alpha\rangle$ is a single energy eigenket, then the energy is determined indefinitely and the probability of finding the system in the same state after some time t is always 1. Secondly, consider the case that $\delta E' \to \infty$. The exponential then behaves like:
\[ \cos\left(\frac{\delta E'}{\hbar}t\right) - i\sin\left(\frac{\delta E'}{\hbar}t\right) \]
As the period of each oscillation goes to zero, $\Delta t$ also goes to zero. Thus for $\delta E' \to \infty$, the lifetime of the state $\Delta t \to 0$. From this, we can see the known result:
\[ \Delta E\,\Delta t \gtrsim \hbar \]

12 Schrödinger and Heisenberg Interpretations of Time Evolution

Consider the evolution of the expectation value of an operator X. We can write this as:
\[ \langle X\rangle(t) = \langle\alpha, 0|\,U^\dagger(t, 0)\,X\,U(t, 0)\,|\alpha, 0\rangle \]
There are two interpretations of this expectation value. According to the Schrödinger interpretation, the ket $|\alpha\rangle$ evolves in time and the operator X is time independent. In this case, the above can be written as:
\[ \langle\alpha, 0|\,U^\dagger(t, 0)\,X\,U(t, 0)\,|\alpha, 0\rangle = \langle\alpha, t|\,X^{(S)}\,|\alpha, t\rangle; \qquad |\alpha, t\rangle = U(t, 0)\,|\alpha, 0\rangle \]
According to the Heisenberg interpretation, the state $|\alpha\rangle$ is constant in time and the operator $X^{(H)}$ is what evolves in time. With these assumptions, one can write the above as:
\[ \langle\alpha, 0|\,U^\dagger(t, 0)\,X\,U(t, 0)\,|\alpha, 0\rangle = \langle\alpha|\,X^{(H)}(t)\,|\alpha\rangle; \qquad X^{(H)}(t) = U^\dagger(t, 0)\,X\,U(t, 0) \]
We will do some analysis working with the Heisenberg interpretation. Consider the time evolution of some operator A. We can describe this as:
\[ A^{(H)}(t) = U^\dagger(t, 0)\,A^{(H)}(0)\,U(t, 0) \]
\[ \frac{d}{dt}A^{(H)}(t) = \left[\frac{d}{dt}U^\dagger(t, 0)\right]A^{(H)}(0)\,U(t, 0) + U^\dagger(t, 0)\,A^{(H)}(0)\left[\frac{d}{dt}U(t, 0)\right] \]
However, recall that the time derivative of the time evolution operator is known in terms of the Hamiltonian:
\[ i\hbar\,\frac{d}{dt}U = H\,U; \qquad -i\hbar\,\frac{d}{dt}U^\dagger = U^\dagger H \]
And so we can write the above time evolution as:
\[ i\hbar\,\frac{d}{dt}A^{(H)}(t) = U^\dagger A^{(H)}(0)\,U H - H\,U^\dagger A^{(H)}(0)\,U \quad\Longrightarrow\quad i\hbar\,\frac{d}{dt}A^{(H)}(t) = [A^{(H)}(t), H] \]
Thus the time derivative of an operator is determined by its commutator with the Hamiltonian for the system. As an example, consider the Hamiltonian of a free particle, $H = \frac{p^2}{2m}$. Classically, we can find equations of motion from Hamilton's equations:
\[ \frac{\partial x}{\partial t} = \frac{\partial H}{\partial p} = \frac{p}{m}; \qquad \frac{\partial p}{\partial t} = -\frac{\partial H}{\partial x} = 0 \]
Alternately, we can find the same results quantum mechanically:
\[ i\hbar\,\frac{d}{dt}x(t) = [x, H] = \left[x, \frac{p\,p}{2m}\right] = \frac{[x, p]\,p + p\,[x, p]}{2m} = \frac{2\,i\hbar\,p}{2m} \quad\Longrightarrow\quad \frac{dx}{dt} = \frac{p}{m} \]
\[ i\hbar\,\frac{d}{dt}p(t) = [p, H] = \left[p, \frac{p\,p}{2m}\right] = 0 \]
Monday - 10/6

12.1 Ehrenfest Theorem

According to the theorem proposed by Ehrenfest, the expectation values of physical observables should follow their classical counterparts' behavior. This is evident in the position and momentum operators. Consider a free particle, $H = \frac{p^2}{2m}$:
\[ i\hbar\,\frac{dx}{dt} = \left[x, \frac{p^2}{2m}\right] \;\Longrightarrow\; \frac{dx}{dt} = \frac{p}{m}; \qquad i\hbar\,\frac{dp}{dt} = \left[p, \frac{p^2}{2m}\right] = 0 \]
\[ x(t) = x_0 + \frac{p}{m}\,t; \qquad p(t) = p_0 \]
Next, consider the commutator of the position with its initial value and the resulting uncertainty relation:
\[ [x(t), x_0] = \left[x_0 + \frac{p}{m}\,t,\; x_0\right] = -\frac{i\hbar}{m}\,t \]
\[ \big\langle(\Delta x(t))^2\big\rangle\,\big\langle(\Delta x_0)^2\big\rangle \ge \frac{\hbar^2 t^2}{4m^2} \]
This result is important for understanding the behavior of a system. According to this, the dispersion of a wave packet expands in time as:
\[ \big\langle(\Delta x(t))^2\big\rangle \gtrsim \frac{\hbar^2 t^2}{4m^2\,\big\langle(\Delta x_0)^2\big\rangle} \]
so the width of the packet grows linearly with t at large times. Now consider a particle moving in some potential described by $V(\vec x)$. The Hamiltonian becomes $H = \frac{p^2}{2m} + V(\vec x)$, and:
\[ i\hbar\,\frac{dx_i}{dt} = \left[x_i, \frac{p^2}{2m} + V(\vec x)\right] \quad\Longrightarrow\quad \frac{dx_i}{dt} = \frac{p_i}{m} \]
\[ i\hbar\,\frac{dp_i}{dt} = \left[p_i, \frac{p^2}{2m} + V(\vec x)\right] = -i\hbar\,\frac{\partial V}{\partial x_i} \]
\[ \frac{d\vec x}{dt} = \frac{\vec p}{m}; \qquad \frac{d\vec p}{dt} = -\nabla V(\vec x) \quad\Longrightarrow\quad m\,\frac{d^2\vec x}{dt^2} = -\nabla V(\vec x) \]
Taking expectation values, this result confirms the Ehrenfest theorem. However, it only holds for certain systems, as complications in the Hamiltonian can cause differences to arise. This also leads to a useful result for calculating how expectation values vary in time. Consider some operator A:
\[ \frac{d}{dt}\langle\alpha|\,A\,|\alpha\rangle = \frac{1}{i\hbar}\,\langle\alpha|\,[A, H]\,|\alpha\rangle \quad\Longrightarrow\quad \left\langle\frac{dA}{dt}\right\rangle = \left\langle\frac{1}{i\hbar}[A, H]\right\rangle \]
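This last relation can be checked numerically for a generic finite-dimensional system. A minimal sketch (not from the notes; it assumes $\hbar = 1$ and random Hermitian matrices for H and A), comparing a finite-difference derivative of $\langle A\rangle(t)$ against $\frac{1}{i\hbar}\langle[A, H]\rangle$:

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(2)

def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = rand_herm(4)
A = rand_herm(4)
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

vals, V = np.linalg.eigh(H)
def evolve(t):
    # |psi(t)> = exp(-i H t / hbar) |psi(0)> via the spectral decomposition
    return V @ np.diag(np.exp(-1j * vals * t / hbar)) @ V.conj().T @ psi0

def expA(t):
    s = evolve(t)
    return (s.conj() @ A @ s).real

t, dt = 0.3, 1e-5
numeric = (expA(t + dt) - expA(t - dt)) / (2 * dt)  # d<A>/dt by central difference
s = evolve(t)
analytic = ((s.conj() @ (A @ H - H @ A) @ s) / (1j * hbar)).real  # <[A,H]>/(i hbar)
print(np.isclose(numeric, analytic, atol=1e-5))
```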

13 Quantum Simple Harmonic Oscillator

Consider the motion of a particle in a potential defined as $V(x) = \frac{m\omega^2}{2}x^2$. For such a system, the Hamiltonian is:
\[ H = \frac{p^2}{2m} + \frac{m\omega^2}{2}x^2 \]
In order to work through this problem, we'll define operators a and $a^\dagger$, which are the lowering/annihilation and raising/creation operators respectively:
\[ a = \sqrt{\frac{m\omega}{2\hbar}}\left(x + \frac{i}{m\omega}\,p\right); \qquad a^\dagger = \sqrt{\frac{m\omega}{2\hbar}}\left(x - \frac{i}{m\omega}\,p\right) \]
The coefficients are rigged such that $[a, a^\dagger] = 1$, which is an important result. Next, we need to write the above Hamiltonian in terms of these two operators instead of the position and momentum operators. First, note that the position and momentum operators can be written in terms of the raising and lowering operators as:
\[ x = \sqrt{\frac{\hbar}{2m\omega}}\left(a + a^\dagger\right); \qquad p = i\sqrt{\frac{m\hbar\omega}{2}}\left(a^\dagger - a\right) \]
We could plug this into the above Hamiltonian and simplify it; however, we can also check the value of $a^\dagger a$ and find that it is quite useful:
\[ a^\dagger a = \frac{m\omega}{2\hbar}\left(x^2 + \frac{p^2}{m^2\omega^2} + \frac{i}{m\omega}\,xp - \frac{i}{m\omega}\,px\right) = \frac{1}{\hbar\omega}\left(\frac{m\omega^2}{2}x^2 + \frac{p^2}{2m}\right) - \frac{1}{2} \]
Multiply this result by $\hbar\omega$ and it is clear that:
\[ H = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right) \]
If we define a new operator $N \equiv a^\dagger a$, which we refer to as the number operator, then we can note a few things about the system:
\[ N^\dagger = N; \qquad [N, a] = -a; \qquad [N, a^\dagger] = a^\dagger; \qquad [H, N] = 0 \]
The first of these results shows that N is a Hermitian operator, meaning that its eigenvalues n are real. Beyond that, we can show that for the ket $|\gamma\rangle = a|n\rangle$ the inner product has to be non-negative. Thus:
\[ \langle\gamma|\gamma\rangle \ge 0; \qquad \langle\gamma|\gamma\rangle = \langle n|\,a^\dagger a\,|n\rangle = \langle n|\,N\,|n\rangle = n \quad\Longrightarrow\quad n \ge 0 \]
The last result above shows that because N and the Hamiltonian commute, their eigenvalues are related. Let's check the relation:
\[ N|n\rangle = n|n\rangle; \qquad H|n\rangle = \hbar\omega\left(N + \frac{1}{2}\right)|n\rangle = \hbar\omega\left(n + \frac{1}{2}\right)|n\rangle = E_n|n\rangle \quad\Longrightarrow\quad E_n = \hbar\omega\left(n + \frac{1}{2}\right) \]
Finally, we can derive exactly what $a|n\rangle$ is. First, though, consider:
\[ [a, N] = [a, a^\dagger a] = a^\dagger[a, a] + [a, a^\dagger]\,a = a \]
Thus, we can write N acting on $a|n\rangle$ as:
\[ N(a|n\rangle) = (aN - a)|n\rangle = (n - 1)\,a|n\rangle \quad\Longrightarrow\quad a|n\rangle = c_n\,|n - 1\rangle \]
\[ \langle n|\,a^\dagger a\,|n\rangle = |c_n|^2\,\langle n - 1|n - 1\rangle = n \quad\Longrightarrow\quad c_n = \sqrt{n} \]
The same analysis can be done for the raising operator, and this gives the results:
\[ a|n\rangle = \sqrt{n}\,|n - 1\rangle; \qquad a^\dagger|n\rangle = \sqrt{n + 1}\,|n + 1\rangle \]
With this result we can check the valid values of n. Consider starting with values of n = 3 and n = 3/2 and operating with a to lower the index:
\[ a|3\rangle \propto |2\rangle; \quad a|2\rangle \propto |1\rangle; \quad a|1\rangle \propto |0\rangle; \quad a|0\rangle = 0 \]
\[ a\left|\tfrac{3}{2}\right\rangle \propto \left|\tfrac{1}{2}\right\rangle; \quad a\left|\tfrac{1}{2}\right\rangle \propto \left|-\tfrac{1}{2}\right\rangle \]
The result that $a|0\rangle = 0$ means that the lowest value n can have is n = 0. Further, because the lowering operator drives $|n\rangle$ to negative values of n if n is not an integer, n must be a whole number. Thus the known result is found that the index n is an integer greater than or equal to 0. Now, consider that the ground state $|0\rangle$ is known. We can use the raising operator to find the higher states. Consider the first few and we will generalize the result:
\[ a^\dagger|0\rangle = \sqrt{0 + 1}\,|1\rangle \quad\Longrightarrow\quad |1\rangle = \frac{a^\dagger|0\rangle}{\sqrt{1}} \]
\[ a^\dagger|1\rangle = \sqrt{1 + 1}\,|2\rangle \quad\Longrightarrow\quad |2\rangle = \frac{a^\dagger|1\rangle}{\sqrt{2}} = \frac{(a^\dagger)^2|0\rangle}{\sqrt{2}} \]
\[ a^\dagger|2\rangle = \sqrt{2 + 1}\,|3\rangle \quad\Longrightarrow\quad |3\rangle = \frac{a^\dagger|2\rangle}{\sqrt{3}} = \frac{(a^\dagger)^3|0\rangle}{\sqrt{2\cdot 3}} \]
\[ |n\rangle = \frac{(a^\dagger)^n|0\rangle}{\sqrt{n!}} \]
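The ladder algebra can be checked with truncated matrices in the number basis. A minimal sketch (not from the notes; the 12-dimensional truncation is an approximation, so $[a, a^\dagger] = 1$ only holds away from the cutoff):

```python
import numpy as np

hbar = omega = 1.0
N = 12  # truncated number-basis dimension

n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)  # a|n> = sqrt(n)|n-1>
adag = a.conj().T

print(np.allclose(a @ np.eye(N)[:, 3], np.sqrt(3) * np.eye(N)[:, 2]))  # a|3>
print(np.allclose(adag @ np.eye(N)[:, 3], 2 * np.eye(N)[:, 4]))        # a†|3>

H = hbar * omega * (adag @ a + np.eye(N) / 2)
print(np.allclose(np.diag(H), hbar * omega * (n + 0.5)))  # E_n = hw(n + 1/2)

# [a, a†] = 1 holds except in the last row/column of the truncated space
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N)[:-1, :-1]))
```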

Wednesday - 10/8

13.1 Matrix Elements of Operators and Expectation Values

Recall that the matrix elements of an operator are defined as $\langle a_j|\,A\,|a_i\rangle$ in terms of some set of base kets $|a_i\rangle$. If we take the energy eigenkets of the harmonic system as a set of base kets, then we can write the matrix elements of the operators a and $a^\dagger$ as:
\[ \langle n'|\,a\,|n\rangle = \sqrt{n}\,\langle n'|n - 1\rangle = \sqrt{n}\,\delta_{n', n-1}; \qquad \langle n'|\,a^\dagger\,|n\rangle = \sqrt{n + 1}\,\langle n'|n + 1\rangle = \sqrt{n + 1}\,\delta_{n', n+1} \]
Note that in the case $n' = n$ both matrix elements are zero. We can use these results to write the matrix elements of the position and momentum operators:
\[ \langle n'|\,x\,|n\rangle = \sqrt{\frac{\hbar}{2m\omega}}\left[\langle n'|\,a\,|n\rangle + \langle n'|\,a^\dagger\,|n\rangle\right] = \sqrt{\frac{\hbar}{2m\omega}}\left[\sqrt{n}\,\delta_{n', n-1} + \sqrt{n + 1}\,\delta_{n', n+1}\right] \]
\[ \langle n'|\,p\,|n\rangle = i\sqrt{\frac{m\hbar\omega}{2}}\left[\langle n'|\,a^\dagger\,|n\rangle - \langle n'|\,a\,|n\rangle\right] = i\sqrt{\frac{m\hbar\omega}{2}}\left[\sqrt{n + 1}\,\delta_{n', n+1} - \sqrt{n}\,\delta_{n', n-1}\right] \]
\[ \langle n'|\,x^2\,|n\rangle = \frac{\hbar}{2m\omega}\left[\langle n'|\,a^2\,|n\rangle + \langle n'|\,a^\dagger a\,|n\rangle + \langle n'|\,a a^\dagger\,|n\rangle + \langle n'|\,(a^\dagger)^2\,|n\rangle\right] \]
Note that when $n = n'$ we obtain the expectation value of the operator for a system in the $E_n$ energy state. Consider a system in the ground state (n = 0): the expectation values of both the position and momentum are 0. Consider the last equation, though:
\[ \langle 0|\,x^2\,|0\rangle = \frac{\hbar}{2m\omega}\left[\langle 0|\,a^2\,|0\rangle + \langle 0|\,a^\dagger a\,|0\rangle + \langle 0|\,a a^\dagger\,|0\rangle + \langle 0|\,(a^\dagger)^2\,|0\rangle\right] \]
The first and last terms go to zero since we know that $a|0\rangle = 0$ and therefore also $\langle 0|\,a^\dagger = 0$. This leaves:
\[ \langle 0|\,x^2\,|0\rangle = \frac{\hbar}{2m\omega}\left[\langle 0|\,a^\dagger a\,|0\rangle + \langle 0|\,a a^\dagger\,|0\rangle\right] \]
Now, in order to get a result for the remaining terms, consider the Hamiltonian for this system and the fact that $a a^\dagger - a^\dagger a = 1$:
\[ H = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right) = \frac{\hbar\omega}{2}\left(a^\dagger a + a a^\dagger\right) \quad\Longrightarrow\quad a^\dagger a + a a^\dagger = \frac{2H}{\hbar\omega} \]
And so the expectation value of $x^2$ for the ground state is then:
\[ \langle 0|\,x^2\,|0\rangle = \frac{\hbar}{2m\omega}\,\langle 0|\,\frac{2H}{\hbar\omega}\,|0\rangle = \frac{\hbar}{2m\omega} \]
Repeating this for the momentum yields a result of:
\[ \langle 0|\,p^2\,|0\rangle = \frac{m\hbar\omega}{2} \]
Plugging this into the uncertainty relation gives, for the ground state of the quantum harmonic oscillator:
\[ \left.\big\langle(\Delta x)^2\big\rangle\big\langle(\Delta p)^2\big\rangle\right|_{n=0} = \frac{\hbar^2}{4} \]
For homework, we are to show that for any energy state $E_n$ the uncertainty relation becomes:
\[ \big\langle(\Delta x)^2\big\rangle\big\langle(\Delta p)^2\big\rangle = \left(n + \frac{1}{2}\right)^2\hbar^2 \]
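This uncertainty product can be confirmed with the same truncated-matrix representation. A minimal sketch (not from the notes; $\hbar = m = \omega = 1$, with a cutoff large enough that low-n results are unaffected):

```python
import numpy as np

hbar = m = omega = 1.0
N = 30  # truncation dimension

a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T
x = np.sqrt(hbar / (2 * m * omega)) * (a + adag)
p = 1j * np.sqrt(m * hbar * omega / 2) * (adag - a)

for n in range(3):
    e = np.eye(N)[:, n]  # the number eigenstate |n>
    var_x = (e @ x @ x @ e).real - (e @ x @ e).real ** 2
    var_p = (e @ p @ p @ e).real - (e @ p @ e).real ** 2
    # expect (n + 1/2)^2 hbar^2 for each n
    print(np.isclose(var_x * var_p, (n + 0.5) ** 2 * hbar ** 2))
```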

13.2 Deriving the Wave Function

Writing the wave function for the quantum harmonic oscillator usually involves building solutions out of polynomial expansions. Using operator methods we can build the solutions without solving a complicated differential equation. Consider first that the wave functions are defined as:
\[ \psi_n(x) = \langle x|n\rangle \quad\Longrightarrow\quad \psi_0(x) = \langle x|0\rangle \]
And since we know that $a|0\rangle = 0$, we can set up the differential equation:
\[ \langle x|\,a\,|0\rangle = \sqrt{\frac{m\omega}{2\hbar}}\,\langle x|\left(x + \frac{i\,p}{m\omega}\right)|0\rangle = 0 \]
Given the relations
\[ \langle x|\,\hat x\,|0\rangle = x\,\psi_0(x); \qquad \langle x|\,\hat p\,|0\rangle = -i\hbar\,\frac{d}{dx}\psi_0(x) \]
the above becomes a differential equation of the form:
\[ \left(x + x_0^2\,\frac{d}{dx}\right)\psi_0(x) = 0; \qquad x_0^2 = \frac{\hbar}{m\omega} \]
This has a solution of the form:
\[ \psi_0(x) = N\,e^{-\frac{x^2}{2 x_0^2}} \]
And noting that a normalized Gaussian distribution has the form
\[ f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x - \mu)^2}{2\sigma^2}} \]
we know that $|\psi_0(x)|^2$ should be normalized in the same way. Therefore the solution for the ground state is:
\[ \psi_0(x) = \frac{1}{\pi^{1/4}\sqrt{x_0}}\,e^{-\frac{x^2}{2 x_0^2}} \]
Given the relation derived at the end of last class, we should be able to write the wave function for any energy level in terms of this ground state. Consider:
\[ \langle x|n\rangle = \langle x|\,\frac{a^\dagger}{\sqrt{n}}\,|n - 1\rangle = \langle x|\,\frac{(a^\dagger)^n}{\sqrt{n!}}\,|0\rangle \]
In terms of wave functions, using $a^\dagger \to \frac{1}{\sqrt 2\, x_0}\left(x - x_0^2\,\frac{d}{dx}\right)$, this can be written as:
\[ \psi_n(x) = \frac{1}{\sqrt{2n}\;x_0}\left(x - x_0^2\,\frac{d}{dx}\right)\psi_{n-1}(x) \]
\[ \psi_n(x) = \frac{1}{\sqrt{2^n\,n!}}\;\frac{1}{x_0^n}\left(x - x_0^2\,\frac{d}{dx}\right)^n\psi_0(x) \]
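The recursion can be checked numerically for the first excited state, for which the closed form $\psi_1(x) = \sqrt{2}\,(x/x_0)\,\psi_0(x)$ is known. A minimal sketch (not from the notes, assuming $\hbar = m = \omega = 1$):

```python
import numpy as np

hbar = m = omega = 1.0
x0 = np.sqrt(hbar / (m * omega))
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]

psi0 = (1 / (np.pi**0.25 * np.sqrt(x0))) * np.exp(-x**2 / (2 * x0**2))

# psi_1 = (1/(sqrt(2) x0)) (x - x0^2 d/dx) psi_0, with d/dx via np.gradient
psi1 = (x * psi0 - x0**2 * np.gradient(psi0, dx)) / (np.sqrt(2) * x0)

exact = np.sqrt(2) * (x / x0) * psi0          # known closed form for psi_1
print(np.allclose(psi1, exact, atol=1e-4))
print(np.isclose(np.sum(psi1**2) * dx, 1.0, atol=1e-4))  # still normalized
```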

13.3 Time Evolution

From what we've derived so far about time evolution, we can quickly write that for some state $|\alpha\rangle$ made up of a linear combination of stationary states, the time evolved state will look like:
\[ |\alpha, t\rangle = \sum_n C_n\,e^{-\frac{i}{\hbar}E_n t}\,|n\rangle = \sum_n C_n\,e^{-i\left(n + \frac{1}{2}\right)\omega t}\,|n\rangle \]
Alternately, we could look at the operators themselves (via the Heisenberg method) and analyze their time evolution. However, instead of plugging the operators x and p into the Heisenberg relation, let's use the raising and lowering operators:
\[ i\hbar\,\frac{da}{dt} = [a, H] = \hbar\omega\,a \;\Longrightarrow\; a(t) = a_0\,e^{-i\omega t}; \qquad i\hbar\,\frac{da^\dagger}{dt} = [a^\dagger, H] = -\hbar\omega\,a^\dagger \;\Longrightarrow\; a^\dagger(t) = a^\dagger_0\,e^{i\omega t} \]
Plugging these into the above relations for position and momentum gives results of:
\[ x(t) = x_0\cos(\omega t) + \frac{p_0}{m\omega}\sin(\omega t); \qquad p(t) = p_0\cos(\omega t) - m\omega\,x_0\sin(\omega t) \]
With these operators defined we can check the expectation values. We find that both $\langle x\rangle$ and $\langle p\rangle$ return 0 in any state $|n\rangle$. However, it is worth noting that if a state is built correctly from the stationary states, then it is possible to construct a state $|\lambda\rangle$ such that the expectation values follow the classical values. Such states are called coherent states; they are of interest for quantum optics and laser physics.
Friday - 10/10 - NO CLASS
Monday - 10/13
The result of plugging the time evolution into the position and momentum is that $\langle x\rangle = \langle n|\,x(t)\,|n\rangle = 0$, which appears inconsistent with Ehrenfest's theorem that expectation values follow their classical counterparts' behavior. However, there is a second method to derive the above which we should note:
\[ A(t) = U^\dagger A(0)\,U; \qquad U = e^{-\frac{i}{\hbar}Ht} \quad\Longrightarrow\quad x(t) = e^{\frac{i}{\hbar}Ht}\,x(0)\,e^{-\frac{i}{\hbar}Ht} \]
After some algebra, it is noted that this gives the same form for x(t) as the previous derivation. However, we can make a state built from the energy levels, which we'll call $|\lambda\rangle$, for which the expectation values of the position and momentum follow the behavior of the classical position and momentum. Such a state turns out to be an eigenket of the lowering operator. That is:
\[ \langle\lambda|\,x(t)\,|\lambda\rangle = A\cos(\omega t) + B\sin(\omega t); \qquad a\,|\lambda\rangle = \lambda\,|\lambda\rangle \]
One interesting thing to note (which will be shown explicitly in the homework) is that there are no eigenkets of the operator $a^\dagger$.

13.4 Review of Quantum Simple Harmonic Oscillator

Definitions of operators:

Hamiltonian:
\[ H = \frac{p^2}{2m} + \frac{m\omega^2}{2}x^2 = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right) = \frac{\hbar\omega}{2}\left(a^\dagger a + a a^\dagger\right) \]
Lowering (annihilation), raising (creation), and number operators:
\[ a = \sqrt{\frac{m\omega}{2\hbar}}\left(x + \frac{i}{m\omega}\,p\right); \qquad a^\dagger = \sqrt{\frac{m\omega}{2\hbar}}\left(x - \frac{i}{m\omega}\,p\right); \qquad N = a^\dagger a \]
Position and momentum:
\[ x = \sqrt{\frac{\hbar}{2m\omega}}\left(a + a^\dagger\right); \qquad p = i\sqrt{\frac{m\hbar\omega}{2}}\left(a^\dagger - a\right) \]
States:
\[ |n\rangle = \frac{(a^\dagger)^n|0\rangle}{\sqrt{n!}}; \qquad \psi_0(x) = \frac{1}{\pi^{1/4}\sqrt{x_0}}\,e^{-\frac{x^2}{2 x_0^2}}; \qquad \psi_n(x) = \frac{1}{\sqrt{2^n\,n!}\;x_0^n}\left(x - x_0^2\,\frac{d}{dx}\right)^n\psi_0(x) \]
Uncertainty relation:
\[ \big\langle(\Delta x)^2\big\rangle\big\langle(\Delta p)^2\big\rangle = \left(n + \frac{1}{2}\right)^2\hbar^2 \]
Commutation relations:
\[ [a, a^\dagger] = 1; \qquad [N, a] = -a; \qquad [N, a^\dagger] = a^\dagger; \qquad [N, H] = 0 \]
Time evolution of operators:
\[ i\hbar\,\frac{da}{dt} = [a, H] \;\Longrightarrow\; a(t) = a_0\,e^{-i\omega t}; \qquad i\hbar\,\frac{da^\dagger}{dt} = [a^\dagger, H] \;\Longrightarrow\; a^\dagger(t) = a^\dagger_0\,e^{i\omega t} \]
\[ x(t) = x_0\cos(\omega t) + \frac{p_0}{m\omega}\sin(\omega t); \qquad p(t) = p_0\cos(\omega t) - m\omega\,x_0\sin(\omega t) \]

14 Stationary States - Bound and Continuous Solutions

Consider the time dependent Schrödinger equation and how it simplifies for stationary states (as is done in undergraduate quantum mechanics):
\[ \langle x'|\,i\hbar\,\frac{\partial}{\partial t}\,|\alpha, t\rangle = \langle x'|\left[\frac{p^2}{2m} + V(x)\right]|\alpha, t\rangle \quad\Longrightarrow\quad i\hbar\,\frac{\partial}{\partial t}\psi(x', t) = \left[-\frac{\hbar^2}{2m}\nabla^2 + V(x')\right]\psi(x', t) \]
For stationary states this becomes the familiar starting point of many problems:
\[ \left[-\frac{\hbar^2}{2m}\nabla^2 + V(x)\right]\psi(x) = E\,\psi(x); \qquad \psi(x, t) = e^{-\frac{i}{\hbar}Et}\,\psi(x) \]
Redefining the above wave function as u(x), we can write this equation in the form:
\[ \frac{d^2 u}{dx^2} = \kappa^2\,u(x); \qquad \kappa^2 = \frac{2m}{\hbar^2}\left[V(x) - E\right] \]
Consider a region in space where $V(x) > E$ and therefore $\kappa^2 > 0$. In this case, the solutions of the above take the form:
\[ u(x) \sim e^{\pm\kappa x} \]
In order for our solution to be physical, it cannot blow up as $|x| \to \infty$. This means that of the above, only the exponentially decaying solution survives; which sign is picked is then decided by where along the x axis the region exists. Such states are called bound states, and mathematically such eigenfunctions always have discrete eigenvalues. That means that some function f(x) made of several of these states can be written in the form:
\[ f(x) = \sum_n C_n\,e^{-\kappa_n x} \]
Alternately, we could consider a region of space where $V(x) < E$ and therefore $\kappa^2 < 0$. In this case, the solutions take the form:
\[ u(x) \sim e^{\pm i k x} \]
These solutions simply oscillate at a rate dependent on k; the exact rate of oscillation is dependent on the energy and potential energy in the region. Mathematically speaking, the eigenvalues of these eigenfunctions form a continuous spectrum, and instead of a summation of accepted states an integral transform is used. In this case, the combination covers some region of energy values:
\[ f(x) = \int dk\,C(k)\,e^{i k x} \]
Beginning next class, we'll be looking at the delta function potential given as $V(x) = -V_0\,\delta(x)$. That means that we need to solve the equation:
\[ \frac{d^2 u}{dx^2} = \frac{2m}{\hbar^2}\left[-V_0\,\delta(x) - E\right]u(x) \]
From this we can see that bound states occur only if the energy is below zero. In other cases, the solutions are continuous states.
Wednesday - 10/15
As an aside, consider a particle free to move in three dimensions in a potential that is radially
varying. For such a case,
$$\left[-\frac{\hbar^2}{2m}\nabla^2 + V(r)\right]\psi(\vec x) = E\,\psi(\vec x)$$
$$\psi_{l,m}(\vec x) = R(r)\,Y_{l,m}(\theta, \phi); \qquad u(r) = r\,R(r)$$
The remaining radial equation is:
$$\frac{d^2u}{dr^2} = \frac{2m}{\hbar^2}\left(V_{eff}(r) - E\right)u(r); \qquad V_{eff}(r) = V(r) + \frac{l(l+1)\hbar^2}{2mr^2}$$
So in this case we return to the form:
$$\frac{d^2u}{dx^2} = \kappa^2 u(x)$$

14.1  Dirac Delta Function Potential
Consider the one dimensional problem of a potential defined as V(x) = -V_0\,\delta(x). For values of
energy less than 0, the resulting states are bound in the delta function's well. This can be seen when
x \neq 0:
$$\frac{d^2u}{dx^2} = \kappa^2 u(x); \qquad \kappa^2 = -\frac{2mE}{\hbar^2}$$
Assume that the energy is negative; the solutions are:
$$u(x) = \begin{cases} A\,e^{-\kappa x} & x > 0 \\ B\,e^{+\kappa x} & x < 0 \end{cases}$$
where the continuity condition at x = 0 means that A = B. To get a value of \kappa we can integrate
the original equation about the point where the delta function contributes. That is:
$$\int_{0^-}^{0^+}\frac{d^2u}{dx^2}\,dx = \frac{2m}{\hbar^2}\int_{0^-}^{0^+}\left(-V_0\,\delta(x) - E\right)u(x)\,dx$$
$$-2\kappa = -\frac{2m}{\hbar^2}V_0 \;\Rightarrow\; \kappa = \frac{mV_0}{\hbar^2}; \qquad E = -\frac{mV_0^2}{2\hbar^2}$$
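As a quick numerical check of the matching condition, the sketch below (in units where hbar = m = 1; the value of V_0 is arbitrary) verifies that the derivative jump of u(x) = e^{-kappa|x|} at the origin equals -2mV_0/hbar^2 times u(0):

```python
import numpy as np

# Sketch (not from the notes): check the delta-well bound state numerically,
# in units where hbar = m = 1, so kappa = V0 and E = -V0**2/2.
V0 = 1.5
kappa = V0                      # kappa = m*V0/hbar**2
E = -V0**2 / 2                  # bound-state energy

x = np.linspace(-10, 10, 200001)
u = np.exp(-kappa * np.abs(x))  # A = B = 1 before normalization

# Derivative jump across x = 0 should equal -2*m*V0/hbar**2 * u(0)
h = x[1] - x[0]
i0 = len(x) // 2                # index of x = 0
up_right = (u[i0 + 1] - u[i0]) / h    # one-sided derivative, x -> 0+
up_left = (u[i0] - u[i0 - 1]) / h     # one-sided derivative, x -> 0-
jump = up_right - up_left

print(jump, -2 * V0 * u[i0])    # the two numbers should agree
```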

Next, consider the continuous solutions with energy greater than zero. If we consider a wave initially
coming in from the -x direction, then we can write the solution as:
$$u_I(x) = e^{ikx} + R\,e^{-ikx} \quad (x < 0); \qquad u_{II}(x) = T\,e^{ikx} \quad (x > 0)$$
Continuity at x = 0 requires that T = 1 + R. Again integrating at x = 0 gives the condition that:
$$ikT - ik(1 - R) = -\frac{2mV_0}{\hbar^2}T \;\Rightarrow\; R = \frac{-\frac{2mV_0}{\hbar^2}}{2ik + \frac{2mV_0}{\hbar^2}}$$
Note that this reflection coefficient has a pole at the value 2ik = -\frac{2mV_0}{\hbar^2}. This corresponds to the
bound energy state E_{pole} = -\frac{mV_0^2}{2\hbar^2}. In many experiments this is used as confirmation of a bound
state for a system. As an example, refer to the bound state of a proton and a neutron in a Rutherford
scattering type experiment. The data near the deuteron's formation energy shows a pole.
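The amplitudes above can be evaluated numerically. The sketch below (units hbar = m = 1, with the sign conventions reconstructed here) checks probability conservation |R|^2 + |T|^2 = 1 and reads off the bound-state energy from the pole of R:

```python
import numpy as np

# Sketch (illustrative, hbar = m = 1): reflection/transmission amplitudes for
# V(x) = -V0*delta(x), following the matching conditions above.
def amplitudes(k, V0):
    c = 2.0 * V0                    # 2*m*V0/hbar**2
    R = -c / (2j * k + c)
    T = 1.0 + R                     # continuity at x = 0
    return R, T

V0 = 1.3
for k in [0.3, 1.0, 4.2]:
    R, T = amplitudes(k, V0)
    print(k, abs(R)**2 + abs(T)**2)   # probability conservation: = 1

# The pole of R at 2ik = -2*V0 reproduces the bound-state energy:
k_pole = 1j * V0
E_pole = k_pole**2 / 2                # hbar^2 k^2 / 2m  ->  -V0^2/2
print(E_pole)
```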

15  The Feynman Path Integral & The Propagator

Consider two points A and B. To find the path a particle takes between those two points, classically
we would find the Lagrangian of the system and then apply it to the action integral to find the path
of least action. However, quantum mechanically it is found that many different paths are possible
outcomes. According to Feynman, all paths are possible trajectories, but each has a probability
amplitude associated with it. Consider the relation of an initial state to its final state if it is
expressed in terms of energy eigenkets,
$$\langle x''|\alpha, t\rangle = \langle x''|\,e^{-\frac{i}{\hbar}H(t-t_0)}|\alpha, t_0\rangle$$

$$\psi(x'', t) = \sum_{a'}\langle x''|\,e^{-\frac{i}{\hbar}H(t-t_0)}|a'\rangle\langle a'|\alpha, t_0\rangle = \sum_{a'}\langle x''|a'\rangle\,e^{-\frac{i}{\hbar}E_{a'}(t-t_0)}\langle a'|\alpha, t_0\rangle$$

Expanding out the last term as:

$$\langle a'|\alpha, t_0\rangle = \int d^3x'\,\langle a'|x'\rangle\langle x'|\alpha, t_0\rangle = \int d^3x'\,U_{a'}^*(x')\,\psi(x', t_0)$$

And plugging this into the above, we have

$$\psi(x'', t) = \int d^3x'\left[\sum_{a'}U_{a'}(x'')\,U_{a'}^*(x')\,e^{-\frac{i}{\hbar}E_{a'}(t-t_0)}\right]\psi(x', t_0)$$

The quantity in brackets is known as the Propagator since with it and some initial state \psi(x, t_0),
the wave function at any other time and location can be derived by the equation:

$$\psi(x'', t) = \int d^3x'\,K(x'', t; x', t_0)\,\psi(x', t_0); \qquad K(x'', t; x', t_0) = \sum_{a'}U_{a'}(x'')\,U_{a'}^*(x')\,e^{-\frac{i}{\hbar}E_{a'}(t-t_0)}$$

Friday - 10/17
It can be shown that for an initial wave function \psi(x, t_0) = \delta^3(\vec x - \vec x_0), the wave function at any later
time is simply the propagator: \psi(x, t) = K(\vec x, t; \vec x_0, t_0). Further, one finds that the above can be
written alternately as:
$$K(x'', t; x', t_0) = \langle x''|\,e^{-\frac{i}{\hbar}H(t-t_0)}|x'\rangle$$
Expanding the above in terms of eigenkets |a'\rangle and |a''\rangle one finds the exact definition of K stated
above.
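The eigenfunction sum for K can be carried out explicitly for a solvable system. The sketch below (units hbar = m = 1) builds the propagator for an infinite square well, a system chosen here purely for illustration, and uses it to evolve an initial state:

```python
import numpy as np

# Sketch (hbar = m = 1): propagator K(x'', t; x', 0) for an infinite square
# well of width L, built from its eigenfunctions, applied to an initial state.
L, N = 1.0, 60                    # well width, number of eigenstates kept
x = np.linspace(0, L, 400)

def U(n, x):                      # well eigenfunctions
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n):                         # well energies
    return (n * np.pi / L)**2 / 2.0

def K(t):                         # propagator as a matrix on the x grid
    return sum(np.outer(U(n, x), U(n, x).conj()) * np.exp(-1j * E(n) * t)
               for n in range(1, N + 1))

psi0 = (U(1, x) + U(2, x)) / np.sqrt(2.0)     # initial superposition
dx = x[1] - x[0]
t = 0.37
psi_t = K(t) @ psi0 * dx                      # psi(x, t) = int K psi0 dx'

norm = np.sum(np.abs(psi_t)**2) * dx
print(norm)                                   # norm is preserved, ~1
```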

16  Potentials and Gauge Transformations in Quantum Mechanics

Consider a free particle moving in a potential V(\vec x). For such a particle, the force acting on the
particle is defined to be \vec F = -\nabla V(\vec x). However, classically, one could add some constant value to the
potential and not change the physics of the system. Let's consider doing the same for a quantum
mechanical system and check the results. Consider two systems described by the Hamiltonians:

$$H = \frac{p^2}{2m} + V(x); \qquad \tilde H = \frac{p^2}{2m} + V(x) + V_0$$

If the systems start in the same initial state |\alpha, t_0\rangle, then after some time t, the systems will be in
states described by new time evolved kets. Noting that \tilde H = H + V_0, the states can be written as:

$$|\alpha, t\rangle = e^{-\frac{i}{\hbar}H(t-t_0)}|\alpha, t_0\rangle; \qquad |\tilde\alpha, t\rangle = e^{-\frac{i}{\hbar}\tilde H(t-t_0)}|\alpha, t_0\rangle = |\alpha, t\rangle\,e^{-\frac{i}{\hbar}V_0(t-t_0)}$$

Thus the physics is unchanged except for an overall phase factor in the system with the higher
potential. However, there are a number of experiments that can be done to confirm and show this
is the case.

16.1  Gravitational Gauge Transform

Consider an interferometer which can be tilted up into the air at some angle \delta. The two paths (which
we will denote as I and II) then differ only by having different gravitational potential. The
incident beam of neutrons is split into two beams, one of which (I) travels horizontal to the ground
to a reflector and then up an inclined plane to a second beam splitter at the other corner of the
device. The second beam (II) travels first up the incline and then across a horizontal path of equal
length to the first, but at a different height. The potentials of each beam along the paths are:
$$V_I = 0; \qquad V_{II} = mgl_2\sin\delta$$
The difference in states at the screen is then given by:
$$\psi_I = e^{-\frac{i}{\hbar}V_I T}\psi_0; \qquad \psi_{II} = e^{-\frac{i}{\hbar}V_{II}T}\psi_0$$
where we've defined T = t - t_0 as the travel time through that arm of the detector. Note that this
can be written in bracket notation in terms of inner products between positions A (the initial beam
splitter), B (the mirror on track II), C (the mirror on track I), and D (the recombining point).
$$\psi_I = \langle C, t_c|A, t_a\rangle\langle D, t_d|C, t_c\rangle; \qquad \psi_{II} = \langle B, t_b|A, t_a\rangle\langle D, t_d|B, t_b\rangle; \qquad \langle D, t_d|C, t_c\rangle = \langle B, t_b|A, t_a\rangle$$
Combining all of this we can see that the two resulting beams that recombine are related by:
$$\psi_{II} = e^{-\frac{i}{\hbar}(V_{II} - V_I)T}\,\psi_I$$
Further, recalling the definitions for V_I and V_{II} and the relations (with \lambda the reduced de Broglie
wavelength):
$$mv = p = \frac{\hbar}{\lambda} \;\Rightarrow\; T = \frac{l_1}{v} = \frac{ml_1\lambda}{\hbar}$$
Then we can write the above in terms of a general phase factor:
$$\psi_{II} = e^{-i\phi}\,\psi_I; \qquad \phi(\delta) = \frac{m^2 g\,l_1 l_2\,\lambda\sin\delta}{\hbar^2}$$

And finally, the magnitude of the resulting beam is:
$$|\psi_{tot}|^2 = |\psi_I|^2\left|1 + e^{-i\phi}\right|^2 = 4|\psi_I|^2\cos^2\left(\frac{\phi}{2}\right)$$
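The gravity-induced phase can be evaluated for laboratory-scale numbers. The values of the wavelength and arm lengths below are assumed for illustration only (not taken from the notes):

```python
import numpy as np

# Illustrative numbers (assumed, not from the notes): gravity-induced phase
# phi = m**2 * g * l1 * l2 * lam * sin(delta) / hbar**2 for a neutron beam.
hbar = 1.054571817e-34        # J s
m = 1.67492749804e-27         # neutron mass, kg
g = 9.81                      # m/s^2
lam = 1.4e-10                 # reduced de Broglie wavelength, m (assumed)
l1, l2 = 0.03, 0.02           # arm lengths, m (assumed)

for delta_deg in [0.0, 15.0, 90.0]:
    delta = np.radians(delta_deg)
    phi = m**2 * g * l1 * l2 * lam * np.sin(delta) / hbar**2
    intensity = 4.0 * np.cos(phi / 2.0)**2   # relative recombined intensity
    print(delta_deg, phi, intensity)
```

Even for centimeter-scale arms the phase runs through many fringes as the apparatus is tilted, which is why the experiment is sensitive to the gravitational potential difference.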

16.2  Electromagnetic Gauge Transforms

Consider the equations of classical electromagnetism. The physical fields \vec E and \vec B can be more
easily derived and defined in terms of the vector potential \vec A and scalar potential \phi where:
$$\vec E = -\nabla\phi - \frac{1}{c}\frac{\partial\vec A}{\partial t}; \qquad \vec B = \nabla\times\vec A$$
From these definitions, it can be seen that the following gauge transformations are allowed without
altering the resulting physical fields:
$$\vec A \to \vec A' = \vec A + \nabla\Lambda; \qquad \phi \to \phi' = \phi - \frac{1}{c}\frac{\partial\Lambda}{\partial t}$$

Monday - 10/20
Consider the Hamiltonian for a charged particle moving in an electromagnetic field. Classically, one
would write:
$$H_{cl} = \frac{1}{2m}\left(\vec p - \frac{e}{c}\vec A\right)^2 + e\phi(\vec x)$$
This is found from the Hamilton equations, which yield the Lorentz force on a particle:
$$\frac{\partial p_i}{\partial t} = -\frac{\partial H}{\partial x_i}; \qquad \frac{\partial x_i}{\partial t} = \frac{\partial H}{\partial p_i}$$
If we define the canonical momentum p_i = \frac{\partial L}{\partial\dot x_i} and the kinetic momentum \Pi_i = m\dot x_i, then we can
write the equation of motion as:
$$\frac{d\vec\Pi}{dt} = e\left[-\nabla\phi - \frac{1}{c}\frac{\partial\vec A}{\partial t}\right] + \frac{e}{c}\,\vec v\times\left(\nabla\times\vec A\right)$$
Noting that the \partial\vec A/\partial t term combines with the convective piece into the total derivative of \vec A, it can
be shown that the relation between \vec p and \vec\Pi is:
$$\vec p = \vec\Pi + \frac{e}{c}\vec A$$

And therefore the Lorentz force is:
$$\frac{d\vec\Pi}{dt} = e\left[\vec E + \frac{1}{c}\,\vec v\times\vec B\right]$$
From all of this we can finally continue with the Hamiltonian for a particle in an EM field,
$$H_{cl} = \frac{1}{2m}\left(\vec p - \frac{e}{c}\vec A\right)^2 + e\phi(\vec x)$$

Consider the commutator relations and how they are altered. Of particular interest is the commutator
of different components of \vec\Pi:
$$[\Pi_i, \Pi_j] = \left[p_i - \frac{e}{c}A_i,\; p_j - \frac{e}{c}A_j\right] = [p_i, p_j] - \frac{e}{c}[A_i, p_j] - \frac{e}{c}[p_i, A_j] + \frac{e^2}{c^2}[A_i, A_j]$$
Noting that \vec A = \vec A(\vec x), and that the first and last terms are then necessarily zero, we can express
this as:
$$[\Pi_i, \Pi_j] = -\frac{e}{c}(i\hbar)\frac{\partial A_i}{\partial x_j} + \frac{e}{c}(i\hbar)\frac{\partial A_j}{\partial x_i} = i\hbar\frac{e}{c}\left(\frac{\partial A_j}{\partial x_i} - \frac{\partial A_i}{\partial x_j}\right)$$
The antisymmetrized derivative is the k component of \nabla\times\vec A, which is the magnetic field, so the
final result is that:
$$[\Pi_i, \Pi_j] = i\hbar\frac{e}{c}\sum_k\epsilon_{ijk}B_k$$

Wednesday - 10/22
Note on the homework: In proving that [x_i, G(\vec p)] = i\hbar\frac{\partial G}{\partial p_i}, the series is not truncated.
The higher order terms give the series expansion of the first derivative term multiplied
by the factor p_i. This gives the expected result exactly.

Recall that we derived the Hamiltonian for a particle in an electromagnetic field as:
$$H = \frac{1}{2m}\left(\vec p - \frac{e}{c}\vec A\right)^2 + e\phi$$
And the kinetic momentum as \vec\Pi = \vec p - \frac{e}{c}\vec A. These definitions allow us to compare the force
on a particle in both classical and quantum models of electromagnetism. Consider the classical
definition:
$$\frac{d\vec\Pi}{dt} = e\left[\vec E + \frac{\vec v}{c}\times\vec B\right]$$
And consider the quantum derivation:
$$i\hbar\frac{d\vec\Pi}{dt} = [\vec\Pi, H]$$
Consider only the i component of the kinetic momentum. We can then write this as:
$$i\hbar\frac{d\Pi_i}{dt} = \left[\Pi_i,\; \frac{1}{2m}\sum_j\Pi_j\Pi_j + e\phi\right]$$
The second term is easiest to deal with,
$$[\Pi_i, e\phi] = \left[p_i - \frac{e}{c}A_i,\; e\phi\right] = [p_i, e\phi] = -i\hbar\,e\frac{\partial\phi}{\partial x_i}$$
The first term is more involved, but can be expanded easily,
$$\left[\Pi_i, \frac{1}{2m}\sum_j\Pi_j\Pi_j\right] = \frac{1}{2m}\sum_j\Big([\Pi_i, \Pi_j]\Pi_j + \Pi_j[\Pi_i, \Pi_j]\Big) = \frac{i\hbar e}{2mc}\sum_{j,k}\Big(\epsilon_{ijk}B_k\Pi_j + \epsilon_{ijk}\Pi_j B_k\Big) = \frac{i\hbar e}{2mc}\left(\vec\Pi\times\vec B - \vec B\times\vec\Pi\right)_i$$
Noting that \vec\Pi = m\vec v, we can write the above in terms of velocities and get the final result:
$$i\hbar\frac{d\vec\Pi}{dt} = [\vec\Pi, H] = i\hbar\left[e\vec E + \frac{e}{2c}\left(\vec v\times\vec B - \vec B\times\vec v\right)\right]$$
$$\frac{d\vec\Pi}{dt} = e\vec E + \frac{e}{2c}\left(\vec v\times\vec B - \vec B\times\vec v\right)$$
Note that classically one could reduce the above since \vec B\times\vec v = -\vec v\times\vec B, but this relation does not
hold in quantum mechanics since these quantities are represented by operators. Next, consider a
case where the potential is zero and a gauge transformation of the form:
$$\vec A_0 \to \vec A_0 + \nabla\Lambda$$

Assume that the magnetic field is uniform and constant along some direction (\vec B = B_0\hat z). In this
case it is simple to see that the magnetic field resulting from both \vec A = \vec A_0 and \vec A = \vec A_0 + \nabla\Lambda is the
same field. The question we need to answer is whether the additional component of \vec A changes the
physics according to quantum mechanics. For this case, assume that changing the vector potential
also changes the state from |\alpha\rangle to some related state |\tilde\alpha\rangle. We can then write the Schrodinger
equation for each system as:
$$\frac{1}{2m}\left(\vec p - \frac{e}{c}\vec A_0\right)^2|\alpha\rangle = i\hbar\frac{d}{dt}|\alpha\rangle; \qquad \frac{1}{2m}\left(\vec p - \frac{e}{c}\left(\vec A_0 + \nabla\Lambda\right)\right)^2|\tilde\alpha\rangle = i\hbar\frac{d}{dt}|\tilde\alpha\rangle$$
As a homework assignment to work with the Levi-Civita symbol \epsilon_{ijk}, we are to show that for
\vec B = B_0\hat z, the appropriate vector potential is \vec A = \frac{1}{2}\vec B\times\vec r. Consider though, that we can show
that although \vec p is not invariant in the gauge transform, \vec\Pi is invariant. We can show
that for the states above, there exists some operator G which transforms |\alpha\rangle into |\tilde\alpha\rangle. The
transform generates a relation of the form:
$$|\tilde\alpha\rangle = e^{i\Lambda(\,)}|\alpha\rangle$$
And the following relations can be used to find properties of the transform:
$$\langle\alpha|\alpha\rangle = \langle\tilde\alpha|\tilde\alpha\rangle \;\Rightarrow\; G^\dagger G = 1$$
$$\langle\alpha|\,x\,|\alpha\rangle = \langle\tilde\alpha|\,x\,|\tilde\alpha\rangle \;\Rightarrow\; G^\dagger x G = x \;\Rightarrow\; [x, G] = 0$$
$$\langle\alpha|\,\vec\Pi\,|\alpha\rangle = \langle\tilde\alpha|\,\tilde{\vec\Pi}\,|\tilde\alpha\rangle; \qquad \vec\Pi = \vec p - \frac{e}{c}\vec A_0; \qquad \tilde{\vec\Pi} = \vec p - \frac{e}{c}\left(\vec A_0 + \nabla\Lambda\right)$$

Friday - 10/24
NOTE: Relation needed to do homework. Baker-Hausdorff formula: For operators A
and B, the following relation holds
$$e^A e^B = e^{A + B + \frac{1}{2}[A,B]} \quad\text{if}\quad [A, [A, B]] = 0 \;\text{ and }\; [B, [A, B]] = 0$$
As an example, consider the quantum harmonic oscillator. The operators a and a^\dagger satisfy
the relations
$$[a, a^\dagger] = 1; \qquad [a, [a, a^\dagger]] = 0; \qquad [a^\dagger, [a, a^\dagger]] = 0 \;\Rightarrow\; e^a e^{a^\dagger} = e^{a + a^\dagger + \frac{1}{2}}$$
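The identity can be checked concretely on a matrix realization of the Heisenberg algebra, where [A,[A,B]] = [B,[A,B]] = 0 holds exactly; a sketch:

```python
import numpy as np

# Sketch: verify e^A e^B = e^{A+B+[A,B]/2} on 3x3 nilpotent matrices whose
# commutator is central, so the Baker-Hausdorff condition holds exactly.
def expm(M, terms=10):
    # series exponential; exact here since the matrices are nilpotent
    out, P = np.eye(3), np.eye(3)
    for n in range(1, terms):
        P = P @ M / n
        out = out + P
    return out

alpha, beta = 0.7, -1.3
A = alpha * np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
B = beta * np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
comm = A @ B - B @ A             # central: commutes with both A and B

lhs = expm(A) @ expm(B)
rhs = expm(A + B + 0.5 * comm)
print(np.max(np.abs(lhs - rhs)))  # ~ 0
```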

Recall that we defined the relation between |\alpha\rangle and |\tilde\alpha\rangle to be:
$$|\tilde\alpha\rangle = G\,|\alpha\rangle$$
Such that:
$$G^\dagger G = 1; \qquad G^\dagger x G = x; \qquad G^\dagger\tilde{\vec\Pi}\,G = \vec\Pi$$
From the second condition (G commuting with the position operator) we can assume that this
operator will be a function of the position. So, summarizing what we've found: G is a function of
position of the form e^{i\Lambda(\,)}. The third relation requires that:
$$G^\dagger\left(p_i - \frac{e}{c}A_i - \frac{e}{c}\partial_i\Lambda\right)G = p_i - \frac{e}{c}A_i$$
The solution to the above is for the operator to be defined as:
$$G = e^{\frac{ie}{\hbar c}\Lambda(x)}$$
And thus we can further know,
$$\tilde\Pi_i\,G = G\,\Pi_i \;\Rightarrow\; \tilde\Pi_i^2\,G = G\,\Pi_i^2$$
Plugging this relation into the definition for the Hamiltonian shows that this solution does indeed
make the appropriate transform:
$$\frac{1}{2m}\left(\vec p - \frac{e}{c}\vec A_0 - \frac{e}{c}\nabla\Lambda\right)^2 G\,|\alpha\rangle = i\hbar\frac{d}{dt}G\,|\alpha\rangle \;\Leftrightarrow\; \frac{1}{2m}\left(\vec p - \frac{e}{c}\vec A_0\right)^2|\alpha\rangle = i\hbar\frac{d}{dt}|\alpha\rangle$$
$$\vec A \to \vec A + \nabla\Lambda; \qquad |\alpha, t\rangle \to e^{\frac{ie}{\hbar c}\Lambda(x)}|\alpha, t\rangle$$

16.3  Alternate Approach to Gauge Transforms

An alternate way to derive this result is to assume that some transform occurs which takes a ket
|\alpha\rangle and performs the transform
$$|\alpha, t\rangle \to |\tilde\alpha\rangle = e^{\frac{ie}{\hbar c}\Lambda(x)}|\alpha, t\rangle$$
It turns out that the only equation which keeps this transform invariant is:
$$\left[\frac{1}{2m}\left(\vec p - \frac{e}{c}\vec A_0\right)^2 + e\phi\right]|\alpha\rangle = i\hbar\frac{d}{dt}|\alpha\rangle$$

17  The Aharonov-Bohm Effect

An experiment performed by Aharonov and Bohm involves observing the behavior of a free electron
on the exterior of a cylinder containing a magnetic field. As there is no magnetic field in the region,
there should (classically) be no change to the behavior of the electron outside the cylinder. However,
it can be shown that electrons moving around the cylinder interact with the vector potential in the
region. Consider two parts of a complete path around the cylinder, the wave function for an electron
along these paths can be written as:
i

i = e h

R
i

x
(
p ec A)d

Then, if we consider the relation between the two paths, there is an observed interference pattern
dependent on the magnetic field.
R
R
H
ie
ie
1

= e h c [ 1 (A)dx 2 (A)dx] = e h c (A)dx


2

Note that we can write this integral instead as:


I

d
(A)
x=

dS
B

So then the above relation can be written in terms of the phase which will be dependent on the
field, dimensions of the cylinder, and location of the electron. That is:
B

1 = 2 ei(B,a ,,...)

61
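As a rough numerical illustration of the flux-dependent phase (Gaussian units; the field strength and solenoid radius below are assumed values, not from the notes):

```python
import numpy as np

# Illustrative numbers: Aharonov-Bohm phase e*Phi_B/(hbar*c) in Gaussian
# units, for a solenoid of radius a carrying a uniform field B (assumed).
e = 4.80320425e-10        # statcoulomb
hbar = 1.054571817e-27    # erg s
c = 2.99792458e10         # cm/s
B = 100.0                 # gauss (assumed)
a = 1.0e-4                # solenoid radius, cm (assumed)

flux = B * np.pi * a**2               # Phi_B = B * area
phase = e * flux / (hbar * c)         # relative phase between the two paths
print(phase)
```

Even for a micron-scale solenoid and a modest field the phase is of order tens of radians, so the interference pattern shifts through many fringes as the enclosed flux is varied.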

Part III

THE THEORY OF ANGULAR MOMENTUM

18  Classical Rotations

Monday - 10/27
We already are aware of the spin operators Sx , Sy , Sz and in previous classes weve seen the orbital
angular momentum Lx , Ly , Lz . In this chapter well be working with the total angular momentum
J. However, well begin by considering classical rotations. Consider some point initially at (x, y, z).
If the system is rotated about the origin, then the point will be at a new location (x0 , y 0 , z 0 ) which
is given by some rotation matrix:
0
x
y 0 =
z0


x
y
z

By requiring that the distance to the point remain the same, we get the condition on R that:

x0

y0

0
x
z y 0 = x
z0

0


x
z y RRT = 1
z


Consider this quantum mechanically. We have some state defined as |i which undergoes a system
rotation and transforms into some new ket described by |iR . It is necessary to find a method to
compute the relation between these two states.

18.1  Finite Rotations

Consider a point undergoing the rotations R_z\left(\frac{\pi}{2}\right) and R_x\left(\frac{\pi}{2}\right), where the particle is initially at some
point along the x axis some distance a from the origin. In this case, changing the order of rotations
gives the results:
$$(a, 0, 0) \xrightarrow{R_z(\pi/2)} (0, a, 0) \xrightarrow{R_x(\pi/2)} (0, 0, a)$$
$$(a, 0, 0) \xrightarrow{R_x(\pi/2)} (a, 0, 0) \xrightarrow{R_z(\pi/2)} (0, a, 0)$$
Clearly the finite rotations in many cases don't commute. We can define rotation matrices for
rotation by some angle \phi about each axis as:
$$R_x(\phi) = \begin{pmatrix}1&0&0\\0&\cos\phi&-\sin\phi\\0&\sin\phi&\cos\phi\end{pmatrix}; \quad R_y(\phi) = \begin{pmatrix}\cos\phi&0&\sin\phi\\0&1&0\\-\sin\phi&0&\cos\phi\end{pmatrix}; \quad R_z(\phi) = \begin{pmatrix}\cos\phi&-\sin\phi&0\\\sin\phi&\cos\phi&0\\0&0&1\end{pmatrix}$$
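The order-of-rotations example above can be reproduced directly with these matrices; a sketch:

```python
import numpy as np

# Sketch: the Rz(pi/2), Rx(pi/2) example, done with the rotation matrices.
def Rx(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

a = 1.0
pt = np.array([a, 0, 0])
order1 = Rx(np.pi / 2) @ (Rz(np.pi / 2) @ pt)   # Rz first -> (0, 0, a)
order2 = Rz(np.pi / 2) @ (Rx(np.pi / 2) @ pt)   # Rx first -> (0, a, 0)
print(np.round(order1, 12), np.round(order2, 12))
```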

18.2  Infinitesimal Rotations

Consider if the angle is taken to be some small amount \epsilon. In this case we can use the expansion
form of the above matrices and drop terms higher than O(\epsilon^2). This gives the infinitesimal rotation
matrices:
$$R_x(\epsilon) \simeq \begin{pmatrix}1&0&0\\0&1-\frac{\epsilon^2}{2}&-\epsilon\\0&\epsilon&1-\frac{\epsilon^2}{2}\end{pmatrix}; \quad R_y(\epsilon) \simeq \begin{pmatrix}1-\frac{\epsilon^2}{2}&0&\epsilon\\0&1&0\\-\epsilon&0&1-\frac{\epsilon^2}{2}\end{pmatrix}; \quad R_z(\epsilon) \simeq \begin{pmatrix}1-\frac{\epsilon^2}{2}&-\epsilon&0\\\epsilon&1-\frac{\epsilon^2}{2}&0\\0&0&1\end{pmatrix}$$
From these matrices, we can check if infinitesimal rotations commute. Consider R_x(\epsilon)R_y(\epsilon) -
R_y(\epsilon)R_x(\epsilon). Multiplying out the matrices and keeping terms up to order \epsilon^2, the difference of the
two orderings is itself a rotation about z by the angle \epsilon^2, minus the identity.
So, this leads to an important result that:
$$R_x(\epsilon)R_y(\epsilon) - R_y(\epsilon)R_x(\epsilon) = R_z(\epsilon^2) - I$$
Similar results exist for commutators of the other combinations of Ri and Rj .
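The commutation identity above can be verified numerically for a small ε; a sketch:

```python
import numpy as np

# Sketch: check Rx(e) Ry(e) - Ry(e) Rx(e) = Rz(e^2) - I, valid up to O(e^2).
def Rx(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

eps = 1e-4
lhs = Rx(eps) @ Ry(eps) - Ry(eps) @ Rx(eps)
rhs = Rz(eps**2) - np.eye(3)
print(np.max(np.abs(lhs - rhs)))   # residual is O(eps^3)
```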

18.3  Overview of Group Theory and Group Representation

Consider some set of elements \{a, b, c, \ldots\} = G. This set is considered a group if it has the following
four properties for some binary operator \circ:
Closure: a \circ b = c, \; c \in G
Associative: (a \circ b) \circ c = a \circ (b \circ c)
Identity: a \circ e = e \circ a = a
Inverse: a \circ a^{-1} = a^{-1} \circ a = e
It can easily be seen that for the group of rotations these conditions are true; therefore the rotation
matrices form a complete group \{R_1, \ldots\}. For the rotational group, some representation group can
be defined as:
$$\{R_1, \ldots\} \to \{D(R_1), \ldots\}$$
This representation group can consist of operators, numbers, other matrices, or other mathematical
objects. A few important qualities of the group are:
$$RI = IR = I \;\Rightarrow\; D(R)D(I) = D(I)D(R) = D(I)$$
$$RR^{-1} = R^{-1}R = I \;\Rightarrow\; D(R)D(R^{-1}) = D(R^{-1})D(R) = D(I) \;\Rightarrow\; D(R^{-1}) = [D(R)]^{-1}$$
$$R_x(\epsilon)R_y(\epsilon) - R_y(\epsilon)R_x(\epsilon) = R_z(\epsilon^2) - I \;\Rightarrow\; D(R_x(\epsilon))D(R_y(\epsilon)) - D(R_y(\epsilon))D(R_x(\epsilon)) = D(R_z(\epsilon^2)) - D(I)$$
As we'll soon find, the solution to the earlier question relating two kets is |\alpha\rangle_R = D(R)\,|\alpha\rangle, where
D(R) satisfies the last of the above conditions.
Wednesday - 10/29 - NO CLASS
Friday - 10/31
Last time we looked at finite rotations and infinitesimal rotations. If we think back to when we
derived the momentum operator, we can use a similar form to define the rotation operator:
$$D(R_{\hat n}(d\phi)) = I - \frac{i}{\hbar}\vec J\cdot\hat n\,d\phi$$
In this manner, the angular momentum \vec J is the generator of rotation. From this definition we can
find the transform for finite rotations. Consider some rotation made up of an infinite number of
infinitesimal rotations d\phi. We can write this as \phi = \lim_{N\to\infty}N\,d\phi. Plugging this into the above
we can expand the series as:
$$D(R_{\hat n}(\phi)) = \lim_{N\to\infty}\left(I - \frac{i}{\hbar}\vec J\cdot\hat n\,\frac{\phi}{N}\right)^N = \exp\left(-\frac{i}{\hbar}\vec J\cdot\hat n\,\phi\right)$$
Using this definition, we can expand the relation derived in the previous class (keeping only terms
of order up to \epsilon^2):
$$D(R_x(\epsilon))D(R_y(\epsilon)) - D(R_y(\epsilon))D(R_x(\epsilon)) = D(R_z(\epsilon^2)) - D(I)$$
$$\left(1 - \frac{iJ_x\epsilon}{\hbar} - \frac{J_x^2\epsilon^2}{2\hbar^2}\right)\left(1 - \frac{iJ_y\epsilon}{\hbar} - \frac{J_y^2\epsilon^2}{2\hbar^2}\right) - \left(1 - \frac{iJ_y\epsilon}{\hbar} - \frac{J_y^2\epsilon^2}{2\hbar^2}\right)\left(1 - \frac{iJ_x\epsilon}{\hbar} - \frac{J_x^2\epsilon^2}{2\hbar^2}\right) = -\frac{i}{\hbar}J_z\epsilon^2$$
$$-\frac{\epsilon^2}{\hbar^2}[J_x, J_y] = -\frac{i}{\hbar}J_z\epsilon^2 \;\Rightarrow\; [J_x, J_y] = i\hbar J_z$$
This can be generalized to the expected result of:
$$[J_i, J_j] = i\hbar\sum_k\epsilon_{ijk}J_k$$
NOTE: There is an alternate way to derive this using the fact that \vec L = \vec x\times\vec p and the
position-momentum commutator. Thus two different approaches result in the same relation.
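A minimal check of the commutation relation, using the spin-1/2 representation J_i = σ_i/2 (with ħ = 1):

```python
import numpy as np

# Sketch (hbar = 1): verify [J_i, J_j] = i*eps_{ijk}*J_k for J_i = sigma_i/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [sx / 2, sy / 2, sz / 2]

def eps(i, j, k):
    return (i - j) * (j - k) * (k - i) / 2   # Levi-Civita symbol on {0,1,2}

ok = True
for i in range(3):
    for j in range(3):
        comm = J[i] @ J[j] - J[j] @ J[i]
        expected = sum(1j * eps(i, j, k) * J[k] for k in range(3))
        ok = ok and np.allclose(comm, expected)
print(ok)   # True
```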

19  Evolution of Spin Operators

It can be shown that the expectation values of S_i transform as the components of a vector. This can
be shown using the spin operators or more generally with the angular momentum operators. Consider
the expectation value \langle J_x\rangle = \langle\alpha_R|\,J_x\,|\alpha_R\rangle. We can derive this from the relation that:
$$e^{\lambda G}A\,e^{-\lambda G} = A + \lambda[G, A] + \frac{\lambda^2}{2!}[G, [G, A]] + \ldots$$
For the expectation value above, \lambda = \frac{i\phi}{\hbar}, G = J_z, and A = J_x. Therefore, we can write that:
$$J_x \to \sum_{n=0}^\infty\frac{1}{n!}\left(\frac{i\phi}{\hbar}\right)^n[J_z, [J_z, \ldots[J_z, J_x]\ldots]]$$
Here we've shortened the notation, but each term contains n commutators of J_z with J_x.
Considering just the first few terms of this expansion, one can see the pattern,
$$n = 0: \; J_x$$
$$n = 1: \; [J_z, J_x] = i\hbar J_y$$
$$n = 2: \; [J_z, [J_z, J_x]] = \hbar^2 J_x$$
$$n = 3: \; [J_z, [J_z, [J_z, J_x]]] = i\hbar^3 J_y$$
In general we can see that this is:
$$J_x \to J_x\left(1 - \frac{\phi^2}{2!} + \ldots\right) - J_y\left(\phi - \frac{\phi^3}{3!} + \ldots\right) = J_x\cos\phi - J_y\sin\phi$$

19.1  Spin 1/2 System

Consider some system in the state:
$$|\alpha\rangle = C_+|+\rangle + C_-|-\rangle$$
We can write the rotated ket as:
$$|\alpha_{R(\phi)}\rangle = e^{-\frac{i}{\hbar}S_z\phi}\left[C_+|+\rangle + C_-|-\rangle\right] = C_+e^{-\frac{i\phi}{2}}|+\rangle + C_-e^{\frac{i\phi}{2}}|-\rangle$$
Consider the case that a rotation of 2\pi is made. Classically we expect to get the same resulting
system state. However, the above generates:
$$|\alpha_{R(2\pi)}\rangle = C_+e^{-i\pi}|+\rangle + C_-e^{i\pi}|-\rangle = -|\alpha\rangle$$
This phenomenon has been demonstrated using interference patterns from wave packets initially
constructively interfering; after one has been rotated by 2\pi the interference becomes destructive.
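The sign flip under a 2π rotation is easy to see numerically for a rotation about z; a sketch:

```python
import numpy as np

# Sketch: D(Rz(phi)) = diag(e^{-i phi/2}, e^{i phi/2}); a 2*pi rotation
# multiplies any spin-1/2 ket by -1, and only a 4*pi rotation restores it.
def Dz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

chi = np.array([0.6, 0.8j])                # arbitrary normalized state
print(np.round(Dz(2 * np.pi) @ chi, 12))   # -> -chi
print(np.round(Dz(4 * np.pi) @ chi, 12))   # -> +chi
```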
Monday - 11/3

20  The Rotation Operator D(R)

Last time we showed that for some ket |\alpha\rangle, the rotated ket can be written in terms of an operator
D(R). The operator is written as:
$$D(R(\phi)) = e^{-\frac{i}{\hbar}\vec J\cdot\hat n\,\phi}$$
Further, it was seen that the expectation values of operators transform like vectors in that:
$$\langle\alpha_R|\,S_i\,|\alpha_R\rangle = \sum_j R_{ij}\,\langle\alpha|\,S_j\,|\alpha\rangle$$
Consider the manner in which this operator alters some state |\alpha\rangle. In a spin 1/2 system, any state
can be expressed as |\alpha\rangle = C_+|+\rangle + C_-|-\rangle. Under a time evolution in a magnetic field, this state
would be:
$$|\alpha, t\rangle = U(t)\,|\alpha, t = 0\rangle = e^{-\frac{i}{\hbar}S_z\omega t}\,|\alpha, t = 0\rangle$$
However, this can also be thought of as a rotation about the z axis by some angle \phi = \omega t. Thus any
spin precession evolution can also be thought of as a rotation problem instead. One merely has to
switch between the above relations (that is, \phi \leftrightarrow \omega t).
As an aside, consider the magnetic moment for a charged particle. If the particle is orbiting some
other particle(s), then the path it follows forms a circuit loop, and the magnetic moment of a loop
of current is simply \vec\mu = \frac{I\vec A}{c}, where I is the current and \vec A the area vector. With some analysis,
it can be shown that this quantity is actually \frac{e}{2mc}\vec L. The addition of the intrinsic spin of the
particle leads to
$$\vec\mu = \frac{e}{2mc}\left(\vec L + g\vec S\right)$$
The extra factor introduced is dependent on the particle involved and is known as the g factor.
For an electron, it is roughly 2.

20.1  Experiment - Neutron Interferometer

Consider a neutron interferometer with one leg of the device passing through a uniform magnetic
field. In this case, the state of the system passing through the field will differ by a factor e^{-\frac{i\omega T}{2}},
where T is the amount of time the particles spent in the field. If we assume that the initial state
consisted of only spin up particles, when the two parts recombine they form an interference pattern
since the new state is:
$$|\alpha\rangle = |+\rangle + e^{-\frac{i\omega T}{2}}|+\rangle = \left(1 + e^{-\frac{i\omega T}{2}}\right)|+\rangle$$
$$\langle\alpha|\alpha\rangle = \left[2 + 2\cos\left(\frac{\omega T}{2}\right)\right]\langle+|+\rangle = 4\cos^2\left(\frac{\omega T}{4}\right)\langle+|+\rangle$$

21  Spin 1/2 System - 2 Component Form

Consider the spin 1/2 system that we are so familiar with. If we express the states in vector form,
then we make the transitions of:
$$|+\rangle = \begin{pmatrix}1\\0\end{pmatrix} = \chi_+; \qquad |-\rangle = \begin{pmatrix}0\\1\end{pmatrix} = \chi_-; \qquad \langle+| \to \begin{pmatrix}1&0\end{pmatrix} = \chi_+^\dagger; \qquad \langle-| \to \begin{pmatrix}0&1\end{pmatrix} = \chi_-^\dagger$$
$$|\alpha\rangle = C_+|+\rangle + C_-|-\rangle \to \chi = C_+\chi_+ + C_-\chi_-; \qquad \langle\alpha| \to \chi^\dagger = C_+^*\chi_+^\dagger + C_-^*\chi_-^\dagger$$
The spin operators become S_i \to \frac{\hbar}{2}\sigma_i, where
$$\sigma_x = \begin{pmatrix}0&1\\1&0\end{pmatrix}; \qquad \sigma_y = \begin{pmatrix}0&-i\\i&0\end{pmatrix}; \qquad \sigma_z = \begin{pmatrix}1&0\\0&-1\end{pmatrix}$$
and expectation values become
$$\langle\alpha|\,S_i\,|\alpha\rangle = \sum_{a,a'}\langle\alpha|a\rangle\langle a|\,S_i\,|a'\rangle\langle a'|\alpha\rangle = \frac{\hbar}{2}\chi^\dagger\sigma_i\chi$$
And finally, some useful properties of the \sigma matrices are:
$$[\sigma_i, \sigma_j] = 2i\sum_k\epsilon_{ijk}\sigma_k; \qquad \{\sigma_i, \sigma_j\} = 2\delta_{ij}; \qquad \sigma_i^2 = 1$$
$$\sigma_i^\dagger = \sigma_i; \qquad \mathrm{Tr}(\sigma_i) = 0; \qquad \mathrm{Det}(\sigma_i) = -1$$
$$\vec a\cdot\vec\sigma = \begin{pmatrix}a_3 & a_1 - ia_2\\ a_1 + ia_2 & -a_3\end{pmatrix}; \qquad (\vec a\cdot\vec\sigma)(\vec b\cdot\vec\sigma) = \vec a\cdot\vec b + i\vec\sigma\cdot(\vec a\times\vec b)$$
Wednesday - 11/5
Note that in the special case of \vec a = \vec b, the last relation simplifies to:
$$(\vec a\cdot\vec\sigma)(\vec a\cdot\vec\sigma) = \vec a\cdot\vec a + i\vec\sigma\cdot(\vec a\times\vec a) = \vec a\cdot\vec a$$

If we are using the 2 component description for a spin 1/2 system, then we can write the state
as:
$$\chi = C_+\begin{pmatrix}1\\0\end{pmatrix} + C_-\begin{pmatrix}0\\1\end{pmatrix} = \begin{pmatrix}C_+\\C_-\end{pmatrix}$$
However, we've seen that we can use vector and matrix relations for these rotations; therefore we
expect that we can find some matrix operator such that:
$$\chi^R = \begin{pmatrix}C_+^R\\C_-^R\end{pmatrix} = \mathcal{R}\begin{pmatrix}C_+\\C_-\end{pmatrix}$$
We can find the form of this matrix by expanding the operator D(R(\phi)) into matrix form. Note
that for a spin 1/2 system, \vec S = \frac{\hbar}{2}\vec\sigma:
$$e^{-\frac{i\phi}{2}\vec\sigma\cdot\hat n} = I - \frac{i\phi}{2}(\vec\sigma\cdot\hat n) + \frac{1}{2!}\left(-\frac{i\phi}{2}\right)^2(\vec\sigma\cdot\hat n)^2 + \frac{1}{3!}\left(-\frac{i\phi}{2}\right)^3(\vec\sigma\cdot\hat n)^3 + \ldots$$
We can immediately note from the relation above that:
$$(\vec\sigma\cdot\hat n)^2 = \hat n\cdot\hat n = 1; \qquad (\vec\sigma\cdot\hat n)^3 = \vec\sigma\cdot\hat n$$
And thus for any power (\vec\sigma\cdot\hat n)^n the result is 1 if n is even, and \vec\sigma\cdot\hat n if n is odd. From this, we
can write the above as:
$$e^{-\frac{i\phi}{2}\vec\sigma\cdot\hat n} = \left(1 - \frac{1}{2!}\frac{\phi^2}{4} + \ldots\right)I - i\left(\frac{\phi}{2} - \frac{1}{3!}\frac{\phi^3}{8} + \ldots\right)\vec\sigma\cdot\hat n$$
This is simply the expansion of sine and cosine:
$$e^{-\frac{i\phi}{2}\vec\sigma\cdot\hat n} = \cos\left(\frac{\phi}{2}\right)I - i\,\vec\sigma\cdot\hat n\,\sin\left(\frac{\phi}{2}\right)$$
2



As a homework assignment, select some axis to rotate about by some angle and compute what
D(R(\phi)), \chi^R, and \langle S_k\rangle are.
As an example, recall the spin ket |\vec S\cdot\hat n; +\rangle which we derived in a homework. This can be found by
the rotation method by simply performing the following rotations: start from spin in the z direction,
either up or down, rotate around the y axis by an angle \beta and then about the z axis by some angle
\alpha. This can be written as:
$$\mathcal{R} = D(R_z(\alpha))\,D(R_y(\beta)) = \left[\cos\left(\frac{\alpha}{2}\right)I - i\sigma_z\sin\left(\frac{\alpha}{2}\right)\right]\left[\cos\left(\frac{\beta}{2}\right)I - i\sigma_y\sin\left(\frac{\beta}{2}\right)\right]$$
$$\mathcal{R} = \begin{pmatrix}e^{-\frac{i\alpha}{2}}&0\\0&e^{\frac{i\alpha}{2}}\end{pmatrix}\begin{pmatrix}\cos\frac{\beta}{2}&-\sin\frac{\beta}{2}\\\sin\frac{\beta}{2}&\cos\frac{\beta}{2}\end{pmatrix}$$
This gives the results (confirming the alternate method, up to an overall phase e^{-i\alpha/2}) of:
$$\chi_+^R = \begin{pmatrix}e^{-\frac{i\alpha}{2}}\cos\frac{\beta}{2}\\ e^{\frac{i\alpha}{2}}\sin\frac{\beta}{2}\end{pmatrix}; \qquad \chi_-^R = \begin{pmatrix}-e^{-\frac{i\alpha}{2}}\sin\frac{\beta}{2}\\ e^{\frac{i\alpha}{2}}\cos\frac{\beta}{2}\end{pmatrix}$$
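A quick check that the rotated spinor above (with its overall phase dropped) is indeed the +1 eigenvector of n̂·σ⃗:

```python
import numpy as np

# Sketch: chi = (cos(b/2), e^{i a} sin(b/2)) should satisfy (n.sigma) chi = chi
# for n = (sin b cos a, sin b sin a, cos b).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

a, b = 0.9, 2.1                              # alpha, beta (arbitrary)
n = np.array([np.sin(b) * np.cos(a), np.sin(b) * np.sin(a), np.cos(b)])
ndots = n[0] * sx + n[1] * sy + n[2] * sz

chi = np.array([np.cos(b / 2), np.exp(1j * a) * np.sin(b / 2)])
print(np.max(np.abs(ndots @ chi - chi)))     # ~ 0
```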

Next class we will start looking at the Euler angles and how they are used in quantum mechanics.
As background, recall that for a 3 \times 3 matrix, there are a total of 9 components. However, if we
require that such a matrix satisfy RR^T = 1, then the number of independent elements is reduced to
3. These three correspond to the Euler angles of rotation.
From the above, we can define two different types of rotations. Since the determinant of RR^T can
be expanded to give:
$$\mathrm{Det}\left(RR^T\right) = \mathrm{Det}\left(R^T\right)\mathrm{Det}(R) = \left[\mathrm{Det}(R)\right]^2 = 1 \;\Rightarrow\; \mathrm{Det}(R) = \pm 1$$
We define those rotations with positive determinants to be proper rotations, and they form a
group known as the Special Orthogonal Group in 3 Dimensions, or SO(3).
An example of an improper rotation (determinant -1) would be a reflection:
$$\begin{pmatrix}x'\\y'\\z'\end{pmatrix} = \begin{pmatrix}1&0&0\\0&1&0\\0&0&-1\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}$$

Friday - 11/7
We've seen how a rotation operator R_{\hat n}(\phi) was a member of the SO(3) group. It can be further
shown that the set of operators \{D(R_{\hat n}(\phi))\} forms a second group known as SU(2), since they form
a group of special unitary 2 \times 2 matrix operators (in the spin 1/2 system). In general, one of these
matrices will have the form:
$$U = \begin{pmatrix}a&b\\c&d\end{pmatrix}$$
Since each element can be complex, this is a total of 8 real parameters. However, when we
require that U be unitary, we get the result that:
$$U = \begin{pmatrix}a&b\\-b^*&a^*\end{pmatrix}$$
Which reduces the number of independent parameters to 4. This combined with the condition of
a proper rotation (|a|^2 + |b|^2 = 1) leaves only 3 independent parameters. This is exactly what we
expected to get. To check this result with our previous work, recall the operators defined by D(R),
$$D(R_{\hat n}(\phi)) = e^{-\frac{i\phi}{2}\vec\sigma\cdot\hat n} = \cos\left(\frac{\phi}{2}\right)I - i\,\vec\sigma\cdot\hat n\,\sin\left(\frac{\phi}{2}\right)$$
$$D(R_{\hat n}(\phi)) = \begin{pmatrix}\cos\frac{\phi}{2} - in_z\sin\frac{\phi}{2} & (-in_x - n_y)\sin\frac{\phi}{2}\\ (-in_x + n_y)\sin\frac{\phi}{2} & \cos\frac{\phi}{2} + in_z\sin\frac{\phi}{2}\end{pmatrix}$$
This result agrees with both statements above. It is of the form U = \begin{pmatrix}a&b\\-b^*&a^*\end{pmatrix} and satisfies
|a|^2 + |b|^2 = 1. Consider next, the rotations defined by the classical Euler angles. We can write
these as:

1. Rotate about the z axis by an angle \alpha

2. Rotate about the new y' axis by an angle \beta

3. Rotate about the new z' axis by an angle \gamma

Alternately, we can simply write this as:
$$D(R(\alpha, \beta, \gamma)) = D(R_{z'}(\gamma))\,D(R_{y'}(\beta))\,D(R_z(\alpha))$$
However, we need to write these rotations all in terms of the initial coordinate axes, since that is
the orientation used in the matrices \sigma. A little geometric analysis yields the results that:
$$D(R_{y'}(\beta)) = D(R_z(\alpha))\,D(R_y(\beta))\,D(R_z(\alpha))^{-1}; \qquad D(R_{z'}(\gamma))\,D(R_{y'}(\beta)) = D(R_{y'}(\beta))\,D(R_z(\gamma))$$
So, combining these with the above definition, we can write that the Euler angle rotation can be
made by:
$$D(R(\alpha, \beta, \gamma)) = D(R_{z'}(\gamma))\,D(R_{y'}(\beta))\,D(R_z(\alpha)) = D(R_{y'}(\beta))\,D(R_z(\gamma))\,D(R_z(\alpha))$$
$$= D(R_z(\alpha))\,D(R_y(\beta))\,D(R_z(\alpha))^{-1}\,D(R_z(\gamma))\,D(R_z(\alpha)) = D(R_z(\alpha))\,D(R_y(\beta))\,D(R_z(\gamma))$$
And plugging in our spin matrices:
$$D(R(\alpha, \beta, \gamma)) = \begin{pmatrix}e^{-\frac{i\alpha}{2}}&0\\0&e^{\frac{i\alpha}{2}}\end{pmatrix}\begin{pmatrix}\cos\frac{\beta}{2}&-\sin\frac{\beta}{2}\\\sin\frac{\beta}{2}&\cos\frac{\beta}{2}\end{pmatrix}\begin{pmatrix}e^{-\frac{i\gamma}{2}}&0\\0&e^{\frac{i\gamma}{2}}\end{pmatrix}$$
$$D(R(\alpha, \beta, \gamma)) = \begin{pmatrix}e^{-\frac{i(\alpha+\gamma)}{2}}\cos\frac{\beta}{2} & -e^{-\frac{i(\alpha-\gamma)}{2}}\sin\frac{\beta}{2}\\ e^{\frac{i(\alpha-\gamma)}{2}}\sin\frac{\beta}{2} & e^{\frac{i(\alpha+\gamma)}{2}}\cos\frac{\beta}{2}\end{pmatrix}$$
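The closed form above can be checked against the matrix product; a sketch:

```python
import numpy as np

# Sketch: the closed-form Euler matrix versus the product Dz(a) Dy(b) Dz(g)
# of spin-1/2 rotations, plus a unitarity check.
def Dz(p):
    return np.diag([np.exp(-1j * p / 2), np.exp(1j * p / 2)])

def Dy(p):
    c, s = np.cos(p / 2), np.sin(p / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

a, b, g = 0.4, 1.2, -0.7
product = Dz(a) @ Dy(b) @ Dz(g)
closed = np.array(
    [[np.exp(-1j * (a + g) / 2) * np.cos(b / 2), -np.exp(-1j * (a - g) / 2) * np.sin(b / 2)],
     [np.exp(1j * (a - g) / 2) * np.sin(b / 2),  np.exp(1j * (a + g) / 2) * np.cos(b / 2)]])

print(np.max(np.abs(product - closed)))                        # ~ 0
print(np.max(np.abs(product.conj().T @ product - np.eye(2))))  # unitary
```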

Monday - 11/10

22  The Density Matrix

Consider a system which is perfectly unpolarized. The net spin of the entire system is zero, meaning
that half the particles will be measured spin up or down for any direction of spin. How can this
be written? We can't write this as |\alpha\rangle = \frac{1}{\sqrt2}|+\rangle + \frac{1}{\sqrt2}|-\rangle, since this is not an unpolarized state;
it is the spin up ket in the x direction. We have to introduce a new notation for a system in such a state.
Consider a system with a fraction w_i in the state |\alpha_i\rangle. We can write the state of the system as:
$$\{w_1\,|\alpha_1\rangle,\; w_2\,|\alpha_2\rangle,\; \ldots\}$$
This will accurately describe the system, but we must require that:
$$\sum_i w_i = 1$$
We can next define the Ensemble Average for an operator A. If we measure the system and get
some result A_i corresponding to a state |\alpha_i\rangle, n_i times out of n_{tot} total measurements, then we can
write the average value of the operator as:
$$[A] = \frac{1}{n_{tot}}\sum_i n_i A_i$$

We can write this in terms of the expectation value of A:
$$[A] = \sum_i w_i\,\langle\alpha_i|\,A\,|\alpha_i\rangle$$
We can write the above result in terms of matrix elements of the operator if we insert kets |b\rangle and |b'\rangle,
$$[A] = \sum_{i,b,b'}w_i\,\langle\alpha_i|b\rangle\langle b|\,A\,|b'\rangle\langle b'|\alpha_i\rangle$$
Therefore, the ensemble average can be written as:
$$[A] = \sum_{b,b'}\rho_{b,b'}\,A_{b',b}; \qquad \rho_{b,b'} = \sum_i w_i\,\langle b|\alpha_i\rangle\langle\alpha_i|b'\rangle$$
Thus, the operator corresponding to the density matrix is:
$$\rho = \sum_i w_i\,|\alpha_i\rangle\langle\alpha_i|$$

and we can then write a few properties of the operator. Firstly, the trace of the density matrix is
1. This is easily shown:
$$\mathrm{Tr}(\rho) = \sum_{i,b}w_i\,\langle b|\alpha_i\rangle\langle\alpha_i|b\rangle = \sum_{i,b}w_i\,|\langle b|\alpha_i\rangle|^2 = \sum_i w_i = 1$$
Further, because it corresponds to a physical observable, the operator must be Hermitian. Consider
the example of a spin 1/2 system. For a pure state such as |+\rangle, we can write the above as simply:
$$\rho = |+\rangle\langle+| = \begin{pmatrix}1\\0\end{pmatrix}\begin{pmatrix}1&0\end{pmatrix} = \begin{pmatrix}1&0\\0&0\end{pmatrix}$$
A second example would be |S_x;+\rangle,
$$\rho = |S_x;+\rangle\langle S_x;+| = \frac{1}{\sqrt2}\begin{pmatrix}1\\1\end{pmatrix}\frac{1}{\sqrt2}\begin{pmatrix}1&1\end{pmatrix} = \frac{1}{2}\begin{pmatrix}1&1\\1&1\end{pmatrix}$$
But consider a purely unpolarized state, \{50\%\,|+\rangle,\; 50\%\,|-\rangle\}:
$$\rho = \frac{1}{2}\left(|+\rangle\langle+|\right) + \frac{1}{2}\left(|-\rangle\langle-|\right) = \frac{1}{2}I = \frac{1}{2}\begin{pmatrix}1&0\\0&1\end{pmatrix}$$
This can be shown for any spin direction since the terms involved would include summations over
both spin up and spin down:
$$\frac{1}{2}|S_n;+\rangle\langle S_n;+| + \frac{1}{2}|S_n;-\rangle\langle S_n;-| = \frac{1}{2}I$$
An alternate way to say this is that [S_i] = 0 for all i if the system is purely unpolarized. Lastly,
let's consider how many free elements are in the density matrix for a spin 1/2 system. For a 2 \times 2
set up, any matrix can be constructed by:
$$\rho = aI + \sum_i b_i\sigma_i$$
Since the trace has to be equal to 1 for the density matrix, we can immediately see that a = \frac{1}{2}, since
\mathrm{Tr}(\sigma_i) = 0 and \mathrm{Tr}(I) = 2 in this case. To determine the other allowed coefficients, consider the
ensemble average [S_i],
$$[S_i] = \mathrm{Tr}(\rho S_i) = \frac{\hbar}{2}\mathrm{Tr}(\rho\sigma_i) = \frac{\hbar}{2}\mathrm{Tr}\left[\left(\frac{1}{2}I + \sum_j b_j\sigma_j\right)\sigma_i\right]$$
Consider each term separately:
$$\mathrm{Tr}(\sigma_i) = 0; \qquad \mathrm{Tr}(\sigma_j\sigma_i) = \frac{1}{2}\left(\mathrm{Tr}(\sigma_j\sigma_i) + \mathrm{Tr}(\sigma_i\sigma_j)\right) = \frac{1}{2}\mathrm{Tr}\left(2\delta_{ij}I\right) = 2\delta_{ij}$$
So the above gives that [S_i] = \hbar b_i, and we can write that:
$$\rho = \frac{1}{2}I + \frac{1}{\hbar}\sum_i[S_i]\,\sigma_i$$
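A sketch of these results for a small mixed ensemble (ħ = 1; the weights and states below are arbitrary examples):

```python
import numpy as np

# Sketch (hbar = 1): density matrix for a mixed spin-1/2 ensemble, checking
# Tr(rho) = 1 and the decomposition rho = I/2 + sum_i [S_i] sigma_i.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

up = np.array([1, 0], dtype=complex)
xplus = np.array([1, 1], dtype=complex) / np.sqrt(2)
w = [0.25, 0.75]                                  # ensemble weights (example)
states = [up, xplus]

rho = sum(wi * np.outer(s, s.conj()) for wi, s in zip(w, states))
print(np.trace(rho).real)                          # 1

# ensemble averages [S_i] = Tr(rho S_i), with S_i = sigma_i/2
avg = [np.trace(rho @ (s / 2)).real for s in (sx, sy, sz)]
rebuilt = np.eye(2) / 2 + sum(a * s for a, s in zip(avg, (sx, sy, sz)))
print(np.max(np.abs(rho - rebuilt)))               # ~ 0
```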

Wednesday - 11/12

22.1  Density Matrix For Continuous Variables and Time Evolutions

Recall that we wrote the ensemble average of an operator A as:
$$[A] = \mathrm{Tr}(\rho A)$$
We can expand this in terms of a pair of kets |a'\rangle and |a''\rangle and then transform that to a continuous
variable. This results in:
$$[A] = \sum_{a',a''}\langle a'|\,\rho\,|a''\rangle\langle a''|\,A\,|a'\rangle \to \int d^3x'\,d^3x''\,\langle x'|\,\rho\,|x''\rangle\langle x''|\,A\,|x'\rangle$$
$$\langle x'|\,\rho\,|x''\rangle = \sum_i w_i\,\langle x'|\alpha_i\rangle\langle\alpha_i|x''\rangle = \sum_i w_i\,\psi_i(\vec x')\,\psi_i^*(\vec x'')$$
Further, we can observe the time evolution by noting that:
$$\rho(t = 0) = \sum_i w_i\,|\alpha_i, t = 0\rangle\langle\alpha_i, t = 0| \;\to\; \rho(t) = \sum_i w_i\,|\alpha_i, t\rangle\langle\alpha_i, t|$$
$$i\hbar\frac{\partial}{\partial t}|\alpha_i, t\rangle = H\,|\alpha_i, t\rangle; \qquad -i\hbar\frac{\partial}{\partial t}\langle\alpha_i, t| = \langle\alpha_i, t|\,H$$
Thus, we can write:
$$i\hbar\frac{\partial\rho}{\partial t} = \sum_i w_i\Big(H\,|\alpha_i, t\rangle\langle\alpha_i, t| - |\alpha_i, t\rangle\langle\alpha_i, t|\,H\Big) = [H, \rho]$$

23  General Representation of Angular Momentum

From this point on, we'll be working primarily with the angular momentum operators J_x, J_y, J_z,
and J^2. For this set of operators, we need to find a set of two that satisfy [A, B] = 0, much like we
did for the spin operators with S^2 and S_i. We can show that for J^2 = J_x^2 + J_y^2 + J_z^2, the relation
[J^2, J_l] = 0 holds for any J_l:
$$[J^2, J_l] = \sum_i[J_iJ_i, J_l] = \sum_i\Big(J_i[J_i, J_l] + [J_i, J_l]J_i\Big) = i\hbar\sum_{i,k}\epsilon_{ilk}J_iJ_k + i\hbar\sum_{i,k}\epsilon_{ilk}J_kJ_i$$
In order to combine these terms we change \epsilon_{ilk}J_kJ_i to \epsilon_{kli}J_iJ_k, and then moving indices gives
\epsilon_{kli} = \epsilon_{lik} = -\epsilon_{ilk}, so that:
$$[J^2, J_l] = i\hbar\sum_{i,k}J_iJ_k\left(\epsilon_{ilk} - \epsilon_{ilk}\right) = 0$$
Therefore we can define some base kets |a, b\rangle for which,
$$J^2\,|a, b\rangle = a\,|a, b\rangle; \qquad J_z\,|a, b\rangle = b\,|a, b\rangle$$

and then we can construct the ket itself. To do so, let's define a few general operators and quantities:
$$J_+ \equiv J_x + iJ_y; \qquad J_- \equiv J_x - iJ_y$$
From this, it's easily seen that J_+^\dagger = J_-, and the combination of the two yields:
$$J_+J_- = J_x^2 + J_y^2 + i[J_y, J_x] = J_x^2 + J_y^2 + \hbar J_z = J^2 - J_z^2 + \hbar J_z$$
$$J_-J_+ = J_x^2 + J_y^2 - \hbar J_z = J^2 - J_z^2 - \hbar J_z$$
$$[J_+, J_-] = 2\hbar J_z; \qquad [J_z, J_+] = \hbar J_+; \qquad [J_z, J_-] = -\hbar J_-$$
Next, consider what operating on |a, b\rangle with J_\pm does. Consider first the combination J_zJ_+\,|a, b\rangle:
$$J_zJ_+\,|a, b\rangle = \left(\hbar J_+ + J_+J_z\right)|a, b\rangle = \left(\hbar J_+ + J_+ b\right)|a, b\rangle$$
So this gives the result:
$$J_zJ_+\,|a, b\rangle = (b + \hbar)\,J_+\,|a, b\rangle; \qquad J_zJ_-\,|a, b\rangle = (b - \hbar)\,J_-\,|a, b\rangle$$
$$\Rightarrow\; J_+\,|a, b\rangle = C_{a,b}\,|a, b + \hbar\rangle; \qquad J_-\,|a, b\rangle = D_{a,b}\,|a, b - \hbar\rangle$$

So, these act like raising and lowering operators (much like the quantum mechanical harmonic
oscillator problem), so we should be able to find maximum and minimum bounds for b. From the
above definitions we can argue that:
$$\langle a, b|\,J^2\,|a, b\rangle = \langle a, b|\,J_x^2 + J_y^2\,|a, b\rangle + \langle a, b|\,J_z^2\,|a, b\rangle$$
Since \langle J_x^2 + J_y^2\rangle \geq 0, we can state that b^2 \leq a, meaning we have found bounds for b. With the
observations that J_+ acting on |a, b_{max}\rangle and J_- acting on |a, b_{min}\rangle should both return 0, we can
write that:
$$J_-J_+\,|a, b_{max}\rangle = \left(J^2 - J_z^2 - \hbar J_z\right)|a, b_{max}\rangle = 0 \;\Rightarrow\; a - b_{max}^2 - \hbar b_{max} = 0$$
$$J_+J_-\,|a, b_{min}\rangle = \left(J^2 - J_z^2 + \hbar J_z\right)|a, b_{min}\rangle = 0 \;\Rightarrow\; a - b_{min}^2 + \hbar b_{min} = 0$$
Solving both for a, we get the resulting condition of
$$b_{max}\left(b_{max} + \hbar\right) = b_{min}\left(b_{min} - \hbar\right)$$
This leaves two possible solutions: b_{max} = b_{min} - \hbar or b_{min} = -b_{max}. The first is impossible since it
requires b_{max} < b_{min}, which is incorrect. Therefore we have the limits and values on b as:
$$b = -b_{max},\; -b_{max} + \hbar,\; \ldots,\; b_{max}$$
Friday - 11/14
Summarizing what we've found so far: for some ket |a, b⟩ we can raise and lower the value of b by:

J+ |a, b⟩ = C_{a,b} |a, b + ℏ⟩         J− |a, b⟩ = D_{a,b} |a, b − ℏ⟩

If we start with the minimum value of b and use J+ to raise the value, we'll eventually get to a point where we can no longer raise it. This results in the equation b_min + nℏ = b_max. Using the result that b_min = −b_max we can immediately see that:

b_max = (n/2) ℏ = j ℏ

where we've defined j = n/2 = 0, 1/2, 1, 3/2, 2, 5/2, . . . as the index. Plugging this into our result for a gives the value for a fixed j:

a = j(j + 1) ℏ²

So, with these new results, we can restate the eigenvalue equations in terms of some ket |j, m⟩:

J² |j, m⟩ = j(j + 1) ℏ² |j, m⟩         J_z |j, m⟩ = m ℏ |j, m⟩

23.1    Matrix Representation of Angular Momentum

Just as in the spin 1/2 case, we can write each of the operators as a matrix in as many dimensions as there are possible m values. Consider this as:

        ( ⟨j, m_max| J_i |j, m_max⟩   · · ·   ⟨j, m_max| J_i |j, m_min⟩ )
J_i  =  (            ⋮                 ⋱                  ⋮             )
        ( ⟨j, m_min| J_i |j, m_max⟩   · · ·   ⟨j, m_min| J_i |j, m_min⟩ )

Working out the components for J² and J_z is straightforward:

⟨j, m′| J² |j, m⟩ = j(j + 1) ℏ² ⟨j, m′|j, m⟩ = j(j + 1) ℏ² δ_{m,m′}

⟨j, m′| J_z |j, m⟩ = m ℏ ⟨j, m′|j, m⟩ = m ℏ δ_{m,m′}
Both matrices are diagonal since they're proportional to δ_{m,m′}. Finding the x and y operators is done by using the operators J±. It is straightforward to show that since J+† = J−,

J+ |j, m⟩ = C+ |j, m + 1⟩   ⇒   ⟨j, m| J+† = ⟨j, m + 1| C+*   ⇒   ⟨j, m| J− = ⟨j, m + 1| C+*

And if we assume that C+ is real, then this gives the condition:

⟨j, m| J− J+ |j, m⟩ = ⟨j, m| J² − J_z² − ℏJ_z |j, m⟩ = j(j + 1)ℏ² − m²ℏ² − mℏ² = C+²

Solving for C+ and noting the symmetry between J+ J− and J− J+ gives:

C± = ℏ √( (j ∓ m)(j ± m + 1) )

So, this gives:

⟨j, m′| J± |j, m⟩ = ℏ √( (j ∓ m)(j ± m + 1) ) δ_{m′, m±1}
Combining this with the definitions

J_x = (1/2)(J+ + J−)         J_y = (1/2i)(J+ − J−)

allows us to write the x and y components in terms of J±. So, summarizing the matrix elements:

⟨j, m′| J² |j, m⟩ = j(j + 1) ℏ² δ_{m,m′}

⟨j, m′| J_z |j, m⟩ = m ℏ δ_{m,m′}

⟨j, m′| J± |j, m⟩ = ℏ √( (j ∓ m)(j ± m + 1) ) δ_{m′, m±1}
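These matrix elements are enough to build the operators numerically. Below is a quick sanity check (not from the notes; ℏ = 1 and the basis is ordered m = j, . . . , −j) that constructs J_z and J± for j = 3/2 from the formulas above and verifies [J_x, J_y] = iℏJ_z and J² = j(j+1)ℏ²:

```python
import numpy as np

def angular_momentum_matrices(j, hbar=1.0):
    """Build Jz, J+, J- in the |j, m> basis ordered m = j, j-1, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                      # m values down the diagonal
    Jz = hbar * np.diag(m)
    # <j, m+1| J+ |j, m> = hbar * sqrt((j - m)(j + m + 1)); m here is the column's m
    c = hbar * np.sqrt((j - m[1:]) * (j + m[1:] + 1))
    Jp = np.diag(c, k=1)                        # raising operator (superdiagonal)
    Jm = Jp.conj().T                            # J- = (J+)^dagger
    return Jz, Jp, Jm

j = 1.5
Jz, Jp, Jm = angular_momentum_matrices(j)
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2 * 1j)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))                    # True
print(np.allclose(J2, j * (j + 1) * np.eye(int(2 * j) + 1)))      # True
```

The same function works for any half-integer or integer j, which is exactly the content of the general representation derived above.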

23.2    Rotation Operator for Generalized Angular Momentum

Recall that we defined the rotation operator as:

D(R) = e^{−i φ n̂·J / ℏ}

From this we can change any ket |α⟩ into a rotated state |α⟩_R = D(R) |α⟩. It can be shown that for some initial state |j, m⟩, we can expand in terms of the eigenkets for all values of m′ and get the rotated ket as:

|j, m⟩_R = D(R) |j, m⟩ = Σ_{m′} |j, m′⟩ ⟨j, m′| D(R) |j, m⟩

Defining the matrix elements of the rotation operator as the Wigner functions, we can express the rotated ket as:

|j, m⟩_R = Σ_{m′} D^j_{m,m′} |j, m′⟩         D^j_{m,m′} = ⟨j, m′| D(R) |j, m⟩

Referencing the spin 1/2 system in some orientation defined by the angles θ and φ, as we have seen before, we can quickly see that:

|1/2, 1/2⟩_R = Σ_{m′} D^{1/2}_{1/2, m′} |1/2, m′⟩         D^{1/2}_{1/2, m′} = ⟨1/2, m′| D(R) |1/2, 1/2⟩

But m′ can only be ±1/2:

|1/2, 1/2⟩_R = D^{1/2}_{1/2, 1/2} |+⟩ + D^{1/2}_{1/2, −1/2} |−⟩

D^{1/2}_{1/2, 1/2} = cos(θ/2)         D^{1/2}_{1/2, −1/2} = e^{iφ} sin(θ/2)

We already know the values of two of the Wigner functions! Others can be looked up in reference books.
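As a quick numerical check (not in the notes), one can build D(R) = e^{−iφJ_z/ℏ} e^{−iθJ_y/ℏ} for spin 1/2 using Pauli matrices (ℏ = 1, so J_i = σ_i/2) and confirm that, up to an overall phase, rotating |+⟩ reproduces cos(θ/2)|+⟩ + e^{iφ} sin(θ/2)|−⟩:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

theta, phi = 0.7, 1.3                # arbitrary test angles

# For a Pauli matrix s (s^2 = I): exp(-i a s) = cos(a) I - i sin(a) s
Ry = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sy   # exp(-i theta Jy)
Rz = np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * sz       # exp(-i phi Jz)

psi = Rz @ Ry @ np.array([1.0, 0.0])           # rotate |+>
psi = psi * np.exp(1j * phi / 2)               # strip the overall phase exp(-i phi/2)

expected = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
print(np.allclose(psi, expected))              # True
```

The overall phase e^{−iφ/2} stripped here is unobservable, which is why the Wigner-function amplitudes quoted above are defined only up to such a phase convention.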

23.3    Orbital Angular Momentum

We've been working in the case that J = S + L. If we assume that either S or L is zero, then it is easy to show that any of these three operators satisfies the commutation relation:

[A_i, A_j] = iℏ ε_{ijk} A_k         A = S, L, J

Suppose that the spin of a particle can be neglected. In this case J = L and we can change notation for a state |j, m⟩ to |l, m⟩. We now have the relations:

L² |l, m⟩ = l(l + 1) ℏ² |l, m⟩         L_z |l, m⟩ = m ℏ |l, m⟩

We can imagine some initial state ψ(r, θ, φ) = ⟨r, θ, φ|α⟩ which is rotated some infinitesimal amount δφ about the z axis. In this case we can write:

D(R) = 1 − (i δφ L_z / ℏ) + . . .         |r, θ, φ⟩_R = D(R) |r, θ, φ⟩ = |r, θ, φ + δφ⟩

Monday - 11/17
We can find the position space representation of the rotation operators similar to how we found the momentum operator representation. Consider an infinitesimal rotation by some amount δφ about the z axis. Applying ⟨α| to this and expanding in terms of δφ gives the resulting equation:

⟨α|r, θ, φ + δφ⟩ = ⟨α|r, θ, φ⟩ + δφ (∂/∂φ) ⟨α|r, θ, φ⟩ = ⟨α|r, θ, φ⟩ − (i/ℏ) δφ ⟨α| L_z |r, θ, φ⟩

Dropping the matching terms and writing the bracket as the position wave function gives the expected result:

⟨r, θ, φ| L_z |α⟩ = −iℏ (∂/∂φ) ψ(r, θ, φ)

This can be repeated for L_x and L_y and the results are similar:

⟨r, θ, φ| L_x |α⟩ = −iℏ ( −sin φ ∂/∂θ − cot θ cos φ ∂/∂φ ) ψ(r, θ, φ)

⟨r, θ, φ| L_y |α⟩ = −iℏ ( cos φ ∂/∂θ − cot θ sin φ ∂/∂φ ) ψ(r, θ, φ)

Combining the above with the definition L² = L_x² + L_y² + L_z² = L_z² + (1/2)(L+ L− + L− L+) gives the result:

⟨r, θ, φ| L² |α⟩ = −ℏ² [ (1/sin²θ) ∂²/∂φ² + (1/sin θ) ∂/∂θ ( sin θ ∂/∂θ ) ] ψ(r, θ, φ)

One immediately sees that this is the angular portion of the ∇² operator. We can denote this as:

∇²_{θ,φ} = (1/sin²θ) ∂²/∂φ² + (1/sin θ) ∂/∂θ ( sin θ ∂/∂θ )

In the case that a potential is only dependent on the radial distance of a particle, we can write the Hamiltonian operator as:

H = p²/2m + V(r)

where some examples of this type of potential would be the Hydrogen problem V(r) = −Ke²/r or the 3 dimensional harmonic oscillator V(r) = (1/2)Kr². Because the Hamiltonian commutes with both L_z and L², the three operators share eigenkets:

H |E_{n,l,m}⟩ = E_n |E_{n,l,m}⟩ ;     L_z |E_{n,l,m}⟩ = m ℏ |E_{n,l,m}⟩ ;     L² |E_{n,l,m}⟩ = l(l + 1) ℏ² |E_{n,l,m}⟩

The solutions of such a potential can be expanded in the form:

ψ(r, θ, φ) = R(r) Y_{l,m}(θ, φ)
where the angular functions turn out to be the spherical harmonics.

23.4    Quantum Mechanical Treatment of the Spherical Harmonics

If we consider some position ket determining only the angular direction of a point, we can write the state as:

|n̂⟩ = |θ, φ⟩

and from this definition we can write the spherical harmonics as:

Y_{l,m}(θ, φ) ≡ ⟨θ, φ|l, m⟩
We can observe the behavior of the spherical harmonics from the above relations. Combining

⟨θ, φ| L_z |l, m⟩ = m ℏ ⟨θ, φ|l, m⟩         ⟨r, θ, φ| L_z |α⟩ = −iℏ (∂/∂φ) ψ(r, θ, φ)

gives:

−iℏ (∂/∂φ) Y_{l,m} = m ℏ Y_{l,m}   ⇒   Y_{l,m}(θ, φ) ∝ e^{imφ}

From this result we can derive the allowed values of l. Since m can range from −l to +l and the function must be periodic on φ = 0, 2π, we require:

Y_{l,l}(θ, φ + 2π) = Y_{l,l}(θ, φ)   ⇒   e^{ilφ} = e^{ilφ} e^{il2π}
Therefore l must be an integer. Note that this is different from the total angular momentum index
j which could have half integer values. Consider that any direction defined by |θ, φ⟩ can be derived by an Euler rotation with α = φ and β = θ (no third rotation is necessary). We can then see that:

|θ, φ⟩ = D(R(α = φ, β = θ, γ = 0)) |ẑ⟩

Expanding this in terms of some other set of states l′ and m′, this becomes:

Y*_{l,m}(θ, φ) = ⟨l, m|θ, φ⟩ = ⟨l, m| D(R(α = φ, β = θ, γ = 0)) |ẑ⟩ = Σ_{l′,m′} ⟨l, m| D(R(α = φ, β = θ, γ = 0)) |l′, m′⟩ ⟨l′, m′|ẑ⟩

The last term picks out only the m′ = 0 case, which leaves us with the result:

Y*_{l,m}(θ, φ) = √( (2l + 1)/4π ) D^l_{m,0}
Thus the spherical harmonics can be thought of as representations of rotations of states.


Wednesday - 11/19

24    Addition of Angular Momentum

24.1    Overview - New Notation

In order to describe systems with both position and spin, we'll need to modify some of our notation. We can more explicitly write:

|x⟩ → |x, s⟩ ;         J = L + S → J = L ⊗ I + I ⊗ S

J |x, s⟩ → ( L |x⟩ ) ⊗ |s⟩ + |x⟩ ⊗ ( S |s⟩ )

Thus the angular momentum operator acts only on the position part of the ket and the spin only on the spin ket. We can denote this in position space as:

ψ(x) → ψ_space ⊗ χ_spin
24.2    A System of 2 Spin 1/2 Particles

Consider a simple system of two spin 1/2 particles with their spins coupled. In this case, the state of the system will be denoted as:

|s_1, s_2; m_1, m_2⟩ = |1/2, 1/2; ±1/2, ±1/2⟩

Thus for this system there are a total of 4 states.

Consider the overall spin S of this system. We can define this as S = S_1 ⊗ I + I ⊗ S_2. Further, we can show by expanding S = S_1 + S_2 that in this new system the overall spin operators behave exactly like their individual counterparts:

[S_i, S_j] = [ S_{1i} ⊗ I + I ⊗ S_{2i} , S_{1j} ⊗ I + I ⊗ S_{2j} ] = iℏ ε_{ijk} ( S_{1k} + S_{2k} ) = iℏ ε_{ijk} S_k

In this new system, we still have the familiar eigenkets for the spin operators:

S² |s, m⟩ = ℏ² s(s + 1) |s, m⟩ ;         S_z |s, m⟩ = m ℏ |s, m⟩

24.3    Clebsch-Gordan Coefficients

In general, we can represent this system in one of two ways, |S_1, S_2; S, m⟩ or, as stated above, |S_1, S_2; m_1, m_2⟩. These two notations are related by the Clebsch-Gordan coefficients:

|S_1, S_2; S, m⟩ = Σ_{m_1, m_2} |S_1, S_2; m_1, m_2⟩ ⟨S_1, S_2; m_1, m_2|S_1, S_2; S, m⟩

where the inner product is the C-G coefficient. We can shorten this notation by dropping the S_i terms and this becomes simply:

|S, m⟩ = Σ_{m_1, m_2} |m_1, m_2⟩ ⟨m_1, m_2|S, m⟩

Returning to our example system of two spin 1/2 particles, we can see that the total spin of the system S can be either 1 or 0. For the system in state S = 1 there are three possible m values: {m = 1, m = 0, m = −1}. These account for 3 of the 4 possible states. This set of S = 1 states is a triplet state. The remaining state S = 0, m = 0 is a singlet state. In order to find the Clebsch-Gordan coefficients we'll start with a state we already know the coefficient for (the maximal spin state) since we can show that:

S_z = S_{1z} + S_{2z}   ⇒   ⟨S, m| ( S_z − S_{1z} − S_{2z} ) |m_1, m_2⟩ = 0

⟨S, m| ( mℏ − m_1 ℏ − m_2 ℏ ) |m_1, m_2⟩ = 0   ⇒   m = m_1 + m_2


From this result we can immediately see that |S = 1, m = 1i = m1 = 21 , m2 = 12 . To get the other
relations well apply the S operator to the ket and recall that this results in a state exactly like
that of the J operator:
S |S, mi = h

p
(S + m)(S m + 1) |s, m 1i

So, applying this to the above results in:

S− |1, 1⟩ = S− |1/2, 1/2⟩ = ( S_{1−} + S_{2−} ) |1/2, 1/2⟩

ℏ √( (1 + 1)(1 − 1 + 1) ) |1, 0⟩ = ℏ √( (1/2 + 1/2)(1/2 − 1/2 + 1) ) |−1/2, 1/2⟩ + ℏ √( (1/2 + 1/2)(1/2 − 1/2 + 1) ) |1/2, −1/2⟩

|1, 0⟩ = (1/√2) |1/2, −1/2⟩ + (1/√2) |−1/2, 1/2⟩
Repeating this process results in:

|1, −1⟩ = |−1/2, −1/2⟩
And from this information, we know that the remaining state must be orthogonal to these three and thus it has the form:

|0, 0⟩ = b |1/2, −1/2⟩ + c |−1/2, 1/2⟩

Requiring that ⟨0, 0|1, 0⟩ = 0 results in:

|0, 0⟩ = (1/√2) |1/2, −1/2⟩ − (1/√2) |−1/2, 1/2⟩
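A small numerical check (not from the notes; ℏ = 1, product basis ordered |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩) that builds the coupled states by exactly this lowering-and-orthogonality procedure and confirms their total-spin eigenvalues:

```python
import numpy as np

# Single spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
sm = np.array([[0, 0], [1, 0]])          # lowering operator: S-|up> = |down>
I2 = np.eye(2)

# Total-spin operators on the product space
Sm = np.kron(sm, I2) + np.kron(I2, sm)
S_tot = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(S @ S for S in S_tot)

up_up = np.kron([1.0, 0], [1.0, 0])      # |1, 1> = |up, up>
triplet0 = Sm @ up_up / np.sqrt(2)       # S-|1,1> = sqrt(2)|1,0>
singlet = (np.kron([1.0, 0], [0, 1.0]) - np.kron([0, 1.0], [1.0, 0])) / np.sqrt(2)

print(np.round(np.real(triplet0), 3))             # equal 1/sqrt(2) amplitudes
print(np.allclose(S2 @ triplet0, 2 * triplet0))   # s(s+1) = 2 for S = 1: True
print(np.allclose(S2 @ singlet, 0 * singlet))     # s(s+1) = 0 for S = 0: True
```

The lowering operator lands on the symmetric combination automatically; the antisymmetric singlet is fixed entirely by orthogonality, just as in the derivation above.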

24.4    Interaction Energy - Splitting of Energy Levels

Lastly, in the case that there is an interaction energy of the form H = A S_1·S_2, we can immediately see that since

S² = S_1² + S_2² + 2 S_1·S_2   ⇒   S_1·S_2 = (1/2) ( S² − S_1² − S_2² )

the interaction energies for S = 1, 0 are:

S_1·S_2 |1, m⟩ = (ℏ²/4) |1, m⟩

S_1·S_2 |0, 0⟩ = −(3ℏ²/4) |0, 0⟩

Thus there is an increase in energy for a triplet state and a decrease in energy for a singlet state. This splitting of energy is important in many areas of physics. That is, if the Hamiltonian is of the form H = H_0 + A S_1·S_2 then the energy levels will be shifted by:

E_{S=1} = E_0 + A ℏ²/4         E_{S=0} = E_0 − 3 A ℏ²/4

Friday - 11/21 - No Class


Monday - 12/1
We've seen how the Clebsch-Gordan coefficients relate the base kets |m_1, m_2⟩ and |j, m⟩. Consider the most general case of two spins j_1 and j_2 (with j_1 > j_2); the ranges of the indices are:

m_1 = {−j_1, −j_1 + 1, . . . , j_1} ;         m_2 = {−j_2, −j_2 + 1, . . . , j_2}

j = {j_1 − j_2, j_1 − j_2 + 1, . . . , j_1 + j_2} ;         m = {−j, −j + 1, . . . , j}

It can be shown that the total number of states in either notation is the same. In the first case there is a total of (2j_1 + 1)(2j_2 + 1) states, and the sum of states for the second notation is:

Σ_{j = j_min}^{j_max} (2j + 1) = Σ_{j = j_1 − j_2}^{j_1 + j_2} (2j + 1) = (2j_1 + 1)(2j_2 + 1)
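The counting can be verified directly; a one-line check (using j_1 = 7/2, j_2 = 2 as an arbitrary example):

```python
# State counting: sum of (2j + 1) over j = j1 - j2, ..., j1 + j2
j1, j2 = 7 / 2, 2                                    # arbitrary example with j1 > j2
js = [j1 - j2 + n for n in range(int(2 * j2) + 1)]   # j1-j2, ..., j1+j2
total = sum(2 * j + 1 for j in js)
print(total == (2 * j1 + 1) * (2 * j2 + 1))          # True
```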

We can then note a few things about these base kets:

- For any given m value, various combinations of m_1 and m_2 can generate m_1 + m_2 = m.
- For each value of m, multiple values of j can exist.
- For any given m, the number of possible combinations of m_1 and m_2 is the same as the number of possible j values. Thus the maximum possible number of states for a single m is (2j_2 + 1).

This can be seen by listing the values possible for a given m value:

m_max = j_1 + j_2 :   {m_1, m_2} = {j_1, j_2} ;   j = {j_1 + j_2}

m = j_1 + j_2 − 1 :   {m_1, m_2} = {j_1 − 1, j_2}, {j_1, j_2 − 1} ;   j = {j_1 + j_2}, {j_1 + j_2 − 1}
. . .
m = j_1 − j_2 :   {m_1, m_2} = {j_1, −j_2}, {j_1 − 1, −j_2 + 1}, . . . , {j_1 − 2j_2, j_2} ;   j = {j_1 + j_2}, {j_1 + j_2 − 1}, . . . , {j_1 − j_2}

The case for j_1 = 2 and j_2 = 1 is worked out in Table 1.


Finally, we can find a general method to determine the C-G coefficients for a given set of states:

- Start from the maximum m value, since the coefficient for this state will be one.
- Repeatedly apply J− = J_{1−} + J_{2−} to both sides of the equation to find the lower-m states.
- For alternate values of j, simply expand in terms of all possible states that generate a given m value and require orthogonality with other states with the same m but different j values.

Table 1: Table of possible m and j combinations for j_1 = 2, j_2 = 1

m = 3 :    j = {3} ;         |m_1, m_2⟩ : |2, 1⟩
m = 2 :    j = {3, 2} ;      |m_1, m_2⟩ : |2, 0⟩, |1, 1⟩
m = 1 :    j = {3, 2, 1} ;   |m_1, m_2⟩ : |2, −1⟩, |1, 0⟩, |0, 1⟩
m = 0 :    j = {3, 2, 1} ;   |m_1, m_2⟩ : |1, −1⟩, |0, 0⟩, |−1, 1⟩
m = −1 :   j = {3, 2, 1} ;   |m_1, m_2⟩ : |0, −1⟩, |−1, 0⟩, |−2, 1⟩
m = −2 :   j = {3, 2} ;      |m_1, m_2⟩ : |−1, −1⟩, |−2, 0⟩
m = −3 :   j = {3} ;         |m_1, m_2⟩ : |−2, −1⟩

Lastly, we can consider the case of an electron in an atom. In this case we have J = L + S and the possible values of the indices are:

m_l = −l, −l + 1, . . . , l ;         m_s = ±1/2

Thus the possible j values are j = l ± 1/2. That means we can write the C-G coefficients for a given l as:





|l + 1/2, m⟩ = A |m − 1/2, 1/2⟩ + B |m + 1/2, −1/2⟩

|l − 1/2, m⟩ = A′ |m − 1/2, 1/2⟩ + B′ |m + 1/2, −1/2⟩

And we require that A² + B² = A′² + B′² = 1 to normalize the states, and also that AA′ + BB′ = 0 so that the states are orthogonal. This results in the conditions:

A = √( (l + m + 1/2)/(2l + 1) ) ;         B = √( (l − m + 1/2)/(2l + 1) )

A′ = −√( (l − m + 1/2)/(2l + 1) ) ;       B′ = √( (l + m + 1/2)/(2l + 1) )

This gives another description using both spin states and orbital angular momentum states. The spherical harmonics are combined with the spin states to give:

Y_{l+1/2, m} = A Y_{l, m−1/2} χ_+ + B Y_{l, m+1/2} χ_−
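As a numerical cross-check (not in the notes; ℏ = 1, taking l = 1 and m = 1/2 as an example), the state A|m − 1/2, 1/2⟩ + B|m + 1/2, −1/2⟩ with the coefficients above should be an eigenvector of J² = (L + S)² with eigenvalue (l + 1/2)(l + 3/2):

```python
import numpy as np

def jmat(j):
    """Jz, J+, J- for angular momentum j (hbar = 1), basis m = j, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.diag(np.sqrt((j - m[1:]) * (j + m[1:] + 1)), 1)
    return np.diag(m), jp, jp.T

l, m = 1, 0.5
Lz, Lp, Lm = jmat(l)
Sz, Sp, Sm = jmat(0.5)
I_l, I_s = np.eye(2 * l + 1), np.eye(2)

Jz = np.kron(Lz, I_s) + np.kron(I_l, Sz)
Jp = np.kron(Lp, I_s) + np.kron(I_l, Sp)
J2 = Jp.T @ Jp + Jz @ Jz + Jz            # J^2 = J- J+ + Jz^2 + hbar Jz

A = np.sqrt((l + m + 0.5) / (2 * l + 1))       # sqrt(2/3)
B = np.sqrt((l - m + 0.5) / (2 * l + 1))       # sqrt(1/3)

# Product-basis index: (l - m_l) * 2 + (0 for spin up, 1 for spin down)
psi = np.zeros(6)
psi[(l - 0) * 2 + 0] = A                 # |m_l = 0, up>
psi[(l - 1) * 2 + 1] = B                 # |m_l = 1, down>

print(np.allclose(J2 @ psi, (l + 0.5) * (l + 1.5) * psi))   # True
print(np.allclose(Jz @ psi, m * psi))                        # True
```

This is the standard |j = 3/2, m = 1/2⟩ state of the l = 1, s = 1/2 system, confirming the coefficient formulas.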

Wednesday - 12/3

25    Tensors in Quantum Mechanics

Recall that in classical mechanics, a 3-vector v can be defined as an object with 3 elements that transforms like a vector. Such a vector is actually a tensor of rank 1. This can be seen as:

v′_i = Σ_j R_{ij} v_j

whereas tensors of rank 2 can be represented by a combination of two such vectors:

T_{ij} ≡ u_i v_j   ⇒   T′_{ij} = Σ_{lm} R_{il} R_{jm} T_{lm}

And (just for reference) a rank 3 tensor has the definition:

T′_{ijk} = Σ_{lmn} R_{il} R_{jm} R_{kn} T_{lmn}

25.1    Reducible and Irreducible Tensors

Tensors are reducible in the sense that we can write any rank 2 tensor as a sum of several other tensors:

T_{ij} = [ (1/2)(T_{ij} + T_{ji}) − (1/3) δ_{ij} Tr[T] ] + (T_{ij} − T_{ji})/2 + (1/3) δ_{ij} Tr[T]

And then defining the other tensors as:

S_{ij} ≡ (1/2)(T_{ij} + T_{ji}) − (1/3) δ_{ij} Tr[T] ;         A_{ij} ≡ (T_{ij} − T_{ji})/2 ;         c ≡ Tr[T]

We can immediately note that S_{ij} is a symmetric traceless tensor with 5 independent elements (6 in the symmetric combination, less one from the trace condition), while A_{ij} is an antisymmetric tensor and therefore has 3 independent elements. Lastly, the final term is a constant times the Kronecker delta, a single independent element. This gives a total of 9 independent elements for T_{ij}. That is, the rank 2 tensor T_{ij} is made up of 9 elements which separate into a set of 5, a set of 3, and a single remaining constant term, each set transforming only among itself. This can be seen in the transformation below:

( S′_11 )   ( M_1^(5×5)      0          0      ) ( S_11 )
(   ⋮   )   (                                  ) (  ⋮   )
( A′_11 ) = (     0      M_2^(3×3)      0      ) ( A_11 )
(   ⋮   )   (                                  ) (  ⋮   )
(  c′   )   (     0          0      M_3^(1×1)  ) (  c   )

where M_1^(5×5) denotes a 5×5 matrix of independent elements within the full 9×9 matrix which is the entire transformation, and similarly for the other blocks. The tensors S, A, and c are referred to as irreducible tensors since they transform independently.

25.2    Cartesian Vectors in Classical Mechanics

Classically, any vector that behaves as v′_i = Σ_j R_{ij} v_j under a rotation with R Rᵀ = 1 is referred to as a Cartesian vector. We can consider then the infinitesimal transformation given by:

R^k_{ij} = δ_{ij} − ε ε_{ijk}

This can easily be checked, as it correctly (to first order) generates the rotation matrix R_z^ε for a rotation of ε about the axis denoted by k = z:

         ( 1   −ε   0 )
R_z^ε =  ( ε    1   0 )
         ( 0    0   1 )

25.3    Cartesian Vectors in Quantum Mechanics

In quantum mechanics, the state |α⟩ rotates into |α_R⟩ by the application of the rotation operator D(R). So, using the above relations, we should be able to define a quantum mechanical Cartesian vector via:

v′_i = ⟨α_R| v_i |α_R⟩ ;         v_i = ⟨α| v_i |α⟩

And the above transformation gives the definition:

⟨α_R| v_i |α_R⟩ = ⟨α| D†(R) v_i D(R) |α⟩ = Σ_j R_{ij} ⟨α| v_j |α⟩   ⇒   D†(R) v_i D(R) = Σ_j R_{ij} v_j

Plugging in the definitions for D(R) and the above classical definition for an infinitesimal rotation, we get:

( 1 + (i/ℏ) ε J_k ) v_i ( 1 − (i/ℏ) ε J_k ) = Σ_j ( δ_{ij} − ε ε_{ijk} ) v_j

which works out to be the formal definition of a quantum mechanical Cartesian vector:

[v_i, J_j] = iℏ ε_{ijk} v_k

25.4    Spherical Tensors

It can be shown that the spherical harmonics form a basis of spherical (irreducible) tensors. Consider the spherical harmonics of order l = 1 (dropping normalizations):

{ Y_1^{±1}, Y_1^0 }  →  { ∓(x ± iy)/r , z/r }  →  { ∓(U_x ± iU_y) , U_z }

Similarly for l = 2, using the notation r_± = ∓(x ± iy):

{ Y_2^{±2}, Y_2^{±1}, Y_2^0 }  →  { r_±²/r² , r_± z/r² , (2r_+ r_− + 2z²)/r² }  →  { U_±² , U_± U_z , 2U_+ U_− + 2U_z² }

Friday - 12/5

25.5    Transformation of Spherical Tensors

Now that we know how the spherical harmonics form a group of spherical tensors, consider how these tensors transform among themselves. Consider the transition |n̂⟩ = |θ, φ⟩ → |θ′, φ′⟩ = |n̂′⟩. The spherical harmonics are defined to be the inner product ⟨n̂|l, m⟩, and the rotation is:

|n̂⟩ → |n̂′⟩ = D(R) |n̂⟩

The corresponding bra would be ⟨n̂′| = ⟨n̂| D†(R). So then we can write the spherical harmonics and expand them in terms of possible indices:

Y_l^m(θ, φ) = ⟨n̂|l, m⟩   →   Y_l^m(θ′, φ′) = ⟨n̂′|l, m⟩ = ⟨n̂| D†(R) |l, m⟩ = Σ_{l′, m′} ⟨n̂|l′, m′⟩ ⟨l′, m′| D†(R) |l, m⟩

Since the rotation doesn't change the index l, the resulting relation is:

Y_l^m(θ′, φ′) = Σ_{m′} Y_l^{m′}(θ, φ) ⟨l, m′| D†(R) |l, m⟩

But the matrix element is simply (the conjugate of) a Wigner coefficient, so the above gives the transformation as:

Y_l^m(θ′, φ′) = Σ_{m′} Y_l^{m′}(θ, φ) ( D^l_{m′,m} )*

25.6    Definition of Tensor Operators

From the above relation we can define the behavior of general tensor operators. Consider some operator T_k^q which we equate to a spherical tensor operator. From the above it is clear that we require the operator to obey:

⟨T_k^q⟩_R = _R⟨α| T_k^q |α⟩_R = Σ_{q′} D^k_{q,q′} ⟨T_k^{q′}⟩

which must hold for any state ket, therefore by definition:

D†(R) T_k^q D(R) = Σ_{q′} D^k_{q,q′} T_k^{q′}

Applying the definition for an infinitesimal rotation D(R) = 1 − (i/ℏ) ε J·n̂ + O(ε²), we can expand the above and find the resulting relation:

[ J·n̂ , T_k^q ] = Σ_{q′} T_k^{q′} ⟨k, q′| J·n̂ |k, q⟩

As an example, consider the result if n̂ = ẑ. The result is similar to the relation with an angular momentum state |l, m⟩:

[ J_z , T_k^q ] = ℏ q T_k^q         (compare J_z |l, m⟩ = m ℏ |l, m⟩)

From this result, we can immediately infer that since ⟨j, m′| J± |j, m⟩ = ℏ √( (j ∓ m)(j ± m + 1) ) δ_{m′, m±1},

[ J± , T_k^q ] = ℏ √( (k ∓ q)(k ± q + 1) ) T_k^{q±1}
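For instance, J itself is a rank-1 spherical tensor with components T_1^{±1} = ∓J_±/√2 and T_1^0 = J_z; a quick check (not from the notes; ℏ = 1, using j = 2 matrices) of both commutation relations:

```python
import numpy as np

def jmat(j):
    """Jz, J+, J- for angular momentum j (hbar = 1), basis m = j, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.diag(np.sqrt((j - m[1:]) * (j + m[1:] + 1)), 1)
    return np.diag(m), jp, jp.T

Jz, Jp, Jm = jmat(2)
# Spherical components of the vector operator J (a rank k = 1 tensor)
T = {1: -Jp / np.sqrt(2), 0: Jz, -1: Jm / np.sqrt(2)}

k = 1
for q in (-1, 0, 1):
    # [Jz, T_k^q] = q T_k^q
    assert np.allclose(Jz @ T[q] - T[q] @ Jz, q * T[q])
    # [J+, T_k^q] = sqrt((k - q)(k + q + 1)) T_k^{q+1}
    if q < k:
        rhs = np.sqrt((k - q) * (k + q + 1)) * T[q + 1]
        assert np.allclose(Jp @ T[q] - T[q] @ Jp, rhs)
print("tensor-operator commutators verified")
```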
25.7    Constructing Higher Rank Tensors

Consider two tensors X_{k_1}^{q_1} and Z_{k_2}^{q_2}. We can use these two to construct some other higher rank tensor T_k^q with q = q_1 + q_2 and k ranging over the values allowed by the addition of k_1 and k_2. The analysis above indicates that, just as we wrote

|l, m⟩ = Σ_{m_1, m_2} ⟨m_1, m_2|l, m⟩ |m_1, m_2⟩

we should be able to construct a higher rank tensor operator by similar methods. The resulting equation is:

T_k^q = Σ_{q_1, q_2} ⟨q_1, q_2|k, q⟩ X_{k_1}^{q_1} Z_{k_2}^{q_2}

26    Wigner-Eckart Theorem

The Wigner-Eckart Theorem concerns matrix elements of the form

X = ⟨α′, j′, m′| T_k^q |α, j, m⟩

For example, for the operator z the exact result depends on the integral:

X = ∫ R*_{n′}(r) Y*_{j′}^{m′}(θ, φ) z R_n(r) Y_j^m(θ, φ) d³x

However, we can immediately state that if the condition m + q = m′ is not met, then the matrix element is zero. That is,

X ≠ 0 only if m + q = m′

The second part of the Wigner-Eckart Theorem states that the above matrix element can be expressed in terms of a reduced matrix element:

⟨α′, j′, m′| T_k^q |α, j, m⟩ = ⟨j, k; m, q|j, k; j′, m′⟩ ⟨α′, j′|| T_k ||α, j⟩ / √(2j + 1)

Part IV

MATHEMATICAL REVIEW OF QUANTUM MECHANICS
Thursday - 1/22

26.1    Review of the Concepts of Quantum Mechanics

Quantum Mechanics (as all physics) is a mathematical representation of the time evolution of a system. The goal of quantum mechanics is to take information about the state of a system at some time t_0 and determine how the system evolves for all time. The equation that describes this evolution is the Schrödinger Equation:

iℏ (∂/∂t) ψ(t, x⃗) = [Hψ](t, x⃗) ;         ψ(t_0, x⃗) = ψ(x⃗)

where we've defined the quantity [Hψ] as the resulting function of time and space when the Hamiltonian operator acts on the wave function. An alternate way to generate this evolution for a given Hamiltonian operator is:

ψ(t, x⃗) = e^{−(i/ℏ) H t} ψ(t_0, x⃗)

The operator e^{−(i/ℏ)Ht}, which we can refer to as U(t), is the time evolution operator discussed in Section 2.1 of Sakurai. It must obey the relations:

U(t_1 + t_2) = U(t_1) U(t_2)
U(0) = I
U†(t) U(t) = I

The exponential form above is one possible form for U(t), valid for Hamiltonians which are self-adjoint and time-independent.

26.2    The Propagator

Taking the definition above for ψ(t, x⃗) and expanding it in terms of some second variable y⃗ generates the relation:

ψ(t, x⃗) = ∫ ⟨x| e^{−(i/ℏ)Ht} |y⟩ ⟨y|ψ⟩ d³y = ∫ ⟨x| e^{−(i/ℏ)Ht} |y⟩ ψ(y⃗) d³y

The term defined as ⟨x| e^{−(i/ℏ)Ht} |y⟩ is the propagator seen in the previous semester. Let's consider the case of a free particle and determine the exact form of the propagator.
26.2.1    Example - Propagator for Free Particle

For a free particle moving in one dimension in a region with no potential, the Hamiltonian is H = p²/2m = −(ℏ²/2m) d²/dx². There are a number of ways to solve this problem, but the most useful is to perform an eigenfunction expansion. This involves solving:

−(ℏ²/2m) (d²/dx²) φ(x) = ε φ(x)   ⇒   φ(x) = a e^{i√(2mε) x/ℏ} + b e^{−i√(2mε) x/ℏ}

In order for the solutions to remain finite as x → ±∞, we require that ε be real and positive. These solutions oscillate like sine and cosine. If we redefine the variable via ε = ℏ²k²/2m, then the solutions can be more easily written as:

φ_k = (1/√(2π)) e^{ikx} ;         k = −∞ . . . ∞

Because the eigenfunctions form a complete set, we can expand any function ψ(x) in terms of them. This can be written as:

ψ(x) = ∫ ψ̂(k) φ_k(x) dk ;         ψ̂(k) = ∫ φ*_k(x) ψ_0(x) dx

Because we are using the eigenfunctions of the operator H, when we expand the wave function in terms of them, it simply turns the operator into a number. That is:

H φ_k = (ℏ²k²/2m) φ_k   ⇒   e^{−(i/ℏ)Ht} φ_k = e^{−(i/ℏ)(ℏ²k²/2m) t} φ_k

So, we can apply this to our evolution operator and find that:

ψ(t, x) = e^{−(i/ℏ)Ht} ∫ ψ̂(k) (1/√(2π)) e^{ikx} dk = ∫ [ ∫ φ*_k(y) ψ_0(y) dy ] e^{−(i/ℏ)(ℏ²k²/2m) t} (1/√(2π)) e^{ikx} dk

        = (1/2π) ∫∫ e^{ik(x−y)} e^{−(iℏk²/2m) t} ψ_0(y) dk dy
Comparing with ψ(t, x) = ∫ ⟨x| e^{−(i/ℏ)Ht} |y⟩ ψ_0(y) dy, we can read off the propagator as the remaining k integral:

⟨x| e^{−(i/ℏ)Ht} |y⟩ = (1/2π) ∫ e^{ik(x−y)} e^{−(iℏk²/2m) t} dk

From this we can complete the square and evaluate the integral in k to find the exact form of the propagator.

Then, with

a² = (ℏ/2m) t ;         b = x − y

⟨x| e^{−(i/ℏ)Ht} |y⟩ = (1/2π) ∫ e^{−ia²k² + ibk} dk = (1/2π) e^{ib²/4a²} ∫ e^{−ia²(k − b/2a²)²} dk = (1/2π) √( π/(ia²) ) e^{ib²/4a²}

And plugging back in the values of a and b gives the final result:

⟨x| e^{−(i/ℏ)Ht} |y⟩ = √( m/(2πiℏt) ) e^{i m (x−y)²/(2ℏt)}
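A quick numerical sanity check (not from the notes; ℏ = m = 1): the propagator should itself satisfy the free Schrödinger equation i ∂_t K = −(1/2) ∂_x² K, which we can test with finite differences at an arbitrary point:

```python
import numpy as np

def K(x, t):
    """Free-particle propagator with hbar = m = 1."""
    return np.exp(1j * x * x / (2 * t)) / np.sqrt(2j * np.pi * t)

x, t, h = 0.7, 1.3, 1e-3
lhs = 1j * (K(x, t + h) - K(x, t - h)) / (2 * h)                 # i dK/dt
rhs = -0.5 * (K(x + h, t) - 2 * K(x, t) + K(x - h, t)) / h**2    # -(1/2) d2K/dx2
print(abs(lhs - rhs) < 1e-4 * abs(rhs))                          # True
```

The overall branch choice of the complex square root only affects a constant prefactor, which cancels in this linear equation.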

Tuesday - 1/27
What behavior can we expect as t → 0? According to what we previously derived,

ψ(t, x⃗) = ∫ ⟨x| e^{−(i/ℏ)Ht} |y⟩ ψ(y⃗) d³y ;         ψ(y) = ψ(t = 0, y)

From this definition, it is clear that in the limit of t going to zero the propagator should behave as:

ψ(t → 0, x⃗) = ∫ K(x, y) ψ(y⃗) d³y

and immediately one can claim that K(x, y) → δ(x − y), since the delta function satisfies the above.

26.3    Qualitative Results of the Propagator

Consider a system where the initial state is given as ψ(x), which we assume to be some localized, smooth distribution (let's say a Gaussian) which is zero outside some region |x| > L. The time evolution of this state is then given as:

ψ(t, x) = ∫ √( m/(2πiℏt) ) e^{i m (x−y)²/(2ℏt)} ψ(y) dy

There are several things to note here. First, note that at t = 0 the result is simply ψ(0, x) = ψ(x), as expected: the integrand goes as e^{iA(x−y)²} ψ(y) and, as A → ∞, the exponential essentially becomes a delta function, pulling out only the contribution at x = y. Also, note that far from where the particle is initially located (|x| ≫ L), the exponential can be approximated as:

e^{(im/2ℏt)(x² − 2xy + y²)} ≈ e^{(im/2ℏt)(x² − 2xy)}

And evaluating the integral, we find:

ψ(t, x) ≈ ∫ √( m/(2πiℏt) ) e^{(im/2ℏt)(x² − 2xy)} ψ(y) dy = √( m/(2πiℏt) ) e^{imx²/(2ℏt)} ∫ e^{−i(mx/ℏt) y} ψ(y) dy = √( m/(iℏt) ) e^{imx²/(2ℏt)} ψ̂( mx/ℏt )

where the remaining integral has been replaced by the Fourier transform ψ̂ of the initial state. This gives the result that far from the initial distribution, the wave function is proportional to ψ̂(mx/ℏt). Note that for any point x, after some amount of time has passed, the probability of the particle being there is non-zero. However, as more time passes the probability at any fixed point decreases as the wave function fills all space (the wave function diffuses into all allowed states).

26.4    Examples Using the Propagator

26.4.1    Particle in an Infinite Well

Now that we've considered an unbound state, consider the basic quantum mechanics of a particle in an infinite well of width L. The problem is summarized as:

iℏ (∂/∂t) ψ(t, x) = H ψ(t, x) ;     ψ(t = 0, x) = ψ(x) ;     ψ(t, 0) = ψ(t, L) = 0

To find the evolution, we need to determine the form of the propagator. We can again use an eigenfunction expansion to turn the operator H into a number. The solutions are the same as for the free particle (exponentials, or sines and cosines) but we have the additional concern of boundary conditions. Choosing the sine and cosine form, the result is:

−(ℏ²/2m) (∂²/∂x²) φ(x) = ε φ(x)

φ(x) = A sin( √(2mε) x/ℏ ) + B cos( √(2mε) x/ℏ )

The condition at x = 0 requires dropping the cosine term. The condition at x = L gives the allowed values of ε. Plugging these in gives:

φ_n(x) = √(2/L) sin( nπx/L ) ,         ε_n = (1/2m) ( nπℏ/L )²

Now we can evaluate the propagator as we did with the free particle:

ψ(t, x) = Σ_{n=1}^∞ c_n e^{−(i/ℏ) ε_n t} φ_n(x) ;         c_n = ∫ φ*_n(y) ψ(y) dy = ⟨φ_n|ψ⟩

The eigenfunctions turn the operator in the exponential into a number and the rest is straightforward:

ψ(t, x) = Σ_{n=1}^∞ e^{−(i/ℏ)(n²π²ℏ²/2mL²) t} [ ∫ √(2/L) sin( nπy/L ) ψ(y) dy ] √(2/L) sin( nπx/L )

⟨x| e^{−(i/ℏ)Ht} |y⟩ = (2/L) Σ_{n=1}^∞ e^{−(i/ℏ)(n²π²ℏ²/2mL²) t} sin( nπy/L ) sin( nπx/L )
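The expansion can be exercised numerically; a sketch (not from the notes; ℏ = m = L = 1, with ψ(x) = √30 x(1 − x) as an arbitrary initial state vanishing at the walls) that evolves via the eigenfunction sum and checks that the evolution is unitary:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
psi0 = np.sqrt(30) * x * (1 - x)               # normalized, vanishes at the walls

n = np.arange(1, 101)                          # first 100 eigenfunctions
phi = np.sqrt(2) * np.sin(np.outer(n, np.pi * x))     # phi_n(x), shape (100, nx)
eps = (n * np.pi) ** 2 / 2                     # eps_n = (1/2m)(n pi hbar / L)^2
cn = phi @ psi0 * dx                           # c_n = <phi_n | psi>

t = 1.7
psi_t = (cn * np.exp(-1j * eps * t)) @ phi     # psi(t, x) from the expansion

norm = np.sum(np.abs(psi_t) ** 2) * dx
recon = cn @ phi                               # t = 0 reconstruction of psi0
print(abs(norm - 1) < 1e-3)                    # unitary evolution: True
print(np.max(np.abs(recon - psi0)) < 1e-2)     # expansion reproduces psi0: True
```

The norm stays at 1 for any t because the expansion coefficients only pick up phases, which is the content of the propagator being unitary.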

Wednesday - 1/28
26.4.2    Particle in a Half Infinite Well

Consider as another example a particle in a potential defined as:

V(x) = 0 for x > 0 ;         V(x) = ∞ for x ≤ 0

In this potential we have the same Hamiltonian operator we've been working with, but with the added condition that ψ|_{x=0} = 0. So, using the eigenfunction expansion, we again know that the solutions are either exponentials or sines and cosines. Using the sine and cosine solutions, the boundary condition at x = 0 requires dropping the cosine term. This leaves the solution as a sine transform:

−(ℏ²/2m) (d²/dx²) φ(x) = ε φ(x)  with  φ(0) = 0   ⇒   φ_k(x) = √(2/π) sin(kx) ;         k² = 2mε/ℏ²

where ε ranges from zero to infinity. From this we can immediately construct:

⟨x| e^{−(i/ℏ)Ht} |y⟩ = (2/π) ∫₀^∞ e^{−(iℏk²/2m) t} sin(kx) sin(ky) dk

And with some initial state ψ(x), the system will evolve into the state:

ψ(t, x) = (2/π) ∫₀^∞ ∫₀^∞ e^{−(iℏk²/2m) t} sin(kx) sin(ky) ψ(y) dk dy

26.4.3    Impenetrable Wall With Finite Potential Well

Consider a particle in a potential of the form:

V(x) = 0 for x > L ;         V(x) = −D for 0 < x < L ;         V(x) = ∞ for x ≤ 0
This problem requires finding solutions in both regions and then matching up the solutions. For x > L, the solution has the familiar form:

φ^(>)(x) = α e^{i√(2mε) x/ℏ} + β e^{−i√(2mε) x/ℏ}

Because of the potential well, the allowed values of ε range from −D to ∞. It is therefore necessary to note that in the case ε < 0, the second term blows up exponentially, therefore that solution requires β = 0. If ε > 0, then there are no constraints on the coefficients.
The solution inside the well is easily shown to be:

−(ℏ²/2m) (d²/dx²) φ^(<)(x) = (ε + D) φ^(<)(x)  with  φ^(<)(0) = 0   ⇒   φ^(<)_ε(x) = sin( √(2m(ε + D)) x/ℏ )
Lastly, the solution and its first derivative must be continuous across the point x = L. Consider first ε > 0; then the conditions can be written in terms of sine and cosine functions as:

sin( √(2m(ε + D)) L/ℏ ) = A sin( √(2mε) L/ℏ ) + B cos( √(2mε) L/ℏ )

(√(2m(ε + D))/ℏ) cos( √(2m(ε + D)) L/ℏ ) = (√(2mε)/ℏ) [ A cos( √(2mε) L/ℏ ) − B sin( √(2mε) L/ℏ ) ]

These conditions completely determine the values of A and B, which leaves a continuum of values (an integral transform) for ε. This is expected since these are the states which have escaped from the well.
Alternately, for ε < 0, writing γ = |ε| and κ = √(2mγ)/ℏ, the conditions can be written:

sin( √(2m(ε + D)) L/ℏ ) = β e^{−κL}

(√(2m(ε + D))/ℏ) cos( √(2m(ε + D)) L/ℏ ) = −κ β e^{−κL}

These conditions determine the coefficient β and the allowed values of ε in terms of the parameters of the problem (that is, D and L). The exact condition on ε can be determined by dividing the two equations:

tan( √(2m(D − γ)) L/ℏ ) = −√( (D − γ)/γ )

This can be solved graphically (that's part of the homework). Solutions only exist in the region 0 < γ < D, so it is possible to have no solution or multiple solutions.
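The graphical solution can be automated; a sketch (not from the notes; ℏ = m = 1, with arbitrary well parameters D and L) that rewrites the matching condition in the pole-free form k′ cos(k′L) + κ sin(k′L) = 0, with k′ = √(2m(D − γ)) and κ = √(2mγ), and scans for sign changes:

```python
import numpy as np

D, L = 10.0, 2.0                       # well depth and width (hbar = m = 1)

def f(g):
    """Pole-free form of the matching condition; roots are bound-state gammas."""
    kp = np.sqrt(2 * (D - g))          # interior wavenumber k'
    kap = np.sqrt(2 * g)               # exterior decay constant kappa
    return kp * np.cos(kp * L) + kap * np.sin(kp * L)

g = np.linspace(1e-9, D - 1e-9, 200001)
v = f(g)
roots = []
for i in np.where(np.sign(v[:-1]) != np.sign(v[1:]))[0]:
    lo, hi = g[i], g[i + 1]            # bisect each bracketed sign change
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if np.sign(f(mid)) == np.sign(f(lo)) else (lo, mid)
    roots.append((lo + hi) / 2)

print(len(roots) >= 1)                            # this well binds at least one state
print(all(abs(f(r)) < 1e-6 for r in roots))       # each root satisfies the condition
```

Multiplying through by sin and cos removes the tangent's poles, so every sign change of f really brackets a bound state rather than a divergence.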
From all of this (assuming the eigenfunctions can be normalized), it is possible to write the propagator for the system in terms of the bound states φ_j(x) and the continuum solutions φ_ε:

⟨x| e^{−(i/ℏ)Ht} |y⟩ = Σ_{j=1}^n e^{−(i/ℏ) ε_j t} φ_j(x) φ*_j(y) + ∫ e^{−(i/ℏ) ε t} φ_ε(x) φ*_ε(y) dε

Monday - 2/2

27    Gauge Transforms - An Electron in a Magnetic Field

Consider the problem of an electron (or other charged particle) in a magnetic field. We will treat the magnetic field as a classical object, which means that our Hamiltonian is written in terms of the momentum operator and the vector potential:

H = (1/2m) [ −iℏ∇ − (e/c) A ]² + V(x)

Expanding this out and using the product rule on the operator A·∇, we find the Hamiltonian can be written as:

H = −(ℏ²/2m) ∇² + (iℏe/2mc) ( ∇·A + 2 A·∇ ) + (e²/2mc²) |A|² + V(x)

Gauge transforms arise from the fact that the physical quantity B⃗ is determined from B⃗ = ∇ × A⃗, and thus the vector potential can be altered by a quantity which does not contribute to the physical field. The general statement is that a change

A⃗′ = A⃗ + ∇Λ(x⃗)

will not change the resulting magnetic field. Under such a gauge transformation, the Hamiltonian above is altered:

H′ = (1/2m) [ −iℏ∇ − (e/c) A − (e/c) ∇Λ ]² + V(x)
c

However, some simple analysis reveals that:

[ −iℏ∇ − (e/c) A − (e/c) ∇Λ ] e^{(i/ℏ)(e/c)Λ} = e^{(i/ℏ)(e/c)Λ} [ −iℏ∇ − (e/c) A ]

And thus,

H′ e^{(i/ℏ)(e/c)Λ} = e^{(i/ℏ)(e/c)Λ} H

Adding in the time term,

iℏ ∂/∂t − H′ = e^{(i/ℏ)(e/c)Λ} ( iℏ ∂/∂t − H ) e^{−(i/ℏ)(e/c)Λ}

so a solution ψ of the Schrödinger equation with H yields the solution e^{(i/ℏ)(e/c)Λ} ψ with H′.

27.1    Example - Electron in Uniform Magnetic Field

27.1.1    The Symmetry Gauge

Consider an electron in some region with a uniform magnetic field B⃗ = B_0 ẑ. Using what is known as the symmetry gauge, one can write the vector potential as:

A⃗ = (1/2) B⃗ × x⃗ = (1/2) B_0 ẑ × x⃗ = (B_0/2) ( −y x̂ + x ŷ )
With this choice of vector potential the term ∇·A is zero, and the Hamiltonian is written as:

H = −(ℏ²/2m) ∇² + (iℏeB_0/2mc) ( x ∂/∂y − y ∂/∂x ) + (e²B_0²/8mc²) ( x² + y² )

However, the middle term is the angular derivative ∂/∂φ in cylindrical coordinates and the last term involves the radial separation. Using these simplifications and writing ∇² in cylindrical coordinates makes this a bit simpler to solve:

H = −(ℏ²/2m) [ ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂φ² + ∂²/∂z² ] + (iℏeB_0/2mc) ∂/∂φ + (e²B_0²/8mc²) r²

Finally, using the definition ω_B = |e|B_0/mc, this simplifies to:

H = −(ℏ²/2m) [ ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂φ² + ∂²/∂z² ] + iℏ (ω_B/2) ∂/∂φ + (1/8) m ω_B² r²

This can be solved using separation of variables and then eigenfunction expansions in some dimensions to reduce the number of variables:

H ψ = ε ψ ;         ψ = Φ(r) T(φ) Z(z)

The z component is simply a Fourier transform while the angular component requires periodicity. This gives:

Z(z) = e^{ikz} ;         T(φ) = e^{ijφ}

where k is a continuous spectrum ranging over −∞ . . . ∞ and the index j takes integer values over the same range. Plugging these definitions in reduces the problem to:


−(ℏ²/2m) [ ∂²/∂r² + (1/r) ∂/∂r − j²/r² − k² ] Φ(r) + iℏ (ω_B/2)(ij) Φ(r) + (1/8) m ω_B² r² Φ(r) = ε Φ(r)

Gathering the constant terms with ε gives the exact differential equation to solve:

−(ℏ²/2m) [ d²/dr² + (1/r) d/dr − j²/r² ] Φ(r) + (1/8) m ω_B² r² Φ(r) = [ ε − ℏ²k²/2m + (ℏj/2) ω_B ] Φ(r)

The solutions of this radial equation are confluent hypergeometric functions, and deriving the solution is left as a homework problem.
27.1.2 The Landau Gauge

Consider if, instead of the symmetry gauge, a choice was made to use the vector potential $\vec{A} = -B_0\, y\,\hat{x}$ for this problem. This choice of vector potential is called the Landau gauge. In this case, the Hamiltonian becomes:

$$ H = -\frac{\hbar^2}{2m}\nabla^2 - \frac{i\hbar e B_0}{mc}\, y\frac{\partial}{\partial x} + \frac{e^2B_0^2}{2mc^2}\, y^2 $$

Expanding out the operator and again using $\omega_B$,

$$ H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2} - i\hbar\omega_B\, y\,\frac{\partial}{\partial x} + \frac{1}{2}m\omega_B^2\, y^2 $$

Again we can use separation of variables and then perform eigenfunction expansions to reduce the number of variables:

$$ H\psi = \varepsilon\psi;\qquad \psi = X(x)\,Y(y)\,Z(z);\qquad Z(z) = e^{ik_z z};\qquad X(x) = e^{ik_x x} $$

Plugging these in leaves us with the equation:

$$ \left[\frac{\hbar^2k_x^2}{2m} + \frac{\hbar^2k_z^2}{2m} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} - i\hbar\omega_B\, y\,(ik_x) + \frac{1}{2}m\omega_B^2\, y^2\right]Y(y) = \varepsilon\, Y(y) $$

Again collecting terms,

$$ \left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} + \hbar k_x\omega_B\, y + \frac{1}{2}m\omega_B^2\, y^2\right]Y(y) = \left[\varepsilon - \frac{\hbar^2}{2m}\left(k_x^2 + k_z^2\right)\right]Y(y) $$

completing the square on the two y terms gives:

$$ \frac{1}{2}m\omega_B^2\, y^2 + \hbar k_x\omega_B\, y = \frac{1}{2}m\omega_B^2\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2 - \frac{\hbar^2 k_x^2}{2m} $$

The additional kinetic energy term in the x direction cancels with the other.
$$ \left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} + \frac{1}{2}m\omega_B^2\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2\right]Y(y) = \left[\varepsilon - \frac{\hbar^2 k_z^2}{2m}\right]Y(y) $$

this result is strange in a way because the resulting equation for y is a simple harmonic oscillator with frequency $\omega_B$, but with a displacement along the axis proportional to $k_x/\omega_B$.
Wednesday - 2/4

Consider a change of variables in the above form. It is easily seen that this is the simple harmonic oscillator in the variable $y' = y + \frac{\hbar k_x}{m\omega_B}$ and therefore the solution for Y(y) is:

$$ Y(y) = A_n\, H_n\!\left(\sqrt{\frac{m\omega_B}{\hbar}}\left(y + \frac{\hbar k_x}{m\omega_B}\right)\right) e^{-\frac{m\omega_B}{2\hbar}\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2} $$

Where the coefficient $A_n$ is the normalizing factor and $H_n(x)$ is the nth Hermite polynomial. So, combining this with our other results, the wave function and energy eigenvalues (often called the Landau levels) of the electron in the magnetic field can be written as:

$$ \psi_n(k_x,k_z;x,y,z) = A_n\, e^{ik_x x}\, e^{ik_z z}\, H_n\!\left(\sqrt{\frac{m\omega_B}{\hbar}}\left(y + \frac{\hbar k_x}{m\omega_B}\right)\right) e^{-\frac{m\omega_B}{2\hbar}\left(y + \frac{\hbar k_x}{m\omega_B}\right)^2} $$

$$ \varepsilon_n(k_z) = \frac{\hbar^2 k_z^2}{2m} + \hbar\omega_B\left(n + \frac{1}{2}\right) $$
28 The Simplistic Hydrogen Atom

28.1 Background - Setting Up The Problem

Now we can examine the simplest atom (a single proton and electron) in a simplistic way. By this we mean that we ignore the finite size of the proton and treat it as a point-like source. Also, we take the case of a stationary proton ($m_p \to \infty$) and ignore the spin of the particles. Ignoring the spin of the particles is the biggest simplification that we are making. The Hamiltonian for the system can be written using the Coulomb interaction between the particles as:

$$ H = -\frac{\hbar^2}{2m}\nabla^2 - \frac{e^2}{|\vec{x}|} $$

Next, we need to determine which coordinate system will be best to solve the problem in since this will determine the exact form of the operator $\nabla^2$. For this problem, we'll use spherical coordinates. In order to proceed we'll need to solve the eigenfunction problem:

$$ H\psi(r,\theta,\varphi) = \left[-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\hat{L}\right) - \frac{e^2}{r}\right]\psi(r,\theta,\varphi) = E\,\psi(r,\theta,\varphi) $$
where we've shortened notation using $\hat{L}$ as the angular differential operator. We can immediately simplify this by expanding the solution in spherical harmonics since $\hat{L}\,Y_{l,m}(\theta,\varphi) = -l(l+1)\,Y_{l,m}(\theta,\varphi)$. So, this leaves

$$ \left[-\frac{\hbar^2}{2m}\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{l(l+1)}{r^2}\right) - \frac{e^2}{r}\right]R(r) = E\,R(r);\qquad \psi(r,\theta,\varphi) = R(r)\,Y_{l,m}(\theta,\varphi) $$

Consider first the case that l = 0. In this case there is no repulsive term and the potential is simply $V(r) = -\frac{e^2}{r}$. For energies less than zero we expect to find bound states held close to the proton, but there is no way of telling how many bound states we will find. If the energy is positive, then we expect to find a continuum of free states.

Next, consider if $l \neq 0$. In this case there is an additional term in the potential due to the angular momentum of the electron. The potential is then $V_{eff}(r) = \frac{\hbar^2\, l(l+1)}{2mr^2} - \frac{e^2}{r}$. We again expect to find bound states for energies less than zero, but they will likely be shifted away from the origin by the additional term in the potential. The energies above zero will again have a continuum of values as there is nothing keeping them bound.

Before we begin solving the problem, it's convenient to convert the scale of the problem to atomic units. If we introduce the variable $\rho = \gamma r$ where the factor $\gamma$ is to be determined, then the Hamiltonian becomes:
$$ H = -\frac{\hbar^2\gamma^2}{2m}\left(\frac{d^2}{d\rho^2} + \frac{2}{\rho}\frac{d}{d\rho} - \frac{l(l+1)}{\rho^2}\right) - \frac{e^2\gamma}{\rho} $$

We can choose $\gamma$ such that the two terms are the same order (that is, we expect solutions to be where the Coulomb attraction is roughly the same scale as the quantum diffusion that causes the wave packet to spread through all available space). This gives:

$$ \frac{\hbar^2\gamma^2}{m} = e^2\gamma \;\Rightarrow\; \gamma = \frac{me^2}{\hbar^2} $$

Transforming the radial solution $R_l(r) \to \tilde{R}_l(\rho)$ shows that the length scale for our solutions is $r = \frac{1}{\gamma} = \frac{\hbar^2}{me^2}$. This changes the problem into:

$$ \left[-\frac{1}{2}\left(\frac{d^2}{d\rho^2} + \frac{2}{\rho}\frac{d}{d\rho} - \frac{l(l+1)}{\rho^2}\right) - \frac{1}{\rho}\right]\tilde{R}(\rho) = \left(\frac{\hbar^2}{me^4}\,E\right)\tilde{R}(\rho) $$

Thus we can also infer that the energy scale of our problem is given by $\frac{me^4}{\hbar^2}$. Our solutions will be in terms of the unitless variable $\rho$ and eigenvalues $\varepsilon_l$. Those are then scaled by the quantities derived in this process to get the physical values.

Monday - 2/9

28.2 The Hydrogen Atom Wavefunction

At this stage, we can make the additional simplification of $\tilde{R}(\rho) = \frac{1}{\rho}\chi(\rho)$ to eliminate the single derivative term,

$$ \left[-\frac{1}{2}\frac{d^2}{d\rho^2} + \frac{l(l+1)}{2\rho^2} - \frac{1}{\rho}\right]\chi(\rho) = \varepsilon\,\chi(\rho) $$

From this equation, one finds solutions of the form:

$$ \varepsilon_n = -\frac{1}{2n^2};\qquad n \ge l+1 $$

$$ \tilde{R}(\rho) = \sqrt{\left(\frac{2}{n}\right)^3\frac{(n-l-1)!}{2n\left[(n+l)!\right]^3}}\; e^{-\rho/n}\left(\frac{2\rho}{n}\right)^l L^{2l+1}_{n+l}\!\left(\frac{2\rho}{n}\right) $$

where $L^j_i(x)$ is notation for the associated Laguerre polynomials. These polynomials can be derived by finding the large and small $\rho$ asymptotics of the solution and then determining the solutions from them.
Consider $\rho \ll 1$ and assume that for these values the solution is dominated by a factor of $\rho^p$, where p is some power which we will find by an indicial equation:

$$ \frac{1}{2}p(p-1) - \frac{l(l+1)}{2} = 0 \;\Rightarrow\; p = \frac{1}{2} \pm \sqrt{\frac{1}{4} + l(l+1)} = \frac{1}{2} \pm \left(l + \frac{1}{2}\right) \;\Rightarrow\; p = l+1,\; -l $$

so, for small values of $\rho$, we expect solutions of the form $\tilde{R} \sim \rho^{l}$ and $\tilde{R} \sim \rho^{-(l+1)}$. Of these two results, only the first is a viable answer since we require square integrability. For large values, the resulting equation is:

$$ \left[-\frac{1}{2}\frac{d^2}{d\rho^2} - \varepsilon\right]\tilde{R}(\rho) = 0 \;\Rightarrow\; \tilde{R}(\rho) \sim e^{-\rho\sqrt{-2\varepsilon}} $$

and given the result that only values of $\varepsilon = -\frac{1}{2n^2}$ generate solutions, we can immediately write that the solution should look like:

$$ \tilde{R}(\rho) = N_{n,l}\,\rho^l\, e^{-\rho/n}\, f_{n,l}(\rho);\qquad \left[-\frac{1}{2}\frac{d^2}{d\rho^2} - \frac{1}{\rho}\frac{d}{d\rho} + \frac{l(l+1)}{2\rho^2} - \frac{1}{\rho} + \frac{1}{2n^2}\right]\tilde{R}(\rho) = 0 $$

where each allowed value of n and l will generate a polynomial $f_{n,l}$ and then the normalization can be determined by integration.
With this result, the final solution can be written in terms of the atomic radius $a_0 = \frac{\hbar^2}{me^2}$ and the Rydberg constant $R = \frac{me^4}{2\hbar^2}$,

$$ \psi_{n,l,m}(r,\theta,\varphi) = Y_{l,m}(\theta,\varphi)\,\tilde{R}(r/a_0)\,a_0^{-3/2};\qquad E_{n,l,m} = -\frac{R}{n^2} $$

Further, we can make a few notes and comments on the result:

- There is a degeneracy of states since the energy is only dependent on the principal quantum number n. Given the allowed values of each index:

  $$ n = 1, 2, \ldots;\qquad l = 0, 1, \ldots, n-1;\qquad m = -l, -l+1, \ldots, l-1, l $$

  there is found an $n^2$-fold degeneracy for each state. This is shown since the nth state can be a total number of $\sum_{j=0}^{n-1}(2j+1)$ states. Gauss' formula quickly shows that this is $n^2$.

- Physical chemists and atomic physicists refer to the various states in terms of orbital letters. A few examples are given below: $n,l = 1,0 \to$ 1s state; $n,l = 2,0 \to$ 2s state; $n,l = 2,1 \to$ 2p state; $n,l = 3,2 \to$ 3d state.

- The behavior of the energies $E_n \sim -\frac{1}{n^2}$ leads to an accumulation of eigenvalues near E = 0. However, all bound states are contained in the region below E = 0. This result is important as the region E > 0 contains the continuous spectrum.

- There is a ground state with lowest energy which coincides with n = 1. The solution is of the form:

  $$ \psi_{1,0,0}(r) = 2a_0^{-3/2}\, e^{-r/a_0} $$

  One could think of this ground state as existing because any additional confinement of the wavefunction (deeper in the potential well) would result in increased kinetic energy.
Wednesday - 2/11

28.3 Continuous Spectrum of Hydrogen

For energies above zero, the resulting equation describing the simplistic hydrogen atom is:

$$ \psi_{k,l,m} = Y_{l,m}(\theta,\varphi)\,R_{k,l}(r);\qquad \left[\frac{1}{2}\frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} - \frac{l(l+1)}{2r^2} + \frac{1}{r} + \frac{k^2}{2}\right]R_{k,l}(r) = 0 $$

where the variable k is real and positive since the energies are positive. The solutions of this problem are more hypergeometric functions:

$$ R_l(k,r) = c_l(k)\,\frac{1}{2kr}\,e^{ikr}\, F\!\left(\frac{i}{k} + l + 1,\; 2l+2,\; -2ikr\right) $$

and for large values of r this behaves as:

$$ R_l(k,r) \approx \frac{2}{r}\,\sin\!\left(kr + \frac{1}{k}\ln(2kr) - \frac{\pi l}{2} + \delta_l\right);\qquad \delta_l = \arg\Gamma\!\left(l + 1 - \frac{i}{k}\right) $$

where arg(z) denotes the complex phase of the argument. The propagator for the hydrogen atom can then be built from these eigenfunctions by summing over the discrete (bound state) solutions and integrating over $k = 0\ldots\infty$ for the continuous spectrum (free states).

Part V

PERTURBATION THEORY IN QUANTUM MECHANICS

29 Bound State Perturbation Theory

Consider the case that for some Hamiltonian operator $H_0$ the eigenfunctions and eigenvalues are known. If a new operator is built by taking $H = H_0 + V$ for some suitably small addition V, then one can assume that the eigenfunctions and eigenvalues of this new operator can be approximated by adding small corrections to the solutions from $H_0$.

Let the original operator have discrete spectra $\{\varepsilon_0^{(0)}, \varepsilon_1^{(0)}, \ldots, \varepsilon_n^{(0)}\}$ with $\varepsilon_i^{(0)} < \varepsilon_j^{(0)}$ for $i < j$ (that is, no degenerate states and eigenvalues increasing away from the ground state n = 0). Assume that $\varepsilon_i^{(0)} < 0$ and each of these discrete eigenvalues corresponds to an eigenfunction $\psi_i^{(0)}(\vec{x})$. Also, assume that there is a continuous spectrum $\varepsilon^{(0)} = 0\ldots\infty$. The assumption that the discrete spectra are all less than zero simplifies the problem as it assures that there are no discrete eigenvalues within the continuous spectrum. The continuous spectrum corresponds to the eigenfunction solutions denoted as $\psi^{(0)}(\varepsilon, \vec{x})$.
So, our task is now to solve the eigenvalue problem:

$$ (H - \varepsilon)\psi = 0 $$

but we can simplify this if we assume that the new spectra and eigenfunctions can be written as

$$ \varepsilon_n = \varepsilon_n^{(0)} + \varepsilon_n^{(1)} + \varepsilon_n^{(2)} + \ldots;\qquad \psi_n(\vec{x}) = \psi_n^{(0)}(\vec{x}) + \psi_n^{(1)}(\vec{x}) + \psi_n^{(2)}(\vec{x}) + \ldots $$

From these assumptions we can write the above in terms of the correction terms and then consider the resulting equation for the different order corrections:

$$ (H - \varepsilon)\psi = \left(H_0 + V - \varepsilon_n^{(0)} - \varepsilon_n^{(1)} - \varepsilon_n^{(2)} - \ldots\right)\left(\psi_n^{(0)}(\vec{x}) + \psi_n^{(1)}(\vec{x}) + \psi_n^{(2)}(\vec{x}) + \ldots\right) = 0 $$

$$ \left[H_0 - \varepsilon_n^{(0)}\right]\psi_n^{(0)} + \left[\left(H_0 - \varepsilon_n^{(0)}\right)\psi_n^{(1)} + \left(V - \varepsilon_n^{(1)}\right)\psi_n^{(0)}\right] + \left[\left(H_0 - \varepsilon_n^{(0)}\right)\psi_n^{(2)} + \left(V - \varepsilon_n^{(1)}\right)\psi_n^{(1)} - \varepsilon_n^{(2)}\psi_n^{(0)}\right] + \ldots = 0 $$

Here we've separated the zeroth, first, and second order terms into brackets and omitted higher order corrections. If we consider only the part of the first order correction that is parallel to $\psi_n^{(0)}$ by taking the inner product of the term,

$$ \langle\psi_n^{(0)}\,|\,\left(H_0 - \varepsilon_n^{(0)}\right)\psi_n^{(1)} + \left(V - \varepsilon_n^{(1)}\right)\psi_n^{(0)}\rangle = 0 $$

$$ \langle\psi_n^{(0)}\,|\,\left(H_0 - \varepsilon_n^{(0)}\right)\psi_n^{(1)}\rangle + \langle\psi_n^{(0)}\,|\,\left(V - \varepsilon_n^{(1)}\right)\psi_n^{(0)}\rangle = 0 $$

Operating $H_0$ on $\langle\psi_n^{(0)}|$, the first term is zero since the resulting factor is $\varepsilon_n^{(0)} - \varepsilon_n^{(0)}$. The second term leaves the first order correction to the eigenvalue:

$$ \langle\psi_n^{(0)}|V\psi_n^{(0)}\rangle - \varepsilon_n^{(1)}\langle\psi_n^{(0)}|\psi_n^{(0)}\rangle = 0 \;\Rightarrow\; \varepsilon_n^{(1)} = \langle\psi_n^{(0)}|V\psi_n^{(0)}\rangle $$

Consider now the part of the first order correction which is perpendicular to $\psi_n^{(0)}$. The first order correction can be written as:

$$ \psi_n^{(1)} = c\,\psi_n^{(0)} + \phi_n;\qquad \phi_n \perp \psi_n^{(0)} $$

This gives the expansion of $\psi_n$ as:

$$ \psi_n = \psi_n^{(0)}(1 + c) + \phi_n + \psi_n^{(2)} + \ldots $$

however the extra coefficient c is redundant, so it can be taken as zero. This means that the first order correction is perpendicular to the zeroth order solution:

$$ \langle\psi_n^{(0)}|\psi_n^{(1)}\rangle = 0 \;\Rightarrow\; \psi_n^{(0)} \perp \psi_n^{(1)} $$

with this result we can write the first order correction term $\psi_n^{(1)}$, which satisfies

$$ \left(H_0 - \varepsilon_n^{(0)}\right)\psi_n^{(1)} = -\left(V - \varepsilon_n^{(1)}\right)\psi_n^{(0)}, $$

as an eigenfunction expansion in terms of the zeroth order solutions:

$$ \psi_n^{(1)} = \sum_{j\neq n} c_j^{(1)}\,\psi_j^{(0)} + \int c^{(1)}(\varepsilon)\,\psi^{(0)}(\varepsilon,\vec{x})\,d\varepsilon $$

Then, taking the inner product with $\psi_j^{(0)}$, the resulting equation is:

$$ c_j^{(1)}\left(\varepsilon_j^{(0)} - \varepsilon_n^{(0)}\right) = -\langle\psi_j^{(0)}\,|\,\left(V - \varepsilon_n^{(1)}\right)\psi_n^{(0)}\rangle $$

the inner product of $\psi_j^{(0)}$ and $\psi_n^{(0)}$ is zero, so we are left with:

$$ c_j^{(1)} = \frac{\langle\psi_j^{(0)}|V\psi_n^{(0)}\rangle}{\varepsilon_n^{(0)} - \varepsilon_j^{(0)}} $$

repeating for the integral term yields the relation for $c^{(1)}(\varepsilon)$ as:

$$ c^{(1)}(\varepsilon) = \frac{\langle\psi^{(0)}(\varepsilon,\vec{x})|V\psi_n^{(0)}\rangle}{\varepsilon_n^{(0)} - \varepsilon} $$

and so finally we have the form for the first order correction:

$$ \psi_n^{(1)} = \sum_{j\neq n}\frac{\langle\psi_j^{(0)}|V\psi_n^{(0)}\rangle}{\varepsilon_n^{(0)} - \varepsilon_j^{(0)}}\,\psi_j^{(0)} + \int\frac{\langle\psi^{(0)}(\varepsilon,\vec{x})|V\psi_n^{(0)}\rangle}{\varepsilon_n^{(0)} - \varepsilon}\,\psi^{(0)}(\varepsilon,\vec{x})\,d\varepsilon $$
Tuesday - 2/17

29.1 Normalizing the Solution

It is possible to derive some useful information about the corrections to the solution by requiring that it be normalized:

$$ \|\psi_j\|^2 = 1 \;\Rightarrow\; \langle\psi_j^{(0)}(\vec{x}) + \psi_j^{(1)}(\vec{x}) + \psi_j^{(2)}(\vec{x}) + \ldots\,|\,\psi_j^{(0)}(\vec{x}) + \psi_j^{(1)}(\vec{x}) + \psi_j^{(2)}(\vec{x}) + \ldots\rangle = 1 $$

The inner product can be expanded in powers proportional to V. So, writing out the expansion:

$$ \|\psi_j\|^2 = \langle\psi_j^{(0)}|\psi_j^{(0)}\rangle + \left[\langle\psi_j^{(0)}|\psi_j^{(1)}\rangle + \langle\psi_j^{(1)}|\psi_j^{(0)}\rangle\right] + \left[\langle\psi_j^{(0)}|\psi_j^{(2)}\rangle + \langle\psi_j^{(1)}|\psi_j^{(1)}\rangle + \langle\psi_j^{(2)}|\psi_j^{(0)}\rangle\right] + \ldots $$

The zeroth order term is identically 1, so each remaining term must be individually zero.

- The first order term is zero if $2\,\mathrm{Re}\,\langle\psi_j^{(0)}|\psi_j^{(1)}\rangle = 0$. Thus the first order correction must be orthogonal to the zeroth order eigenfunction.

- The second order term can be made to be zero by requiring $2\,\mathrm{Re}\,\langle\psi_j^{(0)}|\psi_j^{(2)}\rangle = -\langle\psi_j^{(1)}|\psi_j^{(1)}\rangle$. Since the solutions are assumed to be real, this can be satisfied by choosing:

$$ \langle\psi_j^{(0)}|\psi_j^{(2)}\rangle = -\frac{1}{2}\langle\psi_j^{(1)}|\psi_j^{(1)}\rangle $$

The higher order terms can be determined in the same manner, though it becomes much more complicated.

30 Degenerate Eigenvalue Perturbation Theory

30.1 First Order Corrections For Degenerate Eigenvalues

The previous sections derived methods to determine first order corrections if there were no discrete eigenvalues embedded in the continuous spectrum and if none of the discrete eigenvalues were degenerate. Consider now if there is a set of values $\{j_n\} = \{j, j+1, j+2, \ldots, j+n\}$ which are degenerate. In this case, the system has an (n+1)-fold degeneracy. Because of this degeneracy, the previous method of determining the corrections doesn't work as it generates undefined coefficients. However, it is a good starting point for this analysis.

Recall that last time the first order correction was expanded in terms of the zeroth order solutions and orthogonality was used to find the coefficients. One can repeat that process, but separate the degenerate eigenvalues from the rest of the sum:

$$ \psi_j^{(1)}(x) = \sum_{k\notin\{j_n\}} c_k^{(1)}\,\psi_k^{(0)}(x) + \int c^{(1)}(\varepsilon)\,\psi^{(0)}(\varepsilon,x)\,d\varepsilon + \sum_{k\in\{j_n\}} c_k^{(1)}\,\psi_k^{(0)}(x) $$

If we plug this into the eigenvalue equation,

$$ \sum_{k\notin\{j_n\}}\left(\varepsilon_k^{(0)} - \varepsilon_j^{(0)}\right)c_k^{(1)}\,\psi_k^{(0)}(x) + \int\left(\varepsilon - \varepsilon_j^{(0)}\right)c^{(1)}(\varepsilon)\,\psi^{(0)}(\varepsilon,x)\,d\varepsilon = -\left(V - \varepsilon_j^{(1)}\right)\sum_{k\in\{j_n\}} c_k^{(1)}\,\psi_k^{(0)} $$

Then, taking the inner product with $\psi_{\{j_n\}}^{(0)}(x)$ sets the first two terms to zero and leaves the relation:

$$ \sum_{k\in\{j_n\}} c_k^{(1)}\,\langle\psi_{\{j_n\}}^{(0)}\,|\,\left(V - \varepsilon_j^{(1)}\right)\psi_k^{(0)}\rangle = 0 $$

This is actually a set of n+1 equations for the degenerate eigenvalues. Written in matrix form, the system of equations looks like:

$$ \begin{pmatrix} \langle\psi_j|V\psi_j\rangle - \varepsilon_j^{(1)} & \langle\psi_j|V\psi_{j+1}\rangle & \cdots & \langle\psi_j|V\psi_{j+n}\rangle \\ \langle\psi_{j+1}|V\psi_j\rangle & \langle\psi_{j+1}|V\psi_{j+1}\rangle - \varepsilon_j^{(1)} & & \vdots \\ \vdots & & \ddots & \\ \langle\psi_{j+n}|V\psi_j\rangle & \cdots & & \langle\psi_{j+n}|V\psi_{j+n}\rangle - \varepsilon_j^{(1)} \end{pmatrix} \begin{pmatrix} c_j^{(1)} \\ c_{j+1}^{(1)} \\ \vdots \\ c_{j+n}^{(1)} \end{pmatrix} = 0 $$

In order for there to be a non-trivial solution, the determinant of the matrix must be zero. This generates a degree-(n+1) polynomial in $\varepsilon_j^{(1)}$ which can have at most n+1 roots. These solutions are the first order corrections. One interesting thing to note is that, depending on the symmetries of the original problem and the applied perturbing potential, the degeneracy could be lifted, partially lifted, or not affected at all.

Wednesday - 2/18

30.2 Example - Particle in a Symmetric Box in 2 Dimensions

Consider the two dimensional problem of a particle confined to the region described by $x \in [-1,1]$; $y \in [-1,1]$. One can scale the variables so that the zeroth order Hamiltonian is

$$ H_0 = -\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) $$

(the scaling absorbs the usual factor of $\frac{1}{2m}$). Then, let there be a small perturbation potential of the form:

$$ V = \lambda\, e^{-\left[(x-x_0)^2 + (y-y_0)^2\right]/\sigma} $$

Note that this potential can be approximated by delta functions if the energy is low and the value of $\sigma$ is small (as we are assuming it is). This potential is then approximated as $V \cong \lambda\,\delta(x-x_0)\,\delta(y-y_0)$.
The first step is to find the zeroth order solutions and their corresponding eigenvalues. It can immediately be seen that these are:

$$ \psi_{j,k}^{(0)}(x,y) = \sin\!\left(\frac{j\pi}{2}(x+1)\right)\sin\!\left(\frac{k\pi}{2}(y+1)\right);\qquad \varepsilon_{j,k}^{(0)} = \frac{\pi^2}{4}\left(j^2 + k^2\right);\qquad j,k = 1,2,\ldots $$

Note that the eigenvalues $\varepsilon_{nn}^{(0)}$ are non-degenerate, while the eigenvalues $\varepsilon_{nm}^{(0)}$ with $n \neq m$ are doubly degenerate since (for example) $\varepsilon_{12}^{(0)} = \varepsilon_{21}^{(0)}$. Next, using the notation $V_{j,k;j',k'} = \langle\psi_{j,k}^{(0)}|V\psi_{j',k'}^{(0)}\rangle$ we can find the matrix elements of the perturbation.

$$ V_{j,k;j',k'} = \int \psi_{j,k}^{(0)}(x,y)\,\delta(x-x_0)\,\delta(y-y_0)\,\psi_{j',k'}^{(0)}(x,y)\,dx\,dy $$

$$ = \sin\!\left(\frac{j\pi}{2}(x_0+1)\right)\sin\!\left(\frac{k\pi}{2}(y_0+1)\right)\sin\!\left(\frac{j'\pi}{2}(x_0+1)\right)\sin\!\left(\frac{k'\pi}{2}(y_0+1)\right) $$

With this information we can write the eigenfunction equation as a linear algebra problem. Consider first $H_0$; we can order the eigenvalues by magnitude:

$$ H_0 = \begin{pmatrix} \varepsilon_{1,1}^{(0)} & 0 & 0 & 0 & \cdots \\ 0 & \varepsilon_{1,2}^{(0)} & 0 & 0 & \cdots \\ 0 & 0 & \varepsilon_{2,1}^{(0)} & 0 & \cdots \\ 0 & 0 & 0 & \varepsilon_{2,2}^{(0)} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} = \frac{\pi^2}{4}\begin{pmatrix} 2 & 0 & 0 & 0 & \cdots \\ 0 & 5 & 0 & 0 & \cdots \\ 0 & 0 & 5 & 0 & \cdots \\ 0 & 0 & 0 & 8 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} $$
With this representation, we can write the relation $(H_0 + \lambda V - \varepsilon)\psi = 0$ as:

$$ \begin{pmatrix} \frac{\pi^2}{2} - \varepsilon + \lambda V_{1,1;1,1} & \lambda V_{1,1;1,2} & \lambda V_{1,1;2,1} & \lambda V_{1,1;2,2} & \cdots \\ \lambda V_{1,2;1,1} & \frac{5\pi^2}{4} - \varepsilon + \lambda V_{1,2;1,2} & \lambda V_{1,2;2,1} & \lambda V_{1,2;2,2} & \cdots \\ \lambda V_{2,1;1,1} & \lambda V_{2,1;1,2} & \frac{5\pi^2}{4} - \varepsilon + \lambda V_{2,1;2,1} & \lambda V_{2,1;2,2} & \cdots \\ \lambda V_{2,2;1,1} & \lambda V_{2,2;1,2} & \lambda V_{2,2;2,1} & 2\pi^2 - \varepsilon + \lambda V_{2,2;2,2} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} c_{1,1} \\ c_{1,2} \\ c_{2,1} \\ c_{2,2} \\ \vdots \end{pmatrix} = 0 $$

Where we've expanded the solution as:

$$ \psi^{(n,m)}(x,y) = \sum_{j,k} c_{j,k}^{(n,m)}\,\psi_{j,k}^{(0)}(x,y) $$

In order to solve the problem, we've expanded the energies and coefficients as:

$$ \varepsilon_{j,k} = \varepsilon_{j,k}^{(0)} + \lambda\,\varepsilon_{j,k}^{(1)} + \lambda^2\,\varepsilon_{j,k}^{(2)} + \ldots;\qquad c_{j,k}^{(n,m)} = c_{j,k}^{(0)(n,m)} + \lambda\, d_{j,k}^{(1)(n,m)} + \lambda^2\, d_{j,k}^{(2)(n,m)} + \ldots $$

Consider a first order perturbation of the ground state. In this case we have $j = k = 1$ and the state is non-degenerate. We know that:

$$ \varepsilon_{1,1}^{(0)} = \frac{\pi^2}{2};\qquad \varepsilon_{1,1} = \frac{\pi^2}{2} + \lambda\,\varepsilon_{1,1}^{(1)} + \ldots $$

and we expect to find that $\varepsilon_{1,1}^{(1)} = \langle\psi_{1,1}|V\psi_{1,1}\rangle$. Plugging this in and dropping terms which will be of order larger than $\lambda$ turns the above relation into:

$$ \begin{pmatrix} \lambda\left(V_{1,1;1,1} - \varepsilon_{1,1}^{(1)}\right) + \ldots & \lambda V_{1,1;1,2} & \cdots \\ \lambda V_{1,2;1,1} & \frac{3\pi^2}{4} + \lambda\left(V_{1,2;1,2} - \varepsilon_{1,1}^{(1)}\right) + \ldots & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} c_{1,1} \\ c_{1,2} \\ \vdots \end{pmatrix} = 0 $$

Finally, we can note that to order $\lambda$, the vector $c^{(0)(1,1)}$ is simply:

$$ c^{(0)(1,1)} = \begin{pmatrix} 1 \\ 0 \\ \vdots \end{pmatrix} $$

So, plugging this into the above relation:

$$ \left(H_0 + \lambda V - \varepsilon\right)c^{(1,1)} = \begin{pmatrix} \lambda\left(V_{1,1;1,1} - \varepsilon_{1,1}^{(1)}\right) \\ 0 \\ \vdots \end{pmatrix} + O\!\left(\lambda^2\right) $$

which means that, as expected,

$$ \varepsilon_{1,1}^{(1)} = V_{1,1;1,1} $$
Consider next the case that the system is in one of the degenerate states $\psi_{1,2}$ and $\psi_{2,1}$. We can look for a solution of $(H_0 + \lambda V - \varepsilon)\psi = 0$ for which, in the limit $\lambda \to 0$, $\psi$ is an eigenstate with energy $\varepsilon = \frac{5\pi^2}{4}$. To accomplish this, we can again expand the solution and energy as:

$$ \psi(x,y) = \sum_{j,k} c_{j,k}\,\psi_{j,k}^{(0)}(x,y);\qquad \varepsilon = \frac{5\pi^2}{4} + \lambda\,\varepsilon^{(1)} + \ldots $$

However, unlike the non-degenerate case, the vector c doesn't collapse as it did for $c^{(0)(1,1)}$. Instead, we have that:

$$ \begin{pmatrix} c_{1,1} \\ c_{1,2} \\ c_{2,1} \\ c_{2,2} \\ \vdots \end{pmatrix} = \begin{pmatrix} 0 \\ \cos\theta \\ \sin\theta \\ 0 \\ \vdots \end{pmatrix} + O(\lambda) $$

Where we've introduced the mixing angle $\theta$. Plugging all of this in for the matrix yields:

$$ \begin{pmatrix} -\frac{3\pi^2}{4} + \lambda\left(V_{1,1;1,1} - \varepsilon^{(1)}\right) & \lambda V_{1,1;1,2} & \lambda V_{1,1;2,1} & \lambda V_{1,1;2,2} & \cdots \\ \lambda V_{1,2;1,1} & \lambda\left(V_{1,2;1,2} - \varepsilon^{(1)}\right) & \lambda V_{1,2;2,1} & \lambda V_{1,2;2,2} & \cdots \\ \lambda V_{2,1;1,1} & \lambda V_{2,1;1,2} & \lambda\left(V_{2,1;2,1} - \varepsilon^{(1)}\right) & \lambda V_{2,1;2,2} & \cdots \\ \lambda V_{2,2;1,1} & \lambda V_{2,2;1,2} & \lambda V_{2,2;2,1} & \frac{3\pi^2}{4} + \lambda\left(V_{2,2;2,2} - \varepsilon^{(1)}\right) & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} c + O\!\left(\lambda^2\right) = 0 $$

But noting the zero elements of the vector c, we can write this as a simpler problem:

$$ \begin{pmatrix} V_{1,2;1,2} - \varepsilon^{(1)} & V_{1,2;2,1} \\ V_{2,1;1,2} & V_{2,1;2,1} - \varepsilon^{(1)} \end{pmatrix} \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} + O(\lambda) = 0 $$

this result is the secular equation for the degenerate states $\psi_{1,2}$ and $\psi_{2,1}$. Noting the symmetry $V_{1,2;2,1} = V_{2,1;1,2}$, this can be solved as:

$$ \begin{pmatrix} a - \varepsilon^{(1)} & b \\ b & c - \varepsilon^{(1)} \end{pmatrix} \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = 0;\qquad a = V_{1,2;1,2},\quad b = V_{1,2;2,1},\quad c = V_{2,1;2,1} $$

in order for there to be a non-trivial solution to this problem, the determinant of the matrix must be zero. This condition determines the allowed values of $\varepsilon^{(1)}$. The resulting equation is:

$$ \left(a - \varepsilon^{(1)}\right)\left(c - \varepsilon^{(1)}\right) - b^2 = 0 \;\Rightarrow\; \varepsilon^{(1)2} - (a+c)\,\varepsilon^{(1)} - b^2 + ac = 0 $$

$$ \varepsilon^{(1)} = \frac{1}{2}\left[a + c \pm\sqrt{(a-c)^2 + 4b^2}\right] $$

As long as the term under the square root is non-zero, the degeneracy of the two states is broken by the perturbing potential and the energy level will split. The mixing angle $\theta$ can be determined by plugging the above result back in.
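This quadratic-formula result is just the eigenvalue problem of a symmetric 2×2 matrix, which is easy to confirm numerically (the values of a, b, c below are arbitrary samples, not computed from the box problem):

```python
import numpy as np

a, b, c = 0.7, 0.3, -0.2  # arbitrary sample matrix elements

# eps(1) = (a + c -/+ sqrt((a - c)^2 + 4 b^2)) / 2, in ascending order
eps1 = 0.5 * (a + c + np.array([-1.0, 1.0]) * np.sqrt((a - c) ** 2 + 4 * b ** 2))

exact = np.linalg.eigvalsh(np.array([[a, b], [b, c]]))
print(eps1, exact)  # the two agree
```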
Monday - 2/23

30.3 More on Degenerate State Perturbations

Consider some Hamiltonian $H = H_0 + \lambda V$ where $H_0$ has a 2-fold degenerate eigenvalue $\varepsilon^{(0)}$. In this case, the matrix representation of $H_0$ is:

$$ H_0 = \begin{pmatrix} \varepsilon^{(0)} & 0 & & \\ 0 & \varepsilon^{(0)} & & \\ & & \varepsilon_1^{(0)} & \\ & & & \ddots \end{pmatrix} $$

And in this case the perturbation would be written as:

$$ V = \begin{pmatrix} V_{1,1} & V_{1,2} & \cdots \\ V_{2,1} & V_{2,2} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} = \begin{pmatrix} a & b & \cdots \\ b & c & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} $$

The goal is to find the leading order solution of $(H_0 + \lambda V - \varepsilon(\lambda))\psi = 0$ where we require $\varepsilon \to \varepsilon^{(0)}$ in the limit of $\lambda = 0$. In this case, it is sufficient to expand the eigenvalue as $\varepsilon = \varepsilon^{(0)} + \lambda\mu + \ldots$. Plugging this expansion into the original equation gives the new matrix:

$$ (H_0 - \varepsilon) = \begin{pmatrix} \varepsilon^{(0)} - \varepsilon^{(0)} - \lambda\mu & 0 & \\ 0 & \varepsilon^{(0)} - \varepsilon^{(0)} - \lambda\mu & \\ & & \ddots \end{pmatrix} = \begin{pmatrix} -\lambda\mu & 0 & \\ 0 & -\lambda\mu & \\ & & \ddots \end{pmatrix} $$

To leading order the problem is now:

$$ (H_0 + \lambda V - \varepsilon) = \begin{pmatrix} \lambda\begin{pmatrix} -\mu + a & b \\ b & -\mu + c \end{pmatrix} & \lambda V_0 \\ \lambda V_0^{\dagger} & (H_0 + \lambda V - \varepsilon)_{\perp} \end{pmatrix} $$

where $V_0$ is the matrix connecting the zeroth order degenerate states to the rest of the states and $(H_0 + \lambda V - \varepsilon)_{\perp}$ is the component orthogonal to the degenerate states. In this case we can write the wave function as:

$$ \psi = \begin{pmatrix} \psi_1 \\ \psi_2 \\ \psi_{\perp} \end{pmatrix} $$

which combines with the above:

$$ \lambda\begin{pmatrix} -\mu + a & b \\ b & -\mu + c \end{pmatrix}\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} + \lambda V_0\,\psi_{\perp} = 0;\qquad \lambda V_0^{\dagger}\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} + (H_0 + \lambda V - \varepsilon)_{\perp}\,\psi_{\perp} = 0 $$

From this result, it is clear that $\psi_{\perp}$ is of order $\lambda$ since the orthogonal component term is invertible and therefore,

$$ \psi_{\perp} = -\lambda\left[(H_0 + \lambda V - \varepsilon)_{\perp}\right]^{-1}V_0^{\dagger}\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = O(\lambda) $$

Given this result, only the 2×2 matrix is relevant if we are only looking for the first order perturbation correction. So, the solution is as we saw last class:

$$ \begin{pmatrix} -\mu + a & b \\ b & -\mu + c \end{pmatrix}\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = O(\lambda) $$

Which has non-trivial solutions if the determinant is zero; thus we find two possible solutions:

$$ \mu_{\pm} = \frac{1}{2}\left[a + c \pm\sqrt{(a-c)^2 + 4b^2}\right] $$
30.3.1 Simple Cases

Consider two simple cases of the above:

b = 0: In this case, the values are found to be

$$ \mu = \frac{1}{2}\left[a + c \pm (a - c)\right] \;\Rightarrow\; \mu_+ = a,\quad \mu_- = c $$

Plugging in, we find the possible solutions are:

$$ \begin{pmatrix} \psi_1^{(+)} \\ \psi_2^{(+)} \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix};\qquad \begin{pmatrix} \psi_1^{(-)} \\ \psi_2^{(-)} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$

$b \neq 0,\ a = c$: In this case, $\mu$ takes on values of:

$$ \mu = \frac{1}{2}\left[2a \pm 2b\right] \;\Rightarrow\; \mu_+ = a + b,\quad \mu_- = a - b $$

and again, plugging in gives the result:

$$ \begin{pmatrix} \psi_1^{(+)} \\ \psi_2^{(+)} \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix};\qquad \begin{pmatrix} \psi_1^{(-)} \\ \psi_2^{(-)} \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} $$

30.4 Time Evolution of Degenerate States

Consider the time evolution given by:

$$ i\frac{\partial\psi}{\partial t} = H\psi $$

and let $\psi|_{t=0} = \varphi$, where we can define $\varphi$ in terms of the eigenvectors $\psi^{(\pm)}$ as:

$$ \varphi = c_+\begin{pmatrix} \psi_1^{(+)} \\ \psi_2^{(+)} \end{pmatrix} + c_-\begin{pmatrix} \psi_1^{(-)} \\ \psi_2^{(-)} \end{pmatrix} $$

Then, the evolution can be written as:

$$ \psi(t) = e^{-iHt}\varphi = e^{-i\varepsilon^{(0)}t}\left[c_+\, e^{-i\lambda\mu_+ t}\begin{pmatrix} \psi_1^{(+)} \\ \psi_2^{(+)} \end{pmatrix} + c_-\, e^{-i\lambda\mu_- t}\begin{pmatrix} \psi_1^{(-)} \\ \psi_2^{(-)} \end{pmatrix}\right] $$

30.4.1 Simple Cases

Recall that we worked through two simple cases in the previous section. Let's consider a system initially in the state:

$$ \varphi = \begin{pmatrix} 1 \\ 0 \end{pmatrix} $$

and find the time evolution of the system for these special cases.

b = 0: In this case the initial state can be written as:

$$ \varphi = c_+\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_-\begin{pmatrix} 0 \\ 1 \end{pmatrix} $$

And clearly $c_+ = 1$ while $c_- = 0$. This then gives the time evolution as:

$$ \psi(t) = e^{-i\left(\varepsilon^{(0)} + \lambda a\right)t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} $$

So there is no actual evolution; the state only acquires an additional phase.

$b \neq 0,\ a = c$:

$$ \varphi = c_+\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_-\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} $$

And again it is simple to find $c_+ = c_- = \frac{1}{\sqrt{2}}$. This gives the time evolution as:

$$ \psi(t) = e^{-i\varepsilon^{(0)}t}\left[\frac{1}{2}\,e^{-i\lambda(a+b)t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + \frac{1}{2}\,e^{-i\lambda(a-b)t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}\right] $$

In this case, the state evolves and oscillates between the two basis states in a manner dependent on the values of a and b.
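This oscillation is easy to visualize by exponentiating the 2×2 block directly; a minimal sketch (sample numbers of my own, with λ absorbed into a and b and the overall ε⁽⁰⁾ phase set to zero):

```python
import numpy as np
from scipy.linalg import expm

a, b, t = 0.4, 0.25, 3.0
H = np.array([[a, b], [b, a]], dtype=complex)  # the b != 0, a = c case

# Evolve the initial state (1, 0) and look at the occupation probabilities.
psi = expm(-1j * H * t) @ np.array([1.0, 0.0])
print(np.abs(psi) ** 2)  # [cos^2(b t), sin^2(b t)]
```

The population oscillates between the two basis states at a rate set by the off-diagonal element b, exactly as the closed form above predicts.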

Wednesday - 2/25

31 Time Dependent Perturbations

Consider the time dependent problem given by:

$$ i\frac{\partial}{\partial t}\psi(t,\vec{x}) = (H_0 + \lambda V)\,\psi(t,\vec{x}) $$

for the case that the eigenfunctions and eigenvalues of $H_0$ are well known and understood. The solution of the perturbation problem can then be built out of the zeroth order solutions:

$$ \psi(t,\vec{x}) = \sum_n c_n(t)\,\psi_n^{(0)}(\vec{x}) $$

Substituting this in and collecting the zeroth order terms gives the equation:

$$ i\sum_n\left(\frac{\partial}{\partial t} + i\varepsilon_n^{(0)}\right)c_n(t)\,\psi_n^{(0)} = \lambda\sum_{n'} c_{n'}(t)\,V\psi_{n'}^{(0)} $$

At this point it is useful to rewrite the operator as:

$$ i\left(\frac{\partial}{\partial t} + i\varepsilon_n^{(0)}\right) = e^{-i\varepsilon_n^{(0)}t}\, i\frac{\partial}{\partial t}\, e^{i\varepsilon_n^{(0)}t} $$

Substituting in, taking the inner product with $\psi_n^{(0)}$ to eliminate the sum on the left, and multiplying to eliminate the extra exponential gives the above as:

$$ i\,\frac{\partial}{\partial t}\left[e^{i\varepsilon_n^{(0)}t}\,c_n(t)\right] = \lambda\, e^{i\varepsilon_n^{(0)}t}\sum_{n'} c_{n'}(t)\,V_{nn'} $$

Where we've defined $V_{nn'} = \langle\psi_n^{(0)}|V\psi_{n'}^{(0)}\rangle$. This equation for $c_n(t)$ can be solved more easily if we consider the function $a_n(t) \equiv e^{i\varepsilon_n^{(0)}t}\,c_n(t)$. Note that this function gives the time evolution as:

$$ \sum_n a_n(t)\,\psi_n^{(0)} = e^{iH_0 t}\sum_n c_n(t)\,\psi_n^{(0)} $$

The differential equation for $c_n(t)$ becomes an equation for $a_n(t)$:

$$ i\frac{\partial}{\partial t}a_n(t) = \lambda\sum_{n'} e^{i\left(\varepsilon_n^{(0)} - \varepsilon_{n'}^{(0)}\right)t}\,a_{n'}(t)\,V_{nn'} $$

Consider the approximate solution for $c_n$:

$$ c_n(t) = \delta_{jn}\,e^{-i\varepsilon_j^{(0)}t} + d_n(t) + O\!\left(V^2\right) $$

where $d_n(t)$ is of order V. To leading order (that is, when V = 0), the system is in a stationary state described as:

$$ \psi = e^{-i\varepsilon_j^{(0)}t}\,\psi_j^{(0)} $$

So, to find the first order correction, consider the expansion of $a_n(t)$:

$$ a_n(t) = \delta_{j,n} + \eta_n(t) + O\!\left(V^2\right) $$

where $\eta_n(t)$ is of order V and related to the above coefficient $d_n(t)$ by an exponential term. Substituting this in gives an equation for the first order correction:
$$ i\frac{\partial}{\partial t}\left[\delta_{j,n} + \eta_n(t) + \ldots\right] = \lambda\sum_{n'} e^{i\left(\varepsilon_n^{(0)} - \varepsilon_{n'}^{(0)}\right)t}\left[\delta_{j,n'} + \eta_{n'}(t) + \ldots\right]V_{nn'} $$

$$ i\frac{\partial}{\partial t}\eta_n(t) + \ldots = \lambda\, e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t}\,V_{nj} + \lambda\sum_{n'} e^{i\left(\varepsilon_n^{(0)} - \varepsilon_{n'}^{(0)}\right)t}\,\eta_{n'}(t)\,V_{nn'} + \ldots $$

The higher order terms are negligible; additionally $\eta_{n'}(t)\,V_{nn'} \sim V^2$, so it can be neglected as well. Therefore the first order correction is given by:

$$ i\frac{\partial}{\partial t}\eta_n(t) = \lambda\, e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t}\,V_{nj} $$

Consider what each term of this solution represents. The correction $\eta_n(t)$ is the time dependent correction to the unperturbed wavefunction's evolution. $\psi_j^{(0)}$ is the stationary unperturbed state that the wavefunction is initially known to be in, while $\psi_n^{(0)}$ is some other state that the wavefunction is leaking into. The term $V_{nj} = \langle\psi_n^{(0)}|V\psi_j^{(0)}\rangle$ represents the amount of the wavefunction which is leaked (which is dependent on the symmetries and characteristics of the perturbing potential V). The above result can be solved by integrating if $V_{nj}$ is known:

$$ \eta_n(t) = -i\lambda\int^t e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t'}\,V_{nj}\,dt' $$

31.0.2 The Sudden Approximation

Consider some potential which is switched on and held constant for some amount of time T. The integral for $\eta_n(t)$ is then given by:

$$ \eta_n(t) = -\lambda\,\frac{V_{nj}}{\varepsilon_n^{(0)} - \varepsilon_j^{(0)}}\left(e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t} - 1\right),\qquad 0 < t < T $$

$$ \eta_n(t) = -\lambda\,\frac{V_{nj}}{\varepsilon_n^{(0)} - \varepsilon_j^{(0)}}\left(e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)T} - 1\right),\qquad t \ge T $$

While the perturbation is left on, the evolution coefficients change as energy is added to or removed from the system. After the perturbation is turned off, the system no longer evolves. This is seen since for any value of t > T the coefficient $\eta_n(t)$ is independent of t. One example of this kind of process is $\beta$ decay. If the positive charge of the nucleus increases by $\beta$ decay, then there will be a perturbing potential change switched on (almost) instantaneously. In this case the perturbation is never turned off, so the first of the above solutions is the appropriate correction.
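The first-order amplitude above can be checked against exact evolution of a small two-level system; a minimal sketch (sample numbers of my own; the perturbation is switched on at t = 0 and held):

```python
import numpy as np
from scipy.linalg import expm

H0 = np.diag([0.0, 1.5])
V = np.array([[0.2, 0.4], [0.4, -0.1]])
lam, t = 1e-3, 2.0

# Exact evolution from the initial state, then strip the free e^{-i eps t} phases
psi = expm(-1j * (H0 + lam * V) * t) @ np.array([1.0, 0.0])
a_n = np.exp(1j * np.diag(H0) * t) * psi

# First-order sudden-approximation amplitude for the n = 1 state
delta = H0[1, 1] - H0[0, 0]
eta1 = -lam * V[1, 0] / delta * (np.exp(1j * delta * t) - 1.0)
print(abs(a_n[1] - eta1))  # O(lam^2)
```

The residual scales as lam squared, consistent with the neglected second-order terms.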
31.0.3 The Adiabatic Approximation

Consider some perturbation which is turned on slowly from time $t = t_0$ and reaches a constant value at t = 0. If the potential remains constant after that, then the correction can be found by integrating by parts:

$$ \eta_n(t) = -i\lambda\int^t e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t'}\,V_{nj}\,dt' = -\lambda\,\frac{V_{nj}}{\varepsilon_n^{(0)} - \varepsilon_j^{(0)}}\,e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t} + \lambda\int^t \frac{e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t'}}{\varepsilon_n^{(0)} - \varepsilon_j^{(0)}}\,\frac{\partial V_{nj}}{\partial t'}\,dt' $$

The second term is dependent on the rate of change of $V_{nj}$, which we've defined to be slowly varying. Therefore this term can be neglected. This leaves the solution as approximately:

$$ \eta_n(t) = -\lambda\,\frac{V_{nj}}{\varepsilon_n^{(0)} - \varepsilon_j^{(0)}}\,e^{i\left(\varepsilon_n^{(0)} - \varepsilon_j^{(0)}\right)t} + O\!\left(\frac{\partial V_{nj}}{\partial t}\right) $$

Monday - 3/2

31.1 Time Dependent Perturbation Theory

Consider the case of a time varying perturbation $V(t,\vec{x})$. We can employ the same methods used previously to determine corrections to the energy and eigenfunctions. Let the problem consist of:

$$ \left[i\frac{\partial}{\partial t} - H_0 - \lambda V(t,\vec{x})\right]\psi = 0 $$

Where the operator $H_0$ is well known and understood (that is, its eigenvalues $\varepsilon_i^{(0)}$ and eigenfunctions $\psi_i^{(0)}$ are known). As before, let's consider building a corrected solution out of the zeroth order solutions:

$$ \psi(t,\vec{x}) = \sum_j d_j(t)\,e^{-i\varepsilon_j^{(0)}t}\,\psi_j^{(0)} $$

Substituting this into the Schrödinger equation and taking the inner product to get an equation in terms of $d_k$ gives:

$$ i\frac{\partial}{\partial t}d_k(t) = \lambda\sum_j \langle\psi_k^{(0)}|V\psi_j^{(0)}\rangle\, d_j(t)\, e^{i\left(\varepsilon_k^{(0)} - \varepsilon_j^{(0)}\right)t} $$

This equation can be solved order by order to determine the terms of the approximation. Consider each separately.

Zeroth Order: In the case of V = 0 the equation above generates:

$$ \frac{\partial}{\partial t}d_k(t) = 0 \;\Rightarrow\; d_k(t) = \text{constant} $$

And we want the expansion to recover the unperturbed solution; therefore the zeroth order term needs to pick out only the initial state:

$$ d_k(t) = \delta_{ik} + O(V) $$

First Order: To get first order corrections, we let $d_k(t) = \delta_{ik} + \eta_k(t) + O\!\left(V^2\right)$ where we've assumed that $\eta_k$ is of order V. Substituting this in gives the equation:

$$ i\frac{\partial}{\partial t}d_k(t) = \lambda\,\langle\psi_k^{(0)}|V\psi_i^{(0)}\rangle\, e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)t} + O\!\left(V^2\right) $$

This can be solved by the integral equation:

$$ d_k(t) = \delta_{ik} - i\lambda\int_{t_0}^{t}\langle\psi_k^{(0)}|V(t')\psi_i^{(0)}\rangle\, e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)t'}\,dt' + O\!\left(V^2\right) $$

Where $t_0$ is some initial time that is determined by the potential (usually $t_0$ will be either 0 or $-\infty$). This first order solution is the commonly used expression for approximating the behavior of a time dependent perturbation.
Second Order: Repeating the process can generate higher order corrections, though the multiple integrals involved quickly make the process difficult. Consider the second order correction:

$$ i\frac{\partial}{\partial t}d_k(t) = \lambda\,V_{ki}(t)\,e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)t} - i\lambda^2\sum_j V_{kj}(t)\,e^{i\left(\varepsilon_k^{(0)} - \varepsilon_j^{(0)}\right)t}\int_{t_0}^{t} V_{ji}(t')\,e^{i\left(\varepsilon_j^{(0)} - \varepsilon_i^{(0)}\right)t'}\,dt' + O\!\left(V^3\right) $$

Where we've used the notation $V_{ki}(t) = \langle\psi_k^{(0)}|V(t)\psi_i^{(0)}\rangle$.


31.1.1 Notation and Interpretations

In the physical sense, the squared magnitudes of the coefficients are the probabilities that a transition has occurred. This is simple to see at t = 0 since the coefficients collapse to delta functions at the initial states. As time evolves, the probability of finding the system in some other state changes depending on various parameters. Put simply,

$$ |d_k(t)|^2 = P_{i\to k}(t) $$

31.1.2 Example - Infinite Square Well With Time Dependent Perturbation

Consider the problem we've seen multiple times of a particle in a 1-dimensional infinite square well from x = −1 to x = 1. We've seen that for $H_0 = -\frac{d^2}{dx^2}$ the resulting eigenfunctions and eigenvalues are:

$$ \psi_j^{(0)}(x) = \sin\!\left(\frac{j\pi}{2}(x+1)\right);\qquad \varepsilon_j^{(0)} = \left(\frac{j\pi}{2}\right)^2 $$

If there is a time dependent perturbation $V(x,t) = \lambda\,\delta(x)\,e^{-t^2/\tau^2}$, then we can find the resulting state in terms of the unperturbed states. We can use the above result for the first order correction to find (assuming $k \neq i$):

$$ d_k(t) = -i\lambda\int_{-\infty}^{t}\left[\int_{-1}^{1}\sin\!\left(\frac{k\pi}{2}(x'+1)\right)\delta(x')\sin\!\left(\frac{i\pi}{2}(x'+1)\right)dx'\right]e^{-t'^2/\tau^2}\, e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)t'}\,dt' + O\!\left(V^2\right) $$

$$ = -i\lambda\,\sin\!\left(\frac{k\pi}{2}\right)\sin\!\left(\frac{i\pi}{2}\right)\int_{-\infty}^{t} e^{-\frac{t'^2}{\tau^2} + i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)t'}\,dt' + O\!\left(V^2\right) $$
This can be solved, in the limit $t \to \infty$, by completing the square in the exponential:

$$ -\frac{t'^2}{\tau^2} + i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)t' = -\left[\frac{t'}{\tau} - \frac{i\tau}{2}\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)\right]^2 - \frac{\tau^2}{4}\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)^2 $$

Then,

$$ d_k = -i\lambda\sqrt{\pi}\,\tau\,\sin\!\left(\frac{k\pi}{2}\right)\sin\!\left(\frac{i\pi}{2}\right)\, e^{-\frac{\tau^2}{4}\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)}\right)^2} $$

Note the behavior of this result. First, if either of the eigenfunctions $\psi_k^{(0)}$ or $\psi_i^{(0)}$ has a zero at x = 0 (that is, if k or i is even), then the perturbation matrix element is zero and there is no transition. Further, depending on the value of $\varepsilon_k^{(0)} - \varepsilon_i^{(0)}$, there is some preferred $\tau$ for the states to transition (assuming the transition is possible).
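The Gaussian time integral used in this example is easy to verify numerically (sample values of τ and the energy difference, chosen arbitrarily):

```python
import numpy as np

tau, delta = 1.3, 2.1  # pulse width and (eps_k - eps_i), arbitrary samples
dt = 1e-3
t = np.arange(-40.0, 40.0, dt) + dt / 2  # midpoint grid

# Numerical integral of exp(-t^2/tau^2 + i*delta*t) over all time
integral = np.sum(np.exp(-t**2 / tau**2 + 1j * delta * t)) * dt

closed_form = np.sqrt(np.pi) * tau * np.exp(-tau**2 * delta**2 / 4)
print(integral, closed_form)  # the two agree
```

The Gaussian suppression in delta is what makes large energy jumps improbable for slow (large τ) pulses.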

31.2 Periodic Perturbation Theory

Consider some potential $V(t) = \lambda\, v(\vec{x})\,e^{-i\omega t}$. In this case, the first order correction can be separated as:

$$ d_k(t) = \delta_{ik} - i\lambda\int_{t_0}^{t}\langle\psi_k^{(0)}|v(\vec{x})\psi_i^{(0)}\rangle\, e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)} - \omega\right)t'}\,dt' + O\!\left(V^2\right) $$
The integral in time is now evaluated as:

$$ d_k(t) = -i\lambda\,\langle\psi_k^{(0)}|v(\vec{x})\psi_i^{(0)}\rangle\left[\frac{e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)} - \omega\right)t'}}{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)} - \omega\right)}\right]_{t_0}^{t} $$

And for finite times, this can be simplified to:

$$ d_k(t) = -\lambda\,\frac{\langle\psi_k^{(0)}|v(\vec{x})\psi_i^{(0)}\rangle}{\varepsilon_k^{(0)} - \varepsilon_i^{(0)} - \omega}\, e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)} - \omega\right)t_0}\left(e^{i\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)} - \omega\right)(t-t_0)} - 1\right) $$

Consider the behavior of this result for $t - t_0$ small and $\varepsilon_k^{(0)} - \varepsilon_i^{(0)} \approx \omega$; then the above result can be approximated as:

$$ d_k(t) \approx -i\lambda\,\langle\psi_k^{(0)}|v(\vec{x})\psi_i^{(0)}\rangle\left((t - t_0) + \ldots\right) $$
Alternately, if $t_0 \to -\infty$ and $t \to \infty$, then the integral takes values over all time. The behavior of $d_k(t)$ then becomes that of a delta function,

$$ d_k = -2\pi i\lambda\,\langle\psi_k^{(0)}|v(\vec{x})\psi_i^{(0)}\rangle\,\delta\!\left(\varepsilon_k^{(0)} - \varepsilon_i^{(0)} - \omega\right) $$

This means that if the perturbation is left on indefinitely, then after long times only transitions with energy differences equal to the frequency $\omega$ are probable. That is, nothing happens if $\omega \neq \varepsilon_k^{(0)} - \varepsilon_i^{(0)}$.
31.2.1

Transition Probability for Periodic Perturbations

For a given transition, we've seen that the probability of occurrence is $P_{k\leftarrow i}(t) = |d_k(t)|^2$. To
simplify notation, we'll use the definition $\omega_k^{(0)}-\omega_i^{(0)}-\omega = \bar\omega_{ki}$. Then we can write the probability
as:

$$ |d_k(t)|^2 = \frac{|\langle\psi_k^{(0)}|v(\hat{x})|\psi_i^{(0)}\rangle|^2}{\bar\omega_{ki}^2}\left(e^{i\bar\omega_{ki}(t-t_0)}-1\right)\left(e^{-i\bar\omega_{ki}(t-t_0)}-1\right) $$

$$ = \frac{|\langle\psi_k^{(0)}|v(\hat{x})|\psi_i^{(0)}\rangle|^2}{\bar\omega_{ki}^2}\left(1 - e^{i\bar\omega_{ki}(t-t_0)} - e^{-i\bar\omega_{ki}(t-t_0)} + 1\right) = 2\,\frac{|\langle\psi_k^{(0)}|v(\hat{x})|\psi_i^{(0)}\rangle|^2}{\bar\omega_{ki}^2}\left[1-\cos\left(\bar\omega_{ki}(t-t_0)\right)\right] $$

$$ |d_k(t)|^2 = 4\,\frac{|\langle\psi_k^{(0)}|v(\hat{x})|\psi_i^{(0)}\rangle|^2}{\bar\omega_{ki}^2}\,\sin^2\left(\frac{1}{2}\bar\omega_{ki}\left(t-t_0\right)\right) $$

Wednesday - 3/4
Consider the behavior of the probability as the perturbation covers all time. In this case, it is useful
to determine the behavior of the sine function for large arguments. Consider the definition:

$$ \lim_{t\to\infty}\frac{\sin^2(\alpha t)}{\pi t\,\alpha^2} = \delta(\alpha) $$

This can be shown by noting that for α ≠ 0 the above limit is zero and for α = 0 the value
goes off to infinity. Stated more exactly,

$$ \lim_{t\to\infty}\frac{\sin^2(\alpha t)}{\pi t\,\alpha^2} = \begin{cases} 0 & \alpha\neq 0 \\ \infty & \alpha = 0 \end{cases} $$

Further, the integral over all values of α can be determined to be:

$$ \int_{-\infty}^{\infty}\frac{\sin^2(\alpha t)}{\pi t\,\alpha^2}\,d\alpha = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sin^2 x}{x^2}\,dx = 1 $$

These qualities are sufficient to show that this limit is indeed a delta function. From this, we can
determine that in the limit of t - t₀ → ∞, the sine function in the probability becomes:

$$ \sin^2\left(\frac{1}{2}\bar\omega_{ki}\left(t-t_0\right)\right) \;\longrightarrow\; \pi\left(t-t_0\right)\frac{\bar\omega_{ki}^2}{4}\,\delta\!\left(\frac{\bar\omega_{ki}}{2}\right) = \frac{\pi}{2}\left(t-t_0\right)\bar\omega_{ki}^2\,\delta\!\left(\bar\omega_{ki}\right) $$

Then, the probability asymptotically approaches behavior described by:

$$ |d_k(t)|^2 = 4\,\frac{|\langle\psi_k^{(0)}|v(\hat{x})|\psi_i^{(0)}\rangle|^2}{\bar\omega_{ki}^2}\,\sin^2\left(\frac{1}{2}\bar\omega_{ki}\left(t-t_0\right)\right) \;\longrightarrow\; 2\pi\,|\langle\psi_k^{(0)}|v(\hat{x})|\psi_i^{(0)}\rangle|^2\,\left(t-t_0\right)\,\delta\!\left(\omega_k^{(0)}-\omega_i^{(0)}-\omega\right) $$

And then, the rate of change of the probability is written as:

$$ \frac{dP_{i\to k}}{dt} = 2\pi\,|\langle\psi_k^{(0)}|v(\hat{x})|\psi_i^{(0)}\rangle|^2\,\delta\!\left(\omega_k^{(0)}-\omega_i^{(0)}-\omega\right) $$

This result is known as Fermi's Golden Rule and describes the change in probability of a transition
over time. Note that if the frequency of the perturbation does not exactly match the energy
difference, the transition will never occur. Note also, however, that this assumes the perturbation
is left on for an extended amount of time. For a perturbation applied on a short time scale, these
approximations are not valid.
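The delta-function limit used above can be checked numerically (a minimal sketch; the grid sizes are arbitrary choices): the kernel sin²(αt)/(πtα²) integrates to 1 for any t and concentrates its weight near α = 0 as t grows.

```python
import numpy as np

def trapezoid(y, x):
    return ((y[:-1] + y[1:]) * np.diff(x)).sum() / 2.0

def kernel(alpha, t):
    """sin^2(alpha*t) / (pi * t * alpha^2): a nascent delta function in alpha."""
    a = np.where(alpha == 0.0, 1e-9, alpha)   # the alpha -> 0 limit is t/pi
    return np.sin(a * t) ** 2 / (np.pi * t * a ** 2)

alpha = np.linspace(-40.0, 40.0, 800_001)
area_t10 = trapezoid(kernel(alpha, 10.0), alpha)     # total weight stays ~ 1
area_t100 = trapezoid(kernel(alpha, 100.0), alpha)
inner = np.abs(alpha) < 0.1                          # weight concentrates near 0
inner_t100 = trapezoid(kernel(alpha[inner], 100.0), alpha[inner])
```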


Part VI

THE QUASI-CLASSICAL CASE


32

The WKB Approximation

Consider the case that the energies and geometry of a problem allow the assumption that ħ is
relatively small. In this case, an expansion in orders of ħ generates what is known as the quasi-classical
solution to the Schrödinger equation,

$$ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) - \epsilon\right]\psi = 0 $$

Consider writing the wave function as $\psi = e^{\frac{i}{\hbar}\sigma(x)}$. Substituting this into the Schrödinger equation, we
obtain a non-linear equation for σ(x),

$$ \frac{1}{2m}\left[\left(\frac{d\sigma}{dx}\right)^2 - i\hbar\,\frac{d^2\sigma}{dx^2}\right] + V(x) - \epsilon = 0 $$

$$ \left(\frac{d\sigma}{dx}\right)^2 - i\hbar\,\frac{d^2\sigma}{dx^2} + 2m\left(V(x)-\epsilon\right) = 0 $$

At this point, we can exploit the assumption that the scale of our problem causes ħ to be small
relative to the energy scale. We can expand σ as:

$$ \sigma(x) = \sigma^{(0)}(x) + \hbar\,\sigma^{(1)}(x) + \hbar^2\sigma^{(2)}(x) + \ldots $$
Plugging this in gives:

$$ \left(\frac{d\sigma^{(0)}}{dx}\right)^2 + 2\hbar\,\frac{d\sigma^{(0)}}{dx}\frac{d\sigma^{(1)}}{dx} - i\hbar\,\frac{d^2\sigma^{(0)}}{dx^2} + 2m\left(V(x)-\epsilon\right) + O\!\left(\hbar^2\right) = 0 $$

We can then assume that individual powers of ħ must individually go to zero; this gives the zeroth
and first order terms of σ(x):

Zero-th Order:
Consider the terms with powers of ħ⁰:

$$ \left(\frac{d\sigma^{(0)}}{dx}\right)^2 + 2m\left(V(x)-\epsilon\right) = 0 \quad\Rightarrow\quad \frac{d\sigma^{(0)}}{dx} = \pm\sqrt{2m\left(\epsilon-V(x)\right)} $$

This can be solved easily by simply integrating over x. Note that the difference between
the total and potential energy is the kinetic energy, and the factor of 2m combined with that
gives the magnitude of the classical momentum. Therefore,

$$ \sigma^{(0)}(x) = \pm\int\sqrt{2m\left(\epsilon-V(x)\right)}\,dx = \pm\int|\bar p(x)|\,dx $$

First Order:
The terms of order ħ¹ require:

$$ 2\frac{d\sigma^{(0)}}{dx}\frac{d\sigma^{(1)}}{dx} - i\,\frac{d^2\sigma^{(0)}}{dx^2} = 0 \quad\Rightarrow\quad \frac{d\sigma^{(1)}}{dx} = \frac{i}{2}\,\frac{d^2\sigma^{(0)}/dx^2}{d\sigma^{(0)}/dx} = \frac{i}{2}\frac{d}{dx}\ln\left(\frac{d\sigma^{(0)}}{dx}\right) $$

$$ \sigma^{(1)} = \frac{i}{2}\ln\left(\sqrt{2m\left(\epsilon-V(x)\right)}\right) = \frac{i}{2}\ln\left(|\bar p(x)|\right) $$

Plugging these back into our definition for ψ gives the approximation as:

$$ \psi = e^{\pm\frac{i}{\hbar}\int\sqrt{2m(\epsilon-V(x))}\,dx \;+\; i\frac{i}{2}\ln\sqrt{2m(\epsilon-V(x))} \;+\; O(\hbar)} $$

And simplifying the exponential,

$$ \psi_\pm(x) = \left[2m\left(\epsilon-V(x)\right)\right]^{-\frac14}\, e^{\pm\frac{i}{\hbar}\int\sqrt{2m\left(\epsilon-V(x)\right)}\,dx} $$

And in general, the solution will take the form of a linear combination of the two above approximations. That is, $\psi(x) \approx c_1\psi_+(x) + c_2\psi_-(x)$.
Consider the behavior of these solutions for possible relations of ε and V(x). In the case of ε > V(x),
the square root term is real and the solutions oscillate over the space. In the case that ε < V(x),
the square root term is pure imaginary and the solutions grow and decay exponentially in either
direction. Depending on the region of interest, one of the solutions may be dropped since we
require ψ(x) to be integrable as x → ±∞. Lastly, the points where ε ≈ V(x) are problematic for
this approximation; therefore (as we'll see in the next section), additional work needs to be done to
determine behavior near these classical turning points.
Monday - 3/9

32.1

Connecting The Quasi-Classical Solutions Around A Turning Point - Airy Function Asymptotics Analysis

The behavior of our solutions for regions of (V(x) - ε) < 0 and (V(x) - ε) > 0 are drastically
different. Consider some potential with a value of V(a) = ε where V(x > a) > ε and V(x < a) < ε.
The solution in either region can be written as:

$$ \psi(x) = \frac{B}{\left[2m\left(V(x)-\epsilon\right)\right]^{\frac14}}\; e^{-\frac{1}{\hbar}\int_a^x\sqrt{2m\left(V(x')-\epsilon\right)}\,dx'} \qquad x \gg a $$

$$ \psi(x) = \frac{A_1}{\left[2m\left(\epsilon-V(x)\right)\right]^{\frac14}}\; e^{\frac{i}{\hbar}\int_x^a\sqrt{2m\left(\epsilon-V(x')\right)}\,dx'} + \frac{A_2}{\left[2m\left(\epsilon-V(x)\right)\right]^{\frac14}}\; e^{-\frac{i}{\hbar}\int_x^a\sqrt{2m\left(\epsilon-V(x')\right)}\,dx'} \qquad x \ll a $$

But as noted above, these solutions are valid only for x far from the turning point a. So what
happens at the turning points? Let's consider the relations among the coefficients B, A₁, and A₂.
Near the turning point, the potential can be approximated by its Taylor series expansion,

$$ V(x) \approx V(a) + V'(a)\left(x-a\right) + \ldots = \epsilon + V'(a)\left(x-a\right) + \ldots $$

Plugging this into the original differential equation,

$$ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) - \epsilon\right]\psi \;\approx\; \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \epsilon + V'(a)\left(x-a\right) + \ldots - \epsilon\right]\psi $$

$$ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V'(a)\left(x-a\right)\right]\psi(x) = 0 $$

This resulting equation can be scaled into the Airy differential equation $\left(\frac{d^2}{dz^2}-z\right)f(z) = 0$. Defining z = α(x - a)
and choosing α such that the coefficients of the two terms are equivalent,

$$ \left[-\frac{\hbar^2\alpha^2}{2m}\frac{d^2}{dz^2} + \frac{V'(a)}{\alpha}\,z\right]\psi\left(\frac{z}{\alpha}+a\right) = 0 $$

$$ \frac{\hbar^2\alpha^2}{2m} = \frac{V'(a)}{\alpha} \quad\Rightarrow\quad \alpha = \left(\frac{2m}{\hbar^2}V'(a)\right)^{\frac13} $$

Therefore the resulting equation to solve is:

$$ \left[\frac{d^2}{dz^2} - z\right]\psi\left(\left(\frac{\hbar^2}{2mV'(a)}\right)^{\frac13} z + a\right) = 0 $$

The solutions of this differential equation can be written as a linear combination of the functions
Ai(z) and Bi(z). Consider though, the large-z behavior of these solutions,

$$ \mathrm{Ai}(x)\Big|_{x\to+\infty} \approx \frac{1}{2\sqrt{\pi}\,x^{\frac14}}\, e^{-\frac23 x^{\frac32}} \qquad\qquad \mathrm{Bi}(x)\Big|_{x\to+\infty} \approx \frac{1}{\sqrt{\pi}\,x^{\frac14}}\, e^{\frac23 x^{\frac32}} $$

As our solution cannot grow exponentially, the Bi(z) term must be neglected. Consider in addition
to the above, the large negative z behavior of Ai(z):

$$ \mathrm{Ai}(x)\Big|_{x\to-\infty} \approx \frac{1}{\sqrt{\pi}\,|x|^{\frac14}}\,\sin\left(\frac23|x|^{\frac32} + \frac{\pi}{4}\right) $$
From this, we can immediately see that our solution ψ(x) = N Ai(α(x - a)) should have the asymptotic behaviors:

$$ \psi(x) \approx \frac{N}{2\sqrt{\pi}}\left[\frac{2mV'(a)}{\hbar^2}\right]^{-\frac{1}{12}}\left(x-a\right)^{-\frac14}\, e^{-\frac{2}{3\hbar}\sqrt{2mV'(a)}\,(x-a)^{\frac32}} \qquad x \gtrsim a $$

$$ \psi(x) \approx \frac{N}{\sqrt{\pi}}\left[\frac{2mV'(a)}{\hbar^2}\right]^{-\frac{1}{12}}\left(a-x\right)^{-\frac14}\,\sin\left(\frac{2}{3\hbar}\sqrt{2mV'(a)}\,(a-x)^{\frac32} + \frac{\pi}{4}\right) \qquad x \lesssim a $$

So, given these asymptotics, what is the connection with the WKB approximation we've been
dealing with? Consider using the same Taylor expansion in the WKB solution. The integral in
the exponent becomes:

$$ \int_a^x\sqrt{2m\left(V(x')-\epsilon\right)}\,dx' \approx \int_a^x\sqrt{2mV'(a)\left(x'-a\right)}\,dx' = \frac{2}{3}\sqrt{2mV'(a)}\,\left(x-a\right)^{\frac32} $$

Therefore, for x just slightly larger than a, the WKB approximation behaves as:

$$ \psi(x) \approx \frac{B}{\left[2mV'(a)\left(x-a\right)\right]^{\frac14}}\, e^{-\frac{2}{3\hbar}\sqrt{2mV'(a)}\,(x-a)^{\frac32}} $$

But this matches the Airy asymptotic nearly exactly; comparing the two fixes the normalization of the
Airy solution in terms of B:

$$ \frac{N}{2\sqrt{\pi}}\left[\frac{2mV'(a)}{\hbar^2}\right]^{-\frac{1}{12}} = \frac{B}{\left[2mV'(a)\right]^{\frac14}} $$

For the asymptotics of x < a, the WKB approximation behaves as:

$$ \psi(x) \approx \frac{A_1}{\left[2mV'(a)\left(a-x\right)\right]^{\frac14}}\, e^{\frac{2i}{3\hbar}\sqrt{2mV'(a)}\,(a-x)^{\frac32}} + \frac{A_2}{\left[2mV'(a)\left(a-x\right)\right]^{\frac14}}\, e^{-\frac{2i}{3\hbar}\sqrt{2mV'(a)}\,(a-x)^{\frac32}} $$

Writing the sine in the Airy asymptotic in terms of complex exponentials and separating the coefficients (accounting for the added phase of π/4 in the Airy function asymptotics), the comparison of the two exponential terms generates the conditions:

$$ A_1 = B\,e^{-i\frac{\pi}{4}}, \qquad A_2 = B\,e^{i\frac{\pi}{4}}, \qquad\text{or equivalently}\qquad A_1\,e^{i\frac{\pi}{4}} + A_2\,e^{-i\frac{\pi}{4}} = 2B $$

This gives the connection formula by the Airy function asymptotics as:

$$ \frac{1}{2}\,\frac{1}{\left(\ldots\right)^{\frac14}}\, e^{-\frac23\left(\ldots\right)^{\frac32}} \quad\longleftrightarrow\quad \frac{1}{\left(\ldots\right)^{\frac14}}\,\sin\left(\frac23\left(\ldots\right)^{\frac32} + \frac{\pi}{4}\right) = \frac{1}{\left(\ldots\right)^{\frac14}}\,\cos\left(\frac23\left(\ldots\right)^{\frac32} - \frac{\pi}{4}\right) $$

(For an example of how these formulas work in a problem, see the final exam.)
Wednesday - 3/11

32.2

Connecting The Quasi-Classical Solutions Around A Turning Point - Complex Analysis

As an alternate method to what was done in the previous section, consider tracing the solutions
through the complex plane. If x is sufficiently close to a, then we've seen that the solution can be
approximated as:
$$ \psi(x) \approx \frac{B}{\left[2mV'(a)\left(x-a\right)\right]^{\frac14}}\, e^{-\frac{2}{3\hbar}\sqrt{2mV'(a)}\,(x-a)^{\frac32}} \qquad x > a $$

$$ \psi(x) \approx \frac{A_1}{\left[2mV'(a)\left(a-x\right)\right]^{\frac14}}\, e^{\frac{2i}{3\hbar}\sqrt{2mV'(a)}\,(a-x)^{\frac32}} + \frac{A_2}{\left[2mV'(a)\left(a-x\right)\right]^{\frac14}}\, e^{-\frac{2i}{3\hbar}\sqrt{2mV'(a)}\,(a-x)^{\frac32}} \qquad x < a $$

Consider tracing the solution for x in the complex plane: instead of going through the singular
point at x = a, we trace a semicircle in the upper half of the complex plane. The path can be written
as:

$$ x - a = \rho\, e^{i\phi}; \qquad \rho = |x-a| $$

To go from the point (a + ρ) to (a - ρ) on the other side, it is necessary to let φ run from 0 to π. Consider
first the behavior of the two x-dependent factors along this path:

$$ \left(x-a\right)^{\frac32} = \rho^{\frac32}\, e^{\frac{3i\phi}{2}}\Big|_{\phi=\pi} = -i\,\rho^{\frac32} \qquad\qquad \left(x-a\right)^{\frac14} = \rho^{\frac14}\, e^{\frac{i\phi}{4}}\Big|_{\phi=\pi} = \rho^{\frac14}\, e^{\frac{i\pi}{4}} $$

From this analysis we find that the exponential switches from real to imaginary (the decaying solution
for x > a continues into the first term for x < a) and the coefficient picks up a phase of e^{-iπ/4}. Repeating for
the second term: if we instead move through the lower half of the complex plane, the value of φ changes from 0 to -π. This gives:

$$ \left(x-a\right)^{\frac32}\Big|_{\phi=-\pi} = +i\,\rho^{\frac32} \qquad\qquad \left(x-a\right)^{\frac14}\Big|_{\phi=-\pi} = \rho^{\frac14}\, e^{-\frac{i\pi}{4}} $$

This gives the final result (which matches the Airy function analysis) that:

$$ A_1 = B\,e^{-i\frac{\pi}{4}} \qquad\qquad A_2 = B\,e^{i\frac{\pi}{4}} $$
Which gives the connection as:

$$ \psi(x) \approx \frac{B}{\left[2m\left(V(x)-\epsilon\right)\right]^{\frac14}}\, e^{-\frac{1}{\hbar}\int_a^x\sqrt{2m\left(V(x')-\epsilon\right)}\,dx'} \qquad x > a $$

$$ \psi(x) \approx \frac{2B}{\left[2m\left(\epsilon-V(x)\right)\right]^{\frac14}}\,\cos\left(\frac{1}{\hbar}\int_x^a\sqrt{2m\left(\epsilon-V(x')\right)}\,dx' - \frac{\pi}{4}\right) \qquad x < a $$

In the case that the turning point is on the reverse side (that is, V(x) > ε for x < a), the limits
of integration must be reversed and the phase chosen appropriately. Denoting the result
as above, this is:

$$ \psi(x) \approx \frac{B}{\left[2m\left(V(x)-\epsilon\right)\right]^{\frac14}}\, e^{-\frac{1}{\hbar}\int_x^a\sqrt{2m\left(V(x')-\epsilon\right)}\,dx'} \qquad \epsilon < V(x) $$

$$ \psi(x) \approx \frac{2B}{\left[2m\left(\epsilon-V(x)\right)\right]^{\frac14}}\,\cos\left(\frac{1}{\hbar}\int_a^x\sqrt{2m\left(\epsilon-V(x')\right)}\,dx' - \frac{\pi}{4}\right) \qquad \epsilon > V(x) $$

This solution confirms the Airy function asymptotics solutions; however (at least in my opinion),
the Airy function results are easier to use in working problems.

32.3

Bohr-Sommerfeld Quantization

Consider some well in a region of a potential V(x) from x = b to x = a (a > b). What can be
said about the bound state energies from the WKB approximation? Since we require the solutions to
match up on either side of the well, and the connection formulas generate a phase of π/4 at each turning point,
it can be seen that the requirement:

$$ \frac{1}{\hbar}\int_b^a\sqrt{2m\left(\epsilon-V(x')\right)}\,dx' + \frac{\pi}{2} = n\pi, \qquad n = 1, 2, 3, \ldots $$

will leave the wavefunction unchanged at the turning points. This gives a method of determining
approximate energy levels inside the well.
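As a sanity check of this condition (a sketch I added, using the harmonic oscillator V(x) = x²/2 with m = ħ = 1, which is my choice of example and not one worked in the notes), the requirement ∫√(2(ε - V)) dx = (n - ½)π reproduces the oscillator levels ε_n = n - ½, which for this potential happen to be exact:

```python
import math

def action(E, samples=4_000):
    """WKB action integral of sqrt(2*(E - V(x))) between the classical turning
    points of V(x) = x**2/2 (m = hbar = 1); analytically this equals pi*E."""
    a = math.sqrt(2.0 * E)                  # turning points at x = -a, +a
    h = 2.0 * a / samples
    total = 0.0
    for i in range(samples):                # midpoint rule tames the endpoint
        x = -a + (i + 0.5) * h              # square-root singularities
        total += math.sqrt(max(2.0 * E - x * x, 0.0)) * h
    return total

def wkb_level(n):
    """Solve action(E) = (n - 1/2)*pi for E by bisection."""
    target = (n - 0.5) * math.pi
    lo, hi = 1e-9, 10.0 * n
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [wkb_level(n) for n in (1, 2, 3)]
```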

33

Penetration Through a Potential Barrier - Quantum Tunneling

Monday - 3/23

33.1

Example - Rigid Wall With A Finite Barrier

Consider a system with an infinite potential at x ≤ 0 and a barrier of finite width and height
centered at some location x = R. In the limit that the height of the barrier is very large, the
behavior of the system becomes that of an infinite square well with an infinite series of bound
states. In the limit of the width growing very large, the system becomes a finite well with several
bound states and a continuum of free energies.
Consider another limit in which the barrier is very tall but the width approaches a very small value.
In this system, the potential can be approximated by a delta function. The system is then described
by:

$$ \left[-\frac{d^2}{dx^2} + \lambda\,\delta\left(x-R\right) - E\right]\psi(x) = 0 \qquad\text{with}\qquad \psi(0) = 0 $$

The solutions away from the delta function are easily determined:

$$ \psi(x) = c_1\sin\left(\sqrt{E}\,x\right) \qquad 0 < x < R $$

$$ \psi(x) = a\,e^{i\sqrt{E}\,x} + b\,e^{-i\sqrt{E}\,x} \qquad x > R $$

Here we've denoted the amplitudes of the outgoing and incoming wave components beyond the potential barrier as a and
b respectively. Fixing the coefficients is done by satisfying boundary conditions. The wavefunction
must be continuous over all space, and the first derivative must be continuous except at locations
which have an infinite potential (that is, x = 0 and x = R). Integrating Schrödinger's equation over
the delta function contribution generates this second condition:
$$ \int_{R-\epsilon}^{R+\epsilon}\left[-\frac{d^2}{dx^2} + \lambda\,\delta\left(x-R\right) - E\right]\psi(x)\,dx = 0 $$

$$ -\frac{d\psi}{dx}\Big|_{R-\epsilon}^{R+\epsilon} + \lambda\,\psi(R) - E\int_{R-\epsilon}^{R+\epsilon}\psi(x)\,dx = 0 $$

Evaluating this result in the limit of ε → 0⁺ gives the relation for the discontinuity of the derivative.
The two conditions are then:

$$ \psi_<(R) = \psi_>(R) \qquad\qquad \psi'\!\left(R+0^+\right) - \psi'\!\left(R-0^+\right) = \lambda\,\psi(R) $$

Plugging in the above definitions for the solutions on either side of the barrier,

$$ c_1\sin\left(\sqrt{E}R\right) = a\,e^{i\sqrt{E}R} + b\,e^{-i\sqrt{E}R} $$

$$ i\sqrt{E}\left(a\,e^{i\sqrt{E}R} - b\,e^{-i\sqrt{E}R}\right) - c_1\sqrt{E}\cos\left(\sqrt{E}R\right) = \lambda\,c_1\sin\left(\sqrt{E}R\right) $$

These equations can be written in matrix form as:

$$ \begin{pmatrix} e^{i\sqrt{E}R} & e^{-i\sqrt{E}R} \\ i\sqrt{E}\,e^{i\sqrt{E}R} & -i\sqrt{E}\,e^{-i\sqrt{E}R} \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = c_1\begin{pmatrix} \sin\left(\sqrt{E}R\right) \\ \sqrt{E}\cos\left(\sqrt{E}R\right) + \lambda\sin\left(\sqrt{E}R\right) \end{pmatrix} $$

The 2 × 2 matrix is easily inverted to give the solution as:

$$ \begin{pmatrix} a \\ b \end{pmatrix} = \frac{c_1}{2i\sqrt{E}}\begin{pmatrix} i\sqrt{E}\,e^{-i\sqrt{E}R} & e^{-i\sqrt{E}R} \\ i\sqrt{E}\,e^{i\sqrt{E}R} & -e^{i\sqrt{E}R} \end{pmatrix}\begin{pmatrix} \sin\left(\sqrt{E}R\right) \\ \sqrt{E}\cos\left(\sqrt{E}R\right) + \lambda\sin\left(\sqrt{E}R\right) \end{pmatrix} $$

With this solution, it is possible to numerically sweep through different energies and observe the
behavior of the coefficients a and b. It is expected that at certain energies the values of the outgoing and incoming components of the wavefunction will be very small compared to the magnitude
between the rigid wall and the barrier. These energies correspond to semi-bound states or resonances of the potential. They exist as remnants of the behavior of the other limits of the square wells.
Consider if instead of the above definition, one defined the solution in each region as:

$$ \psi(x) = c_1\sin\left(kx\right) \qquad 0 < x < R $$

$$ \psi(x) = \beta\sin\left(kx + \delta\right) \qquad x > R $$

where we've denoted √E = k. With these definitions, the two conditions become:

$$ c_1\sin\left(kR\right) = \beta\sin\left(kR+\delta\right) $$

$$ c_1 k\cos\left(kR\right) + \lambda\,c_1\sin\left(kR\right) = \beta\,k\cos\left(kR+\delta\right) $$

Dividing these two sets of equations gives the phase of the solution outside the barrier:

$$ \tan\left(kR+\delta\right) = \frac{\sin\left(kR\right)}{\cos\left(kR\right)+\frac{\lambda}{k}\sin\left(kR\right)} \quad\Rightarrow\quad \delta = \tan^{-1}\left(\frac{\sin\left(kR\right)}{\cos\left(kR\right)+\frac{\lambda}{k}\sin\left(kR\right)}\right) - kR $$

Squaring both equations, dividing the second by k², and adding gives the condition on the coefficients c₁
and β:

$$ c_1^2\sin^2\left(kR\right) + \frac{c_1^2}{k^2}\left(k\cos\left(kR\right)+\lambda\sin\left(kR\right)\right)^2 = \beta^2\sin^2\left(kR+\delta\right) + \beta^2\cos^2\left(kR+\delta\right) $$

$$ c_1^2\sin^2\left(kR\right) + c_1^2\cos^2\left(kR\right) + \frac{2\lambda}{k}c_1^2\sin\left(kR\right)\cos\left(kR\right) + \frac{\lambda^2}{k^2}c_1^2\sin^2\left(kR\right) = \beta^2 $$

Solving for the amplitude outside the barrier,

$$ \beta = c_1\sqrt{1 + \frac{2\lambda}{k}\sin\left(kR\right)\cos\left(kR\right) + \frac{\lambda^2}{k^2}\sin^2\left(kR\right)} $$

This can easily be plotted for various values of k, and by varying λ and R it is possible to see how
varying strengths and locations for the barrier change the number and strength of the semi-bound
states. Consider the limiting case of λ → ∞; in this limit the above solution becomes:

$$ \beta \approx c_1\,\frac{\lambda}{k}\sin\left(kR\right) $$

Thus for a nearly infinite barrier at x = R, the behavior of an infinite square well is nearly recovered.
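This sweep is simple to carry out numerically (my own sketch; the values λ = 20 and R = 1 are arbitrary choices). The interior weight c₁²/β² is sharply peaked just below kR = nπ, the wavenumbers of the infinite-well limit:

```python
import numpy as np

lam, R = 20.0, 1.0                               # example barrier strength, position
k = np.linspace(0.2, 10.0, 50_001)
s, c = np.sin(k * R), np.cos(k * R)
# beta^2 / c1^2 from the matching conditions; its reciprocal weights the interior
ratio = 1.0 + (2.0 * lam / k) * s * c + (lam / k) ** 2 * s ** 2
interior = 1.0 / ratio

# local maxima of the interior weight mark the semi-bound (resonance) energies
peak = np.where((interior[1:-1] > interior[:-2]) & (interior[1:-1] > interior[2:]))[0] + 1
resonances = k[peak]
```

For λ = 20 the three peaks in this window sit slightly below π, 2π, and 3π, and they sharpen and move onto nπ/R as λ grows, recovering the infinite-well spectrum.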
Wednesday - 3/25
Continuing on with this same tunneling problem, consider the result if the potential barrier has
infinite strength, that is, if λ → ∞. In this case, the solution is singular and becomes a decoupled
set of bound states and continuum states:

$$ \phi_n(x) = \begin{cases} \sqrt{\frac{2}{R}}\,\sin\left(\frac{n\pi x}{R}\right) & 0 < x < R \\ 0 & R < x \end{cases} $$

$$ \psi_k(x) = \begin{cases} 0 & 0 < x < R \\ \sqrt{\frac{2}{\pi}}\,\sin\left(k\left(x-R\right)\right) & R < x \end{cases} $$

where we've determined the normalization of the continuous spectrum by referencing back to the
sine transform's coefficient and noting the symmetry of this eigenfunction and the sine transform.
Note the behavior of this limiting case. If some initial state ψ₀(x) is non-zero only in one of the
two regions, the wave function will never enter the second region. Further, whatever ratio of
probability the state initially has in each region will remain the same. (That is, if the initial state has equal
probability of being in either region, after the system evolves the probability of the particle being
in the region 0 < x < R is always ½.)
Consider the limiting case of λ being very large, but not infinite. In this case the regions influence
and overlap to a small extent. Because the wavefunction is able to penetrate the barrier for all
energies, there is no longer any discrete spectrum; however, the behavior and physics of the system
should still contain certain characteristics of the decoupled limit. Consider the solutions in either
region:

$$ \psi(x) = \begin{cases} A\sin\left(kx\right) & 0 < x < R \\ B\sin\left(k\left(x-R\right)+\delta\right) & R < x \end{cases} $$

As before, it is possible to determine the coefficients and phase by applying the two conditions:

$$ \psi_<(R) = \psi_>(R) \qquad\qquad \psi'\!\left(R+0^+\right)-\psi'\!\left(R-0^+\right) = \lambda\,\psi(R) $$

$$ B\sin\delta = A\sin\left(kR\right) \qquad\qquad Bk\cos\delta = A\left(k\cos\left(kR\right)+\lambda\sin\left(kR\right)\right) $$

And again, we find the relations to be:

$$ \tan\delta = \frac{\sin\left(kR\right)}{\cos\left(kR\right)+\frac{\lambda}{k}\sin\left(kR\right)} \qquad\qquad B = A\sqrt{1+\frac{2\lambda}{k}\sin\left(kR\right)\cos\left(kR\right)+\frac{\lambda^2}{k^2}\sin^2\left(kR\right)} $$

The above completely determines the solution except for an overall normalization. However, from
the same arguments made about the coefficient in the limiting case λ → ∞, we can make the
claim that B = √(2/π), and therefore:

$$ \delta = \arctan\left[\frac{\sin\left(kR\right)}{\cos\left(kR\right)+\frac{\lambda}{k}\sin\left(kR\right)}\right] \qquad\qquad A^2 = \frac{2}{\pi}\,\frac{1}{1+\frac{2\lambda}{k}\sin\left(kR\right)\cos\left(kR\right)+\frac{\lambda^2}{k^2}\sin^2\left(kR\right)} $$

From these definitions it is easy to see that in the limit of λ → ∞, the value of the phase approaches 0
unless the quantity sin(kR) → 0. In that case, the phase is still zero; however, some other interesting
results occur. Consider first if λ is very large, and k is such that λ sin(kR) is also very large (that
is, kR is not close to any value of nπ). In this case, we can expand the above results and find the
small corrections:

$$ \tan\delta \approx \frac{k}{\lambda}\left(1 - \frac{k}{\lambda}\cot\left(kR\right) + O\!\left(\frac{k^2}{\lambda^2}\right)\right) $$

$$ A^2 \approx \frac{2}{\pi}\,\frac{k^2}{\lambda^2}\,\frac{1}{\sin^2\left(kR\right)}\left(1 - \frac{2k}{\lambda}\cot\left(kR\right) + O\!\left(\frac{k^2}{\lambda^2}\right)\right) $$

Alternately, in the case that λ is very large, but k is such that λ sin(kR) is not (that is, kR is close
to some integer multiple of π), then it is possible to assume:

$$ \sin\left(kR\right) \sim O\!\left(\frac{1}{\lambda}\right) \quad\Rightarrow\quad \cos\left(kR\right) + \frac{\lambda}{k}\sin\left(kR\right) \sim O\!\left(1\right) $$

So the correction to the phase is still small; however, consider the correction to the coefficient A.
To determine this, let kR = nπ + Δ where Δ is some small correction term. Then, what are the
roots of the equation $1 + \frac{2\lambda}{k}\sin(kR)\cos(kR) + \frac{\lambda^2}{k^2}\sin^2(kR) = 0$ when λ is very large? (Note that in the
limit of λ → ∞ the roots converge to kR = nπ.)


If this function is plotted for finite values of λ, one finds that there are no real roots. This implies
that the roots have moved into the complex plane. Consider if Δ = Δₙ + iγₙ is the actual root in
the complex plane. Then we can write that:

$$ 1 + \frac{2\lambda}{k}\sin\left(kR\right)\cos\left(kR\right) + \frac{\lambda^2}{k^2}\sin^2\left(kR\right) = c\left(\Delta-\Delta_n-i\gamma_n\right)\left(\Delta-\Delta_n+i\gamma_n\right) = c\left[\left(\Delta-\Delta_n\right)^2 + \gamma_n^2\right] $$

Using this information, the coefficient in this limit can be written in terms of k as:

$$ A^2 \approx \frac{2}{\pi}\,\frac{1}{c}\,\frac{1}{\left(kR-n\pi-\Delta_n\right)^2+\gamma_n^2} = \frac{2}{\pi c R^2}\,\frac{1}{\left(k-\kappa_n\right)^2+\left(\frac{\gamma_n}{R}\right)^2}, \qquad \kappa_n \equiv \frac{n\pi+\Delta_n}{R} $$

so near each resonance, A²(k) is a Lorentzian in k centered at κₙ with width γₙ/R.

Monday - 3/30
33.1.1

Time Evolution

Given some initial form ψ(x, t = 0) = ψ₀, consider how the wavefunction will evolve. In general the
evolution is given by:

$$ \psi(x,t) = e^{-iHt}\,\psi_0 $$

Consider the case that ψ₀ is one of the bound eigenfunctions of the λ → ∞ limit for the problem.
In this case we can write the initial state:

$$ \psi_0 = \phi_n = \begin{cases} \sqrt{\frac{2}{R}}\,\sin\left(\frac{n\pi}{R}x\right) & 0 < x < R \\ 0 & R < x \end{cases} $$

For reference, we'll denote the continuum eigenfunctions from this limiting case as:

$$ \psi^{(0)}(k,x) = \begin{cases} 0 & 0 < x < R \\ \sqrt{\frac{2}{\pi}}\,\sin\left(k\left[x-R\right]\right) & R < x \end{cases} $$

Consider expanding ψ(x, t) in terms of these eigenfunctions,

$$ \psi(x,t) = \sum_{j=1}^{\infty} c_j(t)\,\phi_j(x) + \int c(k,t)\,\psi^{(0)}(k,x)\,dk $$

For the initial case considered here,

$$ c_j(0) = \delta_{n,j} \qquad\qquad c(k,0) = 0 $$

From this form, we would expect that the evolution would include an exponential decay of the
bound state and that certain values of c(k, t) would then begin to become significantly larger than
others. We do not expect any other c_j's to become significant (that is, other bound states won't
show up), and we expect certain energies of the continuum to be dominant. Let's consider this
evolution in the limit of λ very large but not infinite.
The evolution of the coefficient is described by $c_j(t) = \langle\phi_j|e^{-iHt}|\psi_0\rangle = \langle\phi_j|e^{-iHt}|\phi_n\rangle$. This can be
evaluated by expanding it in terms of the ψ_k(x) eigenstates of the actual potential:

$$ c_j(t) = \int \langle\phi_j|\psi_k\rangle\langle\psi_k|\phi_n\rangle\, e^{-i\omega_k t}\,dk $$

Noting that φ_j is nonzero only on the segment 0 < x < R, the bra-ket terms can be determined by
plugging in the definitions for ψ_k,

$$ c_j(t) = \int_0^\infty A^2(k)\left[\int_0^R \phi_j(x)\sin\left(kx\right)dx\right]\left[\int_0^R \phi_n(x)\sin\left(kx\right)dx\right] e^{-i\omega_k t}\,dk $$

At this point we have made no assumptions about the problem and this relation is exact. Consider
though, since λ is large, the form of the coefficient A²(k). We have shown that it is a set of
narrow Lorentzian peaks centered just below k = n'π/R.

Because of this, we can approximate some function integrated against A²(k) as:

$$ \int_0^\infty f(k)\,A^2(k)\,dk \approx \sum_{n'} f\!\left(\frac{n'\pi}{R}\right)\int_{\frac{n'\pi}{R}-\delta_{n'}}^{\frac{n'\pi}{R}+\delta_{n'}} A^2(k)\,dk $$

where δ_{n'} is large enough to just cover the contribution of the peak at k = n'π/R. Using this definition, the above
can be written as:

$$ c_j(t) \approx \sum_{n'}\left[\int_0^R \phi_j(x)\sin\left(\frac{n'\pi x}{R}\right)dx\right]\left[\int_0^R \phi_n(x)\sin\left(\frac{n'\pi x}{R}\right)dx\right]\int_{\frac{n'\pi}{R}-\delta_{n'}}^{\frac{n'\pi}{R}+\delta_{n'}} A^2(k)\,e^{-i\omega_k t}\,dk $$

Each of the integrals over x gives a factor of √(R/2) times a Kronecker delta, so this becomes:

$$ c_j(t) = \frac{R}{2}\sum_{n'}\delta_{j,n'}\,\delta_{n,n'}\int_{\frac{n'\pi}{R}-\delta_{n'}}^{\frac{n'\pi}{R}+\delta_{n'}} A^2(k)\,e^{-i\omega_k t}\,dk $$

We saw last time that in this limit, A²(k) can be approximated by the Lorentzian form (writing
κₙ = (nπ + Δₙ)/R for the center and $\bar\gamma_n = \gamma_n/R$ for the width, and dropping overall constants,
which are fixed at the end by cₙ(0) = 1):

$$ A^2(k) \propto \frac{1}{\left(k-\kappa_n\right)^2+\bar\gamma_n^2} $$

So, the coefficient is then written as:

$$ c_j(t) \propto \delta_{j,n}\int_{\kappa_n-\delta_n}^{\kappa_n+\delta_n} \frac{e^{-i\omega_k t}}{\left(k-\kappa_n\right)^2+\bar\gamma_n^2}\,dk $$

Due to the asymptotic behavior we've seen previously, it is possible to extend the limits of integration for this relation to cover all values of k; doing so and factoring the denominator gives:

$$ c_j(t) \propto \delta_{j,n}\int_{-\infty}^{\infty}\frac{e^{-i\omega_k t}}{\left(k-\kappa_n-i\bar\gamma_n\right)\left(k-\kappa_n+i\bar\gamma_n\right)}\,dk $$

This integral can be evaluated using the definition of the principal value,

$$ \frac{1}{k-\kappa_n-i\bar\gamma_n} = P\frac{1}{k-\kappa_n} + i\pi\,\delta\!\left(k-\kappa_n\right) $$

Alternately, one could deform the contour to avoid the poles and, in crossing over the pole, add
the contribution there. Then, neglecting the contribution along the rest of the contour (since it is negligible
compared to the value at the pole), the result is the same as the above method,

$$ c_j(t) \approx \delta_{j,n}\, e^{-i\left(\kappa_n-i\bar\gamma_n\right)^2 t} = \delta_{j,n}\, e^{-i\left(\kappa_n^2-\bar\gamma_n^2\right)t}\, e^{-2\kappa_n\bar\gamma_n t} $$

Note that this result decays as we would expect, with a rate proportional to $2\kappa_n\bar\gamma_n$. This gives the
lifetime $\tau_n = 1/\left(2\kappa_n\bar\gamma_n\right)$ of the bound state φₙ in the limit of λ large but not infinite.
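The step from a Lorentzian line shape to exponential decay can be isolated in a toy version of this integral (a sketch I added, not the notes' exact integral: I take the frequency to be linear in the integration variable and extend the integral over the whole line, where the Fourier transform is exactly (π/γ)e^(-iω₀t - γt) for t > 0):

```python
import numpy as np

def trapezoid(y, x):
    return ((y[:-1] + y[1:]) * np.diff(x)).sum() / 2.0

w0, gamma = 2.0, 0.05                    # line center and half-width (example values)
w = np.linspace(w0 - 40.0, w0 + 40.0, 400_001)
lorentzian = 1.0 / ((w - w0) ** 2 + gamma ** 2)

def survival(t):
    """∫ e^{-i w t} / ((w - w0)^2 + gamma^2) dw, which for t > 0 tends to
    (pi/gamma) * e^{-i w0 t - gamma t}: width in frequency <-> decay rate in time."""
    return trapezoid(np.exp(-1j * w * t) * lorentzian, w)

decay = abs(survival(40.0)) / abs(survival(10.0))
predicted = np.exp(-gamma * 30.0)
```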

Consider the rest of the wave function. What does the escaping wavefunction look like and how does
it evolve? The above process can be repeated with several changes. Recall that we've expanded our
wavefunction as:

$$ \psi(x,t) = \sum_j c_j(t)\,\phi_j(x) + \int c(k,t)\,\psi^{(0)}(k,x)\,dk $$

From this definition we can immediately write that the coefficients of the escaped wavefunction are:

$$ c(k,t) = \int dx\int dy\;\psi^{(0)}(k,x)\,\langle x|e^{-iHt}|y\rangle\,\phi_n(y) $$

$$ = \int dx\int dy\int dk'\;\psi^{(0)}(k,x)\,\psi_{k'}(x)\,e^{-ik'^2 t}\,\psi_{k'}(y)\,\phi_n(y) $$

Wednesday - 4/1
An aside on complex analysis - Principal Value
Last class we looked at a method of analysis using the principal value of an integral. Let's
clarify what this analysis involves. Consider the integral:

$$ \int_{-\infty}^{\infty}\frac{f(x)}{x}\,dx $$

where we assume that f(x) is continuous and f(0) ≠ 0. If we consider breaking the integral into
pieces around the singularity at x = 0, we get:

$$ \int_{-\infty}^{\infty}\frac{f(x)}{x}\,dx = \left[\int_{-\infty}^{-\epsilon_1} + \int_{-\epsilon_1}^{\epsilon_2} + \int_{\epsilon_2}^{\infty}\right]\frac{f(x)}{x}\,dx $$

The integrals over the non-singular regions can be computed since it is assumed that f(x) is
analytic. The remaining integral can be determined by taking ε₁, ε₂ small and assuming they go to
zero. This leaves:

$$ \int_{-\infty}^{\infty}\frac{f(x)}{x}\,dx = \left[\int_{-\infty}^{-\epsilon_1} + \int_{\epsilon_2}^{\infty}\right]\frac{f(x)}{x}\,dx + f(0)\int_{-\epsilon_1}^{\epsilon_2}\frac{dx}{x} $$

The last term is then,

$$ \int_{-\epsilon_1}^{\epsilon_2}\frac{dx}{x} = \ln(x)\Big|_{-\epsilon_1}^{\epsilon_2} $$

However, this limit is not well defined for ε₁ or ε₂ going to zero since the natural logarithm blows
up at that value. What can be done to evaluate this integral is to move the contour away from
the singularity at x = 0 by instead integrating over a semicircle of radius ε. On such a contour,
x = εe^{iθ} and θ ranges from π to 0. The integral then becomes:

$$ \left[\int_{-\infty}^{-\epsilon} + \int_{\epsilon}^{\infty}\right]\frac{f(x)}{x}\,dx + \int_{\pi}^{0}\frac{f\!\left(\epsilon e^{i\theta}\right)}{\epsilon e^{i\theta}}\,i\epsilon e^{i\theta}\,d\theta $$

Then, if we cancel the factors in the last integral and let the value of ε go to zero,

$$ \int_{-\infty}^{\infty}\frac{f(x)}{x}\,dx = \lim_{\epsilon\to 0}\left[\int_{-\infty}^{-\epsilon} + \int_{\epsilon}^{\infty}\right]\frac{f(x)}{x}\,dx - i\pi f(0) $$

Changing notation, the integral over the non-singular regions is called the principal value,
and the above becomes:

$$ \int_{-\infty}^{\infty}\frac{f(x)}{x}\,dx = P\int_{-\infty}^{\infty}\frac{f(x)}{x}\,dx - i\pi f(0) $$

But this must hold for any function f(x); therefore we've determined a method to write 1/x as a
distribution:

$$ \frac{1}{x} = P\frac{1}{x} - i\pi\,\delta(x) $$

(with the pole traced over by a semicircle in the upper half plane; going under the pole instead gives +iπδ(x)).
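This identity can be checked numerically (a sketch I added; the regulator 1/(x + iε) corresponds to tracing over the pole as above, and f(x) = e^(-x²) is an arbitrary even test function, so the principal-value part vanishes by symmetry):

```python
import numpy as np

def trapezoid(y, x):
    return ((y[:-1] + y[1:]) * np.diff(x)).sum() / 2.0

x = np.linspace(-10.0, 10.0, 400_001)   # symmetric grid straddling x = 0
f = np.exp(-x ** 2)                     # even test function with f(0) = 1
eps = 0.002
value = trapezoid(f / (x + 1j * eps), x)
# Expect  P∫ f/x dx - i*pi*f(0) = 0 - i*pi  as eps -> 0+
```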

34

WKB Approximation for a Finite Potential Barrier

**See final problem for this example**


Wednesday - 4/15

Part VII

SCATTERING THEORY

35

Potential Scattering

Consider a potential localized to a region in space such that:

$$ V(\vec{x}) = \begin{cases} f\left(r,\theta,\phi\right) & r \leq R \\ 0 & \text{else} \end{cases} $$

For simplicity, we'll consider only elastic scattering at low energies. The actual interaction of
an incoming wave packet and the potential can be complicated and in some cases uninteresting.
Consider instead if we use the time independent Schrödinger equation to determine eigenfunctions
and eigenenergies of the system far from the potential. That is, we can write the eigenfunction
expansion far from the potential source as a linear combination of incoming and scattered states,

$$ \psi_k(\vec{x}) = e^{i\vec{k}\cdot\vec{x}} + \psi_{Sc}(\vec{k},\vec{x}) \qquad\qquad \psi_{Sc}(\vec{k},\vec{x}) = \sum_{l,m} a_{l,m}(\vec{k})\,Y_{l,m}(\hat{r})\,h^{(1)}_l\!\left(|\vec{k}|r\right) $$

where r̂ denotes only the directionality of ⃗r (that is, θ and φ), and $h^{(1)}_l$ is the outgoing spherical
Hankel function. This solution follows from the Schrödinger equation at large distances, where the
potential can be set to zero:

$$ \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\vec{x}) - \epsilon\right]\psi \;\longrightarrow\; \left[-\frac{\hbar^2}{2m}\nabla^2 - \epsilon\right]\psi \qquad |\vec{x}| \gg R $$

For a potential which is non-zero at large distances (such as a Coulomb potential), the above
relations would have to be replaced with the incoming plane wave and outgoing spherical distribution
from the Hydrogen atom solutions. Since we are considering the large radius asymptotics, we can
replace $h^{(1)}_l$ with its asymptotic form to give:

$$ \psi_k(\vec{x}) \approx e^{i\vec{k}\cdot\vec{x}} + \sum_{l,m}\tilde a_{l,m}(\vec{k})\,Y_{l,m}(\hat{r})\,\frac{e^{ikr}}{r} = e^{i\vec{k}\cdot\vec{x}} + f_k(\hat{k},\hat{r})\,\frac{e^{ikr}}{r} $$

As it is useful to know in this section, the expansion form of a plane wave along the z axis is:

$$ e^{ik_z z} = \sum_{m=0}^{\infty} i^m\left(2m+1\right)j_m\!\left(kr\right)P_m\!\left(\cos\theta\right) $$

In general, the propagation in any direction is given by:

$$ e^{i\vec{k}\cdot\vec{x}} = 4\pi\sum_{l,m} i^l\, j_l\!\left(kr\right)\, Y^*_{l,m}(\hat{k})\, Y_{l,m}(\hat{r}) $$
35.0.2

Example - Rigid Spherical Potential

Consider a potential of the form:

$$ V(\vec{x}) = \begin{cases} \infty & |\vec{x}| \leq R \\ 0 & \text{else} \end{cases} $$

This potential requires that the wavefunction go to zero on the surface of the sphere. Plugging in
the appropriate solution from above:

$$ \left[e^{i\vec{k}\cdot\vec{x}} + \psi_{Sc}(\vec{k},\vec{x})\right]_{|\vec{x}|=R} = 0 $$

This requires that:

$$ \left[e^{i\vec{k}\cdot\vec{x}}\right]_{|\vec{x}|=R} = -\left[\psi_{Sc}(\vec{k},\vec{x})\right]_{|\vec{x}|=R} = -\left[\sum_{l,m} a_{l,m}(\vec{k})\,Y_{l,m}(\hat{r})\,h^{(1)}_l\!\left(kr\right)\right]_{|\vec{x}|=R} $$

Looking up the expansion for $e^{i\vec{k}\cdot\vec{x}}$ in spherical harmonics,

$$ e^{i\vec{k}\cdot\vec{x}} = 4\pi\sum_{l,m} i^l\, j_l\!\left(kr\right)\, Y^*_{l,m}(\hat{k})\, Y_{l,m}(\hat{r}) $$

Combining these and solving for $a_{l,m}$ gives the amplitude scattered in the direction $Y_{l,m}(\hat{r})$:

$$ a_{l,m} = -4\pi\, i^l\,\frac{j_l\!\left(kR\right)}{h^{(1)}_l\!\left(kR\right)}\, Y^*_{l,m}(\hat{k}) $$
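For l = 0 the closed forms j₀(z) = sin z/z and h₀⁽¹⁾(z) = -i e^{iz}/z make the structure of this result explicit (a sketch I added; reading off the phase shift δ₀ = -kR is the standard hard-sphere result, not something derived above). Writing j₀ = (h₀⁽¹⁾ + h₀⁽²⁾)/2, the l = 0 part of ψ_k is proportional to h₀⁽²⁾ + S₀h₀⁽¹⁾ with S₀ = 1 - 2j₀(kR)/h₀⁽¹⁾(kR), which should be the pure phase e^{-2ikR}:

```python
import cmath

def j0(z):
    """Spherical Bessel function j_0(z) = sin(z)/z."""
    return cmath.sin(z) / z

def h0_out(z):
    """Outgoing spherical Hankel function h_0^(1)(z) = -i e^{iz}/z."""
    return -1j * cmath.exp(1j * z) / z

k, R = 1.3, 2.0                      # example wavenumber and sphere radius
S0 = 1.0 - 2.0 * j0(k * R) / h0_out(k * R)
expected = cmath.exp(-2j * k * R)    # hard-sphere s-wave: delta_0 = -k*R
```

That |S₀| = 1 is the statement of elastic unitarity: the rigid sphere only shifts the phase of each partial wave.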

35.1


Constant Energy Wave Packet Scattering

Consider a wave packet of constant energy $\frac{\hbar^2}{2m}|\vec{k}|^2$. We can expand the incoming wavefunction as:

$$ \psi_{in} = \int e^{i\vec{k}\cdot\vec{x}}\, F(\hat{k})\, d^2\hat{k} $$

Further, we can expand the entire wavefunction at large ranges in this manner to give:

$$ \psi = \int \psi_k\, F(\hat{k})\, d^2\hat{k} = \int e^{i\vec{k}\cdot\vec{x}}\, F(\hat{k})\, d^2\hat{k} + \int\sum_{l,m}\tilde a_{l,m}\, Y_{l,m}(\hat{r})\,\frac{e^{ikr}}{r}\, F(\hat{k})\, d^2\hat{k} $$

Since we are working in the case of r → ∞, we can simplify the first term in the integral. Writing
the angular integration with the polar axis along x̂ (so that $\hat{k}\cdot\hat{x} = \cos\theta_k \equiv \mu$),

$$ \int e^{i\vec{k}\cdot\vec{x}}\, F(\hat{k})\, d^2\hat{k} = \int\!\!\int e^{ikr\cos\theta_k}\, F(\hat{k})\,\sin\theta_k\, d\theta_k\, d\phi_k = \int_0^{2\pi}\!\!\int_{-1}^{+1} e^{ikr\mu}\, F(\hat{k})\, d\mu\, d\phi_k $$

We can integrate this by parts and keep only terms up to order 1/r; this leaves:

$$ \int e^{i\vec{k}\cdot\vec{x}}\, F(\hat{k})\, d^2\hat{k} = 2\pi\left[\frac{e^{ikr\mu}}{ikr}\,F\right]_{\mu=-1}^{+1} + O\!\left(\frac{1}{r^2}\right) = -\frac{2\pi i}{kr}\left[e^{ikr}F(\hat{r}) - e^{-ikr}F(-\hat{r})\right] + O\!\left(\frac{1}{r^2}\right) $$

So finally, we can write the wave function in terms of the incoming wave packet and outgoing waves
as:

$$ \psi = \frac{2\pi i}{kr}\left[e^{-ikr}F(-\hat{r}) - e^{ikr}F(\hat{r})\right] + \frac{e^{ikr}}{r}\int f_k(\hat{k},\hat{r})\, F(\hat{k})\, d^2\hat{k} $$

This result can be written alternately as:

$$ \psi = 2\pi i\left[\frac{e^{-ikr}}{kr}\,F(-\hat{r}) - \frac{e^{ikr}}{kr}\left(\hat{S}F\right)(\hat{r})\right] \qquad\qquad \left(\hat{S}F\right)(\hat{r}) = F(\hat{r}) + \frac{ik}{2\pi}\int f_k(\hat{k},\hat{r})\, F(\hat{k})\, d^2\hat{k} $$

where Ŝ is the scattering matrix for the potential.


Monday - 4/20

36

Scattering Theory - A More General Approach

Consider approaching the scattering problem as a perturbation approximation. If we can write the
Hamiltonian of the system as:

$$ H = H_0 + V(\vec{x}) $$

where H₀ is known and understood (that is, its eigenfunctions and eigenvalues are determined), then
we can use Green's function methods to determine the overall wavefunction and then get corrections
to the eigenfunctions of H₀ from the potential. Consider the problem given by:

$$ \left[H - E\right]\psi = 0 \qquad\text{with}\qquad \left[H_0 - E\right]\psi^{(0)}_E = 0 $$

Given the above definition for the Hamiltonian, this becomes:

$$ \left[H_0 + V(\vec{x}) - E\right]\psi = 0 \quad\Rightarrow\quad \left[H_0 - E\right]\psi = -V(\vec{x})\,\psi $$

This is a more general formulation of the scattering problem. From this result, we can reformulate the solution as:

$$ \psi = \psi^{(0)}_E - \left(H_0 - E\right)^{-1} V\,\psi $$

It is easily seen that this is the same as the above statement: one can act with (H₀ - E) on both sides of the equation to obtain the expected
result. In integral form, this can then be written:

$$ \psi(\vec{x}) = \psi^{(0)}_E(\vec{x}) - \int \langle\vec{x}|\left(H_0-E\right)^{-1}|\vec{y}\rangle\, V(\vec{y})\,\psi(\vec{y})\, d^3y $$

This is known as the Integral Scattering Equation and is useful for determining the approximate
solution. However, note that the integral depends on ψ(⃗y), which makes this difficult to solve
completely.
One potential issue with this solution exists if E is in the continuous spectrum of H₀. (In the case
that E is a discrete eigenvalue, this formulation is invalid entirely, so we consider only continuous
spectra.) Since it is possible (and likely) that E will be somewhere in the continuous spectrum,
there will exist two Green's functions for the operator. These can be written as:

$$ G^{(\pm)}(\vec{x},\vec{y}) = \langle\vec{x}|\left(H_0 - E \mp i0^+\right)^{-1}|\vec{y}\rangle $$

From this one can determine approximations starting with the original Hamiltonian. Consider:

$$ \psi \approx \psi^{(0)} \qquad\text{(zeroth order)} $$

$$ \psi \approx \psi^{(0)} - \left(H_0-E-i0^+\right)^{-1}V\,\psi^{(0)} \qquad\text{(first order)} $$

$$ \psi \approx \psi^{(0)} - \left(H_0-E-i0^+\right)^{-1}V\,\psi^{(0)} + \left(H_0-E-i0^+\right)^{-1}V\left(H_0-E-i0^+\right)^{-1}V\,\psi^{(0)} \qquad\text{(second order)} $$

The full extension of this approximation is known as the Born series and can be written in the
form:

$$ \psi = \sum_{j=0}^{\infty}\left[-\left(H_0-E-i0^+\right)^{-1}V\right]^j \psi^{(0)} $$

If this series converges for the given potential and H₀ eigenfunctions, then the wavefunction is
known. In the case we are considering, H₀ = -∇², the Green's functions are known:

$$ \langle\vec{x}|\left(H_0-E\mp i0^+\right)^{-1}|\vec{y}\rangle = G^{(\pm)}(\vec{x},\vec{y}) = \frac{1}{4\pi}\,\frac{e^{\pm ik|\vec{x}-\vec{y}|}}{|\vec{x}-\vec{y}|} $$

where we've written E = k². Using this Green's function, let's consider an actual problem.

36.1

Scattering from a Localized Potential - The Born Formula

Consider some potential V(⃗x) which is localized in space. In such a case, we can assume that
outside of some radius R, the potential is always zero. The exact solution can be written using the
Integral Scattering Equation as:

$$ \psi(\vec{x}) = \psi^{(0)}_E(\vec{x}) - \frac{1}{4\pi}\int \frac{e^{ik|\vec{x}-\vec{y}|}}{|\vec{x}-\vec{y}|}\, V(\vec{y})\,\psi(\vec{y})\, d^3y $$

In order to get something useful out of this, consider the case that |⃗x| ≫ R. This means that in the
integral term, |⃗x| is always much greater than |⃗y|. Because of this, we can expand the separation
as:

$$ |\vec{x}-\vec{y}| = |\vec{x}|\sqrt{1 - 2\frac{\hat{x}\cdot\vec{y}}{|\vec{x}|} + \frac{|\vec{y}|^2}{|\vec{x}|^2}} \approx |\vec{x}|\left(1 - \frac{\hat{x}\cdot\vec{y}}{|\vec{x}|} + \ldots\right) $$

$$ |\vec{x}-\vec{y}| \approx x - \hat{x}\cdot\vec{y} + \ldots $$

Plugging these in and keeping only the necessary terms,

$$ \psi(\vec{x}) = \psi^{(0)}_E(\vec{x}) - \frac{1}{4\pi}\int \frac{e^{ik\left(x-\hat{x}\cdot\vec{y}+\ldots\right)}}{x+\ldots}\, V(\vec{y})\,\psi(\vec{y})\, d^3y $$

Because we only require the outward going solution, we pick the solution with the positive exponential in x. Thus, for |⃗x| ≫ R we have the approximation of:

$$ \psi(\vec{x}) \approx \psi^{(0)}_k(\vec{x}) + \frac{e^{ikx}}{x}\left[-\frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, V(\vec{y})\,\psi(\vec{y})\, d^3y\right] $$

The bracketed term can be represented by some function f(⃗k, x̂). Let's consider the first order
solution,
$$ f^{(0)}(\vec{k},\hat{x}) = -\frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, V(\vec{y})\,\psi^{(0)}_k(\vec{y})\, d^3y = -\frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, e^{i\vec{k}\cdot\vec{y}}\, V(\vec{y})\, d^3y = -\frac{1}{4\pi}\int e^{i\left[\vec{k}-k\hat{x}\right]\cdot\vec{y}}\, V(\vec{y})\, d^3y $$

This first order approximation of the scattered wavefunction for a given potential V(⃗x) is known as
Born's Formula. Using the Born series from above, the higher order approximations can be written
as:

$$ f^{(n)}(\vec{k},\hat{x}) = -\frac{1}{4\pi}\int e^{-ik\hat{x}\cdot\vec{y}}\, V(\vec{y})\left(\left[-\left(H_0-k^2-i0^+\right)^{-1}V\right]^n\psi^{(0)}_k\right)\!(\vec{y})\, d^3y $$

Alternately, we can simplify this result for a special case. Consider if the potential is spherically
symmetric (that is, V(⃗x) = V(r)). In this case, the above result will simplify to:

$$ f^{(0)}(\vec{k},\hat{x}) = -\frac{1}{4\pi}\int\!\!\int\!\!\int e^{i\left|\vec{k}-k\hat{x}\right|r\cos\theta}\, V(r)\, r^2\, dr\, d\!\left(\cos\theta\right) d\phi = -\frac{1}{4\pi}\left(2\pi\right)\frac{2}{\left|\vec{k}-k\hat{x}\right|}\int_0^\infty r\,V(r)\sin\left(\left|\vec{k}-k\hat{x}\right|r\right)dr $$

$$ f^{(0)}(\vec{k},\hat{x}) = -\frac{1}{\left|\vec{k}-k\hat{x}\right|}\int_0^\infty r\,V(r)\sin\left(\left|\vec{k}-k\hat{x}\right|r\right)dr $$

And since ⃗k and kx̂ denote the incident and outgoing directions respectively, it is simple to show
that in this case:

$$ \left|\vec{k}-k\hat{x}\right| = 2k\sin\left(\frac{\theta_{sc}}{2}\right) $$

where θ_sc is the scattering angle. Plugging this in, we find that:

$$ f^{(0)}\left(k,\theta_{sc}\right) = -\frac{1}{2k\sin\left(\frac{\theta_{sc}}{2}\right)}\int_0^\infty r\,V(r)\sin\left(2k\sin\left(\frac{\theta_{sc}}{2}\right)r\right)dr $$

36.1.1

Example - Gaussian Potential

Consider a potential of the form:

V(\vec x) = e^{-|\vec x|^2}

The zeroth-order scattering is described by:

f^{(0)}(k,\hat x) = \frac{1}{4\pi}\int e^{i[\vec k - k\hat x]\cdot\vec y}\, e^{-|\vec y|^2}\, d^3y

Completing the square and integrating the Gaussian results in:

f^{(0)}(k,\hat x) = \frac{\pi^{3/2}}{4\pi}\, e^{-\frac{1}{4}|\vec k - k\hat x|^2} = \frac{\sqrt\pi}{4}\, e^{-\frac{1}{4}|\vec k - k\hat x|^2}
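Since this closed form is easy to get wrong, here is a numerical cross-check (an addition to the notes): the spherically symmetric Born formula above gives f^{(0)} = \frac{1}{q}\int_0^\infty r\, V(r)\sin(qr)\, dr with q = |\vec k - k\hat x|, which for the unit-strength Gaussian V(r) = e^{-r^2} should reduce to \frac{\sqrt\pi}{4} e^{-q^2/4}. The function names are illustrative.

```python
import math

def born_amplitude_numeric(q, r_max=10.0, n=100000):
    """Radial Born integral f = (1/q) * integral_0^inf r V(r) sin(q r) dr for
    V(r) = exp(-r^2), evaluated by the trapezoid rule on [0, r_max]."""
    dr = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * dr
        w = 0.5 if i in (0, n) else 1.0
        total += w * r * math.exp(-r * r) * math.sin(q * r)
    return total * dr / q

def born_amplitude_closed(q):
    """Closed form obtained by completing the square: (sqrt(pi)/4) exp(-q^2/4)."""
    return 0.25 * math.sqrt(math.pi) * math.exp(-q * q / 4.0)
```

The two agree to quadrature accuracy over a range of momentum transfers q.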

Tuesday - 4/21

36.2 Probability Flux Density

As an aside, let's review the probability flux density and then consider its use in scattering theory. Consider some wave function \psi satisfying the Schrodinger equation. That is:

i\hbar\frac{\partial\psi}{\partial t} = H\psi \qquad\qquad -i\hbar\frac{\partial\psi^*}{\partial t} = H\psi^*

If we consider the time derivative of the probability, \int_V d^3x\,[\psi^*\psi], and expand it using the above, we find:

\frac{d}{dt}\int_V d^3x\,[\psi^*\psi] = \int_V d^3x\left[\frac{\partial\psi^*}{\partial t}\psi + \psi^*\frac{\partial\psi}{\partial t}\right]

And using the Schrodinger Equation,

\frac{d}{dt}\int_V d^3x\,[\psi^*\psi] = \int_V d^3x\left[\frac{i}{\hbar}(H\psi^*)\psi - \frac{i}{\hbar}\psi^*(H\psi)\right] = \frac{i}{\hbar}\int_V d^3x\,\left[(H\psi^*)\psi - \psi^* H\psi\right]

Plugging in a generic Hamiltonian, H = -\frac{\hbar^2}{2m}\nabla^2 + V(\vec x), the terms \psi^* V\psi cancel, leaving only:

\frac{d}{dt}\int_V d^3x\,[\psi^*\psi] = \frac{i\hbar}{2m}\int_V d^3x\left[\psi^*\nabla^2\psi - \psi\nabla^2\psi^*\right]

Using the divergence theorem, this can be written as the integral over the boundary instead of the whole volume,

\frac{d}{dt}\int_V d^3x\,[\psi^*\psi] = \frac{i\hbar}{2m}\oint_{\partial V} d^2x\,\hat n\cdot\left[\psi^*\nabla\psi - \psi\nabla\psi^*\right]

This defines the probability flux density \vec j as:

\vec j \equiv -\frac{i\hbar}{2m}\left(\psi^*\nabla\psi - \psi\nabla\psi^*\right)

so that the probability in V changes only by the flux through the boundary: \frac{d}{dt}\int_V d^3x\,[\psi^*\psi] = -\oint_{\partial V} d^2x\,\hat n\cdot\vec j.
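A quick numerical illustration of this definition (an addition to the notes, in units \hbar = m = 1, with illustrative helper names): a plane wave e^{ikx} carries flux \hbar k/m, while a real standing wave, being an equal mix of \pm k components, carries none.

```python
import cmath

HBAR = MASS = 1.0  # work in units where hbar = m = 1

def flux(psi, x, dx=1e-5):
    """Probability flux j = -(i*hbar/2m)(psi* dpsi/dx - psi dpsi*/dx),
    with the spatial derivative taken by central finite differences."""
    d = (psi(x + dx) - psi(x - dx)) / (2 * dx)
    p = psi(x)
    return (-1j * HBAR / (2 * MASS) * (p.conjugate() * d - p * d.conjugate())).real

plane    = lambda x: cmath.exp(2.0j * x)   # plane wave e^{ikx} with k = 2
standing = lambda x: cmath.sin(2.0 * x)    # standing wave: equal +k and -k parts
```

For the plane wave the flux evaluates to k = 2 everywhere; for the standing wave it vanishes.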
This can be applied to scattering theory in a simple manner. Recall that we were solving the problem:

\left[-\nabla^2 + V(\vec x)\right]\psi_k = k^2\psi_k

where in the limit of |\vec x| getting very large, the solution can be written as:

\psi_k(\vec x) \approx e^{i\vec k\cdot\vec x} + f\left(k,\hat x\right)\frac{e^{ikr}}{r}

We can compute the probability flux density of the incoming and scattered terms of the wavefunction (dropping the common factor of \frac{\hbar}{2m}):

\vec j_{in} = -i\left[e^{-i\vec k\cdot\vec x}\,\nabla e^{i\vec k\cdot\vec x} - e^{i\vec k\cdot\vec x}\,\nabla e^{-i\vec k\cdot\vec x}\right] = 2\vec k

\vec j_{sc} = -i\left[f^*\!\left(k,\hat x\right)\frac{e^{-ikr}}{r}\,\nabla\!\left(f\left(k,\hat x\right)\frac{e^{ikr}}{r}\right) - f\left(k,\hat x\right)\frac{e^{ikr}}{r}\,\nabla\!\left(f^*\!\left(k,\hat x\right)\frac{e^{-ikr}}{r}\right)\right]

Taking only the radial component of the scattered wave, \nabla \to \hat r\,\frac{\partial}{\partial r}, so this leaves:

\vec j_{sc}\cdot\hat r = -i\left|f\left(k,\hat x\right)\right|^2\left[\frac{e^{-ikr}}{r}\frac{\partial}{\partial r}\frac{e^{ikr}}{r} - \frac{e^{ikr}}{r}\frac{\partial}{\partial r}\frac{e^{-ikr}}{r}\right] = -i\left|f\left(k,\hat x\right)\right|^2\left[\left(\frac{ik}{r^2} - \frac{1}{r^3}\right) - \left(-\frac{ik}{r^2} - \frac{1}{r^3}\right)\right]

\vec j_{sc}\cdot\hat r = \frac{2k}{r^2}\left|f\left(k,\hat x\right)\right|^2

This result explains both the spherical spreading (the factor of \frac{1}{r^2}) and the angular dependence of the scattered wavefunction (the magnitude of f\left(k,\hat x\right)). The differential cross section of the scatterer can be determined by this result:

\frac{d\sigma}{d\Omega} = \left|f\left(k,\hat x\right)\right|^2

Integrating over all angles d\Omega gives the total scattering cross section. It is possible (and usually quite useful) to use the Born approximation derived previously to determine this cross section; however, this is only valid in the case of weak scattering, where the scattered wave is nearly identical to the incident wave. In general, we can refer back to one of the first results of this section,

\psi_k(\vec x) \approx e^{i\vec k\cdot\vec x} + \sum_{l,m} a_{l,m}(k)\, Y_{l,m}\left(\hat r\right)\frac{e^{ikr}}{r} = e^{i\vec k\cdot\vec x} + f_k\left(k,\hat r\right)\frac{e^{ikr}}{r}

From this definition, we can obtain the exact differential cross section as:

\frac{d\sigma}{d\Omega} = \left|f\left(k,\hat x\right)\right|^2 = \sum_{l,m}\sum_{l',m'} a_{l,m}(k)\, a^*_{l',m'}(k)\, Y_{l,m}(\theta,\varphi)\, Y^*_{l',m'}(\theta,\varphi)

If it is possible to exactly determine the coefficients a_{l,m} of the spreading wave (only very simple potentials will allow this), then the exact differential and total cross sections can be determined. Using the orthogonality of the Spherical Harmonics, it is even possible to simplify the total cross section as:

\sigma = \int\frac{d\sigma}{d\Omega}\,d\Omega = \sum_{l,m}\sum_{l',m'} a_{l,m}(k)\, a^*_{l',m'}(k)\int Y_{l,m}(\theta,\varphi)\, Y^*_{l',m'}(\theta,\varphi)\, d\Omega = \sum_{l,m}\left|a_{l,m}(k)\right|^2\frac{4\pi}{2l+1}

(with the normalization convention used in these notes, \int\left|Y_{l,m}\right|^2 d\Omega = \frac{4\pi}{2l+1}; for orthonormal harmonics the factor is 1).

An example of finding the cross section (both exact and approximated) is found in the final.
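The orthogonality step above is easy to verify numerically in the azimuthally symmetric case. For an amplitude expanded as f(\theta) = \sum_l c_l P_l(\cos\theta), Legendre orthogonality gives \sigma = 4\pi\sum_l \frac{|c_l|^2}{2l+1}. The following sketch (an addition to the notes, with illustrative names) checks that direct quadrature of \int |f|^2\, d\Omega agrees with the orthogonality sum.

```python
import math

def legendre(l, u):
    """P_l(u) via the Bonnet recursion (l+1)P_{l+1} = (2l+1)u P_l - l P_{l-1}."""
    p0, p1 = 1.0, u
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * u * p1 - n * p0) / (n + 1)
    return p1

def sigma_numeric(c):
    """sigma = integral of |f|^2 over solid angle, f(theta) = sum_l c_l P_l(cos theta),
    computed by trapezoid quadrature over u = cos(theta)."""
    n = 20000
    total = 0.0
    for i in range(n + 1):
        u = -1.0 + 2.0 * i / n
        f = sum(cl * legendre(l, u) for l, cl in enumerate(c))
        w = 0.5 if i in (0, n) else 1.0
        total += w * abs(f) ** 2
    return 2 * math.pi * total * (2.0 / n)

def sigma_orthogonality(c):
    """Same quantity via the orthogonality relation: 4*pi * sum |c_l|^2 / (2l+1)."""
    return 4 * math.pi * sum(abs(cl) ** 2 / (2 * l + 1) for l, cl in enumerate(c))
```

The cross terms between different l integrate away, as the orthogonality relation requires.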


Part VIII

HOMEWORK PROBLEMS
37 Semester I

1. Show that for Hermitian operators X and Y, (XY)^\dagger = Y^\dagger X^\dagger.

2. If X \equiv |\alpha\rangle\langle\beta|, then show that X^\dagger = |\beta\rangle\langle\alpha|.

3. Given the definitions:

S_z|\pm\rangle = \pm\frac{\hbar}{2}|\pm\rangle \qquad S_+ = \hbar|+\rangle\langle-| \qquad S_- = \hbar|-\rangle\langle+| \qquad |+\rangle = \begin{pmatrix}1\\0\end{pmatrix} \qquad |-\rangle = \begin{pmatrix}0\\1\end{pmatrix}

show that:

S_z = \frac{\hbar}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}

and determine matrix forms for S_+ and S_-.

4. Determine matrix representations of S_x, S_y, and S_z from the bra-ket notation:

|S_x,\pm\rangle = \frac{1}{\sqrt 2}\left(|+\rangle \pm |-\rangle\right); \qquad S_x|S_x,\pm\rangle = \pm\frac{\hbar}{2}|S_x,\pm\rangle

|S_y,\pm\rangle = \frac{1}{\sqrt 2}\left(|+\rangle \pm i|-\rangle\right); \qquad S_y|S_y,\pm\rangle = \pm\frac{\hbar}{2}|S_y,\pm\rangle

|S_z,\pm\rangle = |\pm\rangle; \qquad S_z|S_z,\pm\rangle = \pm\frac{\hbar}{2}|S_z,\pm\rangle

5. Show that the following are true for several cases:

[S_i, S_j] = i\epsilon_{ijk}\hbar S_k, where [S_i, S_j] \equiv S_i S_j - S_j S_i.

\{S_i, S_j\} = \frac{\hbar^2}{2}\delta_{ij}, where \{S_i, S_j\} \equiv S_i S_j + S_j S_i.

S^2 \equiv S_x^2 + S_y^2 + S_z^2 = \frac{3\hbar^2}{4}, and therefore \left[S^2, S_i\right] = 0.

6. Use the definition of:

T(d\vec x') = I - i\vec k\cdot d\vec x' + O\left(d\vec x'^2\right)

where \vec k is some Hermitian operator, to show that:
(a) T^\dagger T = T T^\dagger = 1
(b) T(d\vec x')\, T(d\vec x'') = T(d\vec x'')\, T(d\vec x') = T(d\vec x' + d\vec x'')
(c) T(d\vec x')^{-1} = T(-d\vec x')
(d) \lim_{d\vec x'\to 0} T(d\vec x') = 1

7. Use the definitions:

T(d\vec x') = I - i\vec k\cdot d\vec x' + O\left(d\vec x'^2\right) \qquad T(\vec x') = e^{-\frac{i}{\hbar}\vec p\cdot\vec x'}

to show that:
(a) [T(d\vec x'), T(d\vec y')] = 0
(b) [p_x, p_y] = 0

8. Given the Gaussian wavepacket:

\psi(x) = \frac{1}{\pi^{1/4}\sqrt d}\, e^{ikx}\, e^{-\frac{x^2}{2d^2}}

determine:
(a) \langle x\rangle
(b) \langle x^2\rangle
(c) \langle(\Delta x)^2\rangle = \langle x^2\rangle - \langle x\rangle^2
(d) \langle p\rangle
(e) \langle p^2\rangle
(f) \langle(\Delta p)^2\rangle = \langle p^2\rangle - \langle p\rangle^2
(g) \phi(p) \equiv \frac{1}{\sqrt{2\pi\hbar}}\int e^{-\frac{ipx}{\hbar}}\,\psi(x)\, dx

9. Show that for |\alpha, t=0\rangle = |S_x,+\rangle, \langle S_y\rangle = \frac{\hbar}{2}\sin(\omega t) and \langle S_z\rangle = 0.
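The moments asked for in problem 8 can be spot-checked numerically. This sketch is an addition to the notes (units \hbar = 1, illustrative parameter choices d = 1.3, k = 0.7); the analytic derivation is still the point of the exercise.

```python
import math, cmath

D, K = 1.3, 0.7  # packet width d and carrier wavenumber k (hbar = 1); illustrative values

def psi(x):
    """Normalized Gaussian wavepacket (1/(pi^(1/4) sqrt(d))) e^{ikx} e^{-x^2/2d^2}."""
    return (math.pi ** -0.25 / math.sqrt(D)) * cmath.exp(1j * K * x - x * x / (2 * D * D))

def dpsi(x):
    """Analytic derivative: psi'(x) = (ik - x/d^2) psi(x)."""
    return (1j * K - x / (D * D)) * psi(x)

def quad(g, a=-30.0, b=30.0, n=60000):
    """Trapezoid quadrature of a real-valued integrand g."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        s += g(a + i * h)
    return s * h

norm   = quad(lambda x: abs(psi(x)) ** 2)
mean_x = quad(lambda x: x * abs(psi(x)) ** 2)
x2     = quad(lambda x: x * x * abs(psi(x)) ** 2)
mean_p = quad(lambda x: (psi(x).conjugate() * -1j * dpsi(x)).real)
p2     = quad(lambda x: abs(dpsi(x)) ** 2)   # <p^2> = int |psi'|^2 dx after integrating by parts
```

The results reproduce \langle x\rangle = 0, \langle x^2\rangle = \frac{d^2}{2}, \langle p\rangle = k, and the minimum-uncertainty product \langle(\Delta x)^2\rangle\langle(\Delta p)^2\rangle = \frac{1}{4} (in units \hbar = 1).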

10. Show that there exists no state |\lambda\rangle such that a^\dagger|\lambda\rangle = \lambda|\lambda\rangle, where |\lambda\rangle = \sum_n c_n|n\rangle, |n\rangle is an eigenstate of the quantum simple harmonic oscillator, and a^\dagger is the raising operator of the quantum simple harmonic oscillator.

11. Show that for any excited state of the quantum simple harmonic oscillator, the uncertainty relation is \langle(\Delta x)^2\rangle\langle(\Delta p)^2\rangle = \left(n + \frac{1}{2}\right)^2\hbar^2.

12. Sakurai 1.5
(a) Consider |\alpha\rangle, |\beta\rangle and suppose that \langle a'|\alpha\rangle, \langle a''|\alpha\rangle, \dots and \langle a'|\beta\rangle, \langle a''|\beta\rangle, \dots are well known for given \langle a'|, \langle a''|, \dots which form a complete set of bras (or kets). Determine the matrix representation of |\alpha\rangle\langle\beta| in terms of the |a^{(n)}\rangle's.
(b) Consider instead the spin-1/2 system. Let |\alpha\rangle \equiv |S_z,+\rangle and |\beta\rangle \equiv |S_x,+\rangle. Determine the matrix representation of |\alpha\rangle\langle\beta| in terms of these new base kets.

13. Sakurai 1.6 - Suppose |i\rangle and |j\rangle are eigenkets of a Hermitian operator A. Under what conditions can we conclude that |i\rangle + |j\rangle is also an eigenket of A?

14. Sakurai 1.9 - Construct the spin ket |\vec S\cdot\hat n, +\rangle such that \vec S\cdot\hat n\,|\vec S\cdot\hat n,+\rangle = \frac{\hbar}{2}|\vec S\cdot\hat n,+\rangle. Express the result as a linear combination of |\pm\rangle. (The result should look like: |\vec S\cdot\hat n,+\rangle = \cos\frac{\beta}{2}|+\rangle + \sin\frac{\beta}{2}\, e^{i\alpha}|-\rangle.)

15. Sakurai 1.11 - Consider a 2-state system described by:

H = H_{11}|1\rangle\langle 1| + H_{22}|2\rangle\langle 2| + H_{12}\left[|1\rangle\langle 2| + |2\rangle\langle 1|\right]

where H_{11}, H_{12}, H_{22} are real quantities with units of energy, and |1\rangle and |2\rangle are eigenvectors of some other observable. Determine the energy eigenkets and eigenvalues.

16. Sakurai 1.13 - A beam of spin-1/2 particles passes through a series of Stern-Gerlach measurements as follows:

Only S_z = +\frac{\hbar}{2} pass; S_z = -\frac{\hbar}{2} rejected.
Only S_n = +\frac{\hbar}{2} pass; S_n = -\frac{\hbar}{2} rejected (S_n is defined as in problem 1.9).
Only S_z = -\frac{\hbar}{2} pass; S_z = +\frac{\hbar}{2} rejected.

What is the intensity of the final beam relative to the intensity leaving the first device? What angle maximizes the final intensity?

17. Sakurai 1.18 - Derive the Schwarz Inequality:
(a) Observe that \left(\langle\alpha| + \lambda^*\langle\beta|\right)\left(|\alpha\rangle + \lambda|\beta\rangle\right) \ge 0 is true for any complex \lambda.
(b) Show that the inequality above becomes an equality if \Delta A|\alpha\rangle = \lambda\,\Delta B|\alpha\rangle for \lambda pure imaginary.
(c) Explicit calculation shows that a Gaussian wave packet

\langle x|\alpha\rangle = \left(2\pi d^2\right)^{-1/4}\exp\left[\frac{i\langle p\rangle x}{\hbar} - \frac{(x - \langle x\rangle)^2}{4d^2}\right]

satisfies the relation \sqrt{\langle(\Delta x)^2\rangle}\sqrt{\langle(\Delta p)^2\rangle} = \frac{\hbar}{2}. Prove that \langle x|\Delta x|\alpha\rangle = (\text{imaginary number})\,\langle x|\Delta p|\alpha\rangle is satisfied for such a Gaussian wave packet.

18. Sakurai 1.29
(a) Verify that [x_i, G(\vec p)] = i\hbar\frac{\partial G}{\partial p_i} and [p_i, F(\vec x)] = -i\hbar\frac{\partial F}{\partial x_i}.
(b) Evaluate \left[x^2, p^2\right] and compare with the classical Poisson bracket quantity \left\{x^2, p^2\right\}_{Poisson}.

19. Sakurai 2.1 - Consider the spin-precession problem in the Heisenberg picture. Use the Hamiltonian defined as H = -\left(\frac{eB}{mc}\right)S_z = \omega S_z to determine the time evolution of S_x, S_y, and S_z.

20. Sakurai 2.3 - Consider an electron subject to a uniform, time-independent magnetic field of the form \vec B = B_0\hat z. If at time t = 0 the electron is known to be in the state |\vec S\cdot\hat n, +\rangle (with \hat n at angle \beta from the z-axis), then:
(a) Obtain the probability of finding the electron in |S_x,+\rangle as a function of time.
(b) Determine \langle S_x\rangle as a function of time.
(c) Show that these results make physical sense for the cases \beta = 0 and \beta = \frac{\pi}{2}.

21. Sakurai 2.5 - Consider some particle in 1-d whose Hamiltonian is given by H = \frac{1}{2m}p^2 + V(x). By calculating [[H,x],x], prove that the relation:

\sum_{a'}\left|\langle a''|x|a'\rangle\right|^2\left(E_{a'} - E_{a''}\right) = \frac{\hbar^2}{2m}

is true for H|a'\rangle = E_{a'}|a'\rangle.

22. Sakurai 2.6 - Consider a particle in 3-d with H = \frac{1}{2m}p^2 + V(\vec x). Calculate [\vec x\cdot\vec p, H] to obtain the relation:

\frac{d}{dt}\langle\vec x\cdot\vec p\rangle = \left\langle\frac{p^2}{m}\right\rangle - \left\langle\vec x\cdot\nabla V(\vec x)\right\rangle

To identify the preceding relation with the quantum mechanical analogue of the virial theorem, it is essential that the left-hand side be zero. Under what condition would this be true?

23. Sakurai 2.8 - Let |a'\rangle and |a''\rangle be eigenstates of a Hermitian operator A with non-degenerate eigenvalues a' \ne a''. Consider a Hamiltonian given as:

H = \delta\left(|a'\rangle\langle a''| + |a''\rangle\langle a'|\right)

where \delta is a real number.
(a) Write down the eigenstates of H and find E_{a'} and E_{a''}.
(b) Suppose that initially the system is in the state |\alpha, t=0\rangle = |a'\rangle. Determine |\alpha, t\rangle.
(c) For the determined time evolution, find the probability that the system will be in the state |a''\rangle as a function of time.
(d) Describe a physical system like the one in this problem.
24. Sakurai 2.18 - A coherent state of a 1-d quantum simple harmonic oscillator is given by:

a|\lambda\rangle = \lambda|\lambda\rangle

where \lambda is some complex number.
(a) Prove that

|\lambda\rangle = e^{-\frac{|\lambda|^2}{2}}\, e^{\lambda a^\dagger}|0\rangle

is a normalized coherent state.
(b) Prove the minimal uncertainty relation for such a state.
(c) Write the ket as:

|\lambda\rangle = \sum_{n=0}^{\infty} f(n)\,|n\rangle

and show that |f(n)|^2 is a Poisson distribution. Determine the most probable n (and thus the most likely energy state).
(d) Show that a coherent state can be made by applying the translation operator e^{-\frac{i}{\hbar}pl} to |0\rangle, where p is the momentum operator and l is some length of displacement.

25. Sakurai 2.20 - Consider a particle in a potential of the form

V(x) = \begin{cases}\frac{1}{2}kx^2 & x > 0\\ \infty & x < 0\end{cases}

(a) Find the ground state energy.
(b) Find \langle x^2\rangle for the ground state.

26. Sakurai 2.25 - Consider an electron restricted to a cylindrical shell of radii a and b with a finite length L. If we require that \psi vanishes on the boundaries of such a cavity, then:
(a) Determine the energy eigenfunctions and show that the eigenvalues are:

E_{l,m,n} = \frac{\hbar^2}{2m_e}\left[k_{m,n}^2 + \left(\frac{l\pi}{L}\right)^2\right]

where k_{m,n} is the nth root of:

J_m(k_{m,n} b)\, Y_m(k_{m,n} a) - Y_m(k_{m,n} b)\, J_m(k_{m,n} a) = 0

(b) Repeat the above derivation if there exists a magnetic field \vec B = B_0\hat z in the region 0 < \rho < a and \vec B = 0 for \rho > a.

27. Sakurai 2.35 - Consider the Hamiltonian for some spinless particle with charge e. For a static magnetic field, the interaction term enters as \vec p \to \vec p - \frac{e}{c}\vec A. Suppose that \vec B = B_0\hat z and show that this leads to the correct interaction of the orbital momentum, -\frac{e}{2mc}\vec L\cdot\vec B. Further, show that there is an extra term proportional to B^2\left(x^2 + y^2\right) and comment on its physical significance.
28. Sakurai 3.1 - Determine the eigenvalues and eigenvectors of \sigma_y. Suppose that an electron is in the spin state \begin{pmatrix}\alpha\\ \beta\end{pmatrix}. If S_y is measured, what is the probability of measuring spin up?

29. Sakurai 3.8 - Consider the matrix:

D(\alpha,\beta,\gamma) = e^{-\frac{i\sigma_3\alpha}{2}}\, e^{-\frac{i\sigma_2\beta}{2}}\, e^{-\frac{i\sigma_3\gamma}{2}} = \begin{pmatrix} e^{-i\frac{\alpha+\gamma}{2}}\cos\frac{\beta}{2} & -e^{-i\frac{\alpha-\gamma}{2}}\sin\frac{\beta}{2}\\ e^{i\frac{\alpha-\gamma}{2}}\sin\frac{\beta}{2} & e^{i\frac{\alpha+\gamma}{2}}\cos\frac{\beta}{2}\end{pmatrix}

This is equivalent to a single rotation about some axis \hat n by an angle \phi. Determine \phi.

30. Sakurai 3.9
(a) Consider a pure ensemble of spin-1/2 systems. Suppose that \langle S_x\rangle, \langle S_z\rangle, and the sign of \langle S_y\rangle are known. Show how to determine the state vector. Why do we only need the sign of \langle S_y\rangle?
(b) Consider a mixed ensemble of spin-1/2 systems with [S_x], [S_y], and [S_z] known. Show how to construct the 2\times 2 density matrix \rho.

31. Sakurai 3.15 - Consider some spherically symmetric potential V(r). If the wavefunction for a particle moving in such a potential is written as:

\psi(\vec x) = (x + y + 3z)\, f(r)

(a) Is \psi an eigenfunction of L^2? If it is, what is the value of l? If it isn't, what possible values of l might be measured?
(b) What are the probabilities of measuring the various l values?
(c) Suppose it is known that \psi(\vec x) is an energy eigenfunction with energy E. Indicate how to determine what V(r) is.

32. Sakurai 3.16 - A particle in a spherically symmetric potential is known to be in an eigenstate of L^2 and L_z with eigenvalues \hbar^2 l(l+1) and m\hbar respectively. Prove that for such a state |l; m\rangle,

\langle L_x\rangle = \langle L_y\rangle = 0 \qquad \left\langle L_x^2\right\rangle = \left\langle L_y^2\right\rangle = \frac{1}{2}\left[l(l+1)\hbar^2 - m^2\hbar^2\right]

38 Semester II

33. Check the exact form for \langle x|e^{-\frac{i}{\hbar}Ht}|y\rangle from the notes and generalize it to 3 dimensions.

34. Prove that the statement i\hbar\frac{\partial}{\partial t}e^{-\frac{i}{\hbar}Ht} = He^{-\frac{i}{\hbar}Ht} is true for some self-adjoint operator H.

35. A particle is in a delta function potential described as:

H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} - \gamma\,\delta(x)

If the system starts in the state:

\psi(x) = \begin{cases}\frac{1}{\sqrt{2a}} & -a < x < a\\ 0 & \text{otherwise}\end{cases}

find the probability that the particle is bound as a function of time.

36. Scale the variables of the Impenetrable Wall with Finite Well potential and use a plotting program to find some bound states for chosen values of D and L.

37. Derive the gauge transform between the Symmetric Gauge (\vec A = \frac{1}{2}\vec B\times\vec x) and the Landau Gauge (\vec A = -B_0\, y\,\hat x).

38. Complete the derivation of the solution for the electron in a uniform magnetic field derived in class from the step:

-\frac{\hbar^2}{2m}\left[\frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} - \frac{j^2}{r^2}\right]\varphi(r) + \frac{1}{8}m\omega_B^2 r^2\,\varphi(r) = \left(\epsilon - \frac{\hbar^2 k^2}{2m} - \frac{\hbar j\omega_B}{2}\right)\varphi(r)

using Confluent Hypergeometric Functions.

39. Consider the one dimensional harmonic oscillator:

H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}m\omega^2 x^2

Determine the scale of the length and energy eigenvalues by changing the variable.

40. For the approximate form of the Hydrogen Atom states derived in class:

\left[-\frac{1}{2}\frac{d^2}{d\rho^2} - \frac{1}{\rho}\frac{d}{d\rho} + \frac{l(l+1)}{2\rho^2} - \frac{1}{\rho} + \frac{1}{2n^2}\right]R(\rho) = 0 \qquad R(\rho) = N_{n,l}\,\rho^l\, e^{-\frac{\rho}{n}}\, f_{n,l}(\rho)

determine f_{1,0}, f_{2,0}, and f_{2,1} and the normalization factor N for each case.

41. Derive the second order corrections \epsilon_j^{(2)} and \psi_j^{(2)} for an operator H = H_0 + \lambda V with no degenerate states and no continuous spectrum.

42. Derive an expression for the first order corrections to the eigenfunction and eigenvalue of a system with n+1 degenerate states.

43. Let H_0 = -\frac{1}{2}\frac{d^2}{dx^2} inside an infinite square well with boundaries at x = \pm 1. Consider the perturbation V = \epsilon x for some small value of \epsilon.
(a) Find the first order corrections \epsilon_j^{(1)} and \psi_j^{(1)}(x) in terms of the parameter \epsilon.
(b) Solve the system explicitly using Airy Functions. Show that if the exact solution is expanded in orders of \epsilon, the result matches that found in (a). Hint: The Airy functions are solutions of:

\left[\frac{d^2}{dx^2} - x\right]\begin{Bmatrix}Ai(x)\\ Bi(x)\end{Bmatrix} = 0

44. Consider the Particle in a Symmetric Box in Two Dimensions example. Let x_0 = y_0 = \frac{1}{2} and find expressions for \epsilon and \psi(x,y) for the degenerate states \{k,j\} = \{2,1\}, \{1,2\}.

45. For the Particle in a Symmetric Box in Two Dimensions example, find an expression for the first order time evolution of a state initially in the state \psi = \psi_{1,2}.

46. Consider the Hamiltonian H = -\frac{1}{2}\frac{d^2}{dx^2} + |x|. Use the Bohr-Sommerfeld quantization to determine approximate energies of the bound states.

47. Consider the Hamiltonian H = -\frac{1}{2}\frac{d^2}{dx^2} + V(x) where

V(x) = \begin{cases}0 & x < 0\\ x & x \ge 0\end{cases}

(a) Determine the eigenfunctions in the WKB Approximation limit.
(b) Determine the exact expressions for the eigenfunctions. (The solution will involve Airy Functions.)
(c) Show that the large energy asymptotics of the exact solution approach the WKB approximation results.

Part IX

SEMESTER II FINAL EXAM

39 WKB Problem

Consider the potential shown below and the Hamiltonian H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x).

1. Find the eigenfunction expansion for the bound and free states in the WKB Approximation.
2. Obtain an expression (in the WKB Approximation) for the number of bound states.
3. Determine the large-depth asymptotics of part (b).

Solution:
(a) We can separate this problem into three regions by defining a value \epsilon_0 that is the peak of the potential (as noted in the attached sketch).

Region 1: \epsilon > \epsilon_0

In this region, the energy is always greater than the potential and therefore the solution is a superposition of oscillating WKB approximations:

\psi_\epsilon^{(I)}(x) = \sum_\pm c_\pm\,\frac{e^{\pm\frac{i}{\hbar}\int\sqrt{2m(\epsilon - V(x'))}\,dx'}}{\left[2m\left(\epsilon - V(x)\right)\right]^{1/4}}

There are no classical turning points for these energies, so the integral is indefinite and introduces a phase to each term. That phase can be absorbed into the coefficients. At large values of x we expect this solution to appear like a Fourier transform, therefore the coefficients should reduce to \frac{1}{\sqrt{2\pi}} in that limit. That is:

\psi_\epsilon^{(I)}(x) = \frac{1}{\sqrt{2\pi}}\frac{e^{\frac{i}{\hbar}\int\sqrt{2m(\epsilon - V(x'))}\,dx'}}{\left[\epsilon - V(x)\right]^{1/4}} + \frac{1}{\sqrt{2\pi}}\frac{e^{-\frac{i}{\hbar}\int\sqrt{2m(\epsilon - V(x'))}\,dx'}}{\left[\epsilon - V(x)\right]^{1/4}}

This gives the normalized approximation for \epsilon > \epsilon_0.


Region 2: 0 < \epsilon < \epsilon_0

In this region, the classical turning points at a(\epsilon) and b(\epsilon) become important. We would expect, on either side of the potential barrier, a combination of sines and cosines, with exponentials decaying and growing inside the barrier where V(x) > \epsilon. However, sine and cosine connect to particular exponential forms in the WKB Approximation, so we'll have to reference the asymptotics of the Airy Functions to determine the form:

Ai(x)\big|_{x\to+\infty} \approx \frac{1}{2\sqrt\pi\, x^{1/4}}\, e^{-\frac{2}{3}x^{3/2}} \qquad Ai(x)\big|_{x\to-\infty} \approx \frac{1}{\sqrt\pi\, |x|^{1/4}}\sin\left(\frac{2}{3}|x|^{3/2} + \frac{\pi}{4}\right)

Bi(x)\big|_{x\to+\infty} \approx \frac{1}{\sqrt\pi\, x^{1/4}}\, e^{\frac{2}{3}x^{3/2}} \qquad Bi(x)\big|_{x\to-\infty} \approx \frac{1}{\sqrt\pi\, |x|^{1/4}}\cos\left(\frac{2}{3}|x|^{3/2} + \frac{\pi}{4}\right)

Given this information, we can start from a known amplitude for sine or cosine on one side of the barrier, connect to the correct exponential term, then use the above forms to get the behavior across the second turning point. The difficulty is that only one exponential solution can be tracked reliably as we work across the potential, so we'll have to use two solutions. Consider the left-hand-side wavefunctions:

\psi^{(IIa)}(x) = \frac{2A}{\left[\epsilon - V(x)\right]^{1/4}}\cos\left(\frac{1}{\hbar}\int_x^{a(\epsilon)}\sqrt{2m\left(\epsilon - V(x')\right)}\,dx' + \frac{\pi}{4}\right) \qquad x < a(\epsilon)

\psi^{(IIb)}(x) = \frac{2B}{\left[\epsilon - V(x)\right]^{1/4}}\sin\left(\frac{1}{\hbar}\int_x^{a(\epsilon)}\sqrt{2m\left(\epsilon - V(x')\right)}\,dx' + \frac{\pi}{4}\right) \qquad x < a(\epsilon)

Using the Airy function asymptotics, these connect inside the barrier to the forms:

\psi^{(IIa)}(x) = \frac{A}{\left|\epsilon - V(x)\right|^{1/4}}\, e^{-\frac{1}{\hbar}\int_{a(\epsilon)}^x\sqrt{2m\left|\epsilon - V(x')\right|}\,dx'} \qquad a(\epsilon) < x < b(\epsilon)

\psi^{(IIb)}(x) = \frac{B}{\left|\epsilon - V(x)\right|^{1/4}}\, e^{+\frac{1}{\hbar}\int_{a(\epsilon)}^x\sqrt{2m\left|\epsilon - V(x')\right|}\,dx'} \qquad a(\epsilon) < x < b(\epsilon)

As we move from left to right through the barrier, one solution grows while the other decays off. If we look from the other side of the barrier, we see the opposite behavior. Again a combination of sine and cosine exists in the region x > b(\epsilon), and these connect to the above at x = b(\epsilon); however, the two solutions switch roles there. This is explained better visually in the sketch. Regardless, passing through the barrier causes one term to grow and the other to decay: the solution that behaved like Ai at the first turning point behaves like Bi at the second, and similarly for the other term. Therefore, we have:

\psi^{(IIa)}(x) = \frac{A\, e^{-\frac{1}{\hbar}\int_{a(\epsilon)}^{b(\epsilon)}\sqrt{2m\left|\epsilon - V(x')\right|}\,dx'}}{\left[\epsilon - V(x)\right]^{1/4}}\cos\left(\frac{1}{\hbar}\int_{b(\epsilon)}^x\sqrt{2m\left(\epsilon - V(x')\right)}\,dx' + \frac{\pi}{4}\right) \qquad b(\epsilon) < x

\psi^{(IIb)}(x) = \frac{B\, e^{+\frac{1}{\hbar}\int_{a(\epsilon)}^{b(\epsilon)}\sqrt{2m\left|\epsilon - V(x')\right|}\,dx'}}{\left[\epsilon - V(x)\right]^{1/4}}\sin\left(\frac{1}{\hbar}\int_{b(\epsilon)}^x\sqrt{2m\left(\epsilon - V(x')\right)}\,dx' + \frac{\pi}{4}\right) \qquad b(\epsilon) < x

where the additional constant factors account for the growth or decay in the barrier. At large values of x, the above should look like sine and cosine transforms:

f(\omega) = \sqrt{\frac{2}{\pi}}\int_0^\infty \tilde f(t)\sin(\omega t)\,dt \qquad f(\omega) = \sqrt{\frac{2}{\pi}}\int_0^\infty \tilde f(t)\cos(\omega t)\,dt

Therefore (since V(x) is zero for large x) we have the condition that:

2A = 2B = \sqrt{\frac{2}{\pi}} \quad\Rightarrow\quad A = B = \frac{1}{\sqrt{2\pi}}

Collecting the pieces gives the two eigenfunctions in the region 0 < \epsilon < \epsilon_0: \psi^{(IIa)} consists of a cosine of amplitude \sqrt{2/\pi} for x < a(\epsilon), the decaying exponential with coefficient \frac{1}{\sqrt{2\pi}} inside the barrier, and a cosine of amplitude \sqrt{2/\pi}\, e^{-\frac{1}{\hbar}\int_{a(\epsilon)}^{b(\epsilon)}\sqrt{2m|\epsilon - V(x')|}\,dx'} for x > b(\epsilon); \psi^{(IIb)} is the analogous set with sines and the growing exponential.

Region 3: \epsilon < 0

In this region we expect an oscillating solution in the well region and decaying solutions outside the well, with turning points at c(\epsilon) and d(\epsilon). So, we can again use the Airy function asymptotics to determine the solutions on either side of the turning points. Consider first the solutions outside the well:

\psi^{(III)}(x) = \frac{C}{\left|\epsilon - V(x)\right|^{1/4}}\, e^{-\frac{1}{\hbar}\int_x^{c(\epsilon)}\sqrt{2m\left|\epsilon - V(x')\right|}\,dx'} \qquad x < c(\epsilon)

\psi^{(III)}(x) = \frac{C}{\left|\epsilon - V(x)\right|^{1/4}}\, e^{-\frac{1}{\hbar}\int_{d(\epsilon)}^x\sqrt{2m\left|\epsilon - V(x')\right|}\,dx'} \qquad d(\epsilon) < x

The coefficient matches since we expect the solution to be symmetric about the well. These two exponential forms connect to oscillating solutions on either turning point of the well:

\psi^{(II)}_{left}(x) = \frac{2C}{\left[\epsilon - V(x)\right]^{1/4}}\sin\left(\frac{1}{\hbar}\int_{c(\epsilon)}^x\sqrt{2m\left(\epsilon - V(x')\right)}\,dx' + \frac{\pi}{4}\right)

\psi^{(II)}_{right}(x) = \frac{2C}{\left[\epsilon - V(x)\right]^{1/4}}\sin\left(\frac{1}{\hbar}\int_x^{d(\epsilon)}\sqrt{2m\left(\epsilon - V(x')\right)}\,dx' + \frac{\pi}{4}\right)

Requiring these two forms to agree gives a condition on \epsilon. This solution can be normalized by integrating over the entire region: since these states are bound, the integral converges, and setting it to 1 determines C. Though, without an explicit form for V(x), this cannot be done analytically.

(b) For the solutions in Region III to match up requires quantizing the energy of these bound states. The Bohr-Sommerfeld quantization requires:

\frac{1}{\hbar}\int_{c(\epsilon)}^{d(\epsilon)}\sqrt{2m\left(\epsilon - V(x')\right)}\,dx' = \left(n - \frac{1}{2}\right)\pi \qquad n = 1, 2, \dots

To estimate the number of bound states, one must determine from the above form how many values of \epsilon exist between the lowest energy allowed in the well and \epsilon = 0 (above which the states are no longer bound). Without an exact form for V(x), this integral cannot be done analytically. However, it is possible to approximate the maximum energy of the bound states by assuming that it is just slightly less than zero,

\frac{1}{\hbar}\int_{c(0)}^{d(0)}\sqrt{2m\left(0 - V(x')\right)}\,dx' = \left(n_{max} - \frac{1}{2}\right)\pi \quad\Rightarrow\quad n_{max} = \frac{1}{2} + \frac{\sqrt{2m}}{\pi\hbar}\int_0^L\sqrt{-V(x)}\,dx

where we've assumed that the well is of width L at the highest energy level. For the case that the energy near \epsilon = 0 does not generate an integer value on the right hand side of this equation, the nearest integer less than that value labels the highest energy state. That is,

n_{max} = \text{Rnd}\left[\frac{1}{2} + \frac{\sqrt{2m}}{\pi\hbar}\, A_{well}\right]

where Rnd implies rounding down to the nearest integer value and A_{well} = \int_0^L\sqrt{|V(x)|}\,dx is some value dependent on the area of the well.
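The counting estimate above is easy to evaluate numerically for a chosen well. A minimal sketch (an addition to the notes; assumes \hbar = m = 1 and a hypothetical square well of depth 50 and width 1):

```python
import math

HBAR = MASS = 1.0  # units with hbar = m = 1

def wkb_bound_state_count(V, L, n=10000):
    """Evaluate n_max = floor(1/2 + sqrt(2m)/(pi*hbar) * integral_0^L sqrt(|V(x)|) dx),
    with the well-area integral A_well done by the trapezoid rule."""
    h = L / n
    s = 0.5 * (math.sqrt(abs(V(0.0))) + math.sqrt(abs(V(L))))
    for i in range(1, n):
        s += math.sqrt(abs(V(i * h)))
    a_well = s * h
    return math.floor(0.5 + math.sqrt(2.0 * MASS) / (math.pi * HBAR) * a_well)

# hypothetical square well: V(x) = -50 on [0, 1]
count = wkb_bound_state_count(lambda x: -50.0, 1.0)
```

For a general well shape only the V(x) callable changes; the final floor mirrors the Rnd[...] rounding above.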
(c) If we assume that the well depth goes to infinity, then the well becomes infinitely deep and we can assume that the highest energy will be exactly at \epsilon = 0. This implies that the above result will not need to be rounded off, and the number of bound states is given by:

n_{max} = \frac{1}{2} + \frac{\sqrt{2m}}{\pi\hbar}\int_0^L\sqrt{|V(x)|}\,dx

40 Scattering Problem

Consider some spherically symmetric potential V(r), defined as:

V(r) = \begin{cases}-\gamma & r < R\\ 0 & r \ge R\end{cases}

Determine the differential scattering cross-section.


Solution:
We determined in class that the scattering problem can be done by expanding in terms of an incoming plane wave and spherically spreading scattered waves. This solution has the form:

\psi_k(\vec x) = e^{i\vec k\cdot\vec x} + \sum_{l,m} a_{l,m}(k)\, Y_{l,m}(\theta,\varphi)\,\frac{e^{ikr}}{r} \qquad r = |\vec x| \gg R

In the case that we take the incident direction along \hat z (\vec k = k\hat e_3), the above simplifies to an azimuthally symmetric form. The angular dependence of the spreading term is the differential cross section, and integrating over the solid angle gives the total scattering cross section:

\frac{d\sigma}{d\Omega} = \left|f\left(k,\hat x\right)\right|^2 = \sum_{l,l'}(2l+1)\left(2l'+1\right) a_l\, a_{l'}^*\, P_l(\cos\theta)\, P_{l'}(\cos\theta)

\sigma = 4\pi\sum_{l=0}^\infty (2l+1)\,|a_l|^2

where we've used the completeness and orthogonality of the Legendre polynomials:

\int_{-1}^{1} P_l(x)\, P_{l'}(x)\,dx = \frac{2}{2l+1}\,\delta_{l,l'}

Therefore, to determine the scattering cross section, we need to determine values for a_l.
We can solve this problem as:

\left[-\frac{\hbar^2}{2m}\nabla^2 + V(r) - \epsilon\right]\psi = 0

Plugging in the potential above,

\left[\nabla^2 + \frac{2m}{\hbar^2}\epsilon\right]\psi_> = 0 \qquad \left[\nabla^2 + \frac{2m}{\hbar^2}(\epsilon + \gamma)\right]\psi_< = 0

where \psi_< and \psi_> are the solutions inside and outside the potential respectively. Then, making the substitutions:

k \equiv \sqrt{\frac{2m\epsilon}{\hbar^2}} \qquad k' \equiv \sqrt{\frac{2m(\epsilon + \gamma)}{\hbar^2}}

we can look up how to write e^{ikz} in terms of Legendre polynomials and write the solutions as stated above (incoming plane wave, and outgoing spherical distribution). That is:

\psi_> = \sum_{m=0}^\infty i^m (2m+1)\, j_m(kr)\, P_m(\cos\theta) + k\sum_{l=0}^\infty i^{l+1}(2l+1)\, a_l\, h_l^{(1)}(kr)\, P_l(\cos\theta)

\psi_< = \sum_{j=0}^\infty k'\, i^{j+1}(2j+1)\, b_j\, j_j(k'r)\, P_j(\cos\theta)

Note that inside the spherical potential the radial solution is a Spherical Bessel function instead of an outgoing Hankel function. This is because the solution includes r = 0. The boundary on the edge of the sphere requires:

\psi_>(r\to R) = \psi_<(r\to R) \qquad \psi_>'(r\to R) = \psi_<'(r\to R)

where ' denotes a radial derivative. The first condition gives:

\sum_{m=0}^\infty i^m(2m+1)\, j_m(kR)\, P_m(\cos\theta) + \sum_{l=0}^\infty k\, i^{l+1}(2l+1)\, a_l\, h_l^{(1)}(kR)\, P_l(\cos\theta) = \sum_{j=0}^\infty k'\, i^{j+1}(2j+1)\, b_j\, j_j(k'R)\, P_j(\cos\theta)

Multiplying by P_l(\cos\theta) and integrating eliminates the angular dependence, leaving:

i^l(2l+1)\left[j_l(kR) + ik\, a_l\, h_l^{(1)}(kR)\right] = i^l(2l+1)\, ik'\, b_l\, j_l(k'R)

Dropping the common factor of i^l(2l+1),

j_l(kR) + ik\, a_l\, h_l^{(1)}(kR) = ik'\, b_l\, j_l(k'R)

Repeating for the derivative condition yields the same result except for an additional k or k' being picked up, with the radial functions replaced by their derivatives. That is:

k\, j_l'(kR) + ik^2\, a_l\, h_l^{(1)\prime}(kR) = ik'^2\, b_l\, j_l'(k'R)
2
We can use one of these conditions to get bl in terms of al and the second to determine al . That is:

bl =

1
2

i
(1)
jl (kR) + ikal hl (kR)

#
"
(1)
1
k hl (kR)
i jl (kR)
=
al 0
0
k jl (k 0 R)
k jl (k 0 R)
2

ik 0 jl (k 0 R)

And then,
#!
"
(1)

k hl (kR)
i jl (kR)
1
0
0

j
k
R
al 0

l
k jl (k 0 R)
k 0 jl (k 0 R)
2

i
1 h 0
(1)0

kjl (kR) + ik 2 al hl (kR) = ik 02


2

(1)

(1)0

kjl0 (kR) + ik 2 al hl

al

(1)0
ik 2 hl (kR)

(kR) = al ikk 0

0
0
hl (kR) jl0 (k 0 R)
0 jl (kR) jl (k R)
+
k
jl (k 0 R)
jl (k 0 R)


0
0
0 jl (k R) (1)
ikk
h (kR)
jl (k 0 R) l

= k0

jl0 (k 0 R)
jl (kR) kjl0 (kR)
jl (k 0 R)

j 0 (k0 R)

0 l
0
i k jl (k0 R) jl (kR) kjl (kR)
al =
k k 0 jl0 (k00 R) h(1) (kR) kh(1)0 (kR)
jl (k R) l
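The l = 0 case of this result can be checked numerically using the closed forms j_0(x) = \frac{\sin x}{x}, y_0(x) = -\frac{\cos x}{x}, and h_0^{(1)} = j_0 + i y_0. With the identification S_0 = 1 + 2ik a_0 (which follows from writing j_l + ik a_l h_l^{(1)} = \frac{1}{2}h_l^{(2)} + \left(\frac{1}{2} + ik a_l\right)h_l^{(1)}), elastic unitarity |S_0| = 1 must hold for any k, k', R, which is a useful consistency check on the matching algebra. This sketch is an addition, not part of the original solution.

```python
import math

def j0(x):  return math.sin(x) / x
def dj0(x): return math.cos(x) / x - math.sin(x) / x ** 2
def y0(x):  return -math.cos(x) / x
def dy0(x): return math.sin(x) / x + math.cos(x) / x ** 2

def h0(x):  return j0(x) + 1j * y0(x)          # spherical Hankel h0^(1)
def dh0(x): return dj0(x) + 1j * dy0(x)

def a0(k, kp, R):
    """l = 0 coefficient from the matching conditions at r = R:
    a0 = (i/k) [kp (j0'/j0)(kpR) j0(kR) - k j0'(kR)]
             / [kp (j0'/j0)(kpR) h0(kR) - k h0'(kR)]."""
    g = kp * dj0(kp * R) / j0(kp * R)          # interior logarithmic-derivative factor
    num = g * j0(k * R) - k * dj0(k * R)
    den = g * h0(k * R) - k * dh0(k * R)
    return 1j / k * num / den
```

Because the numerator is real and the denominator is the numerator plus i times the analogous Neumann combination, S_0 = 1 + 2ik a_0 has unit modulus identically.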

Plugging in the values from above, k = \frac{\sqrt{2m\epsilon}}{\hbar} and k' = \frac{\sqrt{2m(\epsilon+\gamma)}}{\hbar}, expresses a_l entirely in terms of \epsilon, \gamma, and R. Then the cross-section can be written as:

\sigma = 4\pi\sum_{l=0}^\infty (2l+1)\left|a_l\right|^2

with a_l as above, evaluated at k = \sqrt{\frac{2m\epsilon}{\hbar^2}} and k' = \sqrt{\frac{2m(\epsilon+\gamma)}{\hbar^2}}.

Alternately, in the limit that the scattering is not strong, it is possible to use the Born Approximation. As seen in class, this approximation gives the scattering amplitude as:

f\left(\vec k', \vec k\right) = \frac{m}{2\pi\hbar^2}\int e^{i(\vec k - \vec k')\cdot\vec r}\, V(\vec r)\, d^3x

In the case of a spherically symmetric potential, this can be reduced to:

f\left(\vec k', \vec k\right) = \frac{m}{2\pi\hbar^2}\int\!\!\int\!\!\int e^{i|\vec k - \vec k'|\, r\cos\theta}\, V(r)\, r^2\, dr\, d(\cos\theta)\, d\varphi = \frac{2m}{\hbar^2}\frac{1}{|\vec k - \vec k'|}\int_0^\infty V(r)\sin\left(|\vec k - \vec k'|\, r\right) r\, dr

Plugging in our potential and writing q \equiv |\vec k - \vec k'|, this gives, in the limit of weak scattering,

f\left(\vec k', \vec k\right) = -\frac{2m\gamma}{\hbar^2 q}\int_0^R r\sin(qr)\, dr = -\frac{2m\gamma}{\hbar^2 q}\left[\frac{\sin(qr)}{q^2} - \frac{r\cos(qr)}{q}\right]_0^R

f\left(\vec k', \vec k\right) = -\frac{2m\gamma}{\hbar^2 q^3}\left[\sin(qR) - qR\cos(qR)\right]

We can determine q = |\vec k - \vec k'| by recalling that the scattering is elastic, so |\vec k| = |\vec k'| = \sqrt{\frac{2m\epsilon}{\hbar^2}}. Expressing |\vec v - \vec u| = \sqrt{|\vec v|^2 + |\vec u|^2 - 2\vec u\cdot\vec v}, this can be written in terms of the scattering angle \theta_{sc}:

\left|\vec k - \vec k'\right| = \sqrt{\frac{2m\epsilon}{\hbar^2}}\sqrt{1 + 1 - 2\cos\theta_{sc}} = 2\sqrt{\frac{2m\epsilon}{\hbar^2}}\sin\left(\frac{\theta_{sc}}{2}\right)

Plugging this into the above result, the differential cross section in the limit of weak scattering is:

\frac{d\sigma}{d\Omega} = \left|f\left(\vec k', \vec k\right)\right|^2 = \frac{4m^2\gamma^2}{\hbar^4 q^6}\left[\sin(qR) - qR\cos(qR)\right]^2 \qquad q = 2\sqrt{\frac{2m\epsilon}{\hbar^2}}\sin\left(\frac{\theta_{sc}}{2}\right)

41 Perturbation Problem

Let H_0 be defined as -\frac{1}{2}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) on the region [0,1]\times[0,1] with Dirichlet boundary conditions on the box. Let there be some perturbing potential of the form:

V(x,y) = e^{-(x+y)}

Determine the leading order corrections to the first and second eigenvalues of H_0. Indicate the correct zeroth order states.
Solution:
The general zeroth-order solution of this problem satisfies:

H_0\psi^{(0)} = \epsilon^{(0)}\psi^{(0)} \quad\Rightarrow\quad \left[\frac{1}{2}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) + \epsilon^{(0)}\right]\psi^{(0)} = 0

The solution of this is a summation of sines and cosines. Applying the boundary conditions on the edges eliminates all but the sine terms:

\psi^{(0)}_{k,j}(x,y) = 2\sin(k\pi x)\sin(j\pi y) \qquad j,k = 1,2,3,\dots

where we've used the known normalization N = \sqrt{\frac{2}{L_x}}\sqrt{\frac{2}{L_y}} to determine the coefficient. Here we've determined:

\epsilon^{(0)} = \frac{\pi^2}{2}\left(k^2 + j^2\right)

where the \frac{1}{2} has come from the extra one half in front of the differential operator.

In solving the perturbation problem:

H = H_0 + \lambda V(x,y) \qquad H\psi = \epsilon\psi

for the first two states, we have two cases to consider.

Ground State - Non-Degenerate Perturbation

For the ground state, we'll start with \psi^{(0)}_{1,1}(x,y) and determine the first order correction to the wavefunction and the energy. From the notes, we have for non-degenerate states:

\epsilon^{(1)}_{j,k} = \left\langle\psi^{(0)}_{j,k}(x,y)\middle|V(x,y)\,\psi^{(0)}_{j,k}(x,y)\right\rangle

\psi^{(1)}_{j,k} = \sum_{(n,m)\ne(j,k)} \frac{\left\langle\psi^{(0)}_{n,m}(x,y)\middle|V(x,y)\,\psi^{(0)}_{j,k}(x,y)\right\rangle}{\epsilon^{(0)}_{j,k} - \epsilon^{(0)}_{n,m}}\,\psi^{(0)}_{n,m}

Because we are contained in a region of space, there is no continuous spectrum in the correction (no integral term). So, to determine the corrections, we just have to calculate:

\left\langle\psi^{(0)}_{j,k}\middle|V\psi^{(0)}_{j,k}\right\rangle = \int_0^1 dx\int_0^1 dy\;\psi^{(0)}_{j,k}(x,y)\, V(x,y)\,\psi^{(0)}_{j,k}(x,y) = 4\int_0^1 dx\int_0^1 dy\;\sin^2(k\pi x)\sin^2(j\pi y)\, e^{-(x+y)}

\left\langle\psi^{(0)}_{j,k}\middle|V\psi^{(0)}_{j,k}\right\rangle = 4\left[\int_0^1 dx\,\sin^2(k\pi x)\, e^{-x}\right]\left[\int_0^1 dy\,\sin^2(j\pi y)\, e^{-y}\right]

And similarly for the other terms:

\left\langle\psi^{(0)}_{n,m}\middle|V\psi^{(0)}_{j,k}\right\rangle = 4\left[\int_0^1 dx\,\sin(n\pi x)\sin(k\pi x)\, e^{-x}\right]\left[\int_0^1 dy\,\sin(m\pi y)\sin(j\pi y)\, e^{-y}\right]

All of the integrals in this problem reduce to a single form:

\int_0^1 \sin(n\pi x)\sin(k\pi x)\, e^{-x}\, dx

So, if we can determine this integral, we can plug it into each part of the solution.
\int_0^1 \sin(n\pi x)\sin(k\pi x)\, e^{-x}\, dx = \frac{1}{(2i)^2}\int_0^1\left[e^{in\pi x} - e^{-in\pi x}\right]\left[e^{ik\pi x} - e^{-ik\pi x}\right] e^{-x}\, dx

= -\frac{1}{4}\int_0^1\left[e^{[i(n+k)\pi - 1]x} - e^{[i(n-k)\pi - 1]x} - e^{[-i(n-k)\pi - 1]x} + e^{[-i(n+k)\pi - 1]x}\right] dx

Each integral is then simple:

\int_0^1 \sin(n\pi x)\sin(k\pi x)\, e^{-x}\, dx = -\frac{1}{4}\left[\frac{e^{[i(n+k)\pi - 1]x}}{i(n+k)\pi - 1} - \frac{e^{[i(n-k)\pi - 1]x}}{i(n-k)\pi - 1} - \frac{e^{[-i(n-k)\pi - 1]x}}{-i(n-k)\pi - 1} + \frac{e^{[-i(n+k)\pi - 1]x}}{-i(n+k)\pi - 1}\right]_0^1

The terms with matching exponential arguments (n \pm k) can be combined. For the (n+k) pair, since n+k is an integer, e^{\pm i(n+k)\pi} = (-1)^{n+k}, and so:

\frac{e^{[i(n+k)\pi - 1]} - 1}{i(n+k)\pi - 1} + \frac{e^{[-i(n+k)\pi - 1]} - 1}{-i(n+k)\pi - 1} = \left[(-1)^{n+k}e^{-1} - 1\right]\frac{-2}{1 + \pi^2(n+k)^2} = \frac{2}{e}\,\frac{e - (-1)^{n+k}}{1 + \pi^2(n+k)^2}

Repeating this for the (n-k) terms gives the same form with n+k \to n-k. Combining everything:

\int_0^1 \sin(n\pi x)\sin(k\pi x)\, e^{-x}\, dx = \frac{1}{2e}\left[\frac{e - (-1)^{n-k}}{1 + \pi^2(n-k)^2} - \frac{e - (-1)^{n+k}}{1 + \pi^2(n+k)^2}\right]

and we can also state the case of n = k:

\int_0^1 \sin^2(n\pi x)\, e^{-x}\, dx = \frac{1}{2e}\left[(e-1) - \frac{e-1}{1 + 4\pi^2 n^2}\right] = \frac{2\pi^2 n^2 (e-1)}{\left(1 + 4\pi^2 n^2\right) e}
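Since sign slips are easy to make in this kind of exponential bookkeeping, here is a quadrature check of the closed form (added here; not in the original notes):

```python
import math

def overlap_numeric(n, k, m=20000):
    """integral_0^1 sin(n pi x) sin(k pi x) e^{-x} dx by the trapezoid rule."""
    h = 1.0 / m
    s = 0.0
    for i in range(m + 1):
        x = i * h
        w = 0.5 if i in (0, m) else 1.0
        s += w * math.sin(n * math.pi * x) * math.sin(k * math.pi * x) * math.exp(-x)
    return s * h

def overlap_closed(n, k):
    """Closed form derived above by writing the sines as complex exponentials."""
    def term(s):
        return (math.e - (-1.0) ** s) / (1.0 + math.pi ** 2 * s ** 2)
    return (term(n - k) - term(n + k)) / (2.0 * math.e)
```

The two agree for all small n, k, including the diagonal n = k case quoted above.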

Using these results, we have the energy correction:

(1)
1,1

(0)
h1,1 (x, y)|V

(0)
(x, y)1,1 (x, y)i

(1)
1,1

Z

= 4

sin (x) e

16 4

(1 + 4 2 )2

e+1
e

2

2
2 2 12 e + 1
dx = 4
1 + 4 2 12 e

2

And the ground state wavefunction:
\[
\psi^{(1)}_{1,1}=\sum_{(n,m)\neq(1,1)}\frac{\langle\psi^{(0)}_{n,m}(x,y)|V(x,y)|\psi^{(0)}_{1,1}(x,y)\rangle}{\epsilon^{(0)}_{1,1}-\epsilon^{(0)}_{n,m}}\,\psi^{(0)}_{n,m}(x,y)
\]
\[
\psi^{(1)}_{1,1}=\sum_{(n,m)\neq(1,1)}\frac{4\left[\int_0^1 dx\,\sin(n\pi x)\sin(\pi x)\,e^{-x}\right]\left[\int_0^1 dy\,\sin(m\pi y)\sin(\pi y)\,e^{-y}\right]}{\frac{\pi^2}{2}\left[\left(1^2+1^2\right)-\left(n^2+m^2\right)\right]}\;2\sin(n\pi x)\sin(m\pi y)
\]
so the first correction to the wave function can be expressed:
\[
\psi^{(1)}_{1,1}=\sum_{(n,m)\neq(1,1)}\frac{4}{e^2\pi^2\left(2-n^2-m^2\right)}
\left[\frac{e-(-1)^{n-1}}{1+\pi^2(n-1)^2}-\frac{e-(-1)^{n+1}}{1+\pi^2(n+1)^2}\right]
\left[\frac{e-(-1)^{m-1}}{1+\pi^2(m-1)^2}-\frac{e-(-1)^{m+1}}{1+\pi^2(m+1)^2}\right]
\sin(n\pi x)\sin(m\pi y)
\]

First Excited State - Twofold Degenerate Perturbation Theory


This problem becomes more complicated in the first excited state since the states $\psi^{(0)}_{1,2}(x,y)$ and $\psi^{(0)}_{2,1}(x,y)$ have degenerate eigenvalues. The perturbative potential will (likely) break this degeneracy; however, we'll need to work through several steps to get the correct combination of these two states to perturb about. In this case, we can write the operator equation $\left(H_0+\lambda V-\epsilon\right)\psi=0$ in matrix form as:

\[
\begin{pmatrix}
\pi^2-\epsilon+\lambda V_{1,1;1,1} & \lambda V_{1,1;1,2} & \lambda V_{1,1;2,1} & \lambda V_{1,1;2,2} & \cdots\\
\lambda V_{1,2;1,1} & \frac{5\pi^2}{2}-\epsilon+\lambda V_{1,2;1,2} & \lambda V_{1,2;2,1} & \lambda V_{1,2;2,2} & \cdots\\
\lambda V_{2,1;1,1} & \lambda V_{2,1;1,2} & \frac{5\pi^2}{2}-\epsilon+\lambda V_{2,1;2,1} & \lambda V_{2,1;2,2} & \cdots\\
\lambda V_{2,2;1,1} & \lambda V_{2,2;1,2} & \lambda V_{2,2;2,1} & 4\pi^2-\epsilon+\lambda V_{2,2;2,2} & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\begin{pmatrix}
c_{1,1}\\ c_{1,2}\\ c_{2,1}\\ c_{2,2}\\ \vdots
\end{pmatrix}
=0
\]
where we've defined $V_{n,m;j,k}\equiv\langle\psi^{(0)}_{n,m}(x,y)|V(x,y)|\psi^{(0)}_{j,k}(x,y)\rangle$ and expanded the solution as:
\[
\psi_{n,m}(x,y)=\sum_{j,k}c^{(j,k)}_{n,m}\,\psi^{(0)}_{j,k}(x,y)
\]

In order to solve the problem, we expand the energies and coefficients as:
\[
\epsilon_{j,k}=\epsilon^{(0)}_{j,k}+\lambda\,\epsilon^{(1)}_{j,k}+\lambda^2\,\epsilon^{(2)}_{j,k}+\ldots
\]
\[
c^{(j,k)}_{n,m}=c^{(0)(j,k)}_{n,m}+\lambda\,d^{(1)(j,k)}_{n,m}+\lambda^2\,d^{(2)(j,k)}_{n,m}+\ldots
\]

However, since we're only looking for corrections to the first excited state, we can assume each coefficient is zero except for $c_{1,2}$ and $c_{2,1}$. This leaves us with the matrix equation:
\[
\lambda\begin{pmatrix}
V_{1,2;1,2}-\epsilon^{(1)} & V_{1,2;2,1}\\
V_{2,1;1,2} & V_{2,1;2,1}-\epsilon^{(1)}
\end{pmatrix}
\begin{pmatrix} c_{1,2}\\ c_{2,1}\end{pmatrix}
+O\!\left(\lambda^2\right)=0
\]

So, we just need:
\[
V_{1,2;1,2}=4\left[\int_0^1 dx\,\sin(\pi x)\sin(\pi x)\,e^{-x}\right]\left[\int_0^1 dy\,\sin(2\pi y)\sin(2\pi y)\,e^{-y}\right]
\]
\[
V_{1,2;2,1}=V_{2,1;1,2}=4\left[\int_0^1 dx\,\sin(\pi x)\sin(2\pi x)\,e^{-x}\right]\left[\int_0^1 dy\,\sin(2\pi y)\sin(\pi y)\,e^{-y}\right]
\]
\[
V_{2,1;2,1}=4\left[\int_0^1 dx\,\sin(2\pi x)\sin(2\pi x)\,e^{-x}\right]\left[\int_0^1 dy\,\sin(\pi y)\sin(\pi y)\,e^{-y}\right]
\]

Plugging in the integral results from the previous part,
\[
V_{1,2;1,2}=V_{2,1;2,1}=4\left[\frac{2\pi^2\cdot 1^2\,(e-1)}{\left(1+4\pi^2\cdot 1^2\right)e}\right]\left[\frac{2\pi^2\cdot 2^2\,(e-1)}{\left(1+4\pi^2\cdot 2^2\right)e}\right]
=\frac{64\pi^4}{\left(1+4\pi^2\right)\left(1+16\pi^2\right)}\left(\frac{e-1}{e}\right)^2
\]
\[
V_{1,2;2,1}=V_{2,1;1,2}=4\left(\frac{1}{2e}\left[\frac{e-(-1)^{1-2}}{1+\pi^2(1-2)^2}-\frac{e-(-1)^{1+2}}{1+\pi^2(1+2)^2}\right]\right)^2
=4\left(\frac{e+1}{2e}\left[\frac{1}{1+\pi^2}-\frac{1}{1+9\pi^2}\right]\right)^2
\]
\[
=4\left(\frac{e+1}{2e}\cdot\frac{8\pi^2}{\left(1+\pi^2\right)\left(1+9\pi^2\right)}\right)^2
\]
\[
V_{1,2;2,1}=V_{2,1;1,2}=\frac{64\pi^4}{\left[\left(1+\pi^2\right)\left(1+9\pi^2\right)\right]^2}\left(\frac{e+1}{e}\right)^2
\]

Using these results, and writing $\Delta\equiv V_{1,2;1,2}=V_{2,1;2,1}$ and $\delta\equiv V_{1,2;2,1}=V_{2,1;1,2}$, the problem reduces to:
\[
\begin{pmatrix}
\Delta-\epsilon^{(1)} & \delta\\
\delta & \Delta-\epsilon^{(1)}
\end{pmatrix}
\begin{pmatrix} c_{1,2}\\ c_{2,1}\end{pmatrix}=0
\]
We require the determinant to be zero,
\[
\left(\Delta-\epsilon^{(1)}\right)^2-\delta^2=0
\quad\Longrightarrow\quad
\epsilon^{(1)}_{\pm}=\Delta\pm\delta
\]
This gives the correct coefficients as:
\[
\begin{pmatrix}
\Delta-\epsilon^{(1)}_{\pm} & \delta\\
\delta & \Delta-\epsilon^{(1)}_{\pm}
\end{pmatrix}
\begin{pmatrix} c^{(\pm)}_{1,2}\\ c^{(\pm)}_{2,1}\end{pmatrix}=0
\]
Which has solutions:
\[
\begin{pmatrix} c^{(+)}_{1,2}\\ c^{(+)}_{2,1}\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ 1\end{pmatrix}
\qquad
\begin{pmatrix} c^{(-)}_{1,2}\\ c^{(-)}_{2,1}\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix}
\]
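This $2\times 2$ eigenvalue problem can be confirmed numerically (a sketch added here, plugging in the $\Delta$ and $\delta$ values derived above at unit perturbation strength):

```python
import numpy as np

# Matrix elements derived above (unit perturbation strength)
Delta = 64*np.pi**4 / ((1 + 4*np.pi**2)*(1 + 16*np.pi**2)) * ((np.e - 1)/np.e)**2
delta = 64*np.pi**4 / ((1 + np.pi**2)*(1 + 9*np.pi**2))**2 * ((np.e + 1)/np.e)**2

V = np.array([[Delta, delta],
              [delta, Delta]])
evals, evecs = np.linalg.eigh(V)   # ascending eigenvalues

# Eigenvalues are Delta -/+ delta; eigenvectors are (1, -1)/sqrt(2) and (1, 1)/sqrt(2)
assert np.allclose(evals, [Delta - delta, Delta + delta])
assert np.allclose(np.abs(evecs), 1/np.sqrt(2))
```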

We can denote these first excited states as $\psi^{(\pm)}_{1,2}$. The above results give that the combinations of wavefunctions to expand about are:
\[
\psi^{(+)}_{1,2}\equiv\frac{1}{\sqrt{2}}\left[\psi^{(0)}_{1,2}+\psi^{(0)}_{2,1}\right]=\sqrt{2}\left(\sin(\pi x)\sin(2\pi y)+\sin(2\pi x)\sin(\pi y)\right)
\]
\[
\psi^{(-)}_{1,2}\equiv\frac{1}{\sqrt{2}}\left[\psi^{(0)}_{1,2}-\psi^{(0)}_{2,1}\right]=\sqrt{2}\left(\sin(\pi x)\sin(2\pi y)-\sin(2\pi x)\sin(\pi y)\right)
\]
And the corrected energies of these wavefunctions are given as $\epsilon^{(\pm)}_{1,2}=\epsilon^{(0)}_{1,2}+\lambda\left(\Delta\pm\delta\right)$, so:
\[
\epsilon^{(+)}_{1,2}=\frac{\pi^2}{2}\left(1^2+2^2\right)+\lambda\left(\Delta+\delta\right)
=\frac{5\pi^2}{2}+\lambda\left[\frac{64\pi^4}{\left(1+4\pi^2\right)\left(1+16\pi^2\right)}\left(\frac{e-1}{e}\right)^2+\frac{64\pi^4}{\left[\left(1+\pi^2\right)\left(1+9\pi^2\right)\right]^2}\left(\frac{e+1}{e}\right)^2\right]
\]
\[
\epsilon^{(-)}_{1,2}=\frac{\pi^2}{2}\left(1^2+2^2\right)+\lambda\left(\Delta-\delta\right)
=\frac{5\pi^2}{2}+\lambda\left[\frac{64\pi^4}{\left(1+4\pi^2\right)\left(1+16\pi^2\right)}\left(\frac{e-1}{e}\right)^2-\frac{64\pi^4}{\left[\left(1+\pi^2\right)\left(1+9\pi^2\right)\right]^2}\left(\frac{e+1}{e}\right)^2\right]
\]
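Putting in numbers (a quick evaluation added here, with an illustrative unit choice of $\lambda$) shows that the first-order shift is small compared to the unperturbed energy, and the splitting of the degenerate pair is smaller still:

```python
import numpy as np

lam = 1.0  # perturbation strength lambda (illustrative choice)

Delta = 64*np.pi**4 / ((1 + 4*np.pi**2)*(1 + 16*np.pi**2)) * ((np.e - 1)/np.e)**2
delta = 64*np.pi**4 / ((1 + np.pi**2)*(1 + 9*np.pi**2))**2 * ((np.e + 1)/np.e)**2

eps0 = 5*np.pi**2/2                    # unperturbed first-excited energy
eps_plus = eps0 + lam*(Delta + delta)  # epsilon^(+)_{1,2}
eps_minus = eps0 + lam*(Delta - delta) # epsilon^(-)_{1,2}

# The common shift Delta is under 2% of eps0, and the splitting 2*lam*delta
# is an order of magnitude smaller than Delta.
assert Delta < 0.02*eps0
assert 2*delta < 0.1*Delta
```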
