
Phys2041 Notes

Jacob Westerhout
October 19, 2023

These notes mainly consist of a summary of Griffiths. I sometimes got a bit excited while reading this stuff and looked outside of course content. For the most part, stuff outside course content is placed in boxes, and I make no claim that what is written there is correct.

Week 1
Rather than considering the position of a particle, in quantum mechanics we consider the wave function of a particle, Ψ = Ψ(x, t). This is found by solving the Schrodinger equation

iℏ ∂Ψ/∂t = −(ℏ²/2m) ∂²Ψ/∂x² + V(x, t)Ψ    (1)
for potential energy function V .
It is possible to derive the Schrodinger equation, but for the purposes of this course we treat it as a
postulate. For an intuitive argument where the equation came from, see Ref. [1].
The wave function gives rise to a probability density, |Ψ(x; t)|2 . Loosely speaking this is the proba-
bility of finding a particle at position x at time t.
This is a true probability — not a consequence of a ‘coarse grained’ analysis (i.e. by ignoring some low level details so as to give the appearance of a probability). Given infinite knowledge of the universe,
you still would not have knowledge of the position of the particle (unlike in coarse grained analysis
where infinite knowledge completely eliminates probabilities). There is no fundamental mechanism
that would determine which particular outcome is realized.
From the wave function we can get the probability of finding a particle in the interval [a, b] at
time t as
∫_a^b |Ψ(x; t)|² dx    (2)
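The interval probability in equation (2) is easy to check on a grid. The sketch below is not from the notes; it assumes a hypothetical Gaussian wave packet at t = 0 with width s = 1, so that |Ψ|² is a standard normal density:

```python
import numpy as np

# Hypothetical normalized Gaussian wavefunction at t = 0:
# Psi(x) = (2 pi s^2)^(-1/4) exp(-x^2 / (4 s^2)), so |Psi|^2 is N(0, s^2).
s = 1.0
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = (2 * np.pi * s**2) ** -0.25 * np.exp(-(x**2) / (4 * s**2))
density = np.abs(psi) ** 2

# Total probability, equation (3): should be 1.
total = np.sum(density) * dx

# Probability of finding the particle in [a, b] = [-1, 1], equation (2).
mask = (x >= -1) & (x <= 1)
prob = np.sum(density[mask]) * dx
print(total, prob)  # prob ≈ 0.6827, one standard deviation of N(0, 1)
```

The interval probability comes out as the familiar one-standard-deviation mass of a normal distribution.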

Upon measurement of the position of the particle, the wavefunction collapses. (What is meant by a measurement remains unclear.) If the particle was found to be at position c at time t, then the wavefunction immediately after the measurement would be very sharply peaked at c (except in some esoteric cases). The wavefunction then spreads out in accordance with the Schrodinger equation.

Page 1 of 64 Jacob Westerhout


It is not agreed upon how or why the wavefunction collapses [2]. For some of the philosophical
problems with wave function collapse see Ref [3]. In the orthodox interpretation of quantum
mechanics (the approach we take) we treat measurements as a “black box” process (as you normally
do in classical mechanics). This can lead to some very surprising results (such as the measurement of the position of a particle radically changing its energy).

In the orthodox interpretation, it is not clear what exactly you can measure. This is a problem
since what you can measure is determined by the devices you can construct which is determined by
properties of the environment. It is also not entirely clear what precisely the wave function collapses
into.

The orthodox interpretation also places quite significant emphasis on the ‘observer’, which is not governed by the theory of quantum mechanics. This is a problem, as when the universe as a whole is considered there (by definition) cannot be any external observers.

The ‘relative-state’ interpretations give the entire universe a wavefunction that obeys the Schrodinger equation.
Modal interpretations allow for definite measurement outcomes without collapse of the wavefunction
— the wavefunction evolves unitarily indefinitely. The central role of these approaches is to
provide an interpretation of decoherence. It’s more of an empirical theory than a theory about the
fundamental truths of the universe. It dictates what properties can be measured at what times and
with what probabilities (these do not arise dynamically from the Schrodinger equation). This is a
controversial interpretation.

There are collapse theories which modify the Schrodinger equation to give a physical mechanism for the collapse of the wavefunction. There are ‘stochastic dynamical reduction’ theories that add some noise to the Schrodinger equation. However, it is often not clear how to add in this noise,
nor what state the system should collapse into.

This then gave rise to ‘spontaneous localization models’ where the wavefunction is spontaneously localized at some random point (with probability proportional to |Ψ|²) at random intervals in time. This localization is just postulated as some fundamental process. The rate of the collapse is chosen so that there is unitary evolution at microscopic scales while ensuring macroscopic objects collapse rapidly. This can further be generalized to ‘continuous spontaneous localization models’ where the Schrodinger equation is augmented by spatially correlated Brownian motion terms. However, this only leads to partial wavefunction collapse (otherwise you would require infinite momentum) and hence requires some additional interpretive framework to explain our perception of outcomes. In principle this
interpretation can easily be tested experimentally. However, state of the art experiments need 6
more orders of magnitude of accuracy to actually disprove the theory.

Bohmian mechanics holds that the wave function is more like a guiding field (like a velocity field) that the particles follow. That is, quantum mechanics is like Newtonian mechanics but with new equations of motion. However, this is a hidden variable theory and hence is likely not correct.



The apparent determinism of macroscopic systems can be understood through decoherence [4].
Roughly, if the environment acts on a system for a long enough time, the randomness of the system
tends to disappear.

However, decoherence never actually guarantees determinate values of physical quantities and hence collapse is still required. Also, decoherence only applies to local measurements. If the system is enlarged enough, decoherence still gives a non-negligible probability of a particle being in a mixed state. I.e. without collapse, it is plausible (if not likely) that the entire environment is in multiple places at once.
Because measurement causes collapse “it is impossible to uniquely determine an unknown quantum
state of an individual system by means of measurements performed on that system only” [4].
Measurement is not the only thing that can cause wavefunction collapse. The wavefunction can
collapse whenever 2 quantum systems interact (such as collisions between particles) or even sponta-
neously in radioactive decay [5].
While collapse to a fully deterministic state is possible for quantities with discrete outcomes (such
as spin), it is not possible for position. Having a definite position would mean infinite uncertainty in
momentum and hence a significant probability of getting a near infinite momentum (not possible as
velocity must be at least smaller than the speed of light) [6]. That is, you can only do a ‘partial measurement’ [7].

For instance, outcomes of measurements are typically registered by pointers localized in position space. But these pointers are never perfectly localized, i.e., they cannot be described by exact eigenstates of the position operator, since such eigenstates are unphysical (they correspond to an infinite spread in momentum and therefore to an infinite amount of energy to be contained in the system). Therefore the states corresponding to different pointer positions cannot be exactly mutually orthogonal... individual eigenstates of the position operator are not proper quantum states of physical objects.
The concept of classical “values” that can be ascribed through the eigenvalue–eigenstate
link based on observables and the existence of exact eigenstates of these observables has
therefore frequently been either weakened or altogether abandoned. [4]



Week 2
Because |Ψ(x; t)|² is a probability density it must be that

∫_{−∞}^{∞} |Ψ(x; t)|² dx = 1    (3)

I.e. the particle must be somewhere. However, this is not guaranteed by the Schrodinger equation. If Ψ solves the Schrodinger equation, then so does λΨ for any λ ∈ C. However, the Schrodinger equation preserves normalization: if Ψ satisfies equation (3) at time s, then it will also satisfy equation (3) for all times t ∈ R. Hence solving for the wavefunction of a particle really requires solving equations (1) and (3).
Again because |Ψ(x; t)|2 is a probability density, various statistical quantities are found in the
standard way. For example the average position of a particle is
⟨x⟩ = ∫_{−∞}^{∞} x|Ψ(x; t)|² dx    (4)

and the variance of the position of a particle is


σ² = ⟨x²⟩ − ⟨x⟩² = ∫_{−∞}^{∞} x²|Ψ(x; t)|² dx − ( ∫_{−∞}^{∞} x|Ψ(x; t)|² dx )²    (5)
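Equations (4) and (5) are ordinary statistics of the density |Ψ|². A quick numerical sketch (not from the notes; it assumes a hypothetical state whose density is a Gaussian N(μ, s²), so ⟨x⟩ = μ and σ² = s²):

```python
import numpy as np

mu, s = 1.5, 0.7  # hypothetical centre and width
x = np.linspace(mu - 10, mu + 10, 200001)
dx = x[1] - x[0]
# Density |Psi|^2 taken to be the normal density N(mu, s^2).
density = np.exp(-(x - mu) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

mean_x = np.sum(x * density) * dx                  # equation (4)
var_x = np.sum(x**2 * density) * dx - mean_x**2    # equation (5)
print(mean_x, var_x)  # ≈ 1.5 and 0.49
```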

These averages are not taken over many observations of the wavefunction (as a single measurement
collapses the wavefunction and hence if you repeat the observation it is for a different wavefunction),
but the average is over an ensemble of particles. I.e. you have many identical experimental setups and make a single measurement on each setup.
Note that these expected values are time-dependent in general. From this we can then calculate
the expected value of the momentum of the particle (actually computing the momentum wavefunction
comes later)
⟨p⟩ = m d⟨x⟩/dt = m d/dt ∫_{−∞}^{∞} x|Ψ(x; t)|² dx = ... = ∫_{−∞}^{∞} Ψ* (−iℏ ∂/∂x) Ψ dx = ∫_{−∞}^{∞} Ψ* p̂ Ψ dx    (6)

where p̂ is called the momentum operator.
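Equation (6) can be evaluated on a grid. The sketch below is not from the notes; it sets ℏ = 1 and uses a hypothetical Gaussian packet carrying a plane-wave factor e^{ik0x}, for which the expected momentum is ℏk0:

```python
import numpy as np

hbar = 1.0
k0, s = 2.0, 1.0  # hypothetical wavenumber and width
x = np.linspace(-15, 15, 60001)
dx = x[1] - x[0]
psi = (2 * np.pi * s**2) ** -0.25 * np.exp(-x**2 / (4 * s**2)) * np.exp(1j * k0 * x)

# Place -i hbar d/dx between Psi* and Psi, then integrate (equation (6)).
dpsi = np.gradient(psi, dx)
p_expect = (np.sum(np.conj(psi) * (-1j * hbar) * dpsi) * dx).real
print(p_expect)  # ≈ hbar * k0 = 2.0
```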


Note that there are some subtleties that are ignored here. There are issues if the domain of the
problem is not the entire real line. Of particular importance is that momentum is not a measurable
quantity if the problem is defined only on the positive axis [8].
Currently we can treat an operator as what you place between Ψ* and Ψ to compute expected values. Clearly from the definition of ⟨x⟩ (equation (4)), x̂ = x.
However operators are much more than this. They are quantum analogues of the classical quantities
(the difference to the classical quantities is given by first quantisation [9]). Their use in computing
expected values is one of the postulates of quantum mechanics; see postulate 2b at wikipedia [10]
and the Born rule [11].
For any physical quantity Q(x, p) you can compute the expected value of this quantity for a particle with wave function Ψ by simply placing Q(x̂, p̂) (note the hats) between Ψ* and Ψ. Specifically

⟨Q⟩ = ∫_{−∞}^{∞} Ψ* Q(x̂, p̂) Ψ dx    (7)



This is another postulate of quantum mechanics [12]. Similarly
⟨Q²⟩ = ∫_{−∞}^{∞} Ψ* Q(x̂, p̂)² Ψ dx    (8)

(this quantity is required to compute variances/standard deviations). For kinetic energy

Q(x, p) = p²/(2m)

So the expected kinetic energy of a particle is

⟨T⟩ = ∫_{−∞}^{∞} Ψ* (p̂²/(2m)) Ψ dx = ∫_{−∞}^{∞} Ψ* (−ℏ²/(2m)) ∂²Ψ/∂x² dx

where we have used that

(a ∂/∂x)² = a² ∂²/∂x²
Using that the square of a derivative is a second derivative is a bit strange. Here we interpret the derivative as a linear operator. That is, ∂/∂x is a function that takes in a differentiable function and outputs a function. Using standard multiplication of functions we would get that

(∂/∂x)²(f) = (∂f/∂x)²

However, this is not what we are using here. Here we are defining multiplication of functions as function composition (this is surprisingly common). Hence,

(∂/∂x)²(f) ≡ ∂/∂x(∂/∂x(f)) = ∂²f/∂x²

We then also have that

(a ∂/∂x)²(f) ≡ a ∂/∂x(a ∂/∂x(f)) = a² ∂²f/∂x²

It is then because of the linearity of the derivative that we actually have

(a ∂/∂x)²(f) = a² (∂/∂x)²(f)
With this we can precisely specify the Heisenberg uncertainty principle

σx σp ≥ ℏ/2
where σx is the standard deviation of position and σp is the standard deviation of the momentum of
the particle. These can be computed with a combination of equations (5), (6), and (8).
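A numerical sketch of the uncertainty principle (not from the notes; ℏ = 1 and a real Gaussian ψ of width s, which is known to saturate the bound exactly):

```python
import numpy as np

hbar = 1.0
s = 0.8  # hypothetical width
x = np.linspace(-12, 12, 120001)
dx = x[1] - x[0]
psi = (2 * np.pi * s**2) ** -0.25 * np.exp(-x**2 / (4 * s**2))  # real Gaussian

dens = psi**2
mean_x = np.sum(x * dens) * dx
sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * dens) * dx)

# For real decaying psi, <p> = 0 and <p^2> = hbar^2 * integral of (psi')^2.
dpsi = np.gradient(psi, dx)
sigma_p = np.sqrt(hbar**2 * np.sum(dpsi**2) * dx)

print(sigma_x * sigma_p)  # ≈ hbar/2 = 0.5, the minimum allowed
```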



Week 3
To solve the Schrodinger equation, we guess a solution of the form

Ψ(x, t) = ψ(x)ϕ(t) (9)

This is called separation of variables. In principle this greatly reduces the set of initial conditions that can be considered, but this turns out not to be the case.
For a time-independent potential V, after substituting this form of Ψ into the Schrodinger equation, we get that

ϕ(t) = e^{−iEt/ℏ}    and    −(ℏ²/2m) d²ψ/dx² + Vψ = Eψ    (10)

for some constant E. The right-hand equation is called the time-independent Schrodinger equation
and can be written as
Ĥψ = Eψ (11)
where Ĥ is the Hamiltonian operator (i.e. the energy operator)

Ĥ = p̂²/(2m) + V    (12)

Note that there are subtleties that make this definition of Ĥ more complicated when the problem is not defined on the entire real line [8].
It turns out that for solutions of the form given in equation (9), the variance of the Hamiltonian is 0. I.e. these are states of deterministic energy.
Standard PDE theory would indicate that for a wavefunction to solve the time-independent Schrodinger equation it must be twice differentiable, as the Schrodinger equation requires the second derivative of ψ. However, Griffiths states that solutions to the time-independent Schrodinger equation are continuous everywhere, and differentiable everywhere except possibly where the potential is infinite [13]. I have put the detailed math below, but this becomes clearer when actually solving the Schrodinger equation later.



Things get more complicated as we often allow ‘weak solutions’ [14] to the Schrodinger equation [15], as all that is really required is that |Ψ|² represents a probability density [16]. A weak solution need not even be differentiable, despite there being a derivative in the time-independent Schrodinger equation, and hence the proof given in Ref. [13] does not hold in full generality. However, the result still holds for weak solutions [17].

There are some restrictions on the functions allowed. The discontinuities of the derivatives can at most be on a set of Lebesgue measure 0 [18]. We further require the expectations of all moments of position and momentum to be well defined and finite (otherwise the solution is not physical) [19]. This rules out step functions as they yield infinite energies [16].

The proof presented in Ref [13] does not actually show continuity of the wavefunction. It does
show that step discontinuities are not allowed but says nothing about removable discontinuities. If
you take a continuous function and add a removable discontinuity to it, this new function will be
completely equivalent to the first function from the perspective of the probability amplitude (these 2 functions are in the same equivalence class) [16]. This is except in the case where the removable discontinuity is at infinity. In principle ψ could include the Dirac δ function as it has well defined derivatives [20].

If your potential is square integrable and finite, there are no problems and ψ will be smooth [21]
and weak solutions are just regular solutions [17]. Because in reality there are no discontinuous or
infinite potentials, the wave function will always be well defined for physical scenarios. But when
approximating physical systems we can get strange things out of the wavefunction.
Solutions of the form given in equation (9) also have time-independent probability density as

|Ψ(x, t)|2 = |ψ(x)|2 (13)

As such the expectation of any dynamical quantity is also time-independent. Because of this, solu-
tions of this form are called stationary states. In particular ⟨x⟩ is constant so ⟨p⟩ = 0.
As will become apparent later the time-independent Schrodinger equation has infinitely many
solutions
ψ1 (x), ψ2 (x), ... (14)
each with allowed energies
E1 , E2 , ... (15)
It turns out that any general wavefunction Ψ(x, t) can be written as

Ψ(x, t) = Σ_{n=1}^{∞} cn ψn(x) e^{−iEn t/ℏ}    (16)

I.e. a general wavefunction is a linear combination of stationary solutions. Note that Ψ(x, t) is not
necessarily stationary.
Because an infinite linear combination of continuous functions need not be continuous, Ψ(x, t) in general may be discontinuous and have discontinuous derivatives of any order. This is in contrast to ψ(x), which must be continuous (justified above).
It will be proved later that |cn |2 physically represents the probability that a measurement of
energy would return En . This then means that
⟨H⟩ = Σn |cn|² En    (17)

by definition of expected value.

There are some properties of solutions to the time-independent Schrodinger equation from week 4 that I feel are better suited here

• They are mutually orthonormal. I.e.

  ∫ ψm(x)* ψn(x) dx = δmn    (18)

• They are complete in the sense that for any function f

  f(x) = Σ_{n=1}^{∞} cn ψn(x)    (19)

  with

  cn = ∫ ψn(x)* f(x) dx    (20)

We tend to take f (x) = Ψ(x; 0) to solve for cn in real problems (there really isn’t any other
choice).
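Both bullet points can be checked numerically. The sketch below is not from the notes; it expands a hypothetical initial state (a normalized tent shape) in the infinite-square-well eigenfunctions ψn(x) = √(2/a) sin(nπx/a) from week 4, using equations (19)-(20):

```python
import numpy as np

a = 1.0
x = np.linspace(0, a, 20001)
dx = x[1] - x[0]

def psi_n(n):
    # Infinite-square-well eigenfunctions (week 4).
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

tent = np.minimum(x, a - x)                # hypothetical f(x): a tent shape
f = tent / np.sqrt(np.sum(tent**2) * dx)   # normalize it

c = np.array([np.sum(psi_n(n) * f) * dx for n in range(1, 51)])  # equation (20)
recon = sum(c[n - 1] * psi_n(n) for n in range(1, 51))           # equation (19)

print(np.sum(c**2))               # ≈ 1, consistent with |c_n|^2 being probabilities
print(np.max(np.abs(recon - f)))  # small truncation error with 50 terms
```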



Week 4
The infinite square well (of length a) has potential
V(x) = {0, 0 ≤ x ≤ a;  ∞, otherwise}    (21)

Outside the well ψ(x) = 0 (i.e. the probability of finding the particle there is 0). Intuitively the well
exerts an infinite force at the boundary that prevents the particle from escaping. A more rigorous
justification is possible later after analysis of the finite square well.
Inside the well (where V = 0) ψ satisfies the time-independent Schrodinger equation

−(ℏ²/2m) d²ψ/dx² = Eψ  ⟺  d²ψ/dx² = −k²ψ    (22)

where

k ≡ √(2mE)/ℏ    (23)

Note that this assumes that E > 0. This must be the case, otherwise d²ψ/dx² and ψ have the same sign, which leads to an unnormalizable solution (see Griffiths Problem 2.2 for the details). The right-hand ODE is the famous simple harmonic oscillator equation and it has solution

ψ(x) = A sin(kx) + B cos(kx) (24)

To solve for A and B we need 2 boundary conditions (could possibly also use initial conditions but
the boundary conditions are more intuitive). We can use the continuity of ψ(x) (justified in week 3)
to deduce that we require ψ(0) = 0 = ψ(a). This gives

ψ(x) = A sin(kx) (25)

for

k = kn = nπ/a,  n = 1, 2, 3, . . .    (26)
Using that |ψ|² must integrate to 1 (equation (3)) gives that

A = ±√(2/a)    (27)

where we take the positive root for convenience (the sign of A does not change |ψ|²). Because k can only take discrete values, this means E (as defined above, the energy) can only take discrete values as well:

En = ℏ²kn²/(2m) = n²π²ℏ²/(2ma²)    (28)
It is interesting to note that ψ doesn't actually solve the Schrodinger equation (in the typical sense) because it is not differentiable at 0 and a. That is, we supposedly have a solution to a differential equation that isn't differentiable. This is fine because we are looking at weak solutions to the differential equation (as mentioned above).
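As a sanity check (not from the notes; it assumes the convenient units ℏ = m = a = 1), the eigenfunctions of equations (25)-(27) can be verified against the time-independent Schrodinger equation inside the well with the energies of equation (28), using a finite-difference second derivative:

```python
import numpy as np

hbar = m = a = 1.0
x = np.linspace(0, a, 100001)
dx = x[1] - x[0]

for n in (1, 2, 3):
    k_n = n * np.pi / a
    psi = np.sqrt(2 / a) * np.sin(k_n * x)   # equations (25)-(27)
    E_n = (hbar * k_n) ** 2 / (2 * m)        # equation (28)

    # psi'' on the interior points via central differences.
    d2 = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    resid = -(hbar**2 / (2 * m)) * d2 - E_n * psi[1:-1]
    print(n, np.max(np.abs(resid)))          # small for each n
```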



Week 5
The harmonic oscillator has potential
V(x) = (1/2) mω²x²    (29)
for some constant ω ∈ R. This potential is particularly important as near the local minima of any
potential, the potential is approximately that of a harmonic oscillator (for appropriate ω).
There are 2 main approaches to solve the time-independent Schrodinger equation for this case

Algebraic Method
This is a very clever technique using ‘ladder operators’.

Some background
Before doing anything I think it is best to talk about commutators. In general the commutator of Â and B̂ is

[Â, B̂] = ÂB̂ − B̂Â    (30)

This is not so obvious to compute, as we will see below. To evaluate this we tend to see how it acts on a test function. I.e. compute

[Â, B̂]f    (31)

and try to manipulate this expression into

Ĉf    (32)

and then conclude that

[Â, B̂] = Ĉ    (33)
We can evaluate the canonical commutation relation as

[x̂, p̂]f = x(−iℏ) df/dx − (−iℏ) d(xf)/dx = ... = iℏf    (34)
and hence
[x̂, p̂] = iℏ (35)
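The test-function calculation in equation (34) can be checked symbolically. A small sketch (assuming the SymPy library) applies [x̂, p̂] to a generic function f:

```python
from sympy import symbols, Function, I, diff, simplify

x, hbar = symbols('x hbar', positive=True)
f = Function('f')(x)

# Momentum operator p = -i hbar d/dx, acting on a test function.
p = lambda g: -I * hbar * diff(g, x)

# [x, p] f = x p f - p (x f): the x f' terms cancel, leaving i hbar f.
commutator_f = x * p(f) - p(x * f)
print(simplify(commutator_f))  # I*hbar*f(x), i.e. equation (35)
```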

The actual method


Now onto the actual method. The idea is to introduce the operators

â± = (1/√(2ℏmω)) (∓ip̂ + mωx̂)    (36)
Griffiths gives some intuitive argument for how you might have come up with these. With these you
can rewrite the Hamiltonian as
   
Ĥ = ℏω(â+â− + 1/2) = ℏω(â−â+ − 1/2)    (37)
We also have that
[â−, â+] = 1  ⟺  â−â+ = 1 + â+â−



The main idea behind this approach is that if ψ satisfies the time-independent Schrodinger equation
with energy E, (i.e. Ĥψ = Eψ) then â+ ψ satisfies the time-independent Schrodinger equation with
energy E + ℏω. Griffiths shows this by direct computation
 
Ĥ(â+ψ) = ℏω(â+â− + 1/2)(â+ψ) = ... = â+(Ĥ + ℏω)ψ = (E + ℏω)(â+ψ)    (38)

Similarly the energy of â− ψ is E − ℏω. The reason this is useful is because if we know what any
solution is, say ψn (x), then we can immediately get all ψm (x) for m ̸= n just by applying the ladder
operators.
Note that there are some subtleties to this: how do you know that you get all the solutions?
For a given state you can generate many other states from it by application of raising and lowering
operators, but there might be some states that are not accessible this way. The idea is that if there
was some state that wasn’t accessible by applying raising and lowering operators, then there must
be at least 2 states for which â−ψ = 0. That is, there must be at least 2 ground states. Below we see
that the equation â− ψ = 0 reduces to an ODE which has a unique solution. Thus, all states must
be accessible via applying the raising and lowering operators.
There is a more advanced question of whether we can get uniqueness of the ground state without resorting to differential equations. From what I can tell the answer is no. See [22].
Griffiths takes the approach of finding ψ0 (x). We know that there cannot exist a state with
energy less than 0 (problem 2.3 in Griffiths) and hence we cannot keep applying â− and get a valid
wavefunction. However, from above we know that â− ψ will satisfy the Schrodinger equation. What
fails is the normalization condition.

There are then 2 options:

∫_{−∞}^{∞} |â−ψ0|² dx = 0    or    ∫_{−∞}^{∞} |â−ψ0|² dx = ∞    (39)

The latter is likely not correct as this would lead to either a significant probability of finding the
particle far from 0 (not something you’d expect of the lowest energy solution, especially when the
higher energy solutions don’t have this property) or the wavefunction is unbounded on some finite
interval (this would then require the particle to have a significant probability of having near infinite
momentum — not something you’d expect of the lowest energy solution). Instead we take the first
option (if this were to fail we could investigate the second option more closely). This requires
 
â−ψ0 = 0  ⟺  (1/√(2ℏmω)) (ℏ d/dx + mωx) ψ0 = 0  ⟺  dψ0/dx = −(mω/ℏ) xψ0    (40)

Solving this differential equation gives

ψ0(x) = (mω/πℏ)^{1/4} exp(−mωx²/(2ℏ))    (41)

It then turns out that

Ĥψ0 = ℏω(â+â− + 1/2)ψ0 = (1/2)ℏωψ0    (42)

and hence the energy of this solution is

E0 = (1/2)ℏω    (43)
2



By above, this then means that the nth state is
 
ψn(x) = An (â+)ⁿ ψ0(x)    with    En = (n + 1/2)ℏω    (44)

for some An (as â+ does not necessarily preserve normalization). Note this is not an explicit form
for the wavefunction, just a method of computing it.
To determine An the idea is to note that (directly from the above equation)

â+ ψn = cn ψn+1 â− ψn = dn ψn−1 (45)

Then (by combining a bunch of equations from above — Griffiths states which ones)

â+ â− ψn = nψn , â− â+ ψn = (n + 1)ψn (46)

Griffiths then shows that the adjoint of â∓ is â± . So finally


∫_{−∞}^{∞} (â+ψn)*(â+ψn) dx = ∫_{−∞}^{∞} (cnψn+1)*(cnψn+1) dx = |cn|² ∫_{−∞}^{∞} |ψn+1|² dx = |cn|²    (47)

= ∫_{−∞}^{∞} ψn*(â−â+ψn) dx = (n + 1) ∫_{−∞}^{∞} |ψn|² dx = n + 1    (48)

and hence

cn = √(n + 1)    (49)

Similarly

∫_{−∞}^{∞} (â−ψn)*(â−ψn) dx = ∫_{−∞}^{∞} (dnψn−1)*(dnψn−1) dx = |dn|² ∫_{−∞}^{∞} |ψn−1|² dx = |dn|²    (50)

= ∫_{−∞}^{∞} ψn*(â+â−ψn) dx = n ∫_{−∞}^{∞} |ψn|² dx = n    (51)

and hence

dn = √n    (52)

In conclusion

â+ψn = √(n + 1) ψn+1    â−ψn = √n ψn−1    (53)

and

ψn(x) = (1/√(n!)) (â+)ⁿ ψ0(x)    with    En = (n + 1/2)ℏω    (54)

where

ψ0(x) = (mω/πℏ)^{1/4} exp(−mωx²/(2ℏ))    (55)

Note that

x̂ = √(ℏ/(2mω)) (â+ + â−)    p̂ = i √(ℏmω/2) (â+ − â−)
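As a sanity check of the ladder relations (not from the notes; it assumes ℏ = m = ω = 1, so that â+ = (x − d/dx)/√2 and the known first excited state is ψ1 = √2 x ψ0), applying â+ to ψ0 on a grid should reproduce √(0+1) ψ1 = ψ1:

```python
import numpy as np

x = np.linspace(-8, 8, 160001)
dx = x[1] - x[0]

psi0 = np.pi ** -0.25 * np.exp(-x**2 / 2)   # equation (55) with hbar=m=omega=1
psi1 = np.sqrt(2) * x * psi0                # known first excited state

# a_+ psi_0 = (x psi_0 - psi_0') / sqrt(2), with the derivative done numerically.
a_plus_psi0 = (x * psi0 - np.gradient(psi0, dx)) / np.sqrt(2)

print(np.max(np.abs(a_plus_psi0 - psi1)))   # small: matches sqrt(1) * psi_1
print(np.sum(a_plus_psi0**2) * dx)          # ≈ 1, i.e. a_+ psi_0 stays normalized
```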



Analytic Method
This is just a brute force approach to solving the Schrodinger equation. The Schrodinger equation
can be written as

d²ψ/dξ² = (ξ² − K)ψ    (56)

for

ξ = √(mω/ℏ) x    and    K = 2E/(ℏω)    (57)
We guess a solution of the form

ψ(ξ) = h(ξ) e^{−ξ²/2}    (58)

as in the limit as ξ → ∞, the Schrodinger equation approximately becomes

d²ψ/dξ² ≈ ξ²ψ    (59)

which has solution (Griffiths has more working)

ψ(ξ) = A e^{−ξ²/2}    (60)

Under this guess of ψ, the Schrodinger equation becomes


d²h/dξ² − 2ξ dh/dξ + (K − 1)h = 0    (61)

We further guess that

h(ξ) = Σ_{j=0}^{∞} aj ξ^j    (62)

By Taylor's theorem this is with minimal loss of generality. Under this guess the Schrodinger equation further becomes

Σ_{j=0}^{∞} [(j + 1)(j + 2)aj+2 − 2jaj + (K − 1)aj] ξ^j = 0    (63)

By the uniqueness of power series expansions, it must be that all the coefficients of ξ^j vanish. I.e.

(j + 1)(j + 2)aj+2 − 2jaj + (K − 1)aj = 0  ⟹  aj+2 = ((2j + 1 − K)/((j + 1)(j + 2))) aj    (64)
Griffiths gives the full details, but for large j and large ξ

h(ξ) ≈ C e^{ξ²}  ⟹  ψ ≈ e^{ξ²/2}    (65)

which is not normalizable. Hence there must exist some n s.t. an+2 = 0 (so that the large-j approximation is invalid), and it must also be that a1 = 0 if n is even and a0 = 0 if n is odd. From equation (64) this then must mean that

K = 2n + 1  ⟹  En = (n + 1/2)ℏω    (66)
which then simplifies equation (64) to

aj+2 = (−2(n − j)/((j + 1)(j + 2))) aj    (67)



It then turns out that these coefficients are closely related to the Hermite polynomials Hn(ξ). Specifically

ψn(x) = (mω/πℏ)^{1/4} (1/√(2ⁿ n!)) Hn(ξ) e^{−ξ²/2}    (68)
For easy reference these states have energies

En = (n + 1/2)ℏω    (69)

Unlike the algebraic method, this is actually a complete formula for the wavefunction.
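Equation (68) can be checked numerically. The sketch below is not from the notes; it assumes SciPy's eval_hermite (the physicists' Hermite polynomials) and the units ℏ = m = ω = 1, so that ξ = x, and verifies that the first few ψn are orthonormal:

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def psi(n, x):
    # Equation (68) with hbar = m = omega = 1 (so xi = x).
    return (np.pi ** -0.25) / np.sqrt(2.0**n * factorial(n)) \
           * eval_hermite(n, x) * np.exp(-x**2 / 2)

# Overlap matrix <psi_m | psi_n> for n, m = 0..3: should be the identity.
overlaps = np.array([[np.sum(psi(m, x) * psi(n, x)) * dx for n in range(4)]
                     for m in range(4)])
print(np.round(overlaps, 6))
```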



Week 6
Free particle
We now consider the free particle. That is the potential

V =0 (70)

The time-independent Schrodinger equation reads

d²ψ/dx² = −k²ψ    k ≡ √(2mE)/ℏ    (71)
This has solution
ψ(x) = Aeikx + Be−ikx (72)
Adding the standard time dependence of exp(−iEt/ℏ) gives

Ψ(x, t) = A e^{ik(x − ℏkt/(2m))} + B e^{−ik(x + ℏkt/(2m))}    (73)

This is the same as the infinite square well except now there are no boundary conditions restricting the values of k. This solution is the sum of a wave moving to the right and a wave moving to the left (each of unchanging shape) with speed

v = ℏk/(2m) = √(E/(2m))    (74)
However there is a problem:

Ψ(x, t)* Ψ(x, t) = A² + B² + 2AB cos(2kx)    (75)

and hence

∫_{−∞}^{∞} Ψ(x, t)* Ψ(x, t) dx    (76)

is infinite, i.e. Ψ is not normalizable. This is one case where using separation of variables to solve the Schrodinger equation failed. This also means that there are no (normalizable) states of definite energy. Or equivalently, a free particle can never have definite energy.
However, these solutions are not useless to us as they still solve the Schrodinger equation. The
idea is to take linear combinations of these to get a solution that is normalizable.
For reasons that will become clear later we will instead consider wavefunctions of the form

Ψk(x, t) = A e^{ik(x − ℏkt/(2m))}    (77)

and just remove the restriction that k must be positive: k > 0 means the wave is moving to the right and k < 0 means the wave is travelling to the left. This still doesn't fix the normalization problem (in this case the integral of |Ψ|² is infinite) but it means that linear combinations of these functions are more familiar. Specifically, we want

Ψ(x, t) = ∫_{−∞}^{∞} ϕ(k) Ψk(x, t) dk ≡ (1/√(2π)) ∫_{−∞}^{∞} ϕ(k) e^{ik(x − ℏkt/(2m))} dk    (78)



(this is technically not a linear combination since it is an integral but it is common terminology in
physics). The reason for looking at an integral and not a sum will become apparent later. This is
called a wave packet. ϕ(k) can be found by using (just setting t = 0)

Ψ(x, 0) = (1/√(2π)) ∫_{−∞}^{∞} ϕ(k) e^{ikx} dk    (79)

which is a classic problem in Fourier analysis. The solution is

ϕ(k) = (1/√(2π)) ∫_{−∞}^{∞} Ψ(x, 0) e^{−ikx} dx    (80)

This wave packet is no longer of constant shape as the stationary solutions that make it up are
moving at different speeds. The velocity presented above in equation (74) represents a phase velocity.
For any wavepacket of the form

f(x, t) = (1/√(2π)) ∫_{−∞}^{∞} ϕ(k) e^{i(kx−ωt)} dk    (81)

the phase velocity (the motion of the individual peaks and troughs in the wave) is

vphase = ω/k    (82)

and the group velocity (the motion of the wave envelope) is

vgroup = dω/dk    (83)

In this case

ω = ℏk²/(2m)    (84)

so we get

vphase = ℏk/(2m)    vgroup = ℏk/m    (85)

This group velocity is the velocity of the particle predicted by classical mechanics.
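A numerical sketch of this (not from the notes; ℏ = m = 1 and a hypothetical Gaussian spectrum ϕ(k) centred on k0 = 5) builds the wave packet of equation (78) on a grid and confirms that its envelope moves at the group velocity ℏk0/m rather than the phase velocity:

```python
import numpy as np

hbar = m = 1.0
k0, t = 5.0, 2.0                       # hypothetical central wavenumber and time

k = np.linspace(k0 - 5, k0 + 5, 801)
dk = k[1] - k[0]
phi = np.exp(-(k - k0) ** 2)           # Gaussian spectrum centred on k0

x = np.linspace(-5, 25, 1501)
omega = hbar * k**2 / (2 * m)          # dispersion relation, equation (84)

# Discretized version of equation (78): superpose plane waves at time t.
phase = np.exp(1j * (np.outer(x, k) - omega * t))
Psi = (phase * phi).sum(axis=1) * dk / np.sqrt(2 * np.pi)

x_peak = x[np.argmax(np.abs(Psi) ** 2)]
print(x_peak)  # ≈ v_group * t = (hbar * k0 / m) * t = 10
```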

Bound and scattering states


Classically, if V rises higher than a particle's energy on both sides, then the particle will rock back and forth indefinitely. This is called a bound state. If V rises higher only on 1 side, or not at all, then the particle will escape to infinity. This is called a scattering state.
In quantum mechanics because particles can tunnel out of potentials the definition is slightly
different. A bound state occurs when

E < V (−∞) and E < V (∞) (86)

and scattering otherwise (E is the energy of the particle). Griffiths motivates these definitions with the idea that if a particle is bound the energy levels are discrete, and if the particle is not bound then the energy levels are continuous.



Note that this definition is not precisely correct. What we really mean when we say bound is

lim_{r→∞} sup_{t∈ℝ} ∫_{|x|≥r} |Ψ(x, t)|² dx = 0    (87)

I.e. for all times, the probability of finding the particle near infinity is 0 [23]. This could also have
been written as [24]:

∀ϵ > 0, ∃R > 0 : sup_t ∫_{|x|>R} |Ψ(x, t)|² dx < ϵ    (88)

There are some potentials that follow this definition but have E > V (∞) [23].
I have managed to find some rigorous theorems (that I can understand — this comes under spectral
theory which I do not know)

• If V is bounded from below and there exists some set S of Lebesgue measure zero s.t. for every x outside S, V is square integrable on some neighbourhood of x, then bound states have discrete energies and scattering states have continuous energies [24].
  Basically, as long as the potential is (locally) square integrable then what Griffiths said was correct.

• If
lim V (x) = ∞ (89)
x→∞

then all energy levels are discrete [25] (presumably this also means that all states are bound).

Apparently Griffiths' result comes from the ‘RAGE’ theorem [26] but I can't find anything solid on it and I can't see how it follows (see [27] for a statement of the theorem). There are some more notes on the spectrum in week 8.

The delta function


The delta function is defined as

δ(x) = {0, x ≠ 0;  ∞, x = 0}    (90)

with the additional property that

∫_{−∞}^{∞} δ(x) dx = 1    (91)

This is technically not a function (despite its name). It is instead a distribution. This distinction is very important. If you took δ as being a function, then its integral would be 0. Later we will do things like scale the delta function. Treating δ as a function, αδ = δ for all α > 0, since α·∞ = ∞ and α·0 = 0. However, because δ is a distribution, αδ ≠ δ.
The delta function has the property that for any function f : R → R,
∫_{−∞}^{∞} δ(x) f(x) dx = f(0)    (92)

and hence the delta function can be thought of as an operator that takes a function f and returns f(0). It then follows (just by a shift in the axes) that

∫_{−∞}^{∞} δ(x − a) f(x) dx = f(a)    (93)
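The sifting property can be sanity checked by replacing δ with a narrow normalized Gaussian of width ϵ and letting ϵ shrink, a standard way of realizing δ as a limit of functions (the specific test function cos(x) below is an arbitrary choice):

```python
import numpy as np

def delta_eps(x, eps):
    # Normalized Gaussian of width eps: integrates to 1 for every eps.
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
f = np.cos(x)   # test function
a = 1.0

# As eps -> 0 the integral approaches f(a) = cos(1), equation (93).
for eps in (0.5, 0.1, 0.02):
    val = np.sum(delta_eps(x - a, eps) * f) * dx
    print(eps, val)  # tends to cos(1) ≈ 0.5403
```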

Delta potential
Consider a potential of the form
V(x) = −αδ(x)    (94)

where α ∈ ℝ⁺ is the ‘strength’ of the potential. This represents an infinitely deep hole of width 0.

Bound states
Consider when E < 0 (i.e. the energy of the particle is negative). For x ≠ 0 the Schrodinger equation reads (we want to ignore the point x = 0 for as long as possible, as infinities are difficult to work with)

d²ψ/dx² = κ²ψ    κ ≡ √(−2mE)/ℏ    (95)
This has solution
ψ(x) = Ae−κx + Beκx (96)
In the region where x ≤ 0 we require A = 0, otherwise ψ blows up as x → −∞. Similarly, in the region x ≥ 0 we require B = 0. Hence the solution is

ψ(x) = {A e^{κx}, x ≤ 0;  B e^{−κx}, x > 0}    (97)

By the continuity of the wavefunction we require A = B. Hence,

ψ(x) = {B e^{κx}, x ≤ 0;  B e^{−κx}, x > 0}    (98)

We can solve for B using normalization. The difficulty is solving for κ (and hence the allowed
energies). The standard way to do this is to use the continuity of the derivative of ψ but this is not
possible since the potential is infinite at 0.
The main complication is that we have a differential equation that involves a distribution; it is not a regular differential equation. Hence, we cannot get a solution in the typical sense (this is why we cannot have continuity of the derivative, and hence ψ is not twice differentiable). We must find a ‘weak solution’ to the Schrodinger equation. I have mentioned that this is what we are doing above many times, and this is where the distinction between a strong solution and a weak solution matters. To avoid writing up a full introduction to distribution theory (any good exposition of which would require knowledge of functional analysis, topology, and measure theory) we can just trust in the magic process.
The way to find κ is to integrate the Schrodinger equation. The idea is that if you have a solution
ψ to the Schrodinger equation then for all ϵ you must also have
∫_{−ϵ}^{ϵ} (−ℏ²/2m)(d²ψ/dx²) dx + ∫_{−ϵ}^{ϵ} V(x)ψ(x) dx = E ∫_{−ϵ}^{ϵ} ψ(x) dx

Note that any function that satisfies this need not solve the Schrodinger equation. The form of ψ
found above cannot possibly satisfy the Schrodinger equation in the strong sense because it is not twice differentiable at the origin (it is not
even differentiable at the origin). But if we continue to treat ψ as a (strong) solution to the
Schrodinger equation, then as ϵ → 0 this simplifies to
Δ(dψ/dx) ≡ lim_{ϵ→0} [ (∂ψ/∂x)|_{+ϵ} − (∂ψ/∂x)|_{−ϵ} ] = (2m/ℏ²) lim_{ϵ→0} ∫_{−ϵ}^{ϵ} V(x)ψ(x) dx
Now of course the left hand side ordinarily goes to 0, but we don’t simplify it (even though
in this process we simplified many other terms). Again, we trust in the magic process.
Using that V(x) = −αδ(x) this simplifies to
Δ(dψ/dx) = −(2mα/ℏ²) ψ(0)    (99)

By the form of ψ above we get that
Δ(dψ/dx) = −2Bκ    (100)
and ψ(0) = B, so we get
κ = mα/ℏ²    (101)
and hence the allowed energy is
E = −ℏ²κ²/2m = −mα²/2ℏ²    (102)
Finally, normalizing ψ to find B:
∫_{−∞}^{∞} |ψ(x)|² dx = |B|²/κ = 1  ⟹  B = √κ = √(mα)/ℏ    (103)

So, in conclusion
ψ(x) = (√(mα)/ℏ) e^{−mα|x|/ℏ²},    E = −mα²/2ℏ²    (104)
Of note is that there is only 1 bound state for the delta function potential.
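The bound state above can be sanity-checked numerically. This is my own sketch (not from the notes), in natural units m = ℏ = α = 1, where κ = 1 and E = −1/2:

```python
# Check that psi(x) = sqrt(kappa) e^{-kappa|x|} is normalized and satisfies
# the derivative jump condition Delta(dpsi/dx) = -2 psi(0) (eqs. 99-100
# with m = hbar = alpha = 1).
import numpy as np
from scipy.integrate import quad

kappa = 1.0
psi = lambda x: np.sqrt(kappa) * np.exp(-kappa * np.abs(x))

norm, _ = quad(lambda x: psi(x)**2, -np.inf, np.inf)
print(norm)                              # 1.0: psi is normalized

h = 1e-6                                 # one-sided derivatives either side of 0
jump = (psi(h) - psi(0)) / h - (psi(0) - psi(-h)) / h
print(jump, -2 * kappa * psi(0))         # both approximately -2
```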

Scattering states
For E > 0 and x ≠ 0 the Schrodinger equation reads
d²ψ/dx² = −k²ψ,    k ≡ √(2mE)/ℏ    (105)
This has general solution
ψ(x) = Ae^{ikx} + Be^{−ikx}    (106)
Hence,
ψ(x) = { Ae^{ikx} + Be^{−ikx}, x ≤ 0; Fe^{ikx} + Ge^{−ikx}, x > 0 }    (107)
Continuity of the wavefunction gives
F +G=A+B (108)

The analogous version of the continuity of the derivative gives
Δ(dψ/dx) = ik(F − G − A + B) = −(2mα/ℏ²)(A + B)    (109)
which can be rewritten as
F − G = A(1 + 2iβ) − B(1 − 2iβ),    β ≡ mα/(ℏ²k)    (110)
However this is only 2 equations in 5 unknowns (including k), and normalization is not enough to fix
this.
Instead of trying to induce new equations to solve for the remaining constants, we make some
assumptions about the experiment we are actually trying to run. By analogy with the free particle,
the solutions represent waves.

• A is the coefficient of a wave in the region x < 0 moving to the right

• B is the coefficient of a wave in the region x < 0 moving to the left

• F is the coefficient of a wave in the region x > 0 moving to the right

• G is the coefficient of a wave in the region x > 0 moving to the left

These are summarized in the figure below.

In a typical scattering experiment, particles are fired in from one direction; let’s say from the left.
This then means that the amplitude of the wave coming in from the right will be 0. That is

G=0 (111)

This then means

• A is the amplitude of the incident wave

• B is the amplitude of the reflected wave

• F is the amplitude of the transmitted wave

Solving the above equations we get
iβ 1
B= A, F = A (112)
1 − iβ 1 − iβ
However, the state is not normalizable so we cannot find the wavefunction exactly. But we can still
compute the portion of the wave reflected and transmitted.
Because a non-normalizable state is not physical, it might seem pointless to perform this analysis:
we are analysing characteristics of a wavefunction that can never appear in nature. However,
much like in the free particle case, combinations of these solutions are normalizable and hence physical.
We are then generating approximations for physical wavefunctions that have energies around E.
Because the real state will be a scaled version of these wavefunctions (a true wavefunction will
be normalized), we cannot interpret the probabilities generated by the wavefunction directly. If we
instead look at relative probabilities then the interpretation of the results is still meaningful (the
normalizing constants will cancel).
By analogy with the electromagnetic case we can compute transmission and reflection coefficients;
see Refs. [28, 29] for more details. These parameters describe how much of a wave is reflected or
transmitted by some sort of impedance. The analysis is completely identical to the classical case as
the waves coming in and leaving are purely sine waves.
There are more complicated definitions of these coefficients that use probability fluxes.
We get that the reflection coefficient is
R ≡ |B|²/|A|² = β²/(1 + β²) = 1/(1 + 2ℏ²E/mα²)    (113)
Similarly the transmission coefficient is
T ≡ |F|²/|A|² = 1/(1 + β²) = 1/(1 + mα²/2ℏ²E)    (114)
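A quick consistency check on eqs. (113)–(114), in natural units of my own choosing: the two coefficients should sum to 1 at every energy, since the particle is either reflected or transmitted.

```python
# R + T = 1 for the delta well coefficients (eqs. 113-114), checked over a
# range of energies with m = hbar = alpha = 1.
import numpy as np

m = hbar = alpha = 1.0
E = np.linspace(0.1, 10, 50)
R = 1 / (1 + 2 * hbar**2 * E / (m * alpha**2))
T = 1 / (1 + m * alpha**2 / (2 * hbar**2 * E))
print(np.allclose(R + T, 1))    # True: probability is conserved
```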

Delta barrier
The analysis above is for the delta function extending down to negative infinity. Now we consider
the delta function instead extending up to positive infinity, i.e. taking α < 0.

The bound state completely disappears (there are no longer any E < 0 solutions). The reflection and
transmission coefficients only depend on α² so nothing changes there. This is quite surprising:
the probability of passing over a delta function well is the same as passing through a delta function
barrier. Classically a particle could never pass a delta function barrier as it would never have enough
energy to do so.

Week 7
Finite square well
Now we consider the finite square well. It has potential
V(x) = { −V0, −a ≤ x ≤ a; 0, |x| > a }    (115)

where V0 ∈ R⁺ is a constant determining how deep the well is and a is half the width of the well
(note this is different to the infinite square well, where the well had length a).

Bound states
Take the case where E < 0. Classically this is when a particle is stuck in the well. The idea is to solve
the Schrodinger equation in each of the regions x < −a, −a ≤ x ≤ a, and a < x, and then
stitch the solutions together using the continuity of the wavefunction and its derivative.
The Schrodinger equation in the region |x| > a is
d²ψ/dx² = κ²ψ,    κ ≡ √(−2mE)/ℏ    (116)
This has solution
ψ(x) = Ae^{−κx} + Be^{κx}    (117)
When x < −a, A = 0, otherwise ψ blows up as x → −∞. Similarly when x > a, B = 0.
In the region −a ≤ x ≤ a the Schrodinger equation reads
d²ψ/dx² = −l²ψ,    l ≡ √(2m(E + V0))/ℏ    (118)
where we have exploited the fact that E > −V0, else the solution is not well defined (Griffiths
problem 2.2). The solution is
ψ(x) = C sin(lx) + D cos(lx)    (119)
So now the solution reads
ψ(x) = { Be^{κx}, x < −a; C sin(lx) + D cos(lx), −a ≤ x ≤ a; Ae^{−κx}, x > a }    (120)

The constants can be solved for directly but there is a nice shortcut we can use. Because the
potential is symmetric about x = 0, |Ψ|2 must also be symmetric about 0 (if flipping the problem
doesn’t change the potential, it can’t change the probability of finding a particle). This then means
that ψ must be either odd or even.

Even solutions
We know that C = 0 (otherwise the solution wouldn’t be even). This then means we can write
ψ(x) = { Fe^{−κx}, x > a; D cos(lx), 0 ≤ x ≤ a; ψ(−x), x < 0 }    (121)

The continuity of ψ and ψ′ gives 2 equations:
Fe^{−κa} = D cos(la),    −κFe^{−κa} = −lD sin(la)    (122)
This is 2 equations for 3 unknowns (F, D and E, which is present in both l and κ); normalization
gives the 3rd required equation. Dividing these 2 equations gives an equation for E:
κ = l tan(la)    (123)

This can be written as the transcendental equation (for z)
tan(z) = √((z0/z)² − 1)    (124)
where
z ≡ la,    z0 ≡ (a/ℏ)√(2mV0)    (125)

This equation must be solved numerically. Note there are only finitely many solutions to this equation;
the larger a or V0 is, the more solutions there are.
Griffiths does not solve for F and D, but you can get a relation between F and D from equation
(122) and then use normalization to find the required values.
There are some limiting cases

Wide, deep well


For a ≫ 0 or V0 ≫ 0 the energies become
En + V0 ≈ n²π²ℏ²/(2m(2a)²),    n = 1, 3, 5, . . .    (126)
Presumably the case is similar for even n. The RHS of this equation is the set of energies for an infinite
square well (with a ↦ 2a, as now the well is twice as wide) and the left is the energy above the
bottom of the well (what we were calling the energy in the infinite square well).
Quite intuitively, as V0 → ∞ the problem becomes identical to the infinite square well. Quite
surprisingly, as a → ∞ the energies also become the same as the infinite square well, although presumably
the normalization will require that in this limit |ψ|² → 0 and hence the problem is not physical.

Shallow, narrow well


For a ≈ 0 or V0 ≈ 0 there are very few bound states. If z0 < π/2 there is only 1 bound state. It
is very surprising that no matter how small V0 is, there is always 1 bound state. The case for small a
is analogous to the delta function potential.

Odd solutions
Similar to even solutions. Instead take D = 0 and follow the procedure.

Scattering state
Now considering E > 0, we will try to recreate the scenario considered in the delta function
potential: a particle coming in from the left and scattering off the well (very surprisingly, the
particle will scatter off a hole in the ground).

For |x| > a the case is the same as the delta function potential (V is just 0). For x < −a we have
ψ(x) = Ae^{ikx} + Be^{−ikx},    k ≡ √(2mE)/ℏ    (127)
For x > a the form of the solution is identical, but we set one of the coefficients to 0 to represent the
scattering experiment we are doing:
ψ(x) = Fe^{ikx}    (128)
For |x| < a the form of ψ is the same as for the bound states:
ψ(x) = C sin(lx) + D cos(lx),    l ≡ √(2m(E + V0))/ℏ    (129)
Hence we have
ψ(x) = { Ae^{ikx} + Be^{−ikx}, x < −a; C sin(lx) + D cos(lx), −a ≤ x ≤ a; Fe^{ikx}, x > a }    (130)
Continuity of ψ and ψ′ at −a gives the equations
Ae^{−ika} + Be^{ika} = −C sin(la) + D cos(la),    ik(Ae^{−ika} − Be^{ika}) = l(C cos(la) + D sin(la))    (131)
and continuity of ψ and ψ′ at a gives the equations
C sin(la) + D cos(la) = Fe^{ika},    l(C cos(la) − D sin(la)) = ikFe^{ika}    (132)

Solving these equations gives
B = i (sin(2la)/(2kl)) (l² − k²) F    (133)
F = e^{−2ika} A / ( cos(2la) − i ((k² + l²)/(2kl)) sin(2la) )    (134)

(Griffiths does not talk about the restrictions on energies, but because the state is a scattering state there is
a continuous spectrum of energies.) We can then compute the transmission coefficient as
T ≡ |F|²/|A|² = . . . = [ 1 + (V0²/(4E(E + V0))) sin²((2a/ℏ)√(2m(E + V0))) ]^{−1}    (135)

You could similarly compute the reflection coefficient as
R ≡ |B|²/|A|²    (136)

Note that if the sin² term is ever 0 then T = 1 and the particle gets perfectly transmitted. This
happens when
(2a/ℏ)√(2m(En + V0)) = nπ  ⟺  En + V0 = n²π²ℏ²/(2m(2a)²),    n ∈ Z    (137)
which are the allowed energies for the infinite square well.
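These transmission resonances can be checked numerically. A sketch of my own (the units and well parameters are arbitrary choices, not from the notes):

```python
# Transmission coefficient for the finite square well (eq. 135), with a check
# that T = 1 exactly at the resonance energies of eq. (137).
import numpy as np

m = hbar = 1.0
a, V0 = 1.0, 10.0

def T(E):
    phase = (2 * a / hbar) * np.sqrt(2 * m * (E + V0))
    return 1 / (1 + V0**2 / (4 * E * (E + V0)) * np.sin(phase)**2)

# resonance energies E_n + V0 = n^2 pi^2 hbar^2 / (2m(2a)^2) with E_n > 0
n = np.arange(1, 10)
E_n = n**2 * np.pi**2 * hbar**2 / (2 * m * (2 * a)**2) - V0
E_n = E_n[E_n > 0]
print(T(E_n))            # all 1.0 (up to rounding): perfect transmission
```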

Hilbert Spaces
The natural language of quantum mechanics is (roughly) linear algebra. Here I will only discuss how
this applies to what we have already seen, but this linear-algebra(ish) formalism is far more general and
far more useful than what is covered; this is the main reason for introducing it. It may seem like
we are unnecessarily making things more difficult and more abstract, but what we gain in generality
more than makes up for it.

Introduction to states
We have said repeatedly that a wavefunction is a normalized solution to the Schrodinger equation.
The normalizability of the wavefunction means that it must be square integrable. That is
∫_{−∞}^{∞} |Ψ(x, t)|² dx < ∞
The space of all square integrable functions is denoted L² (technically this isn’t L², but the subtleties
don’t matter). So a wavefunction must be an element of L², i.e. Ψ ∈ L².
We also require that wavefunctions must be continuous (this isn’t typically required in the more
general theory, see [16]). The space of continuous functions is denoted C, so Ψ ∈ C. Combining
this with above we have that Ψ ∈ L2 ∩ C. There are more conditions that a wavefunction must
satisfy (e.g. compact support or rapidly decreasing). We can just denote the space in which all the
wavefunctions live as H (this is often L2 or some extension of L2 but how this works is beyond this
course).
Later on we will see that we can represent a ‘spin wavefunction’ as an element of C2 . There we
will take H = C2 . This doesn’t matter for now but just note that H can be very arbitrary.
It can’t be completely arbitrary however. We require some conditions. Here we require H to be
a Hilbert space (a more complete theory takes it as a rigged Hilbert space). A Hilbert space has a
very technical definition: a complete inner product space. Since the math requirements to appreciate
such a statement are not required for this course we have to go with a much less rigorous definition.
Griffiths basically gives that the Hilbert space is a space of column vectors. This seems clearly
false, as we just said that wavefunctions are in Hilbert spaces. How do you write
ψn(x) = (mω/πℏ)^{1/4} (1/√(2ⁿ n!)) Hn(ξ) e^{−ξ²/2}
as a column vector? There is a technical way to do this (through the use of a Schauder basis) but this
doesn’t matter for now. What Griffiths is getting at is that elements in your Hilbert space behave
like column vectors. This will become more apparent when we start talking about operators in detail.
On a Hilbert space we always have an inner product. This is how we compute the projection of one
element of your Hilbert space onto another element. When working with wavefunctions we have the
inner product
⟨Ψ|Φ⟩ = ∫_{−∞}^{∞} Ψ*(x)Φ(x) dx

It is often convenient to ‘break the inner product up’. This is what is called braket notation.
Written in very unconventional notation we have
|Φ⟩(x) = Φ(x),    ⟨Ψ|(f) = ∫_{−∞}^{∞} Ψ*(x)f(x) dx

|Ψ⟩ is called a ket and ⟨Ψ| is called a bra (to get braket, which is close enough to bracket). Basically
|Φ⟩ is just new notation for Φ (this is technically more general but we will get to that later; what is
being said here is how this works in the position ‘basis’). ⟨Ψ| is new: it is something called a linear
functional. ⟨Ψ| is a function that takes in other functions in your Hilbert space and returns an
inner product. In math notation this is ⟨Ψ| : L² → C.

The isomorphism between kets and bras


For any wavefunction Ψ we clearly have an associated ket |Ψ⟩(x) = Ψ(x) (up to subtleties discussed
later) and we have an associated bra
⟨Ψ| = ∫_{−∞}^{∞} Ψ*(x) · · · dx
Similarly, if we were given ⟨Ψ| we could get back |Ψ⟩ just by reading off the term in the integral.
This means that we can move back and forth between bras and kets. In math this is called an
isomorphism. When working in more abstract Hilbert spaces we always have something similar by
the Riesz representation theorem (you don’t have to know this).
There is some notation for this.

|Ψ⟩† = ⟨Ψ| ⟨Ψ|† = |Ψ⟩

This notation comes from when we are not working in Hilbert spaces but over regular (finite dimensional)
vector spaces. For example, when working over C²
⟨x|y⟩ = x₁*y₁ + x₂*y₂ = (x₁* x₂*) (y₁; y₂)
so here we have (in a particular basis)
|y⟩ = (y₁; y₂),    ⟨x| = (x₁* x₂*)
So to move between a ket and a bra all we do is transpose the vector and take the complex conjugate
of the elements. That is,
⟨y| = (|y⟩*)ᵀ
This is called the Hermitian conjugate and is often notated
⟨y| = |y⟩†

For all of these isomorphisms we have that (this is technically a requirement to call it an isomorphism)
(α|Ψ⟩ + β|Φ⟩)† = α*⟨Ψ| + β*⟨Φ|,    (α⟨Ψ| + β⟨Φ|)† = α*|Ψ⟩ + β*|Φ⟩
and we clearly also have
(|Ψ⟩†)† = |Ψ⟩,    (⟨Ψ|†)† = ⟨Ψ|

Introduction to operators
The above section was on a more general version of column vectors. This section will be on a more
general version of matrices.
We have already seen many operators: position, momentum, raising and lowering, and there are
many more that we will come across. Here I will try to give the most rigorous definition that I can
while trying to stay roughly within the math requirements of this course (I will of course fail to do
this).
First I want to show/review another way of viewing matrices (these are linear operators on finite
dimensional vector spaces). One way of viewing matrices is as a ‘block of numbers’. E.g.
A = (1 2; 3 4)

However, there is an equivalent way of viewing matrices: we can view matrices as functions. You
can consider multiplication by a vector as the action of the function A on that vector. By abusing
notation we can say
A(x) = (x₁ + 2x₂; 3x₁ + 4x₂)
The functions are special. They are linear in the sense that

A(αx + βy) = αA(x) + βA(y)

This follows immediately from the rules of matrix multiplication. There is a really neat result that
any linear function f is just a matrix (once an ordered basis is fixed). We can define the (i, j) element
of f as
[f ]i,j = e†i f (ej )
where e_i is a vector of all 0’s except at position i, where there is a 1. E.g.
e₂ = (0; 1)

Written in braket notation we have that the (i, j) element of f can be expressed as

[f ]i,j = ⟨ei |f (ej )⟩

where we recall that f(e_j) is a vector. We can then write the matrix as
[f] = (⟨e₁|f(e₁)⟩ ⟨e₁|f(e₂)⟩; ⟨e₂|f(e₁)⟩ ⟨e₂|f(e₂)⟩)

Another way to write this is to use e_i e_jᵀ. This is a matrix of all 0’s except at position (i, j). E.g.
e₁e₂ᵀ = (1; 0)(0 1) = (0 1; 0 0)
You can check for yourself that another way to write the matrix [f] is
[f] = Σ_{i=1}^{2} Σ_{j=1}^{2} f_{i,j} e_i e_jᵀ

Page 27 of 64 Jacob Westerhout


or in braket notation
[f] = Σ_i Σ_j ⟨e_i|f(e_j)⟩ |e_i⟩⟨e_j|    (138)
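Equation (138) is easy to verify in finite dimensions. A numpy sketch of my own (the matrix is random; any would do):

```python
# Rebuild a matrix from its elements <e_i|f(e_j)> via eq. (138):
# [f] = sum_ij <e_i|f(e_j)> |e_i><e_j|.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # the 'operator' f
basis = np.eye(3)                        # standard basis e_1, e_2, e_3

rebuilt = np.zeros((3, 3))
for i, ei in enumerate(basis):
    for j, ej in enumerate(basis):
        fij = ei @ (A @ ej)              # the matrix element <e_i|f(e_j)>
        rebuilt += fij * np.outer(ei, ej)  # f_ij |e_i><e_j|
print(np.allclose(rebuilt, A))           # True
```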

Now an operator is just a linear function from H to H. A classic example of an operator acting
on a space of functions is the derivative. We can write something like D : C∞ → C∞ (where C∞
denotes the set of infinitely differentiable functions),
D(f) = df/dx
This function is linear because
d df dg
D(αf + βg) = (αf + βg) = α + β
dx dx dx
With this we also have the momentum operator, the position operator, and the Hamiltonian are all
operators (as you’d expect from the name).
Another example of a linear function (but not an operator) on a space of functions is I : L¹ → R,
I(f) = ∫_{−∞}^{∞} f(x) dx
Again this is linear by the standard rules of integration. Similarly, every bra is a linear
function (and hence the name linear functional).
Another example of an operator is
|a⟩⟨a|
for some a. This is called the outer product. Its action is as you’d expect:
(|a⟩⟨a|) |Ψ⟩ = |a⟩⟨a|Ψ⟩
In math notation we can write |a⟩⟨a| : H → H.


This is very closely related to the projection operator onto |α⟩,
P̂ = |α⟩⟨α| / ⟨α|α⟩    (139)
The action of this operator is as you’d expect:
P̂|Ψ⟩ = |α⟩⟨α|Ψ⟩ / ⟨α|α⟩    (140)
This operator is particularly useful when computing probabilities when the eigenvectors have some
degeneracy (we will see this later).
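The two defining features of a projection, idempotence and that its output lies along |α⟩, can be checked with a few lines of numpy. My own sketch; the vectors are arbitrary:

```python
# Projection operator P = |a><a| / <a|a> (eq. 139): P^2 = P, and P|psi>
# already lies in the span of |a> (so projecting again changes nothing).
import numpy as np

a = np.array([1 + 1j, 2, 0])
P = np.outer(a, a.conj()) / (a.conj() @ a)

print(np.allclose(P @ P, P))             # True: P is idempotent

psi = np.array([3j, 1, 5])
proj = P @ psi
print(np.allclose(P @ proj, proj))       # True: proj is along |a>, so P fixes it
```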
It is an assignment question to show that equation (138) works in the Hilbert space setting. That
is, in a (separable) Hilbert space all operators are ‘infinite matrices’. Keep in mind that we will not
always be working over Hilbert space (sometimes in rigged Hilbert spaces). This formula sometimes
fails and the idea of a matrix starts to make less sense.

Algebra on states
Above we said that bras are linear functionals. Hence
⟨x|(α|y⟩ + β|z⟩) = α⟨x|y⟩ + β⟨x|z⟩
We also have the reverse formula
(α⟨y| + β⟨z|)|x⟩ = α⟨y|x⟩ + β⟨z|x⟩
but this isn’t by linearity. This is just by the definition of the action of a sum of functions. Remember,
bras are just functions. We can write this not in braket notation as
(αf + βg)(x) = αf(x) + βg(x)
This is just how we define linear combinations of functions (that take as arguments other functions).
There are a lot of other operations that just follow by definition. E.g.
(Σ_n ⟨ψ_n|) |Ψ⟩ = Σ_n ⟨ψ_n|Ψ⟩,    (Σ_n |ϕ_n⟩⟨ψ_n|) |Ψ⟩ = Σ_n |ϕ_n⟩⟨ψ_n|Ψ⟩
(∫ |ϕ_n⟩⟨ψ_n| dn) |Ψ⟩ = ∫ |ϕ_n⟩⟨ψ_n|Ψ⟩ dn

All of these have very natural analogues in finite dimensional vector spaces
1. To sum vectors just sum their components

2. To sum matrices just sum their components

3. To integrate matrices, just integrate each of the components

The Schrodinger equation


The time independent Schrodinger equation is
(−ℏ²/2m) ∂²ψ/∂x² + Vψ = Eψ
but also
Ĥ = p̂²/2m + V = (−ℏ²/2m) ∂²/∂x² + V
so the time independent Schrodinger equation can be written as
Ĥ(ψ) = Eψ
or in braket notation
Ĥ|ψ⟩ = E|ψ⟩
Note that there are some subtleties about ‘bases’ which will be discussed next week.
This means that solving the time independent Schrodinger equation just boils down to solving
for the eigenvalues and eigenvectors of the Hamiltonian. There are a lot of subtleties in this, as
neither eigenvalues nor eigenvectors of operators are defined in the way analogous to how they are
defined for matrices. In the traditional sense there need not be an eigenvector associated to
an eigenvalue (this will be discussed next week). This is mainly caused by Ĥ being able to have
a continuum of eigenvalues. E.g. for the Hamiltonian associated to the free particle, for any real
value of energy, there was an associated wave function that solved the time independent Schrodinger
equation (E > 0 was only used above because otherwise you cannot normalize the solution).
Based on this, for any stationary state |ψn ⟩ we have that

Ĥ |ψn ⟩ = En |ψn ⟩

We similarly have that the (time dependent) Schrodinger equation
iℏ ∂Ψ/∂t = (−ℏ²/2m) ∂²Ψ/∂x² + VΨ
can be written as
iℏ (∂/∂t)|Ψ(t)⟩ = Ĥ|Ψ(t)⟩

Bases
When we are working over a (separable) Hilbert space we always have a (Schauder) basis, much
like in linear algebra. For example, we have seen that we can write any state as a linear combination of
stationary states
Ψ(x, 0) = Σ_{n=1}^{∞} c_n ψ_n(x)
The way we write this in braket notation is
|Ψ(0)⟩ = Σ_{n=1}^{∞} c_n |ψ_n⟩

We also had the formula
c_n = ∫_{−∞}^{∞} ψ_n*(x) Ψ(x, 0) dx
In braket notation we can write this as
c_n = ⟨ψ_n|Ψ(0)⟩
so as to now have the formula
|Ψ(0)⟩ = Σ_{n=1}^{∞} ⟨ψ_n|Ψ(0)⟩ |ψ_n⟩

We can also add back in the exponential time dependence to get
|Ψ(t)⟩ = Σ_{n=1}^{∞} ⟨ψ_n|Ψ(0)⟩ |ψ_n⟩ e^{−iE_n t/ℏ}
       = Σ_{n=1}^{∞} ⟨ψ_n|Ψ(t)⟩ |ψ_n⟩
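This expansion really is how time evolution is computed in practice: expand in the eigenbasis, attach the phases, and resum. A small numpy sketch of my own (a toy 2×2 Hermitian ‘Hamiltonian’, ℏ = 1), checked against the exact propagator:

```python
# Time evolution via the energy eigenbasis: |Psi(t)> = sum_n <psi_n|Psi(0)>
# e^{-i E_n t/hbar} |psi_n>, compared with applying expm(-i H t/hbar) directly.
import numpy as np
from scipy.linalg import expm

hbar, t = 1.0, 3.0
H = np.array([[1.0, 0.5], [0.5, 2.0]])   # toy Hermitian 'Hamiltonian'
E, V = np.linalg.eigh(H)                 # columns of V are the |psi_n>

psi0 = np.array([1.0, 1j]) / np.sqrt(2)
c = V.conj().T @ psi0                    # c_n = <psi_n|Psi(0)>
psi_t = V @ (np.exp(-1j * E * t / hbar) * c)

print(np.allclose(psi_t, expm(-1j * H * t / hbar) @ psi0))   # True
```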

If we don’t want to work with stationary states, and instead with some other (Schauder) basis
{|e_n⟩} and a more abstract state |f⟩, we have the same formula
|f⟩ = Σ_n ⟨e_n|f⟩ |e_n⟩

The definition of a (Schauder) basis in Hilbert spaces is slightly different to what it is in linear algebra
because now we have infinite sums but the idea is essentially the same.
However, all these formulas are for when we have a ‘countable basis’ (what I’ve written as Schauder
above). As we have seen in the case of the free particle, we can have uncountably many stationary
states. In this case the formula
Ψ(x, 0) = Σ_{n=1}^{∞} c_n ψ_n(x)
doesn’t make any sense. Instead we had the formula
Ψ(x, 0) = ∫_{−∞}^{∞} ϕ(k) ψ_x(k) dk
where here I’m using the notation
ψ_x(k) = (1/√(2πℏ)) e^{ikx/ℏ}
The way to describe this rigorously requires ridiculously high math requirements. All that you
need to know is that there are 2 expansions for states,
|Ψ⟩ = Σ_n c_n |e_n⟩,    |Ψ⟩ = ∫ c_n |e_n⟩ dn
based on ‘how many basis vectors there are’. You can technically also have a mix of these. Equivalently,
|Ψ⟩ = Σ_n ⟨e_n|Ψ⟩ |e_n⟩,    |Ψ⟩ = ∫ ⟨e_n|Ψ⟩ |e_n⟩ dn

All bases we will be working over will be ‘eigenbases’. In order to describe what these are we
need to talk more about operators, which will be done next week. There is also the problem of the
existence of such ‘bases’, which will also be discussed next week.

Week 8
Continuation of Hilbert spaces
Hermitian operators
A matrix A is called Hermitian if A = A†, where A† is the conjugate transpose of A. That is, a matrix
is Hermitian if it equals its Hermitian conjugate. If A were a real matrix, then A is Hermitian
iff A = Aᵀ, that is, if A is symmetric.
We want to generalize this idea to operators. We could use an operator’s representation as an
‘infinite matrix’ and use the same definition, but this is not the most convenient form to work
with.
For any matrix A and 2 vectors x, y we have that
x†Ay = (A†x)†y
where the proof follows by noting
(AB)† = B†A†,    (A†)† = A
If we write this in braket notation, this is saying that
⟨x|Ay⟩ = ⟨A†x|y⟩

This is how we define the Hermitian conjugate of an operator. For any operator Â, the Hermitian
conjugate † is defined as the operator such that for any vectors |ψ⟩, |ϕ⟩
⟨ψ|Âϕ⟩ = ⟨†ψ|ϕ⟩
The idea is that if you want to move  to the other side of an inner product, you can do so at
the cost of replacing  with †. There are problems about existence and uniqueness of † that we won’t
worry about here.
We then say that  is Hermitan if for any |ψ⟩ , |ϕ⟩
D E D E
ψ Âϕ = Âψ ϕ

This is equivalent to saying  = † . Because it doesn’t matter which side of the inner product A
acts we will often write D E
ψ Â ϕ
to mean either of the above 2 inner products.
These Hermitian operators are by far the most important operators in physics, as all observables
are represented as Hermitian operators. That is, the Hamiltonian and the position and momentum
operators are Hermitian. Hermitian operators were the natural choice to represent observables as
they have real expectations. That is,
⟨Q̂⟩ = ⟨ψ|Q̂|ψ⟩ ∈ R
We also have that Hermitian operators have real eigenvalues (there are some subtleties to this presented
below). This will be very important when talking about the generalized statistical interpretation of
quantum mechanics.
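Both facts (real expectations, real eigenvalues) are easy to check numerically in finite dimensions. A quick numpy sketch of my own, using a random Hermitian matrix:

```python
# Hermitian operators have real expectation values and real eigenvalues.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q = (M + M.conj().T) / 2                 # Q = Q^dagger by construction

psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)

expval = psi.conj() @ (Q @ psi)          # <psi|Q|psi>
print(abs(expval.imag) < 1e-12)          # True: the expectation is real

print(np.allclose(np.linalg.eigvals(Q).imag, 0, atol=1e-10))  # True: real eigenvalues
```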

We say that a state |Ψ⟩ is a determinate state for Q̂ if the variance of Q̂ is 0 for this state, that is
0 = ⟨(Q̂ − ⟨Q̂⟩)²⟩ = ⟨Ψ|(Q̂ − ⟨Q̂⟩)²Ψ⟩  ⟺  Q̂|Ψ⟩ = ⟨Q̂⟩|Ψ⟩    (141)
I.e. the determinate states of Q̂ are the eigenfunctions of Q̂.

The identity operator in different bases


The identity operator is the analogue of the identity matrix in this Hilbert space setting.
The identity operator I is the (unique) operator for which
I|Ψ⟩ = |Ψ⟩
It is a tutorial problem to show that when we have a countable basis {|ψ_n⟩} we can express
I = Σ_n |ψ_n⟩⟨ψ_n|
When we have an uncountable basis {|e_n⟩},
I = ∫ |e_n⟩⟨e_n| dn
This then means that we have (depending on the size of the basis)
I|Ψ⟩ = Σ_n |ψ_n⟩⟨ψ_n|Ψ⟩,    I|Ψ⟩ = ∫ |e_n⟩⟨e_n|Ψ⟩ dn
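In finite dimensions the countable-basis resolution of the identity is a one-liner to verify. My own numpy sketch, using the eigenvector columns of a Hermitian matrix as the orthonormal basis:

```python
# Completeness: for an orthonormal basis {|psi_n>}, sum_n |psi_n><psi_n| = I.
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
_, V = np.linalg.eigh((M + M.conj().T) / 2)   # columns form an orthonormal basis

ident = sum(np.outer(V[:, n], V[:, n].conj()) for n in range(4))
print(np.allclose(ident, np.eye(4)))     # True: the outer products resolve the identity
```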

Representations
When we talk about bases for operators what we tend to mean is the representation for which we
are working over.
The main point of this section is to just say that sometimes the operators look different depend-
ing on the focus of a problem. It may be more important to know properties of the momentum
wavefunction than it it is to know properties of the position wavefunction. We may want to be able
to figure out an equivalent form of our operators that work on the momentum wavefunction instead.
In the position representation we know that
x̂ = x,    p̂ = −iℏ ∂/∂x    (142)
and hence for a wavefunction Ψ
⟨x⟩ = ∫_{−∞}^{∞} x|Ψ|² dx,    ⟨p⟩ = −iℏ ∫_{−∞}^{∞} Ψ* (∂Ψ/∂x) dx

But if we knew the momentum wavefunction Φ and blindly applied the same formulas to it, we would almost always get the wrong answer:
⟨x⟩ ≠ ∫_{−∞}^{∞} p|Φ|² dp,    ⟨p⟩ ≠ −iℏ ∫_{−∞}^{∞} Φ* (∂Φ/∂p) dp

What we then want are operators X̂ and P̂ so that
⟨x⟩ = ∫_{−∞}^{∞} Φ* X̂Φ dp,    ⟨p⟩ = ∫_{−∞}^{∞} Φ* P̂Φ dp
The operators that have this property are
X̂ = iℏ ∂/∂p,    P̂ = p    (143)
We tend to write
x̂ = iℏ ∂/∂p,    p̂ = p    (144)
with the hope that we know the space we are working over. This is what we call a representation:
what our operators look like when working in a particular space.
We will prove below that these operators are what I say they are. In order to explain how to
move between these forms, I first want to introduce how to move vectors between general spaces. To do this
I first want to look at the eigenvalues and eigenvectors of these operators.

Eigenvalues and eigenvectors


Eigenvalues and eigenvectors are not defined the same way for operators as they are for matrices.
What you’d expect is that for an operator  : H → H (where H is our Hilbert space), λ ∈ C
is an eigenvalue and v ∈ H is an eigenvector associated to λ if
Âv = λv
The problem is that this tends to be very restrictive.


Recall that solving the time independent Schrodinger equation is equivalent to solving for the eigenvalues
and eigenvectors of the Hamiltonian. When working with wavefunctions (which is all we have worked
with so far) H = L², and when working with the free particle we had energy eigenstates
ψ_k(x) = Ae^{ikx}
However, these functions do not satisfy the above definition of an eigenvector, as ψ_k isn’t square
integrable and so ψ_k ∉ L².
This has real physical significance, as it means that these states aren’t physical. But we still want
to be able to find these states, as by taking ‘linear combinations’ of them we can generate physical
states. There is a very technical way to fix this problem, but this is physics and not math so I won’t
go into it.
The basic fix is to remove the condition that v ∈ H in the equation
Âv = λv
and just let v exist. We can’t always take v to be a function: below we will see that we need v to
be the delta function (which recall isn’t a function).
Using this definition of eigenvalues and eigenvectors, we can define the spectrum of an operator
to be the set of all eigenvalues. The discrete spectrum is the set of eigenvalues that actually have a
v ∈ H, and the continuous spectrum is the set of those where v ∉ H (there are technicalities here that
don’t matter). As you’d expect, the discrete spectrum tends to have discrete elements in it while the
continuous spectrum tends to be continuous.

For the infinite square well the Hamiltonian has discrete spectrum
σ_disc(H) = { n²π²ℏ²/(2ma²) | n ∈ N }
and continuous spectrum
σ_cont(H) = {}
That is, there are no eigenvalues in the continuous spectrum for the Hamiltonian associated to the
infinite square well. This is because the stationary states of the infinite square well were all
square integrable.
For the free particle the Hamiltonian has discrete spectrum
σ_disc(H) = {}
and continuous spectrum
σ_cont(H) = R⁺
That is, there is no discrete spectrum (there were no normalizable states) and the continuous spectrum
contains every possible value (every positive energy is acceptable for the free particle).
Note that the discrete spectrum of the infinite square well Hamiltonian had values that are separated, while the
continuous spectrum of the free particle had a continuum of values. This is very typical.
For the finite square well and the delta function potential we had non-empty discrete and continuous
spectra. The discrete spectrum contains the energies associated with the bound states and the
continuous spectrum corresponds to the scattering states.
When the spectrum of an operator is purely discrete:

• The eigenvalues are real

• The eigenvectors corresponding to distinct eigenvalues are orthogonal.

– When there are degenerate eigenvalues we can still construct orthogonal eigenvectors by
using Gram-Schmidt, using that any linear combination of degenerate eigenvectors is
still an eigenvector (corresponding to the same eigenvalue)

• The eigenvectors are complete

When the spectrum of an operator is purely continuous:

• We restrict ourselves to those operators with real eigenvalues (i.e. the spectrum consists only
of real numbers)

• For real eigenvalues, eigenfunctions are not normalizable (by the above) but they are Dirac
orthonormal in the sense that we can take ⟨λi|λj⟩ = δ(i − j) (as opposed to the standard
δij).

• We restrict ourselves to operators whose eigenvectors are complete.

In order to talk about vectors in different bases we need to talk about the eigenvalues and
eigenvectors of the position and momentum operators. Finding the eigenvalues of the momentum operator in the
position representation isn’t all that hard. We want to solve
P̂ p(x) ≡ −iℏ (∂/∂x) p(x) = λ_p p(x)


This is just an easy differential equation to solve

p(x) = Aeiλp x/ℏ

Remember that even in linear algebra eigenvectors are only determined up to scaling. We like orthonormal
bases, so we would like to solve for A such that

δ(λp − λp′) = ∫_{−∞}^{∞} p′(x)* p(x) dx

This integral is given in Griffiths example 3.2 (and also in a tute sheet) and we have

∫_{−∞}^{∞} p′(x)* p(x) dx = |A|² 2πℏ δ(λp − λp′)

so

p(x) = (1/√(2πℏ)) e^{iλp x/ℏ}
We will look at how to write this in bracket notation below. The eigenfunctions of the position
operator in momentum space have a similar form and we have

x(p) = (1/√(2πℏ)) e^{−ipx/ℏ}

Of note is that these 2 are complex conjugates of each other. We will see why below.
The full proof is given in Griffiths (example 3.3) that the (normalized) eigenfunctions of the position
operator in position space are

x(y) = δ(y − x)

with eigenvalue x. Similarly the eigenfunctions of the momentum operator in momentum space are

p(q) = δ(q − p)

with eigenvalue p.

Vectors in different bases


I said that there were some subtleties when I said that |Ψ⟩ (x) = Ψ(x). This is where these are
discussed. This formula is completely valid when working in the position ‘basis’.
There are some subtleties in the wording. Here basis has nothing to do with the bases we were
talking about above. There is no spanning set here. We are using the same word to mean 2 different
things. What we really mean is the position representation.
We can have funny things like saying we are working in the position basis using the energy
eigenbasis.
As discussed above, from the position wavefunction we can get the momentum wavefunction
through a Fourier transform, and from the momentum wavefunction we can get the position
wavefunction by an inverse Fourier transform. In some sense these 2 wavefunctions are the ‘same’. At the very
least they represent the same physical state. It is this physical state that |Ψ⟩ is referring to.
When we are in the position basis (representation) we do indeed have |Ψ⟩ (x) = Ψ(x), but
when we are in the momentum basis (representation) we have |Ψ⟩ (p) = Φ(p) (for Φ your momentum
wavefunction). The notation |Ψ⟩ transcends the idea of the representation. There is notation for



getting the value of |Ψ⟩ in a particular representation. If we want |Ψ⟩ in the position representation
we write
⟨x|Ψ⟩
and if we want |Ψ⟩ in the momentum representation we write
⟨p|Ψ⟩
So if we are working in the position representation

⟨x|Ψ⟩ = Ψ(x),    ⟨p|Ψ⟩ = (1/√(2πℏ)) ∫_{−∞}^{∞} Ψ(x) e^{−ipx/ℏ} dx

and when working in the momentum representation

⟨x|Ψ⟩ = (1/√(2πℏ)) ∫_{−∞}^{∞} Φ(p) e^{ipx/ℏ} dp,    ⟨p|Ψ⟩ = Φ(p)
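These representation formulas are easy to sanity check numerically. Below is a minimal sketch (not course content; the Gaussian width, grid and the choice of units with ℏ = 1 are made up for illustration) that builds ⟨p|Ψ⟩ for a Gaussian by direct quadrature and compares it against the known analytic Fourier transform of a Gaussian.

```python
import numpy as np

hbar = 1.0
sigma = 1.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

def phi_numeric(p):
    # <p|Psi> = (1/sqrt(2 pi hbar)) * integral of Psi(x) e^{-ipx/hbar} dx
    return np.sum(psi * np.exp(-1j * p * x / hbar)) * dx / np.sqrt(2 * np.pi * hbar)

def phi_exact(p):
    # the Fourier transform of a Gaussian is again a Gaussian
    return (sigma**2 / (np.pi * hbar**2)) ** 0.25 * np.exp(-sigma**2 * p**2 / (2 * hbar**2))

for p in (0.0, 0.5, 1.0, 2.0):
    assert abs(phi_numeric(p) - phi_exact(p)) < 1e-8
```

The agreement is essentially at machine precision because the integrand is smooth and decays rapidly.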

When working in the position basis we could equivalently have written

⟨x|Ψ⟩ = ∫_{−∞}^{∞} Ψ(y) δ(y − x) dy

for δ the delta function. This then means that bra-ket notation is actually doing an inner product
(technically not, because the delta function isn't a function). Also, reading off from the inner products, in the
position representation we have that

|x⟩ (y) = δ(y − x),    |p⟩ (x) = (1/√(2πℏ)) e^{ipx/ℏ}

Similarly, in the momentum representation

|x⟩ (p) = (1/√(2πℏ)) e^{−ipx/ℏ},    |p⟩ (k) = δ(p − k)
As we saw above these expressions are exactly the expressions for the eigenvectors of x̂ and p̂.
This then gives a way to find the components of a vector |Ψ⟩ in a basis {|en⟩}: the n’th component
is

⟨en|Ψ⟩

In the case of using the position or momentum eigenstates to define a basis, the ‘components’ are
just the wavefunction evaluated at a point. That is, the xth component
of |Ψ⟩ in the position basis is

⟨x|Ψ⟩ = Ψ(x)

and the pth component of |Ψ⟩ in the momentum basis is

⟨p|Ψ⟩ = Φ(p)
We actually use the above notation to write

⟨y|x⟩ = δ(y − x),    ⟨x|p⟩ = (1/√(2πℏ)) e^{ipx/ℏ}

⟨p|x⟩ = (1/√(2πℏ)) e^{−ipx/ℏ},    ⟨k|p⟩ = δ(p − k)

where |y⟩ is the eigenvector of x̂ with eigenvalue y, and |k⟩ is the eigenvector of p̂ with eigenvalue k.



Operators in different bases
Above we gave 2 forms of the position operator depending on what space we are in

x̂ = x,    x̂ = iℏ ∂/∂p

Here we will show where we got these equations from.
When we write something like x̂ |Ψ⟩ we can mean any number of things. If it is clear what
representation we are working in, say position, this can just mean x |Ψ⟩ where we are using |Ψ⟩ (x) =
Ψ(x).
More common in mathematical physics is that x̂ |Ψ⟩ is a very abstract operation. Remember that
|Ψ⟩ is an abstract way to represent a particle in a given state. Depending on the representation we
are working over this can either mean the position wavefunction or the momentum wavefunction (or
any number of other things) associated to a physical state. It is this abstract state on which x̂ is
acting.
This level of abstraction can be very helpful at times, but not for anything we are doing in this
course. We always want to work in a particular representation. When we want to talk about acting
on the position wavefunction we write

x̂ ⟨x|Ψ⟩ = xΨ(x)

and when we want to talk about acting on the momentum wavefunction we write
∂Φ(p)
x̂ ⟨p|Ψ⟩ = iℏ
∂p

This notation is a bit contradictory to what we defined above. For x̂ = iℏ ∂/∂p, since ⟨p|Ψ⟩ is a number
Φ(p), it looks like we are differentiating a number. But with the substitution on the right the formula
looks fine. It is like instead of looking at specifically ⟨p|Ψ⟩ we look at ⟨q|Ψ⟩ for q close to p, so we are
not actually just differentiating a number.
The reason for the notation is that we basically want to say that x̂ either acts on the position
wavefunction Ψ or the momentum wavefunction Φ, where we have ⟨x|Ψ⟩ = Ψ(x) and ⟨p|Ψ⟩ = Φ(p).
To move between representations we use

x̂ ⟨p|Ψ⟩ ≡ ⟨p|x̂|Ψ⟩ (145)

This is a definition. x̂ as an operator that acts on ⟨p|Ψ⟩ = Φ(p) is the same operator as when we
act it on |Ψ⟩ and look at the result in the momentum basis. This seems like we have gotten nowhere. We
wanted not to have to compute x̂ |Ψ⟩ because this is weird and abstract, and yet we have defined our
fix exactly in terms of x̂ |Ψ⟩.
This turns out to not be a problem because we know the eigenstates of x̂ in each particular representation.
The problem we are trying to solve here is to move the knowledge of one representation into another
representation. We can write (using the identity operator in the position basis)
⟨p|x̂|Ψ⟩ = ⟨p| x̂ ∫ |x⟩ ⟨x|Ψ⟩ dx = ∫ ⟨p|x̂|x⟩ ⟨x|Ψ⟩ dx (146)

where we note that we can move ⟨p| and x̂ into the integral by linearity. We can then use that |x⟩
is an eigenvector of x̂ with eigenvalue x to give

⟨p|x̂|Ψ⟩ = ∫ x ⟨p|x⟩ ⟨x|Ψ⟩ dx = ∫ x ⟨p|x⟩ Ψ(x, t) dx (147)



We computed ⟨p|x⟩ above as

⟨p|x⟩ = (1/√(2πℏ)) e^{−ipx/ℏ}
Hence

⟨p|x̂|Ψ⟩ = (1/√(2πℏ)) ∫ x e^{−ipx/ℏ} Ψ(x, t) dx = iℏ ∂/∂p [ (1/√(2πℏ)) ∫ e^{−ipx/ℏ} Ψ(x, t) dx ] = iℏ ∂/∂p ⟨p|Ψ⟩
so we then conclude that in the momentum basis

x̂ = iℏ ∂/∂p (148)
as stated above.
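As a numerical sanity check of this result (a sketch, with a made-up Gaussian state and units chosen so ℏ = 1): the mean position computed in the position representation should agree with ∫ Φ*(p) (iℏ ∂Φ/∂p) dp computed in the momentum representation.

```python
import numpy as np

hbar, sigma, x0 = 1.0, 1.0, 1.5          # x0 is the (made-up) centre of the packet
x = np.linspace(-20, 20, 4001); dx = x[1] - x[0]
p = np.linspace(-10, 10, 2001); dp = p[1] - p[0]
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0)**2 / (2 * sigma**2))

# <x> in the position representation: integral of x |Psi|^2 dx
x_pos = np.sum(x * np.abs(psi)**2) * dx

# momentum wavefunction Phi(p) = <p|Psi> by quadrature
phi = np.array([np.sum(psi * np.exp(-1j * pk * x / hbar)) for pk in p])
phi *= dx / np.sqrt(2 * np.pi * hbar)

# <x> in the momentum representation: integral of Phi*(p) (i hbar dPhi/dp) dp
x_mom = np.sum(np.conj(phi) * (1j * hbar) * np.gradient(phi, dp)) * dp

assert abs(x_pos - x0) < 1e-6
assert abs(x_mom.real - x0) < 1e-3       # finite-difference derivative limits accuracy
```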

Generalized statistical interpretation


As mentioned above, observable quantities in quantum mechanics are represented by Hermitian
operators. If you have some Hermitian operator Â and you take a measurement of it, you find one
of the eigenvalues of Â and your system will then be in an associated eigenvector of Â. That is, if
your system is initially in some state |Ψ⟩, and you take a measurement of Â and get a, and there is
one eigenvector associated to a, say |a⟩, then your system will now be in |a⟩. If there is more than
one eigenvector with eigenvalue a, then your system will be in a superposition of all the
eigenvectors associated to a.
When, for an eigenvalue a, there is one eigenvector |a⟩, a is called non-degenerate. If there is
more than one eigenvector, a is called degenerate.
If the system is in state |Ψ⟩, and a is a non-degenerate eigenvalue of Â with eigenvector |a⟩,
then the probability of measuring a is

|⟨a|Ψ⟩|²

Using the interpretation of an inner product as a projection, this gives that the probability of measuring
a depends very heavily on how close |a⟩ is to |Ψ⟩.
However, when there is more than 1 eigenvector associated to an eigenvalue it is not so simple.
Let {|Λj⟩} be the eigenvectors corresponding to λ and let P̂ be the projection operator onto the
subspace spanned by {|Λj⟩}. I.e.

P̂ = Σ_j |Λj⟩⟨Λj| / ⟨Λj|Λj⟩ (149)

Then the probability of getting λ as a measurement when the system is in state |Ψ⟩ is

⟨Ψ|P̂|Ψ⟩ (150)
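A small numerical sketch of equations (149) and (150) (the 3×3 ‘observable’ and the state below are made up): for a Hermitian matrix with a degenerate eigenvalue, build the projector onto each eigenspace and read off the measurement probabilities.

```python
import numpy as np

# toy Hermitian observable with eigenvalues 0, 2, 2 (the eigenvalue 2 is degenerate)
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
evals, evecs = np.linalg.eigh(A)        # columns of evecs are orthonormal eigenvectors

psi = np.array([1.0, 2.0, 2.0]) / 3.0   # normalized state (made up)

probs = {}
for lam in np.unique(np.round(evals, 10)):
    cols = evecs[:, np.isclose(evals, lam)]     # eigenvectors for this eigenvalue
    P = cols @ cols.conj().T                    # projector onto the eigenspace
    probs[lam] = np.real(psi.conj() @ P @ psi)  # <Psi| P |Psi>

# the projectors resolve the identity, so the probabilities sum to 1
assert abs(sum(probs.values()) - 1.0) < 1e-12
```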

Uncertainty principle
The generalized uncertainty principle is

σ_A² σ_B² ≥ ( (1/2i) ⟨[Â, B̂]⟩ )² (151)
We say 2 observables are incompatible if

[Â, B̂] ≠ 0 (152)



Hence we cannot measure incompatible observables together to arbitrary precision. Compatible
observables can be simultaneously diagonalized (incompatible observables can’t be) and hence there is
a complete set of simultaneous eigenfunctions for 2 compatible observables (and not for incompatible
observables). This is what gives rise to the uncertainty principle.
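A quick numerical illustration (a sketch, using the spin matrices that appear in week 11; the state is made up): for the incompatible observables Ŝx and Ŝy the bound σ_A σ_B ≥ |⟨[Â, B̂]⟩|/2 can be checked directly.

```python
import numpy as np

hbar = 1.0
Sx = hbar/2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar/2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar/2 * np.array([[1, 0], [0, -1]], dtype=complex)

theta, phi_ang = np.pi/3, np.pi/4      # made-up Bloch-sphere angles
psi = np.array([np.cos(theta/2), np.exp(1j*phi_ang) * np.sin(theta/2)])

def expval(op):
    return np.real(psi.conj() @ op @ psi)

def sigma(op):
    return np.sqrt(expval(op @ op) - expval(op)**2)

comm = Sx @ Sy - Sy @ Sx                      # equals i*hbar*Sz
lhs = sigma(Sx) * sigma(Sy)
rhs = abs((psi.conj() @ comm @ psi) / 2j)     # |<[Sx, Sy]>| / 2

assert np.allclose(comm, 1j * hbar * Sz)
assert lhs >= rhs - 1e-12                     # the uncertainty bound holds
```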
A necessary and sufficient condition for the lower bound in the uncertainty principle to be attained
is for there to exist some a ∈ R s.t.

(B̂ − ⟨B̂⟩)Ψ = ia (Â − ⟨Â⟩)Ψ (153)

This can be phrased as finding Ψ s.t. the above equality holds. The general solution (not a well
formed equation, as the averages also depend on Ψ) is

Ψ(x) = A e^{−a(x−⟨x⟩)²/2ℏ} e^{i⟨p⟩x/ℏ} (154)

There is also an energy-time uncertainty principle

ΔE Δt ≥ ℏ/2 (155)
This cannot be derived directly from the generalized uncertainty principle as time is not an operator
in quantum mechanics. As such Δt does not refer to the standard deviation of time (whatever that
means — how do you make repeated measurements of time) but instead loosely ‘the time it takes
the system to change substantially’ (precise definition below). This means that if any observable
changes rapidly in time, then the uncertainty in energy must be large. Equivalently, if ΔE is small,
then the rate of change of all observables must be very slow.
To derive the energy-time uncertainty principle we need the Heisenberg equation of motion
(or the generalized Ehrenfest theorem) which gives

d⟨Q⟩/dt = (i/ℏ) ⟨[Ĥ, Q̂]⟩ + ⟨∂Q̂/∂t⟩ (156)

Assuming that Q̂ has no explicit time dependence we can combine this with the generalized uncertainty principle
to give

σ_H σ_Q ≥ (ℏ/2) |d⟨Q⟩/dt| (157)

If we take

ΔE ≡ σ_H,    Δt ≡ σ_Q / |d⟨Q⟩/dt| (158)

then the energy-time uncertainty principle follows immediately. From the definition, rigorously Δt
represents the amount of time it takes for ⟨Q⟩ to change by 1 standard deviation.
When Q̂ doesn’t depend explicitly on time (as is the case for most operators we consider) the
Heisenberg equation reduces to

d⟨Q⟩/dt = (i/ℏ) ⟨[Ĥ, Q̂]⟩ (159)
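The energy-time relation can be sketched numerically for a precessing spin (everything here is made up: a two-level Hamiltonian H = ωŜz with the spin matrices from week 11). We compare σ_H σ_Q against (ℏ/2)|d⟨Q⟩/dt| for Q = Ŝx; for this particular example the bound happens to be saturated.

```python
import numpy as np

hbar, omega = 1.0, 2.0                                         # omega is made up
Sx = hbar/2 * np.array([[0, 1], [1, 0]], dtype=complex)
H = hbar*omega/2 * np.array([[1, 0], [0, -1]], dtype=complex)  # H = omega * Sz

psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)            # spin up along x

def psi_t(t):
    # H is diagonal, so exp(-i H t / hbar) acts componentwise
    return np.exp(-1j * np.diag(H) * t / hbar) * psi0

def expval(op, psi):
    return np.real(psi.conj() @ op @ psi)

t, dt = 0.7, 1e-6
psi = psi_t(t)
sig_H = np.sqrt(expval(H @ H, psi) - expval(H, psi)**2)
sig_Q = np.sqrt(expval(Sx @ Sx, psi) - expval(Sx, psi)**2)
dQdt = (expval(Sx, psi_t(t + dt)) - expval(Sx, psi_t(t - dt))) / (2 * dt)

# equation (157): sigma_H * sigma_Q >= (hbar/2) |d<Q>/dt|
assert sig_H * sig_Q >= hbar/2 * abs(dQdt) - 1e-6
```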



Week 9
3D Schrodinger equation
If you write the Schrodinger equation as

iℏ ∂Ψ/∂t = ĤΨ (160)

then the generalization to 3D just requires generalizing Ĥ to

Ĥ = −(ℏ²/2m) ∇² + V (161)
Everything from 1D then generalizes very naturally. The time independent Schrodinger equation
doesn’t change

Ĥψ = Eψ (162)

whose solutions lead to the wavefunctions

Ψn (r, t) = ψn (r)e−iEn t/ℏ (163)

which can then be used to construct a general solution to the Schrodinger equation via

Ψ(r, t) = Σ_n c_n ψ_n(r) e^{−iE_n t/ℏ} (164)

The normalization of Ψ is

1 = ∫_{R³} |Ψ(r, t)|² dr (165)

Spherical potentials
To solve a problem that is more natural in spherical coordinates all we need to do is use the spherical
coordinate version of the Laplacian

∇² = (1/r²) ∂/∂r ( r² ∂/∂r ) + (1/(r² sin(θ))) ∂/∂θ ( sin(θ) ∂/∂θ ) + (1/(r² sin²(θ))) ∂²/∂ϕ² (166)

If we assume that V = V (r) (i.e. a spherically symmetric potential), we guess that the solution to
the time independent Schrodinger equation (analogously to the 1D case) is of the form

ψ(r, θ, ϕ) = (u(r)/r) Y(θ, ϕ) (167)

We get the spherical harmonics

Y_l^m(θ, ϕ) = √[ (2l + 1)(l − m)! / (4π(l + m)!) ] e^{imϕ} P_l^m(cos(θ))

and the radial equation

−(ℏ²/2m) d²u/dr² + [ V + (ℏ²/2m) l(l + 1)/r² ] u = Eu (168)

for l ∈ {0, 1, 2, . . .}, m ∈ Z ∩ [−l, l]. P_l^m is the associated Legendre function

P_l^m(x) ≡ (−1)^m (1 − x²)^{m/2} (d/dx)^m P_l(x) (169)



where P_l(x) is the lth Legendre polynomial given by

P_l(x) = (1/(2^l l!)) (d/dx)^l (x² − 1)^l

Y_l^m are called the spherical harmonics and the differential equation is called the radial equation.
It has the same form as the 1D Schrodinger equation, but there is an extra term added to the potential.
This is analogous to the centrifugal (quasi-)force in classical mechanics. Note that u is normalized (note
the integration bounds) as

∫_0^∞ |u|² dr = 1 (170)

Note that in spherical coordinates the normalization of the wavefunction is

1 = ∫_0^∞ ∫_0^{2π} ∫_0^π |Ψ(r, θ, ϕ)|² r² sin(θ) dθ dϕ dr

That is, you have to include the Jacobian for the transformation from cartesian coordinates to spherical
coordinates. When we have spherical symmetry Ψ(r, θ, ϕ) = Ψ(r) this result reduces to

1 = 4π ∫_0^∞ r² |Ψ(r)|² dr

Similarly, under spherical symmetry the probability density is no longer |Ψ(r)|² but is instead
4πr²|Ψ(r)|².
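A quick numerical check of these normalization formulas (a sketch; the trial wavefunction is the hydrogen-like Ψ(r) = e^{−r/a}/√(πa³) with a set to 1, chosen because the integrals are known in closed form).

```python
import numpy as np

a = 1.0                                          # "Bohr radius" set to 1
r = np.linspace(0, 40, 8001)
dr = r[1] - r[0]
psi = np.exp(-r / a) / np.sqrt(np.pi * a**3)     # spherically symmetric trial state

# spherical-symmetry normalization: 1 = 4 pi * integral of r^2 |Psi|^2 dr
norm = 4 * np.pi * np.sum(r**2 * np.abs(psi)**2) * dr

# mean radius using the radial probability density 4 pi r^2 |Psi|^2
r_mean = 4 * np.pi * np.sum(r**3 * np.abs(psi)**2) * dr

assert abs(norm - 1.0) < 1e-4
assert abs(r_mean - 1.5 * a) < 1e-4              # the known result <r> = 3a/2
```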

Hydrogen atom
The potential of the hydrogen atom is the Coulomb potential (hydrogen has just 1 electron, bound
to a single proton)

V(r) = −(e²/(4πϵ0)) (1/r) (171)



Note this is in a coordinate system where the nucleus is at the origin [30]. I think this needs an
explanation as the nucleus is not a fixed point, it is a wave. If you take this into account the
Hamiltonian is [31]

Ĥ = (1/2M) p̂_N² + (1/2m) p̂_e² − e²/|r_N − r_e| (172)
If you then move into the centre of mass frame by appropriate changes in the coordinates

r = r_e − r_N,    R = (M r_N + m r_e)/(M + m) (173)

p = (M/(M + m)) p_e − (m/(M + m)) p_N,    P = p_N + p_e (174)
In these coordinates

Ĥ = (1/(2(M + m))) P̂² + (1/2µ) p̂² − e²/|r| (175)

where

µ = (1/m + 1/M)^{−1} (176)
The Hamiltonian is then the sum of 2 parts

Ĥ_el = (1/2µ) p̂² − e²/|r|,    Ĥ_com = (1/(2(M + m))) P̂² (177)

In this case we only focus on Ĥ_el and are ignoring where the centre of mass is. The full wavefunction
is the solution we found, times the wavefunction of a free particle (as discussed in week 12, when
you can separate the Hamiltonian like this the total wavefunction is just the product of the solutions
to the 2 Hamiltonians). There would then also have to be some coordinate shift back to the standard
coordinates.
This potential gives rise to bound and scattering states but we will just look at the bound states
(E < 0). The Schrodinger equation can be written as

d²u/dρ² = [ 1 − ρ0/ρ + l(l + 1)/ρ² ] u,    ρ ≡ κr,    ρ0 ≡ m_e e²/(2πϵ0 ℏ² κ),    κ ≡ √(−2m_e E)/ℏ (178)
As ρ → ∞ the ODE becomes

d²u/dρ² = u ⟹ u(ρ) = Ae^{−ρ} + Be^{ρ} (179)

We require B = 0 so this does not blow up as ρ → ∞. For ρ ≈ 0 the ODE becomes

d²u/dρ² = (l(l + 1)/ρ²) u ⟹ u(ρ) = Cρ^{l+1} + Dρ^{−l} (180)
We require D = 0 so this does not blow up as ρ → 0 (so the solution can still be normalizable).
Much like the case of the Harmonic oscillator we then guess that the solution is of the form
u(ρ) = ρl+1 e−ρ ν(ρ) (181)
The Schrodinger equation then becomes

ρ d²ν/dρ² + 2(l + 1 − ρ) dν/dρ + [ρ0 − 2(l + 1)] ν = 0 (182)



If we then guess a solution of the form

ν(ρ) = Σ_{j=0}^∞ c_j ρ^j (183)

we get the recursion

c_{j+1} = [ (2(j + l + 1) − ρ0) / ((j + 1)(j + 2l + 2)) ] c_j (184)
Again, much like the harmonic oscillator, we require that some c_N = 0, as for large j the solution
otherwise becomes approximately

u(ρ) ≈ c0 ρ^{l+1} e^{ρ} (185)
which blows up as ρ → ∞. Substituting c_N = 0 into equation (184) gives

ρ0 = 2(N + l) ≡ 2n ⟹ E = −ℏ²κ²/2m = −m_e e⁴ / (8π² ϵ0² ℏ² ρ0²) (186)
and so the allowed energies are

E_n = −[ (m_e/2ℏ²) (e²/(4πϵ0))² ] (1/n²) = E_1/n² (187)

This equation is called the Bohr formula. We also get that

κ = ( m_e e²/(4πϵ0 ℏ²) ) (1/n) ≡ 1/(an) ⟹ ρ = r/(an) (188)
When n = 1, l = 0 = m, the recursion equation (184) stops after the first term so we get that

ν(ρ) = c0 ⟹ u(r) = c0 ρ e^{−ρ} = (c0 r/a) e^{−r/a} (189)
Then,

1 = ∫_0^∞ |u(r)|² dr = (|c0|²/a²) ∫_0^∞ r² e^{−2r/a} dr = |c0|² (a/4) ⟹ c0 = 2/√a (190)

More generally, equation (184) is a well known recurrence which gives (with
incorrect normalization)

ν(ρ) = L_{n−l−1}^{2l+1}(2ρ) (191)
where

L_p^q(x) ≡ (−1)^p (d/dx)^p L_{p+q}(x) (192)

is the associated Laguerre polynomial and

L_q(x) ≡ (e^x/q!) (d/dx)^q (e^{−x} x^q) (193)

is the q th Laguerre polynomial. Combining everything together (and then normalizing) this then
gives stationary solutions

ψ_nlm(r, θ, ϕ) = √[ (2/na)³ (n − l − 1)! / (2n(n + l)!) ] e^{−r/na} (2r/na)^l L_{n−l−1}^{2l+1}(2r/na) Y_l^m(θ, ϕ) (194)



Note that we can only have l < n to be consistent with the definition of n given in equation (186),
but all the states corresponding to the allowed l, m occur at the same energy as E_n does not
depend on l or m. This means that at each energy level there is degeneracy

d(n) = Σ_{l=0}^{n−1} (2l + 1) = n² (195)

as for each l there are 2l + 1 possible m.
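Equation (195) is easy to verify directly:

```python
# equation (195): at energy level n the states (l, m) with l < n and |m| <= l
# give a degeneracy of n^2
def degeneracy(n):
    return sum(2*l + 1 for l in range(n))   # l = 0, ..., n-1

assert [degeneracy(n) for n in (1, 2, 3, 4)] == [1, 4, 9, 16]
```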


When transitioning between stationary states, the change in energy is

E_γ = E_i − E_j = E_1 ( 1/n_i² − 1/n_f² ) (196)

By Planck's formula E = hν and λ = c/ν we can get that the wavelength of a photon
absorbed/emitted during a transition is

1/λ = (E_1/(2πcℏ)) ( 1/n_i² − 1/n_f² ) (197)

This is the Rydberg formula with

R ≡ −E_1/(2πcℏ) = ( m_e/(4πcℏ³) ) ( e²/(4πϵ0) )² (198)

the Rydberg constant.
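Plugging rounded values of the constants into this expression gives the familiar numbers (a sketch; the constants below are approximate).

```python
import math

# approximate physical constants (SI, rounded)
m_e  = 9.109e-31      # electron mass, kg
e    = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12      # vacuum permittivity, F/m
hbar = 1.055e-34      # reduced Planck constant, J s
c    = 2.998e8        # speed of light, m/s

# equation (198) for the Rydberg constant
R = m_e / (4 * math.pi * c * hbar**3) * (e**2 / (4 * math.pi * eps0))**2
assert abs(R - 1.097e7) / 1.097e7 < 0.01        # ~1.097e7 per metre

# wavelength of the n = 2 -> n = 1 (Lyman-alpha) transition
lam = 1 / (R * (1/1**2 - 1/2**2))
assert 1.20e-7 < lam < 1.23e-7                   # ~121.5 nm
```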



Week 10
Angular momentum classically
The formula for momentum is p = mv for m a scalar (the mass, obviously) and v a vector. This formula
of course works in all cases. We can define the momentum when v is associated to an object
moving in a circle. However, it is not the natural quantity to use. We famously have that momentum
is conserved for an object that isn't acted upon by any external force. This is clearly not the case
for a particle moving in a circle (there is a centripetal force).
The more natural quantities to talk about when objects are rotating are the angular momentum and
angular velocity. First, look at the classic motivating example of a particle rotating about some
plane in 3D space.

We can define this plane in terms of its normal vector. We are free to move this normal vector
around the plane as we please, so place it at the centre of the rotation. This normal vector then acts
as the axis about which our object is rotating. Any plane has 2 possible normal vectors, so we
choose the one which follows the right hand rule. This is the direction of the angular velocity vector.
The length of the angular velocity vector defines how fast the rotation is.
We can do this in more generality. Consider a point particle moving along a trajectory x(t)
with velocity v(t) = x′(t). The classic way of thinking about this is that at time t, the particle is at
point x(t) and some small time dt afterwards the particle is at x(t) + dt v(t). There is another way
of thinking about this which is important for what we are doing here.
x(t) and v(t) define 2 vectors. If we include the origin we have 3 points which uniquely define a
plane. We can then define the normal vector of this plane (up to scaling, to be discussed in a second)
as the angular velocity ω of the particle. We can think that at time t the particle is at position x(t),
and some dt time later it is as if the particle had been rotated around ω by some small amount.
We technically define

ω = (x × v)/||x||²

This aligns with the definition given above as x × v is perpendicular to x and v (and follows the right
hand rule). Note that in this
generality we can define the angular velocity of a particle moving in a straight line (or moving along
any other path). However, doing so probably isn't practical.
From this we can define the angular momentum of the particle as

L = Iω



for I the moment of inertia of our object. Note the similarities of this formula to the standard
equation for momentum. In our example above of a point particle I = m||x||².
There is an equivalent expression for angular momentum which we will use in the quantum
mechanical part of the question.
L = x × (mv) = x × p
Classically this quantity is useful as any object not experiencing a net torque has angular momentum
conserved. For the stupid example of a particle moving in a straight line (at uniform velocity) there
is technically conservation of angular momentum.
For quantum mechanics the components of the angular momentum are important. The description
given above of the normal of the rotation plane is not sufficient. There is a lovely result called Euler’s
rotation theorem[32] which gives that rotation about any axis is a combination of rotations about
the 3 cardinal axes.

We can see this by looking at rotation matrices. Consider an arbitrary rotation matrix
R consisting of a product of rotations about the 3 cardinal axes (see [33] for the forms of these
matrices). That is, R corresponds to a sequence of rotations about the cardinal axes. All rotation
matrices are orthogonal with determinant 1, so R always has +1 as an eigenvalue (the other two
eigenvalues are a complex conjugate pair on the unit circle; determinant −1 would mean we actually
have a reflection and not a rotation). Then there is some vector n such that

Rn = n

I.e. a direction that doesn't change under the rotation. This is exactly the axis of the rotation.
So then in the quantum section below, when we are talking about Lx this is basically a measure of
how fast a particle is rotating about the x-axis.
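The axis-finding argument above can be sketched numerically (the particular rotation angles are made up): build a composite rotation, then extract the eigenvector with eigenvalue 1.

```python
import numpy as np

def Rx(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Rz(t):
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

R = Rz(0.7) @ Rx(1.1) @ Rz(-0.4)        # some composite rotation (angles made up)
evals, evecs = np.linalg.eig(R)

# the eigenvector with eigenvalue 1 is the rotation axis: R n = n
k = np.argmin(np.abs(evals - 1.0))
n = np.real(evecs[:, k])

assert np.allclose(R @ n, n)
```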

Angular momentum in quantum mechanics


We are currently only considering orbital angular momentum. The quantum analogue of an object
rotating about its centre of mass is called spin and is reserved for next week (this will be a point
particle rotating about its centre of mass). Classically, angular momentum is computed as

L=r×p (199)

Hence, in quantum mechanics we define

L̂x = ŷ p̂z − ẑ p̂y , L̂y = ẑ p̂x − x̂p̂z , L̂z = x̂p̂y − ŷ p̂x (200)



Note that there is no need to worry about the order of the position and momentum operators as we
are only multiplying position and momentum along different axes.
Note that none of the angular momentum operators commute. Specifically

[L̂x, L̂y] = iℏL̂z,    [L̂y, L̂z] = iℏL̂x,    [L̂z, L̂x] = iℏL̂y (201)

But if we consider the square of the total angular momentum

L̂² ≡ L̂x² + L̂y² + L̂z² (202)

this does commute with all the components of the angular momentum

[L̂², L̂x] = 0,    [L̂², L̂y] = 0,    [L̂², L̂z] = 0 (203)
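These commutation relations can be checked directly in a finite matrix representation. Below is a sketch for l = 1 (with ℏ = 1), with the matrices built from the ladder-operator matrix elements that appear later in these notes.

```python
import numpy as np

hbar, l = 1.0, 1
ms = np.arange(l, -l - 1, -1)                   # m = 1, 0, -1
Lz = hbar * np.diag(ms)

Lp = np.zeros((3, 3))                           # raising operator L+
for i, m in enumerate(ms[1:], start=1):
    # <m+1| L+ |m> = hbar * sqrt(l(l+1) - m(m+1))
    Lp[i - 1, i] = hbar * np.sqrt(l*(l+1) - m*(m+1))
Lm = Lp.T                                       # lowering operator L-

Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / (2j)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * hbar * Lz)          # equation (201)
assert np.allclose(L2 @ Lz - Lz @ L2, 0)                       # equation (203)
assert np.allclose(L2, hbar**2 * l * (l + 1) * np.eye(3))      # L^2 = 2 hbar^2 I
```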

Note that this means that quantum mechanically, a particle does not have a well defined vector of
rotation. Griffiths has a good picture.

If you know what Lz should be, then any point on the sphere at the height Lz is a valid rotation vector.
Note that we don't allow points inside the sphere since if you know Lz you also know L² (they commute).

Eigenvalues of angular momentum


We are going to solve this the same way we did the harmonic oscillator: raising and lowering operators.
Let
L̂± ≡ L̂x ± iL̂y (204)
These operators have the special property that for any f s.t.

L̂2 f = λf L̂z f = µf

then
L̂²(L̂± f) = .... = λ(L̂± f),    L̂z(L̂± f) = .... = (µ ± ℏ)(L̂± f) (205)

Note that such an f exists as [L̂², L̂z] = 0. Hence L̂± increases/decreases the angular momentum in the
z-direction but not the total angular momentum. Hence L+ is called the raising operator and L−
is called the lowering operator. There was no particular reason to choose z; L̂² commutes with
the angular momentum in all other directions too.



Much like the harmonic oscillator, the main idea is that if we know what one eigenstate is, we can
get all others just by applying these operators. For the case of the harmonic oscillator we concluded
that there must exist some lowest energy state (having energy levels decreasing to −∞ doesn't make
sense). Because L̂± does not change the eigenvalue of L̂², we must conclude that there exist
an upper and a lower eigenstate, otherwise it would be possible for the angular momentum in the
z-direction to be greater than the total angular momentum.
We can index the eigenstates of Lz accessible by applying the raising and lowering operators to
f as f^n with

L̂+ f^n ∝ f^{n+1},    L̂− f^n ∝ f^{n−1}

We know that there must exist some upper state f^l and a lower state f^k s.t.

L+ f^l = A f^{l+1},    L− f^k = B f^{k−1} (206)

with f^{l+1} and f^{k−1} not being physical (i.e. not normalizable). Griffiths leaves it as an exercise to
show that

L+ f^m = ℏ√((l − m)(l + m + 1)) f^{m+1},    L− f^m = ℏ√((l + m)(l − m + 1)) f^{m−1} (207)

Using this, it follows directly that these states aren't normalizable for m = l and m = −l. In light
of this I'm now going to use Griffiths' notation of f_l^m to refer to the eigenstates of Lz accessible by
applying the raising and lowering operators to f.
Now we actually have to solve for one of the eigenvalues. Let the eigenvalue of L̂z corresponding
to f_l^l be ℏq and the eigenvalue corresponding to f_l^{−l} be ℏq̄ (with the goal of finding q and q̄). Because
you can move between these states using the raising/lowering operators, both these states have the
same eigenvalue λ corresponding to L̂². Griffiths then shows that

L̂² = L̂± L̂∓ + L̂z² ∓ ℏL̂z (208)

from which he gets 2 expressions for λ by applying this to f_l^l and f_l^{−l}

ℏ² q(q + 1) = λ = ℏ² q̄(q̄ − 1) (209)

So either q̄ = q + 1 (which doesn't make sense as this would mean the lowest angular momentum is
higher than the highest angular momentum) or

q̄ = −q (210)

Hence the eigenvalues of L̂z move between −ℏq and ℏq in steps of size ℏ. Writing l for q, this
means that the eigenvalues of L̂z are mℏ for m ∈ {−l, −l + 1, . . . , l}.
This also means that l = −l + N for some integer N, which means that l = N/2. If it is possible to move between
the lowest rung and the upper rung in an odd number of steps then l must be a half integer, whereas
if it takes an even number of steps l must be an integer. Hence,

L̂² f_l^m = ℏ² l(l + 1) f_l^m,    L̂z f_l^m = ℏm f_l^m,    l = 0, 1/2, 1, 3/2, . . . ,    m = −l, −l + 1, . . . , l − 1, l (211)

For each given value of l (i.e. for each total angular momentum) there are 2l + 1 different values of m
(i.e. values for the z angular momentum). Note that from the eigenvalues, l corresponds to a measurement
of the total angular momentum while m corresponds to a measurement of the z angular momentum.

By symmetry arguments it turns out that only integer values of l are actually physical [34]. The
half-integer values make an appearance later with spin. So the concluding equations are

L̂² f_l^m = ℏ² l(l + 1) f_l^m,    L̂z f_l^m = ℏm f_l^m,    l = 0, 1, . . . ,    m = −l, −l + 1, . . . , l (212)



This conclusion is shown in the example below for a spherically symmetric potential.
Note that it has not been shown that these are all the possible eigenvalues. These were just
the ones accessible by applying the raising and lowering operators to a single eigenfunction. Again
this is beyond this course. Some explanation was given for the case of the harmonic oscillator.
It seems it follows from the operators being diagonalizable (working assumption) and that each singleton
{L̂i} forms a Cartan subalgebra [35]. Then the action of the L̂'s (apparently) forms a weight
module, which are irreducible in finite dimensions (which we are in — the eigenvalues were for a fixed
l) assuming the existence of a highest weight (which we have) [36]. From the irreducibility the result
follows immediately.

Eigenfunctions of angular momentum


We can write L̂x, L̂y, L̂z compactly as

L̂ = −iℏ(r × ∇) (213)

Writing ∇ in spherical coordinates as

∇ = e_r ∂/∂r + e_θ (1/r) ∂/∂θ + e_ϕ (1/(r sin(θ))) ∂/∂ϕ

gives

L̂ = −iℏ ( e_ϕ ∂/∂θ − e_θ (1/sin(θ)) ∂/∂ϕ ) (214)

where e_r, e_θ, e_ϕ are unit vectors (not using hat notation to avoid confusion with operators). Decomposing
these vectors into cartesian coordinates the standard way gives
 
L̂x = −iℏ ( −sin(ϕ) ∂/∂θ − cos(ϕ) cot(θ) ∂/∂ϕ ) (215)

L̂y = −iℏ ( cos(ϕ) ∂/∂θ − sin(ϕ) cot(θ) ∂/∂ϕ ) (216)

L̂z = −iℏ ∂/∂ϕ (217)

This then gives

L̂² = −ℏ² [ (1/sin(θ)) ∂/∂θ ( sin(θ) ∂/∂θ ) + (1/sin²(θ)) ∂²/∂ϕ² ] (218)
Hence the eigenvalue equations in equation (211) can be written as

∂2
   
2 1 ∂ ∂ 1 ∂
−ℏ sin(θ) + 2 2
flm = ℏ2 l(l + 1)flm − iℏ flm = ℏmflm (219)
sin(θ) ∂θ ∂θ sin (θ) ∂ϕ ∂ϕ

However this set of equations was seen when solving the Hydrogen atom (equations omitted in these
notes). The solution is
flm = Ylm (θ, ϕ) (220)



Conservation of angular momentum
The lectures show that for a spherically symmetric potential,

[Ĥ, L̂] = 0 = [Ĥ, L̂²]

The Heisenberg equations of motion then give that

dL̂x(t)/dt = dL̂y(t)/dt = dL̂z(t)/dt = dL̂²(t)/dt = 0

Taking L̂z as an example,

⟨L̂z⟩(t) = ⟨Ψ(t)| L̂z |Ψ(t)⟩ = ⟨Ψ| L̂z(t) |Ψ⟩ = ⟨Ψ| L̂z(0) |Ψ⟩ = ⟨Ψ(0)| L̂z |Ψ(0)⟩ = ⟨L̂z⟩(0)

I.e. in a spherically symmetric potential the average values of the angular momenta don't change
in time.

Total angular momentum


Above I have only ever talked about L̂² but in lectures they like to talk about L̂. For simplicity, take
L̂² as a matrix. Then L̂ is basically the square root of the matrix. This is defined using the spectrum
of L̂². Because L̂² is diagonalizable as

L̂² = SDS^{−1}

where D is a diagonal matrix with the eigenvalues along the diagonal, L̂ is then defined as

L̂ = S√D S^{−1}

where the square root of a diagonal matrix is the square root of each element on the diagonal. What
matters most about this is that L̂ shares the same eigenvectors as L̂² (the matrix S was unchanged)
with eigenvalues the square roots of those of L̂². Physically this makes sense. If you measure L̂² and
get λ then if you had just measured L̂ you would have gotten √λ.
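A minimal sketch of this spectral square root (the Hermitian matrix here is a made-up stand-in with positive eigenvalues; for L̂² at fixed l the matrix is just ℏ²l(l+1) times the identity, which would make the example trivial).

```python
import numpy as np

# stand-in Hermitian "operator" with positive spectrum (made up)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# diagonalize: A = S D S^{-1}, then take the square root of the eigenvalues
evals, S = np.linalg.eigh(A)
sqrtA = S @ np.diag(np.sqrt(evals)) @ S.conj().T

# sqrtA squares back to A and shares A's eigenvectors
assert np.allclose(sqrtA @ sqrtA, A)
assert np.allclose(np.linalg.eigvalsh(sqrtA), np.sqrt(evals))
```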



Week 11
Spin
Spin is regarded as a non-orbital intrinsic angular momentum of a particle with no classical counterpart.
Ref [37] claims that spin can be explained as an angular momentum generated by a circulating flow
of energy in the wave field of an electron. It is analogous to the angular momentum carried by the
fields of circularly polarized electromagnetic waves. It is often claimed that spin comes naturally
from the Dirac equation (the Schrodinger equation but using relativistic energies).
The theory of spin is practically identical to that of orbital angular momentum.
[Ŝx, Ŝy] = iℏŜz,    [Ŝy, Ŝz] = iℏŜx,    [Ŝz, Ŝx] = iℏŜy (221)

Here we denote the eigenstates of Sz as |s m⟩ instead of f_l^m.

S² |s m⟩ = ℏ² s(s + 1) |s m⟩,    Sz |s m⟩ = ℏm |s m⟩ (222)

S± |s m⟩ = ℏ√(s(s + 1) − m(m ± 1)) |s (m ± 1)⟩ (223)
Every elementary particle has a specific and immutable value of s called its spin. Then m takes
values

m = −s, −s + 1, . . . , s − 1, s (224)

Here we allow s to take integer and half integer values.

Spin 1/2
Spin 1/2 particles have just 2 spin states

|1/2 1/2⟩,    |1/2 −1/2⟩ (225)
called spin up and spin down. These 2 states form a basis for all possible spin states. In matrix
form, any spin state can be expressed as

χ = [a; b] = aχ+ + bχ− ≡ a[1; 0] + b[0; 1] (226)

for

|a|² + |b|² = 1 (227)
in the eigenbasis of spin. You can similarly write the matrix forms of S² and Sz in the spin eigenbasis

Ŝ² = (3ℏ²/4) [1 0; 0 1],    Ŝz = (ℏ/2) [1 0; 0 −1] (228)

and similarly for S±

Ŝ+ = ℏ [0 1; 0 0],    Ŝ− = ℏ [0 0; 1 0] (229)

Then using that

Ŝx = (1/2)(S+ + S−),    Ŝy = (1/2i)(S+ − S−) (230)
gives

Ŝx = (ℏ/2) [0 1; 1 0],    Ŝy = (ℏ/2) [0 −i; i 0] (231)

Because Ŝx, Ŝy, Ŝz all carry a factor of ℏ/2 we introduce the Pauli spin matrices

σ̂x ≡ [0 1; 1 0],    σ̂y ≡ [0 −i; i 0],    σ̂z ≡ [1 0; 0 −1] (232)
So that Ŝ_{x,y,z} = σ̂_{x,y,z} ℏ/2.
The eigenvalues of Ŝy and Ŝx are

±ℏ/2 (233)

the same as Ŝz. Griffiths gives that in the spin-z basis, the eigenvectors for Ŝx are

χ+^(x) = [1/√2; 1/√2],    χ−^(x) = [1/√2; −1/√2] (234)
Ŝx is Hermitian so these vectors span the space. Hence any spin state can be written as

χ = ((a + b)/√2) χ+^(x) + ((a − b)/√2) χ−^(x) (235)

Electron in a magnetic field


The particle's spin produces a magnetic dipole moment

µ̂ = γŜ (236)

where γ is the gyromagnetic ratio (approximately −e/m for the electron). Much like a dipole in a magnetic
field, this moment experiences a torque. In this case, the energy associated with this torque is

Ĥ = −γ B · Ŝ (237)

where B is the magnetic field the particle is placed in and Ŝ is the appropriate spin matrix (e.g. Ŝz or
Ŝx depending on your basis). This interaction forms part of the Zeeman effect where the electrons
in the atom gain or lose energy due to spin and orbital magnetic moments [38]. Note that if the
particle were moving we would have to add the effect of the Lorentz force. This is not so trivial as
the Lorentz force is not derivable from a potential.

Addition of Angular Momentum


Suppose that there are now 2 particles with spins s1 and s2. Let the combined state be denoted

|s1 m1⟩ ⊗ |s2 m2⟩  (238)

This ⊗ has a very technical definition that doesn't matter here (it is usually the Kronecker product). Here we just treat it as a way of combining 2 states into a bigger one. We can define operators corresponding to measurements of the spin of a single particle
(Ŝ⁽¹⁾)² |s1 m1⟩ ⊗ |s2 m2⟩ = s1(s1 + 1)ℏ² |s1 m1⟩ ⊗ |s2 m2⟩  (239)
(Ŝ⁽²⁾)² |s1 m1⟩ ⊗ |s2 m2⟩ = s2(s2 + 1)ℏ² |s1 m1⟩ ⊗ |s2 m2⟩  (240)
Ŝz⁽¹⁾ |s1 m1⟩ ⊗ |s2 m2⟩ = m1ℏ |s1 m1⟩ ⊗ |s2 m2⟩  (241)
Ŝz⁽²⁾ |s1 m1⟩ ⊗ |s2 m2⟩ = m2ℏ |s1 m1⟩ ⊗ |s2 m2⟩  (242)



The total spin (in a particular direction) is denoted

Ŝz = Ŝz⁽¹⁾ + Ŝz⁽²⁾  (244)

In this case

Sz |s1 m1 ⟩ ⊗ |s2 m2 ⟩ = ℏ(m1 + m2 ) |s1 m1 ⟩ ⊗ |s2 m2 ⟩ ≡ ℏm |s1 m1 ⟩ ⊗ |s2 m2 ⟩ (245)

and hence m1 and m2 must be such that m1 + m2 takes on values

s1 + s2, s1 + s2 − 1, . . . , −(s1 + s2)

Thus a measurement on the z-spin of both particles returns the sum of the spins of the individual
particles. Note the same is not true of the total spin.
Given s1 and s2, the possible values of the total spin quantum number s (with Ŝ² eigenvalue s(s + 1)ℏ²) for Ŝ² = |Ŝ⁽¹⁾ + Ŝ⁽²⁾|² are [39]

s1 + s2, s1 + s2 − 1, · · · , |s1 − s2|  (246)

Given one of these values and a value of Sz there is 1 (and only 1) associated vector (not necessarily a basis state) [39].
Then instead of using the basis consisting of the eigenvectors of {(Ŝ⁽¹⁾)², (Ŝ⁽²⁾)², Ŝz⁽¹⁾, Ŝz⁽²⁾} we now move into the basis consisting of the eigenvectors of {(Ŝ⁽¹⁾)², (Ŝ⁽²⁾)², Ŝ², Ŝz}. I.e. rather than considering the spin in the z-direction of each particle, we instead consider the total spin and the total spin in the z-direction. Denote elements of this basis as |s m⟩. Note that we are only working with particles of a constant spin so s, m refer to the quantum numbers of Ŝ² and Ŝz. Technically we should notate |s m⟩ ≡ |s1 s2 s m⟩. We can calculate expressions for this new basis in terms of the old basis as
|s m⟩ = Σ_{m1=−s1}^{s1} Σ_{m2=−s2}^{s2} (|s1 m1⟩ ⊗ |s2 m2⟩)(⟨s1 m1| ⊗ ⟨s2 m2|) |s m⟩  (247)

The coefficients (⟨s1 m1| ⊗ ⟨s2 m2|) |s m⟩ are called the Clebsch-Gordan coefficients and are denoted C^{s1 s2 s}_{m1 m2 m} in Griffiths. You can also go the other way and calculate values in the old basis in terms of the new basis

|s1 m1⟩ ⊗ |s2 m2⟩ = Σ_{s=|s1−s2|}^{s1+s2} Σ_{m=−s}^{s} |s m⟩ ⟨s m| (|s1 m1⟩ ⊗ |s2 m2⟩) = Σ_{s=|s1−s2|}^{s1+s2} Σ_{m=−s}^{s} C^{s1 s2 s}_{m1 m2 m} |s m⟩  (248)

The values of the Clebsch-Gordan coefficients are usually stated in a weird table.



Griffiths goes through how to read this.

Addition of spin 1/2 particles


Given 2 spin 1/2 particles, there are 4 possible states

|1/2 1/2⟩ ⊗ |1/2 1/2⟩,  |1/2 1/2⟩ ⊗ |1/2 −1/2⟩,  |1/2 −1/2⟩ ⊗ |1/2 1/2⟩,  |1/2 −1/2⟩ ⊗ |1/2 −1/2⟩  (249)

It can be shown that the eigenstates of Ŝ² with eigenvalue 2ℏ² (and hence corresponding to total spin 1) are

|1 1⟩ = |1/2 1/2⟩ ⊗ |1/2 1/2⟩  (250)
|1 0⟩ = (1/√2)(|1/2 1/2⟩ ⊗ |1/2 −1/2⟩ + |1/2 −1/2⟩ ⊗ |1/2 1/2⟩)  (251)
|1 −1⟩ = |1/2 −1/2⟩ ⊗ |1/2 −1/2⟩  (252)

Similarly it can be shown that the eigenstate of Ŝ² with eigenvalue 0 (and hence corresponding to total spin 0) is

|0 0⟩ = (1/√2)(|1/2 1/2⟩ ⊗ |1/2 −1/2⟩ − |1/2 −1/2⟩ ⊗ |1/2 1/2⟩)  (253)

There is no need to explicitly check the z-spin, as by the above the total z-spin is just the sum of the individual z-spins.
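These eigenstate claims are easy to check numerically by building Ŝ² = |Ŝ⁽¹⁾ + Ŝ⁽²⁾|² as a 4 × 4 matrix with Kronecker products. A sketch assuming NumPy, in units where ℏ = 1:

```python
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]])
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def total(op):
    # two-particle operator: op acting on particle 1 plus op acting on particle 2
    return np.kron(op, I2) + np.kron(I2, op)

S2 = sum(total(op) @ total(op) for op in (Sx, Sy, Sz))  # Ŝ² = |Ŝ⁽¹⁾ + Ŝ⁽²⁾|²

up, down = np.array([1, 0]), np.array([0, 1])
triplet0 = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)  # |1 0⟩, equation (251)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)   # |0 0⟩, equation (253)

assert np.allclose(S2 @ triplet0, 2 * hbar**2 * triplet0)  # s(s+1)ℏ² with s = 1
assert np.allclose(S2 @ singlet, np.zeros(4))              # s = 0
```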



Week 12
For 2 particle systems, the wave function is
Ψ(r 1 , r 2 , t) (254)
The Schrodinger equation is still the same
iℏ ∂Ψ/∂t = ĤΨ  (255)
but the Hamiltonian is
Ĥ = −(ℏ²/2m1)∇1² − (ℏ²/2m2)∇2² + V(r1, r2, t)  (256)
where the subscripts on ∇ indicate differentiation with respect to one of the particles' coordinates. Normalization is as you'd expect

∫ |Ψ(r1, r2, t)|² d³r2 d³r1 = 1  (257)

For time independent potentials we obtain a complete set of solutions from

Ψn,m(r1, r2, t) = ψn,m(r1, r2) e^{−iEn,m t/ℏ}  (258)
where ψn,m satisfies the same time-independent Schrodinger equation
Ĥψ = Eψ (259)
These solutions can then be used to generate solutions to the Schrodinger equation under any initial conditions satisfying

Ψ = Σ_{n,m} cn,m Ψn,m  (260)

Non-interacting particles
This means that each particle is only subject to some external force. In this case the potential is just
V (r 1 , r 2 ) = V1 (r 1 ) + V2 (r 2 ) (261)
Note that the total potential will always be the sum of the potentials of the 2 particles, it’s just in
this case that the potential of one of the particles does not depend on the position of the other. In
this case the time-independent Schrodinger equation can be solved using separation of variables
ψ(r 1 , r 2 ) = ψa (r 1 )ψb (r 2 ) (262)
in which case the time-independent Schrodinger equation becomes 2 equations
−(ℏ²/2m1)∇1² ψa(r1) + V1(r1)ψa(r1) = Ea ψa(r1)  (263)
−(ℏ²/2m2)∇2² ψb(r2) + V2(r2)ψb(r2) = Eb ψb(r2)  (264)
where the total energy is just Ea + Eb. Hence

Ψ(r1, r2, t) = ψa(r1)ψb(r2)e^{−i(Ea+Eb)t/ℏ} = Ψa(r1, t)Ψb(r2, t)  (265)
It is also possible to have entangled states which are sums of these types of solutions. E.g.

Ψ(r1, r2, t) = (3/5)Ψa(r1, t)Ψb(r2, t) + (4/5)Ψc(r1, t)Ψd(r2, t)  (266)
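For example, for two non-interacting particles in a 1D infinite square well (an illustrative choice of V1 = V2, not from the notes), the allowed total energies are just sums of single-particle levels. A sketch assuming NumPy, in illustrative units:

```python
import numpy as np

hbar = mass = width = 1.0  # illustrative units

def E(n):
    # single-particle infinite-well levels: E_n = n² π² ℏ² / (2 m L²)
    return n**2 * np.pi**2 * hbar**2 / (2 * mass * width**2)

# separable solutions ψa(r1)ψb(r2) have total energy Ea + Eb
totals = sorted(E(na) + E(nb) for na in range(1, 4) for nb in range(1, 4))
print(totals[0] == 2 * E(1))  # ground state: both particles in n = 1, prints True
```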
Bosons and Fermions
Suppose we have 2 non-interacting particles. Then

ψ(r 1 , r 2 ) = ψa (r 1 )ψb (r 2 ) (267)

If we have identical particles, the probability density should stay exactly the same if the 2 particles swapped places. I.e. if r1 ↦ r2 and r2 ↦ r1 then |ψ|² should stay the same. The above equation does not have this property, hence it is for distinguishable particles. To fix this problem we simply construct a wavefunction that is non-committal as to which particle is in which state.
ψ±(r1, r2) = (1/√2)[ψa(r1)ψb(r2) ± ψb(r1)ψa(r2)]  (268)
These functions are s.t.

ψ− (r 1 , r 2 ) = −ψ− (r 2 , r 1 ) ψ+ (r 1 , r 2 ) = ψ+ (r 2 , r 1 ) (269)

and hence the probability density generated from these wavefunctions remains the same after exchange of particles. ψ+ corresponds to bosons (which have integer spin) and ψ− corresponds to fermions (which have half integer spin). This connection to spin is given by the spin statistics theorem [40].
If 2 identical fermions are in the same state (i.e. a = b), then the wavefunction is

ψ−(r1, r2) = (1/√2)[ψa(r1)ψa(r2) − ψa(r1)ψa(r2)] = 0  (270)

Hence you cannot have 2 identical fermions in the same state (the wavefunction cannot be normalized). This is the Pauli exclusion principle.
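The symmetrization construction (268) and the resulting exclusion can be demonstrated numerically. The sketch below uses infinite-square-well states on [0, 1] purely as an illustrative choice of ψa, ψb (not something fixed by the notes) and assumes NumPy:

```python
import numpy as np

# Infinite-square-well states on [0, 1] as an illustrative choice of ψa, ψb
def psi(n, x):
    return np.sqrt(2) * np.sin(n * np.pi * x)

grid = np.linspace(0, 1, 201)
x1, x2 = np.meshgrid(grid, grid, indexing="ij")

def sym(a, b, sign):
    # equation (268): ψ± = [ψa(x1)ψb(x2) ± ψb(x1)ψa(x2)] / √2
    return (psi(a, x1) * psi(b, x2) + sign * psi(b, x1) * psi(a, x2)) / np.sqrt(2)

psi_minus = sym(1, 2, -1)
assert np.allclose(psi_minus, -psi_minus.T)         # fermions: ψ−(x1, x2) = −ψ−(x2, x1)
assert np.allclose(sym(1, 2, +1), sym(1, 2, +1).T)  # bosons: symmetric under exchange
assert np.allclose(sym(1, 1, -1), 0)                # a = b: ψ− vanishes (Pauli exclusion)
```

Transposing the grid array swaps x1 and x2, which is what makes the exchange check a one-liner.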

Exchange forces
For distinguishable particles (in 1D)

(x1 − x2 )2 = x2 a
+ x2 b
− 2 ⟨x⟩a ⟨x⟩b (271)

whereas for indistinguishable particles (+ refers to bosons and − refers to fermions)

(x1 − x2 )2 ±
= x2 a
+ x2 b
− 2 ⟨x⟩a ⟨x⟩b ∓ 2| ⟨x⟩ab |2 (272)

where ˆ
⟨x⟩ab ≡ xψa (x)∗ ψb (x) dx (273)

When ⟨x⟩ab ̸= 0 there is like there is a force of attraction between identical bosons and force of
repulsion between identical fermions. This is called an exchange force (although it’s not an actual
force, just a consequence of symmetrization requirements)
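Equations (271)-(273) can be evaluated for a concrete pair of states to see this in action. The square-well states below are an illustrative choice (not from the notes), the integrals are done with a simple trapezoid rule, and NumPy is assumed:

```python
import numpy as np

# Box eigenstates on [0, 1] as illustrative ψa, ψb
x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]
psi_a = np.sqrt(2) * np.sin(1 * np.pi * x)
psi_b = np.sqrt(2) * np.sin(2 * np.pi * x)

def integral(f):
    # trapezoid rule on the grid above
    return float(np.sum((f[1:] + f[:-1]) / 2) * dx)

xa, xb = integral(x * psi_a**2), integral(x * psi_b**2)
x2a, x2b = integral(x**2 * psi_a**2), integral(x**2 * psi_b**2)
xab = integral(x * psi_a * psi_b)  # equation (273)

d2_dist = x2a + x2b - 2 * xa * xb        # equation (271)
d2_boson = d2_dist - 2 * abs(xab) ** 2   # equation (272), upper sign
d2_fermi = d2_dist + 2 * abs(xab) ** 2   # equation (272), lower sign

# identical bosons huddle together, identical fermions spread apart
assert d2_boson < d2_dist < d2_fermi
```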

Spin in wavefunction
The state of a particle whose position is not coupled (similar to entangled) to its spin can be written as

|Ψ⟩ ⊗ |S⟩  (274)



Hence the wavefunction (in the position and spin bases) for this state can be written as

Ψ(r1, r2, s1, s2) = ψ(r1, r2)χ(s1, s2)  (275)

For 2 identical fermions this total wavefunction has to be antisymmetric, i.e.

Ψ(r1, r2, s1, s2) = −Ψ(r2, r1, s2, s1)  (276)
This means that either ψ is symmetric and χ is antisymmetric, or ψ is antisymmetric and χ is
symmetric.

Generalized Symmetrization Principle


We are now going to show that the symmetry/antisymmetry of the wavefunction must not change
in time. Let P̂ be the exchange operator
P̂ |1 2⟩ = |2 1⟩ (277)
I.e. P̂ switches particles. It can be shown that the eigenvalues of P̂ are ±1. Hence if
|1 2⟩ = ± |2 1⟩ (278)
then |1 2⟩ is an eigenstate of P̂ .
If 2 particles are identical, the Hamiltonian must treat the particles as the same. It then follows that

[P̂, Ĥ] = 0  (279)
and so

d⟨P̂⟩/dt = 0  (280)
Hence if at some time the system is an eigenstate of P̂ , then it will stay that way forever.
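For two spin-1/2 particles the exchange operator is just the 4 × 4 SWAP matrix in the product basis, so the claims about P̂ can be checked directly. A sketch assuming NumPy; the Heisenberg-type coupling is an illustrative choice of a Hamiltonian that treats the particles identically, not one from the notes:

```python
import numpy as np

# Exchange operator for two spin-1/2 particles: the SWAP matrix on C² ⊗ C²,
# mapping |m1⟩ ⊗ |m2⟩ to |m2⟩ ⊗ |m1⟩
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

assert np.allclose(P @ P, np.eye(4))  # P² = 1, so eigenvalues satisfy λ² = 1
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [-1, 1, 1, 1])

# An illustrative symmetric Hamiltonian: a Heisenberg coupling S1·S2 (units ℏ = 1)
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
H = sum(np.kron(S, S) for S in (Sx, Sy, Sz))
assert np.allclose(P @ H, H @ P)  # [P, H] = 0, so ⟨P⟩ is conserved
```

The eigenvalue check also recovers the earlier result: the +1 eigenspace is three dimensional (the triplet) and the −1 eigenspace is one dimensional (the singlet).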

Atoms
For a neutral atom with atomic number Z (hence Z protons and Z electrons) the Hamiltonian is (assuming a stationary nucleus)

Ĥ = Σ_{j=1}^{Z} [ −(ℏ²/2m)∇j² − (1/4πϵ0)(Ze²/||rj||) ] + (1/2)(1/4πϵ0) Σ_{j≠k} e²/||rj − rk||  (281)
where the first term is the kinetic energy plus the potential energy for each electron in the electric
field of the nucleus, and the second sum is the potential associated with the repulsion of the electrons.
Unfortunately the time-independent Schrodinger equation is too complicated to solve except for the
case of Z = 1 (hydrogen).
Note that this Hamiltonian for the electrons is not accurate. We are not taking into account the
position of the protons in the nucleus and instead considering them to all be at the origin. As such
we are also ignoring the motion of the protons caused by interactions with the electrons as well as
interactions with other protons [41].
This doesn’t particularly matter as the electron is significantly lighter than the protons and quickly
rearranges in response to the slower motion of the nuclei [30]. This is called the Born-Oppenheimer
approximation. When considering the motion of the electrons separately the wavefunction is

ψ(r 1 , r 2 , . . . , r n , R1 , R2 , . . . , RN ) = ψe (r 1 , r 2 , . . . , r n )ψp (R1 , R2 , . . . , RN ) (282)

where ri denotes the positions of the electrons and Ri denotes the positions of the protons. This was exact in the case of hydrogen.



Helium
For Z = 2 the Hamiltonian is

Ĥ = [−(ℏ²/2m)∇1² − (1/4πϵ0)(2e²/||r1||)] + [−(ℏ²/2m)∇2² − (1/4πϵ0)(2e²/||r2||)] + (1/4πϵ0)(e²/||r1 − r2||)  (283)

If we don’t impose some of the assumptions presented above the Hamiltonian is [30]
2 2  2 X n
−ℏ2 X 2 X ZA e2 ZB e2 e2 ZA ZB e2
 X
Ĥ = ∇i − + + + (284)
2me i=1 i=1
4πϵ0 riA 4πϵ0 riB i=1 j>i
4πϵ0 rij 4πϵ0 RAB

where A refers to one proton and B refers to the other. The first term is the kinetic energy of the electrons, the second is the Coulomb attraction between the electrons and the nuclei, the third term is the electron-electron repulsion, and the last term is the nuclear-nuclear repulsion. This last term doesn't contribute anything to the dynamics we're interested in (electrons ignoring the motion of the protons) but it does contribute

ZA ZB e²/(4πϵ0 RAB)  (285)

to the overall energy.
If we just ignore the last term, the potential becomes separable and we get

ψ(r 1 , r 2 ) = ψnml (r 1 )ψn′ l′ m′ (r 2 ) (286)

where ψnml is just the solution to the hydrogen potential but with e2 7→ 2e2 (due to the extra energy
from the extra proton). Hence using that for separable potentials, the energy of the total system is
just the sum of the constituents (equation (261)) and the expression for energy of the hydrogen atom
(equation (187)) with e² ↦ 2e² we get

E = 4E1 (1/n² + 1/m²)   where   E1 = −(m/2ℏ²)(e²/4πϵ0)²  (287)

Because the ground state ψ0 (Griffiths states it exactly) is symmetric, the spins of the electrons
in the ground state must be antisymmetric (as the electrons are fermions). Because electrons are
spin 1/2 this must mean that the spin state is the ‘singlet’ (equation (253)) with the spins oppositely
aligned.
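Plugging numbers into equation (287) above (with E1 ≈ −13.6 eV, the hydrogen ground state) shows how rough the no-repulsion approximation is:

```python
# Evaluating equation (287) with E1 ≈ -13.6 eV (the hydrogen ground state)
E1 = -13.6  # eV

def helium_energy(n, m):
    # e² → 2e² rescales the hydrogen energies by a factor of 4
    return 4 * E1 * (1 / n**2 + 1 / m**2)

E_ground = helium_energy(1, 1)
print(E_ground)  # prints -108.8 (eV); the measured value is about -79 eV,
# so the neglected electron-electron repulsion is far from a small correction
```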
If one electron is in the excited state then (without symmetrizing)

ψ(r 1 , r 2 ) = ψnml (r 1 )ψ100 (r 2 ) (288)

as if you put 2 electrons in excited states, one immediately falls down to the ground state and gives enough energy to knock the other electron into a continuum state.
It is possible to have both symmetric and anti-symmetric position wavefunctions [42],

ψ = (1/√2)[ψnml(r1)ψn′l′m′(r2) ± ψnml(r2)ψn′l′m′(r1)]  (289)

This means that we can have antisymmetric spin states (called parahelium) or symmetric spin states (called orthohelium). The energy of orthohelium should be lower than that of parahelium since, by the above, symmetric position wavefunctions bring the particles slightly closer together (increasing the electron-electron repulsion).



Periodic table
As a first approximation, ignoring the mutual repulsion of the electrons, the individual electrons occupy the same states as in hydrogen.
For each n, there are n² possible states without spin, each with the same energy (but different l, m). For each n there are hence 2n² possible states when including spin. We call these collections of states shells. For each (n, l) there are (2l + 1) possible states (corresponding to different z-angular momentum) which constitute a subshell. Finally each (n, l, m) is called an orbital.
Qualitatively, the horizontal rows on the periodic table correspond to filling out each shell (the electron-electron repulsion throws the counting off; it can give enough energy to push some electrons into a higher shell). These electrons fill from the lowest energy up (the electron-electron interactions cause states with the same n to have varying energies), which corresponds to filling from the lowest l upwards.
Each of these l values has a (nonsensical) name: l = 0 is called s (for sharp), l = 1 is called p (for principal), l = 2 is called d (for diffuse), l = 3 is called f (for fundamental), and then it continues alphabetically upwards (skipping j). Typically orbitals are written using these letters. For example

(1s)²(2s)²(2p)²  (290)

corresponds to 2 electrons in (1, 0, 0), 2 electrons in (2, 0, 0), and 2 electrons in some combination of (2, 1, 1), (2, 1, 0), (2, 1, −1).
In chemistry, the state of an atom (in its ground state) tends to be given by a term symbol (or a Russell-Saunders term symbol [43])

^{2S+1}L_J  (291)
where S is the total spin, J is grand total angular momentum, and L is the letter (capitalized version
of what’s above for various l) corresponding to the total orbital angular momentum. However, this is
not consistent with the atomic structure of the atom as we have currently stated — we do not know
the orbitals of the atoms for a given filling of the subshells so we cannot have definite momentum or
spin.
For example, in the case above there are 2 electrons with angular momentum quantum number
1 (l = 1) while the rest have angular momentum 0. Hence the possible values for the total angular
momentum are 2, 1, 0. The states (1s)2 and (2s)2 must have the spins in the singlet as both electrons
are in the same spatial state and hence the wavefunction is intrinsically symmetric [44, 45]. For the
(2p)2 the electrons can either be in a singlet state or a triplet state as the electrons do not have to
be in the same spatial state. Hence the total spin could either be 1 or 0 and hence the grand total
angular momentum (orbital plus spin) could either be 3, 2, 1 or 0.
There is a ritual known as Hund's rules for figuring out which states the electrons would be in when the atom is in its ground state. This is not so trivial because we found above that the energy of the electrons doesn't depend on l or s. Hence if an electron has angular momentum quantum number l and spin quantum number s, there are (2l + 1)(2s + 1) possible states with the same energy [46].
But this calculation has some strong assumptions. It neglects, for example, the interactions between the electrons, which obviously change the energies (for example, intuitively states where the electrons are 'further' apart should have lower energies). It also doesn't take into account the spin-orbit interaction, which is a relativistic effect [46].
Hund's rules are [47]

• The term with the maximum spin multiplicity has the lowest energy (I.e. electrons fill up all
position states before doubling up — the triplet state tends to have lower energy than the
singlet state [48]).



• For a given multiplicity, the term with the largest total orbital angular momentum quantum number L has the lowest energy (not relevant all the way up to titanium with Z = 22) (basically this tells you where the electrons should double up).

• If a subshell (n, l) is no more than half filled (this is orbital angular momentum, not spin, so there can be more than 2 electrons in the subshell) then the lowest energy has J = |L − S|, else if the subshell is more than half filled then J = L + S has the lowest energy.

For the example above, Hund's rules give ³P₀. Some notes about Hund's rules:

• The output of Hund's rules is not always right, as they come from generalizing some experimental observations [47].

• Hund’s rule does not give the specific states (only what the spin and angular momentum of
the states must be). E.g. most of the time it will give that the spins of the electrons must be
aligned, although it tends not to say which direction.

– I don't think that the electrons need to be in a definite state; I think they can be in superpositions in the spin-(spin-z) basis (else how could spin and angular momentum be definite).

• You cannot apply the same rules to generate an ordering of the energies for the term numbers
[46].
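The filling procedure behind Hund's rules can be sketched in a few lines of code. This is my own illustrative implementation for a single open subshell, not something from Griffiths, and it prints half-integer J as a decimal:

```python
def hund_ground_term(l, n_electrons):
    """Hund's-rules ground term for a single open (n, l) subshell.

    A sketch of the usual filling picture: one spin-up electron per
    ml = l, l-1, ... before any pairing (rule 1), which also picks out the
    largest L among those states (rule 2); rule 3 then fixes J from the
    subshell filling. Half-integer J is printed as a decimal (1.5, not 3/2).
    """
    mls = list(range(l, -l - 1, -1))  # ml values, largest first
    ups = min(n_electrons, len(mls))  # unpaired spin-up electrons first
    downs = n_electrons - ups         # remaining electrons pair up
    S = (ups - downs) / 2
    L = abs(sum(mls[:ups]) + sum(mls[:downs]))
    J = abs(L - S) if n_electrons <= len(mls) else L + S
    letters = "SPDFGHIK"              # L = 0, 1, 2, ... (skipping J)
    return f"{int(2 * S + 1)}{letters[int(L)]}{J:g}"

print(hund_ground_term(1, 2))  # (2p)², carbon-like: prints 3P0
print(hund_ground_term(1, 4))  # (2p)⁴, oxygen-like: prints 3P2
```

For the (2p)² example this reproduces the ³P₀ term found above.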



References
[1] R. Eisberg and R. Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (1985).
[2] S. Gao, Collapse of the Wave Function: Models, Ontology, Origin, and Implications (Cambridge University Press, 2018).
[3] T. Maudlin, "Three measurement problems", Topoi 14, 7–15 (1995).
[4] M. A. Schlosshauer, Decoherence and the Quantum-to-Classical Transition (Springer Science & Business Media, 2007).
[5] The Information Philosopher, Collapse of the wave function, (2015) https://www.informationphilosopher.com/solutions/experiments/wave-function_collapse/ (visited on 03/28/2022).
[6] Physics Forums, Uncertainty principle and collapse wavefunction, (2015) https://www.physicsforums.com/threads/uncertainty-principle-and-collapse-wavefunction.842030/ (visited on 03/28/2022).
[7] The Physicist, What is a "measurement" in quantum mechanics?, (2011) https://www.askamathematician.com/2011/06/q-what-is-a-measurement-in-quantum-mechanics/ (visited on 03/28/2022).
[8] G. Bonneau, J. Faraut, and G. Valent, "Self-adjoint extensions of operators and the teaching of quantum mechanics", American Journal of Physics 69, 322–331 (2001).
[9] Wikipedia, Canonical quantization, (2022) https://en.wikipedia.org/wiki/Canonical_quantization (visited on 03/28/2022).
[10] Wikipedia, Mathematical formulation of quantum mechanics, (2022) https://en.wikipedia.org/wiki/Mathematical_formulation_of_quantum_mechanics#Postulates_of_quantum_mechanics (visited on 03/28/2022).
[11] Wikipedia, Born rule, (2022) https://en.wikipedia.org/wiki/Born_rule (visited on 03/28/2022).
[12] R. Shankar, Principles of Quantum Mechanics (Springer Science & Business Media, 2012).
[13] J. Branson, Continuity of wavefunctions and derivatives, (2013) https://quantummechanics.ucsd.edu/ph130a/130_notes/node141.html (visited on 03/28/2022).
[14] Wikipedia, Weak solution, (2022) https://en.wikipedia.org/wiki/Weak_solution (visited on 03/28/2022).
[15] Stack Exchange, Can we have discontinuous wavefunctions in the infinite square well?, (2016) https://physics.stackexchange.com/questions/38181/can-we-have-discontinuous-wavefunctions-in-the-infinite-square-well (visited on 03/28/2022).
[16] Stack Exchange, Can a physical wavefunction be non-smooth or even discontinuous?, (2016) https://physics.stackexchange.com/questions/262671/can-a-physical-wavefunction-be-non-smooth-or-even-discontinuous (visited on 03/28/2022).
[17] Stack Exchange, Weak solution of Schrödinger equation, (2021) https://physics.stackexchange.com/questions/618172/weak-solution-of-schr%C3%B6dinger-equation (visited on 03/28/2022).
[18] R. de la Madrid, "The role of the rigged Hilbert space in quantum mechanics", European Journal of Physics 26, 287 (2005).
[19] Stack Exchange, How to know if a wave function is physically acceptable solution of a Schrödinger equation?, (2020) https://physics.stackexchange.com/questions/149001/how-to-know-if-a-wave-function-is-physically-acceptable-solution-of-a-schr%c3%b6dinge/149006#149006 (visited on 03/28/2022).
[20] Stack Exchange, Derivative of a delta function, (2021) https://math.stackexchange.com/questions/444884/derivative-of-a-delta-function (visited on 03/28/2022).
[21] Stack Exchange, Continuity & smoothness of wave function, (2021) https://physics.stackexchange.com/questions/19667/continuity-smoothness-of-wave-function (visited on 03/28/2022).
[22] Physics Forums, Is the ground state of a harmonic oscillator unique?, (2012) https://www.physicsforums.com/threads/is-the-ground-state-of-a-harmonic-oscillator-unique.573056/ (visited on 08/15/2023).
[23] Stack Exchange, The delta-function potential, (2014) https://physics.stackexchange.com/questions/141042/the-delta-function-potential (visited on 03/28/2022).
[24] D. Ruelle, "A remark on bound states in potential-scattering theory", Il Nuovo Cimento A (1965-1970) 61, 655–662 (1969).
[25] A. Pankov, "Introduction to spectral theory of Schrödinger operators", (2006).
[26] Stack Exchange, Scattering and bound states, (2014) https://physics.stackexchange.com/questions/141235/scattering-and-bound-states (visited on 03/28/2022).
[27] J. Lampart and M. Lewin, "A many-body RAGE theorem", Communications in Mathematical Physics 340, 1171–1186 (2015).
[28] Wikipedia, Transmission coefficient, (2022) https://en.wikipedia.org/wiki/Transmission_coefficient (visited on 03/28/2022).
[29] Wikipedia, Reflection coefficient, (2022) https://en.wikipedia.org/wiki/Reflection_coefficient (visited on 03/28/2022).
[30] T. Engel, Physical Chemistry (Pearson Education India, 2006).
[31] Stack Exchange, Hydrogen atom, what's the wave equation for the atom's nucleus?, (2018) https://physics.stackexchange.com/questions/401853/hydrogen-atom-whats-the-wave-equation-for-the-atoms-nucleus (visited on 03/28/2022).
[32] Wikipedia, Euler's rotation theorem, (2022) https://en.wikipedia.org/wiki/Euler%27s_rotation_theorem.
[33] Wikipedia, Rotation matrix, (2023) https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions.
[34] I. R. Gatland, "Integer versus half-integer angular momentum", American Journal of Physics 74, 191–192 (2006).
[35] Math Stack Exchange, How to find a Cartan subalgebra of so(3), (2022) https://math.stackexchange.com/questions/1851671/how-to-find-a-cartan-subalgebra-of-so3.
[36] Wikipedia, Weight (representation theory), (2022) https://en.wikipedia.org/wiki/Weight_(representation_theory).
[37] H. C. Ohanian, "What is spin?", American Journal of Physics 54, 500–505 (1986).
[38] J. P. Lowe and K. Peterson, Quantum Chemistry (Elsevier, 2011).
[39] C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics (1977).
[40] Wikipedia, Spin–statistics theorem, (2022) https://en.wikipedia.org/wiki/Spin%E2%80%93statistics_theorem (visited on 03/28/2022).
[41] Wikipedia, Born–Oppenheimer approximation, (2022) https://en.wikipedia.org/wiki/Born%E2%80%93Oppenheimer_approximation (visited on 03/28/2022).
[42] Wikipedia, Helium atom, (2022) https://en.wikipedia.org/wiki/Helium_atom (visited on 03/28/2022).
[43] M. P. Mueller, Fundamentals of Quantum Chemistry: Molecular Spectroscopy and Modern Electronic Structure Computations (Springer Science & Business Media, 2007).
[44] Stack Exchange, Why are L=S=0 in full shells? Spectroscopic notation, (2021) https://physics.stackexchange.com/questions/648548/why-are-l-s-0-in-full-shells-spectroscopic-notation (visited on 03/28/2022).
[45] Stack Exchange, Ground state of beryllium (Be), (2018) https://physics.stackexchange.com/questions/442745/ground-state-of-beryllium-rm-be (visited on 03/28/2022).
[46] I. N. Levine, D. H. Busch, and H. Shull, Quantum Chemistry, Vol. 6 (Pearson Prentice Hall, Upper Saddle River, NJ, 2009).
[47] Wikipedia, Hund's rules, (2022) https://en.wikipedia.org/wiki/Hund%27s_rules (visited on 03/28/2022).
[48] L. Piela, Ideas of Quantum Chemistry (Elsevier, 2006).
