Schrödinger–Newton equations
Richard Harrison
St Peter’s College
University of Oxford
A thesis submitted for the degree of
Doctor of Philosophy
Trinity 2001
Acknowledgements
I would like to thank my supervisors Paul Tod and Irene Moroz; Roger Penrose
for the idea behind my thesis; Nick Trefethen for his help in the use of spectral
methods and for his excellent lectures; Ian Sobey for his helpful comments; and
Lionel Mason for getting me started with this project. I would also like to
thank the EPSRC for their financial support. Last but not least I would like
to thank my mother, father and sister for their support.
Abstract
The Schrödinger–Newton (SN) equations were proposed by Penrose [18] as a model for
gravitational collapse of the wavefunction. The potential in the Schrödinger equation is
the gravitational potential due to the density $|\psi|^2$, where $\psi$ is the wavefunction. As in
ordinary quantum mechanics, the probability, momentum and angular momentum are conserved.
We first consider the spherically symmetric case; here the stationary solutions have been
found numerically by Moroz et al [15] and Jones et al [3]. The ground state, which has the
lowest energy, has no zeros; the higher states are such that the (n+1)th state has n zeros.
We consider the linear stability problem for the stationary states, which we solve
numerically using spectral methods. The ground state is linearly stable, since it has only imaginary
eigenvalues. The higher states are linearly unstable: their eigenvalues are imaginary except
for n quadruples of complex eigenvalues for the (n+1)th state, where a quadruple consists
of $\{\lambda, \bar{\lambda}, -\lambda, -\bar{\lambda}\}$. Next we consider the nonlinear evolution, using a method involving an
iteration to calculate the potential at the next time step and Crank–Nicolson to evolve the
Schrödinger equation. To absorb scatter we use a sponge factor, which reduces the reflection
back from the outer boundary, and we show that the numerical evolution converges
for different mesh sizes and time steps. Evolution of the ground state shows it is stable,
and added perturbations oscillate at frequencies determined by the linear perturbation
theory. The higher states are shown to be unstable, emitting scatter and leaving a rescaled
ground state; the rate at which they decay is controlled by the complex eigenvalues of
the linear perturbation. Next we consider adding another dimension in two different ways:
by considering the axisymmetric case and the 2D equations. The stationary solutions are
found, and we modify the evolution method and find that the higher states are unstable. In
the 2D case we consider rigidly rotating solutions and show that they exist and are unstable.
Contents
1 Introduction: the Schrödinger–Newton equations 1
1.1 The equations motivated by quantum state reduction . . . . . . . . . . . . . 1
1.2 Some analytical properties of the SN equations . . . . . . . . . . . . . . . . 4
1.2.1 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Rescaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 Lagrangian form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Conserved quantities . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.5 Lie point symmetries . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.6 Dispersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Analytic results about the time-independent case . . . . . . . . . . . . . . . 8
1.3.1 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Variational formulation . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.3 Negativity of the energy eigenvalue . . . . . . . . . . . . . . . . . . . 9
1.4 Plan of thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Spherically-symmetric stationary solutions 12
2.1 The equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Computing the spherically-symmetric stationary states . . . . . . . . . . . . 13
2.2.1 Runge–Kutta integration . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 Alternative method . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Approximations to the energy of the bound states . . . . . . . . . . . . . . 15
3 Linear stability of the spherically-symmetric solutions 18
3.1 Linearising the SN equations . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Separating the O(ε) equations . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 Restriction on the possible eigenvalues . . . . . . . . . . . . . . . . . . . . . 24
3.5 An inequality on Re(λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4 Numerical solution of the perturbation equations 26
4.1 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.2 The perturbation about the ground state . . . . . . . . . . . . . . . . . . . 28
4.3 Perturbation about the second state . . . . . . . . . . . . . . . . . . . . . . 33
4.4 Perturbation about the higher order states . . . . . . . . . . . . . . . . . . 36
4.5 Bound on real part of the eigenvalues . . . . . . . . . . . . . . . . . . . . . . 39
4.6 Testing the numerical method by using Runge–Kutta integration . . . . . . 39
4.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5 Numerical methods for the evolution 46
5.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2 Numerical methods for the Schrödinger equation with arbitrary time-independent potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3 Conditions on the time and space steps . . . . . . . . . . . . . . . . . . . . 48
5.4 Boundary conditions and sponges . . . . . . . . . . . . . . . . . . . . . . . . 49
5.5 Solution of the Schrödinger equation with zero potential . . . . . . . . . . . 50
5.6 Schrödinger equation with a trial fixed potential . . . . . . . . . . . . . . . 51
5.7 Numerical evolution of the SN equations . . . . . . . . . . . . . . . . . . . 52
5.8 Checks on the evolution of SN equations . . . . . . . . . . . . . . . . . . . 53
5.9 Mesh dependence of the methods . . . . . . . . . . . . . . . . . . . . . . . . 54
5.10 Large time behaviour of solutions . . . . . . . . . . . . . . . . . . . . . . . 54
6 Results from the numerical evolution 56
6.1 Testing the sponges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.2 Evolution of the ground state . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.3 Evolution of the higher states . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6.3.1 Evolution of the second state . . . . . . . . . . . . . . . . . . . . . . 62
6.3.2 Evolution of the third state . . . . . . . . . . . . . . . . . . . . . . . 67
6.4 Evolution of an arbitrary spherically symmetric Gaussian shell . . . . . . . 69
6.4.1 Evolution of the shell while changing v . . . . . . . . . . . . . . . . . 72
6.4.2 Evolution of the shell while changing a . . . . . . . . . . . . . . . . . 74
6.4.3 Evolution of the shell while changing σ . . . . . . . . . . . . . . . . 77
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7 The axisymmetric SN equations 81
7.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.2 The axisymmetric equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.3 Finding axisymmetric stationary solutions . . . . . . . . . . . . . . . . . . . 82
7.4 Axisymmetric stationary solutions . . . . . . . . . . . . . . . . . . . . . . . 83
7.5 Time-dependent solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
8 The Two-Dimensional SN equations 97
8.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.2 The equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.3 Sponge factors on a square grid . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.4 Evolution of dipole-like state . . . . . . . . . . . . . . . . . . . . . . . . . . 99
8.5 Spinning Solution of the twodimensional equations . . . . . . . . . . . . . . 102
8.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
A Fortran programs 108
A.1 Program to calculate the bound states for the SN equations . . . . . . . . 108
B Matlab programs 116
B.1 Chapter 2 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
B.1.1 Program for asymptotically extending the data of the bound state calculated by Runge–Kutta integration . . . . . . . . . . . . . . . . . 116
B.1.2 Programs to calculate the stationary state by the method of Jones et al . . . . 116
B.2 Chapter 4 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
B.2.1 Program for solving the Schrödinger equation . . . . . . . . . . . . . 119
B.2.2 Program for solving the O(ε) perturbation problem . . . . . . . . . . 121
B.2.3 Program for the calculation of (4.10) . . . . . . . . . . . . . . . . . . 122
B.2.4 Program for performing the Runge–Kutta integration on the O(ε) perturbation problem . . . . . . . . . . . . . . . . . . . . . . . . . . 123
B.3 Chapter 6 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
B.3.1 Evolution program . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
B.3.2 Fourier transformation program . . . . . . . . . . . . . . . . . . . . . 126
B.4 Chapter 7 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
B.4.1 Programs for solving the axisymmetric stationary state problem . . 126
B.4.2 Evolution programs for the axisymmetric case . . . . . . . . . . . . . 133
B.5 Chapter 8 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
B.5.1 Evolution programs for the two-dimensional SN equations . . . . . . 138
Bibliography 143
List of Figures
2.1 First four eigenfunctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 The least squares ﬁt of the energy eigenvalues . . . . . . . . . . . . . . . . . 16
2.3 The number of nodes with the Moroz et al [15] prediction of the number of nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Loglog plot of the energy values with log(n) . . . . . . . . . . . . . . . . . 17
4.1 The smallest eigenvalues of the perturbation about the ground state . . . . 29
4.2 All the computed eigenvalues of the perturbation about the ground state . . 29
4.3 The first eigenvector of the perturbation about the ground state; note the scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4 The second eigenvector of the perturbation about the ground state . . . . . 31
4.5 The third eigenvector of the perturbation about the ground state . . . . . . 31
4.6 The change in the sample eigenvalue with increasing values of N (L = 150) 32
4.7 The change in the sample eigenvalue with increasing values of L (N = 60) . 32
4.8 The lowest eigenvalues of the perturbation about the second bound state . . 34
4.9 All the computed eigenvalues of the perturbation about the second bound state 34
4.10 The first eigenvector of the perturbation about the second bound state; note the scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.11 The second eigenvector of the perturbation about the second bound state . 35
4.12 The third eigenvector of the perturbation about the second bound state . . 36
4.13 The eigenvalues of the perturbation about the third bound state . . . . . . 37
4.14 The second eigenvector of the perturbation about the third bound state . . 37
4.15 The real part of the third eigenvector of the perturbation about the third bound state . . . 38
4.16 The imaginary part of the third eigenvector of the perturbation about the third bound state . . . 38
4.17 The eigenvalues of the perturbation about the fourth bound state . . . . . . 40
4.18 The first eigenvector of the perturbation about the ground state, using Runge–Kutta integration; compare with figure 4.3 . . . . . . . . . . . . . . . 41
4.19 The second eigenvector of the perturbation about the ground state, using Runge–Kutta integration; compare with figure 4.4 . . . . . . . . . . . . . . . 42
v
4.20 The first eigenvector of the perturbation about the second bound state, using Runge–Kutta integration; compare with figure 4.10 . . . . . . . . . . . 42
4.21 The second eigenvector of the perturbation about the second bound state, using Runge–Kutta integration; compare with figure 4.11 . . . . . . . . . . . 43
4.22 The second eigenvector of the perturbation about the third bound state, using Runge–Kutta integration; compare with figure 4.14 . . . . . . . . . . . 43
4.23 The real part of the third eigenvector of the perturbation about the third bound state, using Runge–Kutta integration; compare with figure 4.15 . . . 44
4.24 The imaginary part of the third eigenvector of the perturbation about the third bound state, using Runge–Kutta integration; compare with figure 4.16 . . . 44
6.1 Testing the sponge factor with wave moving oﬀ the grid . . . . . . . . . . . 56
6.2 Testing sponge factor with wave moving off the grid . . . . . . . . . . . . . 57
6.3 The graph of the phase angle of the ground state as it evolves . . . . . . . . 58
6.4 The oscillation about the ground state . . . . . . . . . . . . . . . . . . . . . 59
6.5 Evolution of the ground state . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.6 The eigenvalue associated with the linear perturbation about the ground state 60
6.7 The oscillation about the ground state at a given point . . . . . . . . . . . . 61
6.8 The Fourier transform of the oscillation about the ground state . . . . . . . 61
6.9 The graph of the phase angle of the second state as it evolves . . . . . . . . 63
6.10 The long time evolution of the second state using Chebyshev methods . . . 63
6.11 Evolution of second state, tolerance approximately $10^{-9}$ . . . . . . . . . . 65
6.12 f(t) −A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.13 Fourier transformation of f(t) −A . . . . . . . . . . . . . . . . . . . . . . . 66
6.14 Decay of the second state . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.15 The short time evolution of the second bound state . . . . . . . . . . . . . . 67
6.16 Oscillation about second state at a ﬁxed radius . . . . . . . . . . . . . . . . 68
6.17 Fourier transform of the oscillation about the second state . . . . . . . . . . 68
6.18 Evolution of third state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.19 f(t) −A with respect to the third state . . . . . . . . . . . . . . . . . . . . 70
6.20 Fourier transform of growing mode about third state . . . . . . . . . . . . . 71
6.21 Probability remaining in the evolution of the third state . . . . . . . . . . . 71
6.22 Progress of evolution with diﬀerent velocities and times . . . . . . . . . . . 72
6.23 Diﬀerence of the other time steps compared with dt = 1 . . . . . . . . . . . 73
6.24 Richardson fraction done with different $h_i$’s . . . . . . . . . . . . . . . . . . 73
6.25 Diﬀerence in N value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
6.26 Evolution of lump at diﬀerent a . . . . . . . . . . . . . . . . . . . . . . . . . 75
6.27 Comparing diﬀerences with diﬀerent time steps . . . . . . . . . . . . . . . . 75
6.28 Richardson fraction with different $h_i$’s . . . . . . . . . . . . . . . . . . . . . 76
6.29 Diﬀerence in N value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.30 Evolution of lump at diﬀerent σ . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.31 Comparing diﬀerences with diﬀerent time steps . . . . . . . . . . . . . . . . 78
6.32 Richardson fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.33 Diﬀerence in N value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.1 Ground state, axp1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.2 Contour plot of the ground state, axp1 . . . . . . . . . . . . . . . . . . . . . 85
7.3 “Dipole” state, the next state after the ground state, axp2 . . . . . . . . . . 86
7.4 Contour Plot of the dipole, axp2 . . . . . . . . . . . . . . . . . . . . . . . . 86
7.5 Second spherically symmetric state, axp3 . . . . . . . . . . . . . . . . . . . 87
7.6 Contour Plot of the second spherically symmetric state, axp3 . . . . . . . . 87
7.7 Not quite the 2nd spherically symmetric state, axp4 . . . . . . . . . . . . . 88
7.8 Contour plot of the not quite 2nd spherically symmetric state, axp4 . . . . 88
7.9 Not quite the 3rd spherically symmetric state E = −0.0162, axp5 . . . . . . 89
7.10 Contour plot of not quite the 3rd spherically symmetric state E = −0.0162, axp5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.11 3rd spherically symmetric state, axp6 . . . . . . . . . . . . . . . . . . . . . 90
7.12 Contour plot of 3rd spherically symmetric state, axp6 . . . . . . . . . . . . 90
7.13 Axially symmetric state with E = −0.0263, axp7 . . . . . . . . . . . . . . . 91
7.14 Contour plot of axp7 – double dipole . . . . . . . . . . . . . . . . . . . . . . 91
7.15 State with E = −0.0208 and J = 3.1178, axp8 . . . . . . . . . . . . . . . . . 92
7.16 Contour plot of the state with E = −0.0208 and J = 3.1178, axp8 . . . . . 92
7.17 Evolution of the dipole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
7.18 Average Richardson fraction with dt = 0.25, 0.5, 1 . . . . . . . . . . . . . . 95
7.19 Average Richardson fraction with dt = 0.125, 0.25, 0.5 . . . . . . . . . . . . 95
7.20 Evolution of the dipole with diﬀerent mesh size . . . . . . . . . . . . . . . . 96
8.1 Sponge factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
8.2 Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2 . . . . . . . . . . 100
8.3 Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2 . . . . . . . . . . 100
8.4 Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2 . . . . . . . . . . 101
8.5 Stationary state for the 2-dim SN equations . . . . . . . . . . . . . . . . . 101
8.6 Evolution of a stationary state . . . . . . . . . . . . . . . . . . . . . . . . . 102
8.7 Real part of spinning solution . . . . . . . . . . . . . . . . . . . . . . . . . . 104
8.8 Imaginary part of spinning solution . . . . . . . . . . . . . . . . . . . . . . . 104
8.9 Evolution of spinning solution . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.10 Evolution of spinning solution . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Chapter 1
Introduction: the
Schrödinger–Newton equations
1.1 The equations motivated by quantum state reduction
The Schrödinger–Newton equations (abbreviated as SN equations) are:

$$i\hbar\frac{\partial\Psi}{\partial\tilde{t}} = -\frac{\hbar^2}{2m}\nabla^2\Psi + m\Phi\Psi, \qquad (1.1a)$$
$$\nabla^2\Phi = 4\pi Gm|\Psi|^2, \qquad (1.1b)$$

where $\hbar$ is Planck's constant, $m$ is the mass of the particle, $\Psi$ is the wave function, $\Phi$ is the potential, $G$ is the gravitational constant and $\tilde{t}$ is time.

To get the time-independent form of the SN equations (1.1) we consider the substitution

$$\Psi(x, y, z, \tilde{t}) = \Psi(x, y, z)\,e^{-i\tilde{E}\tilde{t}/\hbar}, \qquad (1.2a)$$
$$\Phi(x, y, z, \tilde{t}) = \Phi(x, y, z), \qquad (1.2b)$$

which gives the time-independent SN equations,

$$-\frac{\hbar^2}{2m}\nabla^2\Psi + m\Phi\Psi = \tilde{E}\Psi, \qquad (1.3a)$$
$$\nabla^2\Phi = 4\pi Gm|\Psi|^2. \qquad (1.3b)$$

The boundary conditions for the equations are the same as those for the usual Schrödinger equation and the Poisson equation: i.e. we require the wave function to be smooth for all $x \in \mathbb{R}^3$ and normalisable, and we require the potential to be smooth and zero at large distances.
One idea for a theory of quantum state reduction was put forward by Penrose [17], [18].
The idea is that a superposition of two or more quantum states with a significant
amount of mass displacement between them ought to be unstable and reduce to one of
the states within a finite time. This argument is motivated by a conflict between the basic
principles of quantum mechanics and those of general relativity. This idea requires that
there be a special set of quantum states which collapse no further; this would be called a
preferred basis, and the preferred basis would be the stationary quantum states. Penrose [18]
states that “the phenomenon of quantum state reduction is a gravitational phenomenon,
and that essential changes are needed in the framework of quantum mechanics in order that
its principles can be adequately married with the principles of Einstein’s general relativity.”
According to Penrose, superpositions of these states ought to decay within a certain
characteristic average time $T_G$, where $T_G = \hbar/E_G$ and $E_G$ is the gravitational self-energy
of the difference between the mass distributions of the two superposed states. That is, the
self-energy is $E_G = \frac{1}{2}\int_{\mathbb{R}^3}|\nabla\psi|^2\,dV$, where $\psi = \psi_1 - \psi_2$ is the difference of the mass
distributions of the two superposed states. [The SN equations are a first approximation, in
which gravity is taken to be Newtonian and spacetime non-relativistic.]
These equations were first considered by Ruffini and Bonazzola [21] in connection with
the theory of self-gravitating boson stars; there the SN equations are the non-relativistic
limit of the governing Klein–Gordon equations. Boson stars consist of a large collection of
bosons held together by the gravitational force of their own combined mass. Ruffini and Bonazzola [21]
considered the problem of finding stationary boson stars in the non-relativistic spherically
symmetric case, which corresponds to finding stationary spherically symmetric solutions of
the SN equations. These equations have also been considered by Moroz, Penrose and Tod
[15], who computed stationary solutions in the case of spherical symmetry, while Moroz
and Tod [14] proved some analytic properties of the SN equations. They have also been
considered by Bernstein, Giladi and Jones [3], who developed a better method than shooting
for calculating the stationary solutions in the case of spherical symmetry. Bernstein
and Jones [4] have started considering a method for the dynamical evolution.
Our object in this thesis is to study the SN equations in the time-dependent and time-independent cases.
We can consider the nondimensionalized SN equations, via a transformation $(\Psi, \Phi, \tilde{t}, R) \to (\psi, \phi, t, r)$ where:

$$\psi = \alpha\Psi, \quad \phi = \beta\Phi, \quad t = \gamma\tilde{t}, \quad r = \delta R, \qquad (1.4)$$

such that the SN equations become:

$$i\frac{\partial\psi}{\partial t} = -\nabla^2\psi + \phi\psi, \qquad (1.5a)$$
$$\nabla^2\phi = |\psi|^2. \qquad (1.5b)$$
Normalisation is preserved, i.e. $\int|\Psi|^2\,d^3X = 1$ and $\int|\psi|^2\,d^3x = 1$, if $\alpha^2 = \delta^{-3}$. The gravitation equation (1.5b) then becomes

$$\frac{\beta}{\delta^2}\nabla^2\Phi = \alpha^2|\Psi|^2, \qquad (1.6)$$

from which we deduce that $\frac{\alpha^2\delta^2}{\beta} = 4\pi Gm$, thus $\beta = \frac{1}{4\pi Gm\delta}$. The Schrödinger equation becomes

$$\frac{i\alpha}{\gamma}\Psi_{\tilde{t}} = -\frac{\alpha}{\delta^2}\nabla^2\Psi + \alpha\beta\Phi\Psi, \qquad (1.7)$$

which becomes

$$\frac{im}{\gamma\beta}\Psi_{\tilde{t}} = -\frac{m}{\delta^2\beta}\nabla^2\Psi + m\Phi\Psi, \qquad (1.8)$$

so, comparing with (1.1a), we deduce that $\frac{m}{\gamma\beta} = \hbar$ and $\frac{m}{\delta^2\beta} = \frac{\hbar^2}{2m}$; so

$$\frac{4\pi Gm^2}{\delta} = \frac{\hbar^2}{2m}, \qquad (1.9)$$

and

$$\delta = \frac{8\pi Gm^3}{\hbar^2}. \qquad (1.10)$$

For $\gamma$ we have

$$\gamma = \frac{32\pi^2 G^2 m^5}{\hbar^3}. \qquad (1.11)$$
Now from (1.5) the nondimensionalized time-independent SN equations are

$$E\psi = -\nabla^2\psi + \phi\psi, \qquad (1.12a)$$
$$\nabla^2\phi = |\psi|^2, \qquad (1.12b)$$

where $\tilde{E} = \hbar\gamma E = \frac{32\pi^2 G^2 m^5}{\hbar^2}E$. We can eliminate the $E$ term from (1.12) by letting

$$\psi = S, \qquad (1.13a)$$
$$E - \phi = V, \qquad (1.13b)$$

which reduces (1.12) to

$$\nabla^2 S = -SV, \qquad (1.14a)$$
$$\nabla^2 V = -S^2, \qquad (1.14b)$$

which was the form of the equations considered in [14], [15].
1.2 Some analytical properties of the SN equations
1.2.1 Existence and uniqueness
Existence and uniqueness of solutions is established by the following simplified version of
the theorem of Illner et al [13].

Given $\chi(x) \in H^2(\mathbb{R}^3)$ with $L^2$ norm equal to 1, the system (1.5) has a unique strong
solution $\psi(x, t)$, global in time, with $\psi(x, 0) = \chi(x)$ and $\|\psi\|_2 = 1$. Illner et al [13] give
regularity properties for $\psi$ and $\phi$.
1.2.2 Rescaling
Note that if $(\psi, \phi, x, t)$ is a solution of the SN equations then
$(\lambda^2\psi, \lambda^2\phi, \lambda^{-1}x, \lambda^{-2}t)$ is also a solution, where $\lambda$ is any constant. Suppose that

$$\int_V |\psi|^2\,dV = \lambda^{-1}, \qquad (1.15)$$

where $\lambda$ is a real constant, and consider

$$\hat{\psi} = \lambda^2\psi(\lambda x, \lambda^2 t), \qquad (1.16a)$$
$$\hat{\phi} = \lambda^2\phi(\lambda x, \lambda^2 t). \qquad (1.16b)$$

Then $\hat{\psi}$ and $\hat{\phi}$ satisfy the SN equations with

$$\int_{\hat{V}} |\hat{\psi}|^2\,d\hat{V} = 1. \qquad (1.17)$$
We also note that rescaling a solution also rescales the energy eigenvalue (as well as the action); since the time dependence of a stationary state is scaled by $\lambda^2$ in (1.16), this is

$$E_{\mathrm{new}} = \lambda^2 E. \qquad (1.18)$$
1.2.3 Lagrangian form
Note that (1.5) can be obtained from the Lagrangian

$$\int\Big[\nabla\psi\cdot\nabla\bar{\psi} + \tfrac{1}{2}\phi|\psi|^2 + \tfrac{i}{2}\big(\psi\bar{\psi}_t - \bar{\psi}\psi_t\big)\Big]\,d^3x\,dt, \qquad (1.19)$$

where it is understood that $\phi$ is the solution of $\nabla^2\phi = |\psi|^2$. Alternatively, one may solve
the Poisson equation with the appropriate Green's function and consider the Lagrangian

$$\int\Big[\nabla\psi\cdot\nabla\bar{\psi} - \frac{1}{8\pi}\int\frac{|\psi(x)|^2\,|\psi(y)|^2}{|x - y|}\,d^3y + \tfrac{i}{2}\big(\psi\bar{\psi}_t - \bar{\psi}\psi_t\big)\Big]\,d^3x\,dt. \qquad (1.20)$$

See Christian [5] and Diosi [7] for details.
1.2.4 Conserved quantities
The system (1.5) admits several conserved quantities, all of which are to be expected from
linear quantum mechanics. Define

$$\rho = |\psi|^2, \qquad (1.21a)$$
$$J_i = -i\big(\bar{\psi}\psi_i - \psi\bar{\psi}_i\big), \qquad (1.21b)$$
$$S_{ij} = -\psi\bar{\psi}_{ij} - \bar{\psi}\psi_{ij} + \bar{\psi}_i\psi_j + \psi_i\bar{\psi}_j + 2\phi_i\phi_j - \delta_{ij}\phi_k\phi_k, \qquad (1.21c)$$

where $\psi_i = \frac{\partial\psi}{\partial x_i}$ and $\phi_i = \frac{\partial\phi}{\partial x_i}$; then

$$\dot{\rho} = -J_{i,i}, \qquad (1.22a)$$
$$\dot{J}_i = -S_{ij,j}, \qquad (1.22b)$$

where the dot denotes differentiation with respect to time. Therefore

$$P = \int\rho\,d^3x = \text{constant}, \qquad (1.23)$$

that is, the total probability is conserved. Next define the total momentum $P_i$ by

$$P_i = \int J_i\,d^3x. \qquad (1.24)$$

Then

$$\dot{P}_i = 0. \qquad (1.25)$$

Define the centre of mass $\langle x_i\rangle$ by

$$\langle x_i\rangle = \int \rho\, x_i\,d^3x, \qquad (1.26)$$

then

$$\frac{d}{dt}\langle x_i\rangle = P_i, \qquad (1.27)$$

so that, as expected, the centre of mass follows a straight line. The total angular momentum
is

$$L_i = \int \epsilon_{ijk}\, x_j J_k\,d^3x, \qquad (1.28)$$

and then, by the symmetry of $S_{ij}$, it follows that

$$\dot{L}_i = 0. \qquad (1.29)$$
We define the kinetic energy $T$ and potential energy $V$ in the obvious way by

$$T = \int|\nabla\psi|^2\,d^3x, \qquad (1.30a)$$
$$V = \int\phi|\psi|^2\,d^3x. \qquad (1.30b)$$

Note from (1.5b) that

$$\int\phi\dot{\rho}\,d^3x = \int\dot{\phi}\rho\,d^3x. \qquad (1.31)$$

It follows from this that the quantity

$$\mathcal{E} = T + \tfrac{1}{2}V \qquad (1.32)$$

is conserved (we write $\mathcal{E}$ to distinguish it from the energy eigenvalue $E$). We shall
sometimes call this the conserved energy or the action. It is closely related to the action
for the time-independent equations. The total energy $T + V$ is consequently not conserved.
The identity

$$\rho\phi_i = \big(\phi_i\phi_j - \tfrac{1}{2}\delta_{ij}\phi_k\phi_k\big)_{,j} \qquad (1.33)$$

is a consequence of (1.5b), and it follows from this that the averaged ‘self-force’ is zero, i.e.
that

$$\langle F_i\rangle = \int\phi_i\rho\,d^3x = 0, \qquad (1.34)$$

which may be thought of as Newton's Third Law.

We may define a sequence of moments $q^{(n)}_{i\ldots j}$ by

$$q^{(n)}_{i\ldots j} = \int \rho\,\underbrace{x_i \ldots x_j}_{n}\,d^3x, \qquad (1.35)$$

and other tensors $p^{(n)}_{i\ldots jk}$ and $s^{(n)}_{i\ldots jkm}$ by

$$p^{(n)}_{i\ldots jk} = \int \underbrace{x_{(i} \ldots x_j}_{n}\,J_{k)}\,d^3x, \qquad (1.36a)$$
$$s^{(n)}_{i\ldots jkm} = \int \underbrace{x_{(i} \ldots x_j}_{n}\,S_{km)}\,d^3x. \qquad (1.36b)$$

Then it follows from (1.22) that

$$\dot{q}^{(n)}_{i\ldots j} = n\,p^{(n-1)}_{i\ldots j}, \qquad (1.37a)$$
$$\dot{p}^{(n)}_{i\ldots jk} = n\,s^{(n-1)}_{i\ldots jk}. \qquad (1.37b)$$

In particular this means that $p^{(n)}_{i\ldots jk}$ and $s^{(n)}_{i\ldots jkm}$ are always zero for steady solutions.
1.2.5 Lie point symmetries
Following the general method of Stephani [24] we can find the Lie point symmetries of
(1.5). These consist of rotations and translations, together with a generalised Galilean
transformation which can be expressed as follows:

$$x \to \hat{x} = x + P(t), \qquad (1.38a)$$
$$\psi \to \hat{\psi} = \psi(x + P, t)\,\exp\!\Big[-\tfrac{i}{2}\,\dot{P}\cdot x - \tfrac{i}{4}\int|\dot{P}|^2\,dt\Big], \qquad (1.38b)$$
$$\phi \to \hat{\phi} = \phi(x + P, t) + \tfrac{1}{2}\,x\cdot\ddot{P}, \qquad (1.38c)$$

which is a Galilean transformation if and only if $\ddot{P} = 0$. By a Galilean transformation
we can reduce the total momentum to zero, and then by a translation we can place the
centre of mass at the origin.

These are all clearly Lie point symmetries. The Galilean invariance of (1.5) was noted by
Christian [5]. The analysis leading to the claim that there are no more Lie point symmetries
is straightforward but has not been published.
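As a check on the form of the generalised Galilean transformation, consider a boost at constant velocity, $P(t) = vt$; requiring that (1.5) be preserved fixes the phase (the signs below follow from substituting into (1.5a) directly, with the convention $\hat{\psi}(x,t) = \psi(x+P,t)\,e^{i\theta}$):

$$\hat{\psi}(x, t) = \psi(x + vt,\, t)\,\exp\!\Big[-\tfrac{i}{2}\,v\cdot x - \tfrac{i}{4}\,|v|^2 t\Big], \qquad \hat{\phi}(x, t) = \phi(x + vt,\, t).$$

One then checks that $i\hat{\psi}_t = -\nabla^2\hat{\psi} + \hat{\phi}\hat{\psi}$ whenever $(\psi, \phi)$ solves (1.5): the $iv\cdot\nabla\psi$ term produced by differentiating the shifted argument is cancelled by the cross term $2i\nabla\theta\cdot\nabla\psi$ in $\nabla^2\hat{\psi}$ (with $\theta = -\tfrac{1}{2}v\cdot x - \tfrac{1}{4}|v|^2 t$), and the $\tfrac{1}{4}|v|^2\psi$ contributions from $\theta_t$ and $|\nabla\theta|^2$ match on the two sides.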
1.2.6 Dispersion
One of the moments is particularly significant, namely the dispersion:

$$\langle x^2\rangle = q^{(2)}_{ii} = \int x^2\rho\,d^3x. \qquad (1.39)$$

Following Arriola and Soler [2], we find

$$\ddot{q}^{(2)}_{ii} = 2\int x_i\dot{J}_i\,d^3x = 2\int S_{ii}\,d^3x = \int\big(8|\nabla\psi|^2 + 2\phi|\psi|^2\big)\,d^3x, \qquad (1.40)$$

so that

$$\frac{d^2\langle x^2\rangle}{dt^2} = 4\mathcal{E} + 4T = 8\mathcal{E} - 2V, \qquad (1.41)$$

where $\mathcal{E} = T + \tfrac{1}{2}V$ is the conserved energy (1.32). Now recall that, as a consequence
of the maximum principle (see e.g. [8]), $\phi$ is everywhere negative. Thus

$$\frac{d^2\langle x^2\rangle}{dt^2} > 8\mathcal{E}. \qquad (1.42)$$

The dispersion grows at least quadratically with time if $\mathcal{E}$ is positive, but we cannot conclude
this if $\mathcal{E}$ is negative.
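The step from (1.40) to (1.41) uses only the definitions (1.30) of $T$ and $V$ and the conserved energy $\mathcal{E} = T + \tfrac{1}{2}V$ of (1.32):

$$\int\big(8|\nabla\psi|^2 + 2\phi|\psi|^2\big)\,d^3x = 8T + 2V = 4\big(T + \tfrac{1}{2}V\big) + 4T = 4\mathcal{E} + 4T = 8\mathcal{E} - 2V,$$

the last equality following from $T = \mathcal{E} - \tfrac{1}{2}V$.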
1.3 Analytic results about the timeindependent case
1.3.1 Existence and uniqueness
In Moroz and Tod [14] it was shown that the system (1.12) has infinitely many spherically
symmetric solutions, all with negative energy eigenvalue and $\psi$ real. (It is easy to see that
$\psi$ may always be assumed real in stationary solutions.) These authors did not show it, but
it is believed that for each integer $n$ there is a unique (up to sign) real spherically-symmetric
solution with $n$ zeros, and that the energy eigenvalues increase monotonically in $n$ to zero.
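The picture of a discrete family of states labelled by their number of zeros can be reproduced with a simple shooting computation on the radial form of (1.14), $S'' + \frac{2}{r}S' = -SV$, $V'' + \frac{2}{r}V' = -S^2$ with $S(0) = 1$, $S'(0) = V'(0) = 0$: bisect on $V(0)$ between a value for which $S$ diverges with no zero and one for which $S$ crosses zero. A minimal pure-Python sketch (the step size, bracket and blow-up cut-off here are illustrative choices, not the thesis's; its programs use Fortran and Matlab with Runge–Kutta and the method of Jones et al):

```python
def rhs(r, y):
    # y = (S, S', V, V'): radial system S'' + (2/r)S' = -S V, V'' + (2/r)V' = -S^2
    S, dS, V, dV = y
    return (dS, -S * V - 2.0 * dS / r, dV, -S * S - 2.0 * dV / r)

def classify(V0, r_max=10.0, step=0.02, blow_up=5.0):
    """Return 'high' if S crosses zero (a node appears), 'low' if S diverges node-free."""
    r, y = 1e-8, (1.0, 0.0, V0, 0.0)   # S(0) = 1, S'(0) = V'(0) = 0; shoot on V(0)
    while r < r_max:
        k1 = rhs(r, y)                  # classical fourth-order Runge-Kutta step
        k2 = rhs(r + step / 2, tuple(u + step / 2 * k for u, k in zip(y, k1)))
        k3 = rhs(r + step / 2, tuple(u + step / 2 * k for u, k in zip(y, k2)))
        k4 = rhs(r + step, tuple(u + step * k for u, k in zip(y, k3)))
        y = tuple(u + step / 6 * (a + 2 * b + 2 * c + d)
                  for u, a, b, c, d in zip(y, k1, k2, k3, k4))
        r += step
        if y[0] < 0.0:
            return 'high'               # S crossed zero: V(0) above the ground state
        if y[0] > blow_up:
            return 'low'                # S diverged without a node: V(0) too small
    return 'low'

lo, hi = 0.1, 5.0                       # illustrative bracket, found by trial
assert classify(lo) == 'low' and classify(hi) == 'high'
for _ in range(30):                     # bisect on the shooting parameter V(0)
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if classify(mid) == 'low' else (lo, mid)
print("ground-state shooting value V(0) ~ %.6f" % (0.5 * (lo + hi)))
```

Repeating the bisection between successive thresholds picks out the states with one, two, ... zeros; the eigenvalue $E$ is then recovered from the large-$r$ limit of $-V$.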
1.3.2 Variational formulation
The system (1.12) can be obtained from a variational problem, by seeking stationary points
of the action

$$I = \tfrac{1}{2}(\mathcal{E} - E) = \int\Big[\tfrac{1}{2}|\nabla\psi|^2 + \tfrac{1}{4}\phi|\psi|^2 - \tfrac{1}{2}E|\psi|^2\Big]\,d^3x, \qquad (1.43)$$

(here $\mathcal{E} = T + \tfrac{1}{2}V$ is the conserved energy of (1.32) and $E$ the energy eigenvalue), subject to

$$\nabla^2\phi = |\psi|^2, \qquad (1.44)$$

or by solving (1.12b) with the relevant Green's function and considering

$$I = \int\Big[\tfrac{1}{2}|\nabla\psi|^2 - \frac{1}{16\pi}\int\frac{|\psi(x)|^2\,|\psi(y)|^2}{|x - y|}\,d^3y - \tfrac{1}{2}E|\psi|^2\Big]\,d^3x. \qquad (1.45)$$

If we vary $\psi \to \psi + \delta\psi$, $\phi \to \phi + \delta\phi$, then the first variation of (1.43) is

$$\delta I = \int\Big[\tfrac{1}{2}\delta\bar{\psi}\big(-\nabla^2\psi + \phi\psi - E\psi\big) + \text{c.c.}\Big]\,d^3x, \qquad (1.46)$$

from which we obtain (1.12a) as the expected Euler–Lagrange equations, while the second
variation, subject to (1.12b), is

$$\delta^2 I = \int\Big[|\nabla\delta\psi|^2 + \phi|\delta\psi|^2 - \tfrac{1}{2}|\nabla\delta\phi|^2 - E|\delta\psi|^2\Big]\,d^3x. \qquad (1.47)$$

By exploiting various standard inequalities, Tod [25] showed that the action is bounded
below:

$$\mathcal{E} \ge -\frac{1}{54\pi^4}. \qquad (1.48)$$

One expects that the direct method in the Calculus of Variations should now prove that the
infimum of $I$ is attained, that the minimising function is analytic (since the system (1.12)
is elliptic) and that the minimising function is the ground state found numerically by
Moroz et al [15] and proved to exist by Moroz and Tod [14]. Note that, at the ground state,
the second variation (1.47) cannot be negative.
1.3.3 Negativity of the energy eigenvalue
We write $E_0$ for the energy eigenvalue of the ground state. We now prove that for any
stationary state the energy eigenvalue $E$ is negative. Following Tod [25], we define the
tensor

$$T_{ij} = \psi_i\bar{\psi}_j + \bar{\psi}_i\psi_j + \phi_i\phi_j - \delta_{ij}\Big(\psi_k\bar{\psi}_k + \tfrac{1}{2}\phi_k\phi_k + \phi|\psi|^2 - E|\psi|^2\Big). \qquad (1.49)$$

Then as a consequence of (1.12) it follows that

$$T_{ij,j} = 0, \qquad (1.50)$$

whence

$$0 = \int\big(x_i T_{ij}\big)_{,j}\,d^3x = \int T_{ii}\,d^3x, \qquad (1.51)$$

so that

$$0 = \int\Big[-|\nabla\psi|^2 - \tfrac{5}{2}\phi|\psi|^2 + 3E|\psi|^2\Big]\,d^3x, \qquad (1.52)$$

and

$$3E = T + \tfrac{5}{2}V. \qquad (1.53)$$

With $E = T + V$ this shows that for a steady solution

$$T = -\tfrac{1}{3}E, \qquad (1.54a)$$
$$V = \tfrac{4}{3}E, \qquad (1.54b)$$
$$\mathcal{E} = \tfrac{1}{3}E, \qquad (1.54c)$$

where $\mathcal{E} = T + \tfrac{1}{2}V$ is the conserved energy; as $T > 0$ by definition, it follows that $E < 0$.

Tod [25] shows that

$$0 \le (-V) \le \frac{4}{3\sqrt{3}\,\pi^2}\,T^{1/2}. \qquad (1.55)$$

Arriola and Soler [2] have a stronger result, with $4\sqrt{-E_0/3}\,T^{1/2}$ on the right-hand side. Since
$\mathcal{E} = T + \tfrac{1}{2}V$ we may solve for $T$ to find the bounds

$$T^{1/2} \le \frac{1}{3\sqrt{3}\,\pi^2} + \Big[\mathcal{E} + \frac{1}{27\pi^4}\Big]^{1/2}, \qquad (1.56a)$$
$$T^{1/2} \ge \frac{1}{3\sqrt{3}\,\pi^2} - \Big[\mathcal{E} + \frac{1}{27\pi^4}\Big]^{1/2}. \qquad (1.56b)$$

These results still hold at each instant in the time-dependent case, so that the kinetic and
potential energies are separately bounded in terms of the action or conserved energy at all
times.
1.4 Plan of thesis
In this thesis we start with a review of the methods which can be used to compute the
stationary solutions in the case of spherical symmetry (chapter 2). Then in chapter 3 we
consider the case of the linear perturbation about the spherically symmetric stationary
solutions, obtaining an eigenvalue problem (3.24) to solve. Also in chapter 3 we obtain
restrictions on the eigenvalues analytically such that the eigenvalues are purely real or
imaginary or an integral condition on the eigenvectors vanishes (see section 3.4). In chapter 4
we solve the eigenvalue problem using spectral methods for the ﬁrst few stationary solutions
and check the results using RungeKutta integration as well as conﬁrming the conditions
on the eigenvalues. In chapter 5 we consider the problem of ﬁnding a numerical method to
evolve the time-dependent SN equations, and we consider the boundary conditions to put on the numerical problem. Also in chapter 5 we consider adding a small heat term, called a sponge factor, to the Schrödinger equation to absorb scattered waves. In chapter 6 we evolve the problem numerically with different initial conditions and check the convergence of the
evolution method with diﬀerent mesh and time step sizes. We consider the evolution of the
stationary states to see if they are stable and we also look at the stationary states with
added perturbation and compare the frequencies of oscillation with the linear perturbation
theory. We consider the axisymmetric SN equations in chapter 7 and look at the evolution as well as the time-independent equations. In chapter 8 we consider the 2-dimensional SN equations (8.2), or equivalently the translationally symmetric case. The evolution is considered, as is the possibility of two spinning lumps orbiting each other.
1.5 Conclusion
The results obtained from considering the spherically symmetric case are:
• The ground state is linearly stable. (See section 4.2)
• The linear perturbation about the nth excited state, which is to say the (n + 1)th
state, has n quadruples of complex eigenvalues as well as pure imaginary pairs. (See
sections 4.3, 4.4)
• The ground state is stable under the full (nonlinear) evolution. (See section 6.5)
• Perturbations about the ground state oscillate with the frequencies obtained by the
linear perturbation theory. (See section 6.5)
• The higher states are unstable and will decay into a “multiple” of the ground state,
while emitting some scatter oﬀ the grid. (See section 6.3)
• The decay time for higher states is controlled by the growing linear mode obtained in
the linear perturbation theory. (See section 6.3)
• Perturbations about higher states will oscillate for a while (until they decay) according
to the linear oscillation obtained by the linear perturbation theory. (See section 6.3)
• The evolution of different exponential lumps indicates that any initial condition appears to decay; that is, it scatters and leaves a “multiple” of the ground state. (See section 6.4)
The results obtained from considering the axially symmetric case are:
• Stationary solutions that are axisymmetric exist and the ﬁrst one is like the dipole of
the Hydrogen atom. (See section 7.4)
• Evolution of the dipole-like solution shows that it is unstable in the same way as the spherically symmetric stationary solutions: it emits scatter off the grid, leaving a “multiple” of the ground state, and lumps of probability density attract each other. (See section 7.5)
The results obtained from considering the 2dimensional case are:
• The higher states are unstable under evolution, emitting scatter and leaving a “multiple” of the ground state. (See section 8.4)
• There exist rotating solutions, but these are unstable. (See section 8.5)
• Lumps of probability density attract each other and come together emitting scatter
and leave a “multiple” of the ground state. (See section 8.4)
Chapter 2

Spherically-symmetric stationary solutions
2.1 The equations
In the case of spherical symmetry and time-independence we can assume without loss of generality that ψ is real. So (1.14) becomes

(rS)'' = -rSV,   (2.1a)
(rV)'' = -rS^2.   (2.1b)

We have the boundary conditions $S \to 0$ as $r \to \infty$ and $S' = 0 = V'$ at $r = 0$. We also note that if $(S, V, r)$ is a solution then so is $(\lambda^2 S, \lambda^2 V, \lambda^{-1} r)$. At large $r$ bounded solutions to (2.1) decay like

V = A - \frac{B}{r},   (2.2a)
S = \frac{C}{r}\, e^{-kr},   (2.2b)

where

A = V_0 - \int_0^\infty x S^2\, dx,   (2.3a)
B = \int_0^\infty x^2 S^2\, dx,   (2.3b)

and $V_0$ is the initial value of the potential $V$ (i.e. $V_0 = V(0)$).
2.2 Computing the spherically-symmetric stationary states

2.2.1 Runge-Kutta integration
Now (2.1) can be rewritten as a system of four first-order ODEs

Y_1' = Y_2,   (2.4a)
Y_2' = -\frac{2Y_2}{r} - Y_1 Y_3,   (2.4b)
Y_3' = Y_4,   (2.4c)
Y_4' = -\frac{2Y_4}{r} - Y_1^2,   (2.4d)

where $Y_1 = S$ and $Y_3 = V$.
The numerical technique used by Moroz et al [15] to obtain the stationary states uses fourth-order Runge-Kutta integration on (2.4), starting at r = 0 and integrating outwards towards infinity. The initial values are picked so that the boundary conditions at r = 0 are
satisfied, and the remaining values are guessed. The normalisation invariance allows either V(0) or S(0) to be set equal to a chosen constant. The states are obtained by refining the initial guess until a solution which tends to zero when r is large is obtained.
There are various methods for obtaining the correct initial values which correspond to
stationary states. The method used by Moroz et al [15] was to integrate up to a fixed value in the domain, waiting for the routine to fail or blow up, then plotting the solution so far, and refining the guess based on which way the function blows up.
Another is a shooting method, which involves integrating up to some value of r, looking at the value of S at that point, and then modifying the initial values so that S vanishes there. The problem with this routine is that as the function blows up the step size decreases so that the tolerance remains low. This increases the computation time needed. Also, the routine will fail if it takes too long, so the value of r cannot be too large.
To calculate the stationary states, I have used a modified shooting method, to avoid the exponential blow-up of solutions which occurs with Runge-Kutta integration. I have modified the integration in such a way that the program integrates over small steps using a 4th-order Runge-Kutta NAG routine. After each step it terminates if the solution is too large in absolute magnitude; in the case where $V_0 = 1$ we say that 1.5 is large, since the first state occurs around 1.088. It also terminates if the solution for S blows up exponentially. From information on which side of the axis the solution becomes unbounded we can refine the initial conditions to obtain the eigenfunction. Using this method we are able to obtain the first 50 values of $S_0$, or equivalently the first 50 energy levels.
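The modified shooting procedure can be sketched as follows. This is a minimal illustration in Python using scipy rather than the NAG routine used in the thesis; the bracket [1.05, 1.12] for $S_0$ and the integration range are assumptions, chosen around the quoted ground-state value for $V_0 = 1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y):
    # the system (2.4): y = (S, S', V, V')
    S, Sp, V, Vp = y
    return [Sp, -2.0 * Sp / r - S * V, Vp, -2.0 * Vp / r - S * S]

def blow_up(r, y):
    return abs(y[0]) - 1.5          # "large" threshold for V0 = 1, as in the text
blow_up.terminal = True

def shoot(S0, r_max=40.0):
    # start just off the singular point r = 0, with S'(0) = 0 = V'(0), V(0) = 1
    sol = solve_ivp(rhs, (1e-6, r_max), [S0, 0.0, 1.0, 0.0],
                    events=blow_up, rtol=1e-8, atol=1e-10)
    return np.sign(sol.y[0, -1])    # which side of the axis S diverges to

lo, hi = 1.05, 1.12                 # assumed bracket around the ground state
side_lo = shoot(lo)
for _ in range(30):                 # refine the guess by bisection
    mid = 0.5 * (lo + hi)
    if shoot(mid) == side_lo:
        lo = mid
    else:
        hi = mid
S0_ground = 0.5 * (lo + hi)
```

With $V_0 = 1$ the bisection should home in on the ground-state value $S_0 \approx 1.088$ quoted above; the higher states would need brackets of their own.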
For the stationary spherically symmetric case it was shown in Moroz et al [15] that the eigenvalue is

\tilde{E} = \frac{2G^2 m^5}{\hbar^2}\,\frac{A}{B^2},   (2.5)

where A and B are given by (2.3), and $V_0$ is the initial value of the potential function V. Using the above method we calculated A, B and $\frac{2A}{B^2}$, which is the energy up to a factor of $\frac{G^2 m^5}{\hbar^2}$, and which compares with the first 16 eigenvalues of Jones et al [3]. The first 20 eigenvalues calculated with the above routine are given in Table 2.1. We also plot in figure 2.1 the first four spherically symmetric states, normalised such that $\int |\psi|^2\, d^3x = 4\pi$. Note that the nth state has (n - 1) zeros or “nodes”.
Number of zeros   Energy eigenvalue 2A/B²   Jones et al [3] eigenvalue
0                 0.16276929132192          0.163
1                 0.03079656067054          0.0308
2                 0.01252610801692          0.0125
3                 0.00674732963038          0.00675
4                 0.00420903256689          0.00421
5                 0.00287386420271          0.00288
6                 0.00208619042678          0.00209
7                 0.00158297244845          0.00158
8                 0.00124207860434          0.00124
9                 0.00100051995162          0.00100
10                0.00082314193054          0.000823
11                0.00068906850493          0.000689
12                0.00058527053127          0.000585
13                0.00050327487416          0.000503
14                0.00043737620824          0.000437
15                0.00038362194847          0.000384
16                0.00033920111442
17                0.00030207158301
18                0.00027072080257
19                0.00024400868816
20                0.00022106369652

Table 2.1: The first 20 eigenvalues
2.2.2 Alternative method
An alternative method, which we use later on in chapters 4 and 6 to compute the eigenfunction at the Chebyshev points, is given below. Jones et al [3] used an iterative numerical scheme for computing the n-node stationary states, instead of using a shooting method. An outline of their method is as follows:
Figure 2.1: First four eigenfunctions (ψ plotted against r for the first, second, third and fourth states)
1. Set an outer radius R.

2. Supply an initial guess for $u_n = r\psi_n$.

3. Solve for Φ in

\frac{\partial^2 \Phi}{\partial r^2} + \frac{2}{r}\frac{\partial \Phi}{\partial r} = r^{-2} u_n^2.   (2.6)

4. Solve for the n-node eigenvalue $\epsilon_n$ and eigenfunction of

\frac{\partial^2 u_n}{\partial r^2} = 2u_n(\Phi - \epsilon_n).   (2.7)

5. Iterate the previous two steps until the eigenvalue converges sufficiently, a typical criterion being that the change in $\epsilon_n$ from one iteration to the next is less than $10^{-9}$.

6. Iterate the previous five steps, increasing R until $\epsilon_n$ stops changing with the change in R.
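The iteration above can be sketched in Python. This is a second-order finite-difference stand-in for whatever solvers Jones et al actually used; the grid size, outer radius R = 40 and the damped potential update are our assumptions. Step 3 uses the explicit quadrature solution of (2.6), step 4 a symmetric tridiagonal eigensolver, and ψ is normalised so that $\int\psi^2 d^3x = 4\pi$, i.e. $\int u^2 dr = 1$.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.linalg import eigh_tridiagonal

R_out, N = 40.0, 2000
r = np.linspace(0.0, R_out, N + 1)[1:-1]    # interior grid, u(0) = u(R) = 0
h = r[1] - r[0]

def potential(u):
    # explicit solution of (2.6): Phi(r) = -(1/r) int_0^r u^2 ds - int_r^R u^2/s ds
    g = u * u
    inner = cumulative_trapezoid(g, r, initial=0.0)
    tail = cumulative_trapezoid(g / r, r, initial=0.0)
    return -inner / r - (tail[-1] - tail)

u = r * np.exp(-r)                           # initial guess for u_0 = r*psi_0
u /= np.sqrt(np.sum(u * u) * h)              # normalise so that int u^2 dr = 1
phi = potential(u)
eps, eps_old = 0.0, np.inf
for _ in range(300):
    # step 4: ground eigenpair of u'' = 2u(Phi - eps), i.e. -u''/2 + Phi*u = eps*u
    diag = 1.0 / h ** 2 + phi
    off = np.full(r.size - 1, -0.5 / h ** 2)
    vals, vecs = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
    eps = vals[0]
    u = vecs[:, 0] / np.sqrt(h)              # eigenvector, renormalised
    phi = 0.5 * phi + 0.5 * potential(u)     # damped version of step 3
    if abs(eps - eps_old) < 1e-9:            # step 5: convergence criterion
        break
    eps_old = eps
```

For the ground state (n = 0) this should converge to $\epsilon_0 \approx -0.163$ under this normalisation, the value quoted from Jones et al in table 2.1; for the n-node states one selects the (n+1)th eigenvalue in step 4, and step 6 repeats the whole loop with increasing R.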
2.3 Approximations to the energy of the bound states
Jones et al [3] claim that the energy eigenvalues $\frac{2A}{B^2}$ closely follow a least-squares fit of the formula

E_n = -\frac{\alpha}{(n+\beta)^\gamma},   (2.8)
Figure 2.2: The least squares fit of the energy eigenvalues (energy 2A/B² against number of nodes)
where α = 0.096, β = 0.76 and γ = 2.00. This appears to be a very good fit to the first 50 eigenvalues. The eigenvalues and the fit are plotted in figure 2.2. Moroz et al [15] claim that the number of zeros is of the order of $\exp(V_0^2/S_0^2)$. Figure 2.3 shows that $\exp(V_0^2/S_0^2)$ is an overestimate of the number of nodes, which appears to converge as n gets large.
The log-log plot, figure 2.4, shows that we have a gradient of −2 as n gets large. This is what we expect, since $E_n = -\frac{\alpha}{(n+\beta)^\gamma}$ is such a good fit with γ = 2.
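The fit (2.8) can be checked directly from the tabulated eigenvalues. The sketch below uses scipy on the 16 values of table 2.1; fitting in log space is our choice, not necessarily how Jones et al performed the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# the first 16 energy eigenvalues 2A/B^2 from table 2.1 (E_n is minus these)
En = -np.array([0.16276929132192, 0.03079656067054, 0.01252610801692,
                0.00674732963038, 0.00420903256689, 0.00287386420271,
                0.00208619042678, 0.00158297244845, 0.00124207860434,
                0.00100051995162, 0.00082314193054, 0.00068906850493,
                0.00058527053127, 0.00050327487416, 0.00043737620824,
                0.00038362194847])
n = np.arange(En.size, dtype=float)

def log_model(n, alpha, beta, gamma):
    # logarithm of (2.8): E_n = -alpha / (n + beta)^gamma
    return np.log(alpha) - gamma * np.log(n + beta)

(alpha, beta, gamma), _ = curve_fit(log_model, n, np.log(-En), p0=(0.1, 0.8, 2.0))
```

Even from these 16 values alone the fit reproduces parameters close to α ≈ 0.096, β ≈ 0.76 and γ ≈ 2.00.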
Figure 2.3: The number of nodes compared with the Moroz et al [15] prediction exp(V₀²/S₀²) of the number of nodes
Figure 2.4: Log-log plot of the energy values, log(−Eₙ) against log(n)
Chapter 3

Linear stability of the spherically-symmetric solutions
3.1 Linearising the SN equations
In this chapter we shall set up the linear stability problem, ready for numerical solution in chapter 4.
We look for a solution to (1.5) of the form:

\psi = \psi_0(r,t) + \epsilon\psi_1(r,t) + \epsilon^2\psi_2(r,t) + \dots,   (3.1a)
\phi = \phi_0(r,t) + \epsilon\phi_1(r,t) + \epsilon^2\phi_2(r,t) + \dots,   (3.1b)

where $\epsilon \ll 1$. Substitution of (3.1) into the SN equations (1.5) gives:
i\psi_{0t} + \nabla^2\psi_0 + \epsilon[i\psi_{1t} + \nabla^2\psi_1] + \epsilon^2[i\psi_{2t} + \nabla^2\psi_2] = \psi_0\phi_0 + \epsilon(\psi_0\phi_1 + \psi_1\phi_0) + \epsilon^2(\psi_0\phi_2 + \psi_1\phi_1 + \psi_2\phi_0),   (3.2a)

\nabla^2\phi_0 + \epsilon\nabla^2\phi_1 + \epsilon^2\nabla^2\phi_2 = |\psi_0|^2 + \epsilon(\psi_0\bar\psi_1 + \bar\psi_0\psi_1) + \epsilon^2(\psi_0\bar\psi_2 + \psi_1\bar\psi_1 + \bar\psi_0\psi_2).   (3.2b)
Finally, equating the powers of $\epsilon$ we obtain

$O(\epsilon^0)$:

i\psi_{0t} + \nabla^2\psi_0 = \psi_0\phi_0,   (3.3a)
\nabla^2\phi_0 = |\psi_0|^2,   (3.3b)

$O(\epsilon^1)$:

i\psi_{1t} + \nabla^2\psi_1 = \psi_0\phi_1 + \psi_1\phi_0,   (3.4a)
\nabla^2\phi_1 = \psi_0\bar\psi_1 + \bar\psi_0\psi_1,   (3.4b)

$O(\epsilon^2)$:

i\psi_{2t} + \nabla^2\psi_2 = \psi_0\phi_2 + \psi_1\phi_1 + \psi_2\phi_0,   (3.5a)
\nabla^2\phi_2 = \psi_0\bar\psi_2 + \psi_1\bar\psi_1 + \bar\psi_0\psi_2.   (3.5b)
We consider the case of spherical symmetry, so that $\nabla^2 f = \frac{1}{r^2}(r^2 f_r)_r = \frac{1}{r}(rf)_{rr}$; then (3.3) becomes

i(r\psi_0)_t + (r\psi_0)_{rr} = r\psi_0\phi_0,   (3.6a)
(r\phi_0)_{rr} = r\psi_0\bar\psi_0.   (3.6b)
Since we are interested in the stability of the stationary problem we take

\psi_0 = R_0(r)e^{-iEt},   (3.7a)
\phi_0 = E - V_0(r),   (3.7b)

where $R_0$ is real, so that

(rR_0)_{rr} = -rR_0V_0,   (3.8a)
(rV_0)_{rr} = -rR_0^2.   (3.8b)
Substituting into (3.4), the $O(\epsilon)$ problem, we have

i(r\psi_1)_t + (r\psi_1)_{rr} = r(E - V_0)\psi_1 + r\phi_1 R_0(r)e^{-iEt},   (3.9a)
(r\phi_1)_{rr} = rR_0(r)e^{-iEt}\bar\psi_1 + rR_0(r)e^{iEt}\psi_1.   (3.9b)
To eliminate $e^{-iEt}$, we seek solutions of the form

\psi_1 = R_1(r,t)e^{-iEt},   (3.10a)
\phi_1 = \phi_1(r,t),   (3.10b)

where $R_1$ is complex and $\phi_1$ is real, so that (3.9) simplifies to give
i(rR_1)_t + (rR_1)_{rr} = R_0(r\phi_1) - V_0(rR_1),   (3.11a)
(r\phi_1)_{rr} = R_0(r\bar R_1) + \bar R_0(rR_1).   (3.11b)

For convenience we introduce $P = r\phi_1$, $R = rR_1$. Note that P and R must vanish at the origin. The $O(\epsilon)$ problem then becomes

iR_t + R_{rr} = R_0 P - V_0 R,   (3.12a)
P_{rr} = R_0\bar R + \bar R_0 R.   (3.12b)
3.2 Separating the $O(\epsilon)$ equations
We look for a solution of the $O(\epsilon)$ problem in the form

R = (A+B)e^{\lambda t} + (\bar A - \bar B)e^{\bar\lambda t},   (3.13a)
P = W_1 e^{\lambda t} + W_2 e^{\bar\lambda t},   (3.13b)

where we assume for now that λ is not real, and A, B, $W_1$ and $W_2$ are time-independent functions. As we are considering the spherically symmetric case they depend upon r only. Now since P is real we note that $W_1 = \bar W_2$, so we can let $W = W_1 = \bar W_2$. Substituting into (3.12a) gives
i\left(\lambda(A+B)e^{\lambda t} + \bar\lambda(\bar A-\bar B)e^{\bar\lambda t}\right) + (A_{rr}+B_{rr})e^{\lambda t} + (\bar A_{rr}-\bar B_{rr})e^{\bar\lambda t} = R_0\left(We^{\lambda t} + \bar We^{\bar\lambda t}\right) - V_0\left((A+B)e^{\lambda t} + (\bar A-\bar B)e^{\bar\lambda t}\right).   (3.14)
Equating the coefficients of $e^{\lambda t}$ and $e^{\bar\lambda t}$ (noting that we can do this only if $\bar\lambda \ne \lambda$, i.e. λ is not real), the coefficient of $e^{\lambda t}$ gives

i\lambda(A+B) + (A_{rr}+B_{rr}) = R_0 W - V_0(A+B),   (3.15)

while the coefficient of $e^{\bar\lambda t}$ gives

i\bar\lambda(\bar A-\bar B) + (\bar A_{rr}-\bar B_{rr}) = R_0\bar W - V_0(\bar A-\bar B).   (3.16)
Substituting into (3.12b) we obtain

W_{rr}e^{\lambda t} + \bar W_{rr}e^{\bar\lambda t} = R_0\left((\bar A+\bar B)e^{\bar\lambda t} + (A-B)e^{\lambda t}\right) + \bar R_0\left((A+B)e^{\lambda t} + (\bar A-\bar B)e^{\bar\lambda t}\right).   (3.17)
Again equating coefficients of $e^{\lambda t}$, $e^{\bar\lambda t}$ gives

W_{rr} = R_0(A-B) + \bar R_0(A+B),   (3.18)

and

\bar W_{rr} = R_0(\bar A+\bar B) + \bar R_0(\bar A-\bar B).   (3.19)

Since $R_0 = \bar R_0$ we have

W_{rr} = 2R_0 A,   (3.20a)
\bar W_{rr} = 2R_0\bar A,   (3.20b)
which is just one equation, so we have

R_0 W = i\lambda(A+B) + (A_{rr}+B_{rr}) + V_0(A+B),   (3.21a)
\overline{R_0\bar W} = R_0 W = -i\lambda(A-B) + (A_{rr}-B_{rr}) + V_0(A-B).   (3.21b)

Therefore

B_{rr} + V_0 B = -i\lambda A,   (3.22)

and

R_0 W = i\lambda B + A_{rr} + V_0 A.   (3.23)

To summarise, the $O(\epsilon)$ problem leads to three coupled linear ODEs for the perturbation:

W_{rr} = 2R_0 A,   (3.24a)
B_{rr} + V_0 B = -i\lambda A,   (3.24b)
A_{rr} + V_0 A = R_0 W - i\lambda B.   (3.24c)
If λ is real, we note that (3.13) reduces to

R = Ae^{\lambda t},   (3.25a)
P = We^{\lambda t}.   (3.25b)

Substituting into the $O(\epsilon)$ equations (3.12) gives

i\lambda Ae^{\lambda t} + A_{rr}e^{\lambda t} = R_0We^{\lambda t} - V_0Ae^{\lambda t},   (3.26a)
W_{rr}e^{\lambda t} = R_0\bar Ae^{\lambda t} + R_0Ae^{\lambda t}.   (3.26b)

Equating coefficients of $e^{\lambda t}$ we obtain

i\lambda A + A_{rr} = R_0W - V_0A,   (3.27a)
W_{rr} = R_0(A+\bar A).   (3.27b)
If we let $A = a + ib$, where a and b are real functions of r, and substitute into (3.27), we obtain

i\lambda(a + ib) + (a_{rr} + ib_{rr}) = R_0W - V_0(a + ib),   (3.28a)
W_{rr} = 2R_0 a,   (3.28b)

so that equating real and imaginary parts gives

W_{rr} = 2R_0 a,   (3.29a)
b_{rr} + V_0 b = -\lambda a,   (3.29b)
a_{rr} + V_0 a = R_0 W + \lambda b.   (3.29c)
We consider the real eigenvalues of

W_{rr} = 2R_0 A,   (3.30a)
B_{rr} + V_0 B = -i\lambda A,   (3.30b)
A_{rr} + V_0 A = R_0 W - i\lambda B.   (3.30c)

Suppose that the eigenvalues are such that A is real. This implies that B is imaginary, while W is real. We can therefore set A = a and B = ib, where a, b are real functions of r. Then (3.30) becomes

W_{rr} = 2R_0 a,   (3.31a)
b_{rr} + V_0 b = -\lambda a,   (3.31b)
a_{rr} + V_0 a = R_0 W + \lambda b,   (3.31c)

which is the same system as (3.29), so (3.24) covers both cases.
We note that λ = 0 will be an eigenvalue of (3.24), with $A = 0$, $B = irR_0$ and $W = 0$, but this corresponds to just a rotation of the phase factor.
We note that (3.24) transformed by $(A, B, W) \to (A, -B, W)$ becomes

W_{rr} = 2R_0 A,   (3.32a)
B_{rr} + V_0 B = i\lambda A,   (3.32b)
A_{rr} + V_0 A = R_0 W + i\lambda B,   (3.32c)

so if λ is an eigenvalue then −λ is an eigenvalue. Also, under $(A, B, W) \to (\bar A, \bar B, \bar W)$ the equations become

W_{rr} = 2R_0 A,   (3.33a)
B_{rr} + V_0 B = i\bar\lambda A,   (3.33b)
A_{rr} + V_0 A = R_0 W + i\bar\lambda B,   (3.33c)

and if λ is an eigenvalue then $-\bar\lambda$ is an eigenvalue. So if λ is an eigenvalue then so are $\bar\lambda$, $-\lambda$ and $-\bar\lambda$. That is, in the case of complex λ, eigenvalues exist in groups of four; otherwise they exist in pairs, or as the singleton λ = 0.
3.3 Boundary Conditions
We now consider the boundary conditions for the $O(\epsilon)$ problem (3.12). Since $\phi_1 = \frac{1}{r}P$ and $R_1 = \frac{1}{r}R$, and we require $\phi_0 + \epsilon\phi_1$ to be a physically representable potential function and $R_0 + \epsilon R_1$ to be a physically representable wavefunction, $\phi_1$ and $R_1$ must be well-behaved functions of r at r = 0. This implies that $\phi_1$ and $R_1$ tend to finite values as $r \to 0$, so that

P(0) = 0, \quad R(0) = 0.   (3.34)
The condition that $R_0 + \epsilon R_1$ be a physically representable wavefunction is that it be normalisable, i.e. the following integral exists:

\int_0^\infty (R_0 + \epsilon R_1)\overline{(R_0 + \epsilon R_1)}\, r^2\, dr.   (3.35)
This becomes

\int_0^\infty \left(R_0\bar R_0 + \epsilon(\bar R_1 R_0 + \bar R_0 R_1) + \epsilon^2 R_1\bar R_1\right) r^2\, dr,   (3.36)

and upon equating coefficients of $\epsilon$ we require the following integrals to exist:

coefficient of $\epsilon^0$: \int_0^\infty (R_0\bar R_0)\, r^2\, dr,   (3.37)

coefficient of $\epsilon^1$: \int_0^\infty (\bar R_1 R_0 + \bar R_0 R_1)\, r^2\, dr,   (3.38)

coefficient of $\epsilon^2$: \int_0^\infty (R_1\bar R_1)\, r^2\, dr.   (3.39)
The O(1) condition (3.37) requires the wavefunction of the stationary spherically symmetric problem to be normalisable, and the remaining two integrals become, in terms of R,

\int_0^\infty (\bar R + R)R_0\, r\, dr,   (3.40a)
\int_0^\infty R\bar R\, dr.   (3.40b)

The integral (3.40a) exists provided $\bar R + R$ does not grow exponentially with r, as $R_0$ decays like $e^{-kr}$ as $r \to \infty$. The integral (3.40b) implies that $|R|^2 \to 0$ as $r \to \infty$. Hence $R \to 0$ as $r \to \infty$ is a necessary but not a sufficient condition for the wavefunction to be normalisable. We expect that solutions for which this is not the case will blow up exponentially, so that the boundary condition $R \to 0$ as $r \to \infty$ should be sufficient. For the potential $\phi_0 + \epsilon\phi_1$ we require the function to be well-behaved at r = 0. We also have an arbitrary additive constant in the potential function, corresponding to the freedom we have of where to set the zero on the energy scale. We can therefore choose $\phi_0 \to 0$ and $\phi_1 \to 0$ as $r \to \infty$.
When $R = (A+B)e^{\lambda t} + (\bar A-\bar B)e^{\bar\lambda t}$ and $P = We^{\lambda t} + \bar We^{\bar\lambda t}$, the boundary conditions on R and φ imply that

R(0) = 0 ⇒ A(0) = 0, B(0) = 0,
P(0) = 0 ⇒ W(0) = 0,
R(∞) = 0 ⇒ A(∞) = 0, B(∞) = 0,
(P/r)(∞) = 0 ⇒ W(∞) = 0.

So to summarise, the boundary conditions on A, B and W are

A(0) = 0, \quad B(0) = 0, \quad W(0) = 0, \quad A(\infty) = 0, \quad B(\infty) = 0, \quad W(\infty) = 0.   (3.41)
3.4 Restriction on the possible eigenvalues
We claim that $\lambda^2$ is real for the perturbation equations unless $\int_0^\infty \bar A B\, dr = 0$. The following proof is due to Tod [26]. Consider (3.24), where $(R_0, V_0)$ satisfy

(rR_0)'' = -rR_0V_0,   (3.42a)
(rV_0)'' = -rR_0^2.   (3.42b)

We now consider

-i\lambda\int_0^\infty A\bar B\, dr = \int_0^\infty \left(B_{rr}\bar B + V_0 B\bar B\right) dr = \int_0^\infty \left(V_0|B|^2 - |B_r|^2\right) dr.   (3.43)

We note that the R.H.S. of (3.43) is real since $V_0$ is real. We also consider:

-i\lambda\int_0^\infty \bar A B\, dr = \int_0^\infty \left(A_{rr}\bar A + V_0 A\bar A - R_0 W\bar A\right) dr = \int_0^\infty \left(V_0|A|^2 - |A_r|^2 + \tfrac{1}{2}|W_r|^2\right) dr.   (3.44)

We note that the R.H.S. of (3.44) is real since $V_0$ is real. Combining the two results, the quotient

\frac{-i\lambda\int_0^\infty A\bar B\, dr}{i\bar\lambda\int_0^\infty A\bar B\, dr} = -\frac{\lambda}{\bar\lambda}   (3.45)

is real provided $i\bar\lambda\int_0^\infty A\bar B\, dr \ne 0$ (the numerator is real by (3.43), and the denominator, being the conjugate of the left-hand side of (3.44), is also real). This implies that $\frac{\lambda}{\bar\lambda}$ is real, i.e. that $\lambda^2$ is real. Hence either $\lambda^2$ is real or $\int_0^\infty A\bar B\, dr = 0$.
3.5 An inequality on Re(λ)
From the linearised perturbation system (3.24) it is possible to prove an inequality for the real part of λ. First we obtain

i\int \left[\lambda|A|^2 + \bar\lambda|B|^2\right] d^3x = -\int R_0 B\bar W\, d^3x,   (3.46)

and then by the Hölder and Sobolev inequalities, as in Tod [25], we find

\left|\int R_0 B\bar W\right| \le C_1\left(\int|A|^2\right)^{\frac{1}{2}}\left(\int|B|^2\right)^{\frac{1}{2}}\left(\int|R_0|^3\right)^{\frac{2}{3}},   (3.47)

with $C_1 = \left(\frac{2}{3\pi}\right)^{\frac{4}{3}}$. We choose a normalisation of the perturbation so that

\int|A|^2 = \cos^2\theta, \qquad \int|B|^2 = \sin^2\theta,   (3.48)

and set $\lambda = \lambda_R + i\lambda_I$ to find from (3.46)

\left|\lambda_R + i\lambda_I\cos 2\theta\right| \le 2C_1\sin\theta\cos\theta\left(\int|R_0|^3\right)^{\frac{2}{3}}.   (3.49)
Now use Sobolev and Section 1.2 to find

\|R_0\|_3 \le C_1^{\frac{3}{4}}\left(-\frac{E}{3}\right)^{\frac{3}{4}},   (3.50)

so finally

|\lambda_R| \le \frac{4}{9\pi^2}\sqrt{-E}.   (3.51)

Note that if the normalisation is different from one we need to rescale the value of E.
Chapter 4

Numerical solution of the perturbation equations
4.1 The method
The $O(\epsilon)$ perturbation equations (3.24) are linear, whereas the SN equations are not. We can therefore solve these equations using spectral methods, by approximating the problem by a matrix eigenvalue problem. See [27] for more details about spectral methods. We use Chebyshev polynomial interpolation to get a differentiation matrix, where the Chebyshev polynomials are such that

p_n(x) = \cos(n\theta),   (4.1)

with $\theta = \cos^{-1}(x)$. They also satisfy the differential equation

\frac{d}{dx}\left((1-x^2)^{\frac{1}{2}}\frac{dp_n(x)}{dx}\right) = -\frac{n^2\, p_n(x)}{(1-x^2)^{\frac{1}{2}}}.   (4.2)
When using Chebyshev polynomials we sample our data at the Chebyshev points, that is at $x_i = \cos(\frac{i\pi}{N})$ for $i = 0, 1, \dots, N$, so that N + 1 is the number of Chebyshev points. We note that these points correspond to the extrema of the Chebyshev polynomial $p_N$.
We let p(x) be the unique polynomial of degree N or less such that $p(x_i) = v_i$ for $i = 0, 1, \dots, N$, where the $v_i$ are values of a function at the Chebyshev points. Define the $w_i$ by $w_i = p'(x_i)$ for $i = 0, 1, \dots, N$. The differentiation matrix for polynomials of degree N, denoted $D_N$, is defined to be such that

(w_0, w_1, \dots, w_N)^T = D_N\, (v_0, v_1, \dots, v_N)^T,   (4.3)
which is:

D_{0,0} = \frac{2N^2+1}{6}, \qquad D_{N,N} = -\frac{2N^2+1}{6},
D_{i,i} = -\frac{x_i}{2(1-x_i^2)}, \qquad 1 \le i \le N-1,
D_{i,j} = \frac{c_i}{c_j}\,\frac{(-1)^{i+j}}{(x_i - x_j)}, \qquad i \ne j,   (4.4)

where $c_i = 2$ for $i = 0$ or $i = N$, and $c_i = 1$ otherwise.
The second-derivative matrix is just $D_N^2$, since $p'(x)$ is the unique polynomial of degree N or less through the $w_i$'s.
We note that since the interval we are interested in is [0, L] instead of [−1, 1], we rescale the points by $X_i = \frac{L}{2}(1 + x_i)$. We also need to rescale the differentiation matrix, so $D_{[0,L]} = \frac{2}{L}D_{[-1,1]}$; all differentiation matrices below are on the interval [0, L]. The requirement that A, B and W be zero at the boundary is applied by deleting the first and last rows and columns of the differentiation matrix in question (in this case the $D_N^2$ matrix), since A(0) = 0 = A(L), B(0) = 0 = B(L) and W(0) = 0 = W(L). This yields an (N−1) × (N−1) matrix denoted by $\tilde D_N^2$.
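The construction above can be sketched as follows: a Python transcription of the standard `cheb` routine from Trefethen [27], in which the diagonal is obtained from the negative row sums (this agrees with the explicit formulas in (4.4) up to rounding), followed by the rescaling to [0, L] and the deletion of the boundary rows and columns.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix D_N on the points x_i = cos(i*pi/N)
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via negative row sums
    return D, x

N, L = 60, 150.0
D, x = cheb(N)
X = 0.5 * L * (1.0 + x)                  # rescaled points on [0, L]
D = (2.0 / L) * D                        # rescaled differentiation matrix
D2 = D @ D                               # second-derivative matrix
D2t = D2[1:-1, 1:-1]                     # boundary rows and columns deleted
```

`D2t` is the (N−1) × (N−1) matrix $\tilde D_N^2$ entering (4.5); applied to the interior values of a function vanishing at both ends, it reproduces the second derivative to spectral accuracy.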
For the perturbation equations we therefore obtain the matrix eigenvalue equation

\begin{pmatrix} -2R_0 & 0 & \tilde D_N^2 \\ 0 & \tilde D_N^2 + V_0 & 0 \\ -\tilde D_N^2 - V_0 & 0 & R_0 \end{pmatrix} \begin{pmatrix} A \\ B \\ W \end{pmatrix} = i\lambda \begin{pmatrix} 0 & 0 & 0 \\ -I & 0 & 0 \\ 0 & I & 0 \end{pmatrix} \begin{pmatrix} A \\ B \\ W \end{pmatrix},   (4.5)

where A, B and W now denote the vectors of grid values

A = (A(X_1), \dots, A(X_{N-1}))^T, \quad B = (B(X_1), \dots, B(X_{N-1}))^T, \quad W = (W(X_1), \dots, W(X_{N-1}))^T,   (4.6)

and $R_0$ and $V_0$ denote the diagonal matrices

R_0 = \mathrm{diag}\left(R_0(X_1), \dots, R_0(X_{N-1})\right),   (4.7)
V_0 = \mathrm{diag}\left(V_0(X_1), \dots, V_0(X_{N-1})\right).   (4.8)
We then solve this generalised eigenvalue problem to obtain the eigenvalues and eigenvectors. We note that since the generalised matrix eigenvalue problem (4.5) is singular, we suspect that the eigenvalues might be inaccurate. We therefore rewrite (4.5) as:

\begin{pmatrix} 0 & \tilde D_N^2 + V_0 \\ \tilde D_N^2 + V_0 - 2R_0(\tilde D_N^2)^{-1}R_0 & 0 \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = i\lambda \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix},   (4.9)

that is, we invert (3.24a) to express W in terms of A. The difference between the eigenvalues calculated from the two forms turns out to be small, so we can use (4.5) to obtain (A, B, W) straight away.
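Assembling and solving (4.5) can be sketched as follows. The profiles `r0` and `v0` below are smooth stand-ins (assumptions, not the computed ground state), used only to exhibit the block structure and the λ → −λ symmetry of section 3.2, and the second-derivative matrix is a finite-difference stand-in for $\tilde D_N^2$; in practice one substitutes the Chebyshev matrix and the $R_0$, $V_0$ of section 2.2.

```python
import numpy as np
from scipy.linalg import eig

n, L = 40, 30.0
X = np.linspace(0.0, L, n + 2)[1:-1]        # interior grid points
h = X[1] - X[0]
# stand-in for D2t: second-difference matrix with Dirichlet boundaries
D2t = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / h ** 2

r0 = np.exp(-X)                             # stand-in profile for R0 (assumed)
v0 = -1.0 / (1.0 + X)                       # stand-in profile for V0 (assumed)
I, Z = np.eye(n), np.zeros((n, n))

# block matrices of (4.5):  M u = (i*lambda) S u, with u = (A, B, W)
M = np.block([[-2.0 * np.diag(r0), Z, D2t],
              [Z, D2t + np.diag(v0), Z],
              [-(D2t + np.diag(v0)), Z, np.diag(r0)]])
S = np.block([[Z, Z, Z], [-I, Z, Z], [Z, I, Z]])

mu = eig(M, S, right=False)                 # mu = i*lambda
mu = mu[np.isfinite(mu)]                    # S is singular: drop infinite modes
mu = mu[np.abs(mu) < 1e6]                   # and their near-infinite numerical ghosts
```

Because the transformations of section 3.2 are purely algebraic, the finite spectrum of this pencil is closed under negation for any real profiles, which gives a cheap consistency check on the assembly.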
4.2 The perturbation about the ground state
We consider now the results obtained by solving (4.5) about the ground state of the spherically symmetric SN equations (1.1), which has no zeros. Recall that the eigenvalue for the ground state in the non-dimensionalised units is 0.082.
In figure 4.1 we plot the eigenvalues obtained by solving (4.5), excluding the near-zero eigenvalue, which does not correspond to a perturbation. To compute these results we used N = 60 and interval length L = 150. We also plot all the computed eigenvalues in figure 4.2 (note that the scale is different), to see that up to these limits there are no eigenvalues other than imaginary ones. The eigenvalues obtained are presented in table 4.1.
±0.00000011612065
±0.03412557804571i
±0.06030198252911i
±0.06882503249714i
±0.07310346395426i
±0.07654635649084i
±0.08100785899036i
±0.08665500294956i

Table 4.1: Eigenvalues of the perturbation about the ground state
Figure 4.1: The smallest eigenvalues of the perturbation about the ground state
Figure 4.2: All the computed eigenvalues of the perturbation about the ground state
Figure 4.3: The first eigenvector of the perturbation about the ground state (A/r, B/r and W/r plotted against r). Note the scales
We note that the near-zero eigenvalue obtained corresponds to the trivial zero-mode, as the eigenfunction shows. In figure 4.3, since A and W are very small compared to B, we can deduce that this solution corresponds to a phase rotation of the background solution, up to numerical error.
We note that all the eigenvalues are imaginary and that this agrees with the result
obtained in section 3.4. We conclude that the ground state is linearly stable, that is the
perturbations are only oscillatory.
We note that the eigenvalues are symmetric about the real axis, which was expected
(section 3.2).
We plot the first three eigenvectors for the ground state as $\frac{A}{r}$, $\frac{B}{r}$ and $\frac{W}{r}$ in figures 4.3, 4.4 and 4.5.
To test convergence of the eigenvalues we plot graphs against increasing N and increasing L. As an example, figure 4.6 plots the change in the eigenvalue 0.0765463562i as the value of N increases, showing that the eigenvalue converges with N. Figure 4.7 plots the change in the same sample eigenvalue with increasing L instead of N, and again this shows the eigenvalue converging.
Figure 4.4: The second eigenvector of the perturbation about the ground state
Figure 4.5: The third eigenvector of the perturbation about the ground state
Figure 4.6: The change in the sample eigenvalue with increasing values of N (L = 150)
Figure 4.7: The change in the sample eigenvalue with increasing values of L (N = 60)
4.3 Perturbation about the second state
We now consider the perturbation about the second state, where the unperturbed wavefunction has one zero and the energy eigenvalue is 0.0308.
Using the same method we compute the numerical solutions of the perturbation about the second bound state. This time we obtain some eigenvalues with non-zero real parts. We note that the eigenvalues have the symmetries expected from the equations (section 3.2). In figure 4.8 we plot the lower eigenvalues for the case where N = 60 and L = 150. We also plot all the eigenvalues obtained in figure 4.9 on a different scale, to see that up to these limits there are no other complex ones. The lowest eigenvalues about the second state are presented in table 4.2.
±0.00000003648902i
±0.00300149174300i
±0.00859859681740i
±0.00139326930981 − 0.01004023179300i
±0.00139326930981 + 0.01004023179300i
±0.01533675005044i
±0.02105690867367i
±0.02761845529494i

Table 4.2: Eigenvalues of the perturbation about the second state
We have some complex eigenvalues, so if our results are to satisfy the result of section 3.4 we require that $\int_0^\infty \bar A B\, dr = 0$. To see whether this is the case up to numerical error we compute

Q = \frac{\left|\int_0^L \bar A B\, dr\right|}{\left(\int_0^L |A|^2\, dr\right)^{\frac{1}{2}}\left(\int_0^L |B|^2\, dr\right)^{\frac{1}{2}}}.   (4.10)

If this is much less than one then we know that $\int_0^\infty \bar A B\, dr \approx 0$. For the case where L = 145 and N = 60, we present in table 4.3 the calculated values of Q together with the eigenvalues.
λ                                        Q
−0.00000003648902i                       0.22194825684122
−0.00300149174300i                       0.23476646058070
−0.00859859681740i                       0.36801505735057
−0.00139326930981 − 0.01004023179300i    3.533821320897923e−14
0.00139326930981 + 0.01004023179300i     3.919537745928816e−14
−0.01533675005044i                       0.89372677914625

Table 4.3: Q for different eigenvalues of the perturbation about the second state
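The overlap Q of (4.10) is just a normalised inner product, and can be computed by quadrature on whatever grid carries A and B. A minimal sketch (trapezoidal quadrature is our choice here) with two test functions whose overlap is known:

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule (works for complex integrands)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def q_value(r, A, B):
    # Q of (4.10): normalised overlap of the eigenvector components A and B
    num = abs(trap(np.conj(A) * B, r))
    den = np.sqrt(trap(np.abs(A) ** 2, r) * trap(np.abs(B) ** 2, r))
    return num / den

L = 145.0
r = np.linspace(0.0, L, 2001)
A = np.sin(np.pi * r / L) * np.exp(0.3j)   # arbitrary constant phase
B = np.sin(2.0 * np.pi * r / L)
```

Here $\int \bar A B\, dr$ vanishes exactly, so Q comes out at the quadrature-error level, while Q of a function against itself is exactly one; values like 1e−14 in table 4.3 therefore really do signal a vanishing overlap.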
Figure 4.8: The lowest eigenvalues of the perturbation about the second bound state
Figure 4.9: All the computed eigenvalues of the perturbation about the second bound state
Figure 4.10: The first eigenvector of the perturbation about the second bound state (the imaginary part of B/r is shown). Note the scales
Figure 4.11: The second eigenvector of the perturbation about the second bound state
Figure 4.12: The third eigenvector of the perturbation about the second bound state
We note that for the eigenvalues with non-zero real part Q is almost zero, so the results of section 3.4 are confirmed. In figure 4.10 we plot the eigenvector corresponding to the near-zero eigenvalue. Note that this is approximately B = R₀; hence this eigenvector corresponds to a phase rotation, i.e. a trivial zero-mode.
In figures 4.11 and 4.12 we plot the next two eigenfunctions, corresponding to the next two eigenvalues.
4.4 Perturbation about the higher order states
We now apply the numerical method to the third state, with N = 60 and L = 450. In figure 4.13 we show the eigenvalues of the perturbation about the third state, the state with two zeros in the unperturbed wavefunction. The eigenvalues are presented in table 4.4.
Note again that the first near-zero eigenvalue just corresponds to a phase rotation. In figure 4.14 we plot the eigenfunction corresponding to the next eigenvalue after the near-zero one. We note that the result of section 3.4 is again confirmed.
In figures 4.15 and 4.16 we plot the eigenfunction of a complex eigenvalue: figure 4.15 shows the real part and figure 4.16 the imaginary part.
For the fourth bound state, with L = 700 and N = 100, the eigenvalues are presented in table 4.5.
Figure 4.13: The eigenvalues of the perturbation about the third bound state
Figure 4.14: The second eigenvector of the perturbation about the third bound state
Figure 4.15: The third eigenvector of the perturbation about the third bound state (real part)
Figure 4.16: The third eigenvector of the perturbation about the third bound state (imaginary part)
±0.00000001236655
±0.00078451579204i
±0.00051994798783 − 0.00225911281180i
±0.00051994798783 + 0.00225911281180i
±0.00297504982576i
±0.00368240791656i
±0.00431823282039i
±0.00039265148039 − 0.00504446459212i
±0.00039265148039 + 0.00504446459212i
±0.00557035273905i

Table 4.4: Eigenvalues of the perturbation about the third state
±0.00000000975044
±0.00031089278959i
±0.00022482647750 − 0.00088225206795i
±0.00022482647750 + 0.00088225206795i
±0.00134992819716i
±0.00017460656919 − 0.00168871696256i
±0.00017460656919 + 0.00168871696256i
±0.00185003885932i
±0.00220985235805i
±0.00260936445117i
±0.00015985257765 − 0.00308656938411i
±0.00015985257765 + 0.00308656938411i
±0.00340215999936i

Table 4.5: Eigenvalues of the perturbation about the fourth state
There we notice that there are three quadruples.
4.5 Bound on real part of the eigenvalues
From section 3.5 we have a bound on the real part of the eigenvalues of the perturbation
equations. In table 4.6 we compare this bound with the values obtained, and see that it is
satisfactorily conﬁrmed.
4.6 Testing the numerical method by using Runge-Kutta integration
The results obtained via the Chebyshev numerical method can be verified by using Runge-Kutta integration as in section 2.2.1; that is, we convert (3.24) into a set of first-order
[Figure: plot of the eigenvalues of the perturbation equation in the complex plane]
Figure 4.17: The eigenvalues of the perturbation about the fourth bound state
State Numerical maximum real part Bound
1 0 0.00362396540325
2 0.0013932693098 0.00157632209179
3 0.0005199479878 0.00100532361160
4 0.0002248264775 0.00073784267376
5 0.0001143754114 0.00058275894209
6 0.0000652878367 0.00048153805083
Table 4.6: Bound of the real part of the eigenvalues
ODEs:

Y_1' = Y_2, (4.11a)
Y_2' = −V_0 Y_1 − iλY_1 + R_0 Y_5, (4.11b)
Y_3' = Y_4, (4.11c)
Y_4' = −V_0 Y_3 − iλY_3, (4.11d)
Y_5' = Y_6, (4.11e)
Y_6' = 2R_0 Y_3, (4.11f)
where Y_1 = A, Y_3 = B, Y_5 = W and λ is an eigenvalue. The boundary conditions at the origin are Y_1(0) = 0, Y_3(0) = 0 and Y_5(0) = 0. For the initial conditions on Y_6, Y_4 and
[Figure: plots of A/r, B/r and W/r against r]
Figure 4.18: The first eigenvector of the perturbation about the ground state, using Runge-Kutta integration; compare with figure 4.3
Y_2, we can use the values obtained from the eigenvectors found when we solved the matrix eigenvalue problem.
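As a sketch of this procedure, the system (4.11) can be integrated with a standard fourth-order Runge-Kutta scheme. The profiles R_0 and V_0, the eigenvalue λ and the initial derivatives below are placeholder values for illustration, not the ones computed in the thesis:

```python
import numpy as np

# Placeholder stand-ins: in the thesis, R0 and V0 come from the bound
# state, and lam and the initial derivatives come from the eigenvectors
# of the matrix eigenvalue problem (4.5).
def R0(r): return np.exp(-r)
def V0(r): return -1.0 / (r + 1.0)
lam = 1e-3j

def rhs(r, Y):
    """Right-hand side of the first-order system (4.11)."""
    Y1, Y2, Y3, Y4, Y5, Y6 = Y
    return np.array([Y2,
                     -V0(r) * Y1 - 1j * lam * Y1 + R0(r) * Y5,
                     Y4,
                     -V0(r) * Y3 - 1j * lam * Y3,
                     Y6,
                     2 * R0(r) * Y3], dtype=complex)

def rk4(Y0, r0, h, nsteps):
    """Classical fourth-order Runge-Kutta integration of Y' = rhs(r, Y)."""
    r, Y = r0, np.asarray(Y0, dtype=complex)
    for _ in range(nsteps):
        k1 = rhs(r, Y)
        k2 = rhs(r + h / 2, Y + h / 2 * k1)
        k3 = rhs(r + h / 2, Y + h / 2 * k2)
        k4 = rhs(r + h, Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += h
    return Y

# A(0) = B(0) = W(0) = 0; the remaining initial derivatives are illustrative
Y = rk4([0, 1.0, 0, 1.0, 0, -0.5], 0.0, 0.01, 1000)
```

As noted below, the scheme is sensitive to the initial derivative data, which is why the eigenvector values from the matrix problem are used in practice.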
In figures 4.18 and 4.19 we plot the results of a Runge-Kutta integration of (4.11) for the first two eigenvalues of (3.24), with R_0 and V_0 corresponding to the first bound state. The first two eigenvalues are those worked out in section 4.2. We note that, except for the blowing up near the ends, which we expect since the Runge-Kutta method is sensitive to inaccuracies in the initial data, the solutions obtained by Runge-Kutta integration correspond to the eigenvectors obtained by solving (4.5).
Proceeding to do a Runge-Kutta integration on the results obtained for the perturbation about the second bound state in section 4.3, we plot the results in figures 4.20 and 4.21.
To test the perturbation about the third bound state, we proceed in the same way. The results plotted in figure 4.22 show the second eigenvector of the perturbation about the third bound state, while figures 4.23 and 4.24 show the real and imaginary parts of the third eigenvector.
4.7 Conclusion
In this chapter, we have analysed the linear stability of the stationary spherically-symmetric states using a spectral method, and have checked the results with a Runge-Kutta method.
[Figure: plots of A/r, B/r and W/r against r]
Figure 4.19: The second eigenvector of the perturbation about the ground state, using Runge-Kutta integration; compare with figure 4.4
[Figure: plots of A/r, B/r and W/r against r]
Figure 4.20: The first eigenvector of the perturbation about the second bound state, using Runge-Kutta integration; compare with figure 4.10
[Figure: plots of A/r, B/r and W/r against r]
Figure 4.21: The second eigenvector of the perturbation about the second bound state, using Runge-Kutta integration; compare with figure 4.11
[Figure: plots of A/r, B/r and W/r against r]
Figure 4.22: The second eigenvector of the perturbation about the third bound state, using Runge-Kutta integration; compare with figure 4.14
[Figure: plots of A/r, B/r and W/r against r]
Figure 4.23: The real part of the third eigenvector of the perturbation about the third bound state, using Runge-Kutta integration; compare with figure 4.15
[Figure: plots of the imaginary parts of A/r, B/r and W/r against r]
Figure 4.24: The imaginary part of the third eigenvector of the perturbation about the third bound state, using Runge-Kutta integration; compare with figure 4.16
We have checked convergence of the eigenvalues obtained by the spectral method under
increase of the number N of Chebyshev points and the radius L of the spherical grid. The
calculation of section 4.3 showed that complex eigenvalues necessarily occur in quadruples {λ, −λ, λ̄, −λ̄}, while real or purely imaginary ones occur in pairs. In the numerical calculation, we find that:
• the ground state is linearly stable, in that the eigenvalues are purely imaginary;
• the nth excited state, which is to say the (n+1)th state, has n quadruples of complex eigenvalues as well as pure-imaginary pairs;
• no purely real eigenvalues appear, and in particular no zero-modes occur.
Thus only the ground state, which we saw in section 1.2 is the absolute minimiser of
the conserved energy, is linearly stable. All other spherically-symmetric stationary states
are linearly unstable.
In section 3.4, we found that complex eigenvalues could only arise if a certain integral of the perturbed wavefunction vanished, and this is satisfactorily verified by the numerical solutions found here.
We must now turn to the nonlinear stability of the ground state, which requires a
numerical evolution of the nonlinear equations.
Chapter 5
Numerical methods for the evolution
5.1 The problem
In the next two chapters, the aim is to ﬁnd and test a numerical technique for solving (1.5)
with the restriction of spherical symmetry. We want to evolve an initial ψ long enough to see the dispersive effect of the Schrödinger equation and the concentrating effect of the gravity. In particular, we want to see how far the linearised analysis of chapters 3 and 4 gives an accurate picture.
In this chapter, we consider what the problems are in this programme, and methods for
dealing with them. Chapter 6 contains the numerical results.
Since the problem is nonlinear, we first consider the simpler problem of solving the time-dependent Schrödinger equation in a fixed potential. We need a numerical method for the evolution, and a technique for dealing with the boundary that will allow the wavefunction to escape from the grid, keeping reflection from the boundary to a minimum. This is the content of sections 5.2 to 5.4 below. In section 5.5, we consider how to test the method against an explicit solution of the zero-potential Schrödinger equation, and what we can check with a nonzero but fixed potential. In section 5.7, we face the problem
of evolving the full SN equations, which is to say evolving the potential as well as the
wavefunction. We describe an iteration for this and list some checks which we can make
on the reliability and convergence of the method. Finally, in section 5.10 we introduce
the notion of residual probability which we may describe as follows. On the basis of the
linearised calculations of chapter 3 and chapter 4, we expect the ground state to be the only
stable state. Thus we have a preliminary picture of the evolution: any initial condition will
disperse to large distances (i.e. oﬀ the grid) leaving a remnant at the origin consisting of a
ground state rescaled as in subsection 1.2.2 and with total probability less than one. If the
initial condition is a state with negative energy and this preliminary picture is sound then
we can estimate the residual probability remaining in this rescaled ground state.
5.2 Numerical methods for the Schrödinger equation with arbitrary time-independent potential
We consider numerical methods that solve the 1-dimensional Schrödinger equation with given initial data, that is, solving

i ∂ψ/∂t = −∂²ψ/∂x² + ψφ. (5.1)
We note that Numerical Recipes in Fortran [20] and Goldberg et al [9] use a numerical method to solve the Schrödinger equation which, they say, needs to preserve the Hamiltonian structure of the equation; equivalently, the numerical method must be time-reversible. To see why, consider the case of an explicit method given by:

i (ψ^{n+1} − ψ^n)/δt = −D_2(ψ^n) + φψ^n, (5.2)
where ψ^n is the wavefunction at the nth time step and D_2 is the approximation to the second-derivative operator; for example, D_2 can be a finite-difference or a Chebyshev differentiation matrix. Now we consider what happens to the mode e^{ikx}, for which

D_2(e^{ikx}) = −k² e^{ikx}. (5.3)
Letting ψ^{n+1} = λe^{ikx} and substituting into (5.2), we get

λ = 1 − iδt(k² + φ). (5.4)

So here we have an amplification factor λ which, provided k² + φ ≠ 0, has |λ| > 1, and this method leads to a growth in the normalisation of the mode. If we used an implicit
method instead, that is,

i (ψ^{n+1} − ψ^n)/δt = −D_2(ψ^{n+1}) + φψ^{n+1}, (5.5)

we would get an amplification factor of

λ = 1/(1 + iδt(k² + φ)). (5.6)

So in this case we note that |λ| < 1 provided k² + φ ≠ 0; that is, this method leads to decaying modes.
We could consider renormalising the wavefunction as the numerical method progresses, but this would have several drawbacks. At each time step the modes are amplified by different amounts, so the ratios between different modes will change. Also, for the SN equations renormalisation would require rescaling the space axis, which would need an interpolation at each step.
To get a numerical method that preserves the constants of the evolution (namely the normalisation) we need to use a Crank-Nicolson method. With this method the discretisation of the Schrödinger equation is:

i (ψ^{n+1} − ψ^n)/δt = −(1/2)(D_2(ψ^{n+1}) + D_2(ψ^n)) + (φ/2)(ψ^{n+1} + ψ^n), (5.7)
which has an amplification factor of

λ = (1 − (1/2)iδt(k² + φ)) / (1 + (1/2)iδt(k² + φ)), (5.8)

which is unitary (|λ| = 1), since it is a number divided by its complex conjugate.
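The three amplification factors (5.4), (5.6) and (5.8) are easily compared numerically; the values of k, φ and δt below are arbitrary illustrative choices:

```python
import numpy as np

k, phi, dt = 2.0, 0.5, 0.01
theta = dt * (k**2 + phi)

lam_explicit = 1 - 1j * theta                          # equation (5.4)
lam_implicit = 1 / (1 + 1j * theta)                    # equation (5.6)
lam_cn = (1 - 1j * theta / 2) / (1 + 1j * theta / 2)   # equation (5.8)

# explicit grows, implicit decays, Crank-Nicolson is exactly unitary
print(abs(lam_explicit), abs(lam_implicit), abs(lam_cn))
```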
Since the boundary conditions are straightforward, a spectral method (a Chebyshev differentiation matrix) is found to give better results for less computation time than a finite-difference method.
5.3 Conditions on the time and space steps
We note that the Crank-Nicolson method may be unitary, but that does not mean that the phase is correct. Consider the case of the linear Schrödinger equation in one dimension with φ = 0, which has solutions of the form ψ_k(x, t) = e^{−ik²t} e^{ikx}. The wave evolves with a change of phase factor λ:

λ = (1 − (1/2)iδtk²) / (1 + (1/2)iδtk²), (5.9)
which is just the factor obtained in (5.8). Expanding λ in powers of δt we have:

λ = (1 − (i/2)δtk² − (1/4)δt²k⁴ + (i/8)δt³k⁶ + . . .)(1 − (i/2)δtk²)
  = 1 − iδtk² − (1/2)δt²k⁴ + (i/4)δt³k⁶ + . . . , (5.10)
while the actual phase factor should be:

e^{−ik²δt} = 1 − iδtk² − (1/2)δt²k⁴ + (i/6)k⁶δt³ + h.o.t. (5.11)
So to make the error small we require k⁶δt³/12 to be small, as in [9]. The phase factor is then correct, but only to this order.
Now suppose that ψ is a stationary-state solution of the Schrödinger equation with potential. We expect the solution to evolve like ψ^n = e^{−iEt} ψ^0, where E is the energy of the stationary solution. The discretised method for the Schrödinger equation is:

i (ψ^{n+1} − ψ^n)/δt = (1/2)[−D_2(ψ^{n+1}) − D_2(ψ^n) + φ(ψ^{n+1} + ψ^n)]. (5.12)
Now since ψ^n and ψ^{n+1} are stationary states we know that

−D_2(ψ^N) + φψ^N = Eψ^N, (5.13)

for any N. Substitution into the method gives

i (ψ^{n+1} − ψ^n) = (δtE/2) ψ^{n+1} + (δtE/2) ψ^n, (5.14)
and so we obtain

ψ^{n+1} = (1 + iδtE/2)^{−1} (1 − iδtE/2) ψ^n. (5.15)

Expanding out, with |δtE/2| < 1, we have

ψ^{n+1} = [1 − iδtE + (−iδtE)²/2! + (−iδtE)³/4 + O(δt⁴)] ψ^n. (5.16)
In order that the phase is calculated correctly we require that the error due to the term δt³E³/12 be small.
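The third-order phase error can be confirmed numerically: halving δt should reduce the phase error of the Crank-Nicolson factor by a factor of about 8, and the error itself should be close to k⁶δt³/12 (the values of k and δt here are arbitrary):

```python
import numpy as np

def cn_phase_error(k, dt):
    """Phase error of the Crank-Nicolson factor (5.9) against exp(-i k^2 dt)."""
    theta = k**2 * dt
    lam = (1 - 1j * theta / 2) / (1 + 1j * theta / 2)
    return abs(np.angle(lam) + theta)   # the exact phase is -theta

e1 = cn_phase_error(1.0, 0.1)
e2 = cn_phase_error(1.0, 0.05)
print(e1 / e2)   # close to 2^3 = 8, since the error is third order in dt
```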
5.4 Boundary conditions and sponges
The boundary conditions which we want to impose on the Schrödinger equation are that the wavefunction must remain finite as r tends to zero, and that the wavefunction must tend to zero as r tends to infinity, since otherwise the wavefunction would not be normalisable. To try to impose these boundary conditions numerically we can solve the Schrödinger equation for χ = rψ, in which case it becomes:

iχ_t = −χ_rr + φχ. (5.17)
The boundary condition at r = 0 is just a matter of setting χ = 0 for the ﬁrst point in the
domain. The other boundary condition can be approximated in one of the following ways:
• We can, at a given r = R say, set χ equal to the value of rψ, where ψ is the actual analytic solution. The problem with this is that only in a few cases will we know the actual solution.
• We set the condition that χ = 0 at r = R for R large so that the other boundary
condition is approximated. The problem with this boundary condition is that it will
reﬂect all the outward going probability and send it back towards the origin.
• The other thing we can do to reduce the effect of probability bouncing back is to introduce a sponge factor at r = R, where R is large. That is, we consider, instead of the Schrödinger equation, the equation

(i + e(r))χ_t = −χ_rr + φχ, (5.18)

where the function e(r) is strictly negative (of one sign), so that the equation acts like a heat equation, reducing the probability which is assumed to be heading off the grid. There will also be a reduction in the normalisation integral. For example we take

e(r) = −e^{0.1(r−R)}, (5.19)

which gives a smooth function that only has an effect near the boundary, since for small r the sponge factor effectively vanishes. There will still be some reflection off the sponge, but this is a smaller effect.
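A minimal finite-difference illustration of the sponge, assuming the modified equation (5.18) with φ = 0 and the sponge shape used later in chapter 6; the grid, time step and packet parameters here are illustrative choices, not those used in the thesis:

```python
import numpy as np

R, N, dt = 50.0, 400, 0.05
r = np.linspace(R / N, R, N)
dr = r[1] - r[0]

# L approximates -d^2/dr^2 with chi = 0 at r = 0 and r = R
L = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / dr**2

e = -2 * np.exp(0.2 * (r - R))           # sponge factor, cf. chapter 6

# Crank-Nicolson for (i + e(r)) chi_t = -chi_rr:
# [diag(i+e) - (dt/2) L] chi^{n+1} = [diag(i+e) + (dt/2) L] chi^n
A = np.diag(1j + e) - dt / 2 * L
B = np.diag(1j + e) + dt / 2 * L
M = np.linalg.solve(A, B)                # one-step transfer matrix

v, a, sigma = 2.0, 35.0, 3.0             # outward-moving Gaussian shell
chi = np.exp(-(r - a)**2 / (2 * sigma**2)) * np.exp(1j * v * r / 2)
norm0 = np.sum(np.abs(chi)**2) * dr
for _ in range(300):                     # evolve to t = 15
    chi = M @ chi
norm1 = np.sum(np.abs(chi)**2) * dr
print(norm1 / norm0)                     # fraction of probability remaining
```

With e = 0 the transfer matrix is unitary and the norm is preserved; with the sponge switched on, the outgoing packet is largely absorbed rather than reflected.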
5.5 Solution of the Schrödinger equation with zero potential
To check that our numerical method is functioning correctly, we test it in the case of spherical symmetry, with the same boundary conditions, against a solution which is known analytically. This also gives a check on the boundary conditions.
In the spherically symmetric case with zero potential a solution to the Schrödinger equation is:
rψ = χ = (C√σ / (σ² + 2it)^{1/2}) [ exp(−(r − vt − a)²/(2(σ² + 2it)) + ivr/2 − iv²t/4)
                                    − exp(−(r + vt + a)²/(2(σ² + 2it)) − ivr/2 − iv²t/4) ]; (5.20)

we shall call the t = 0 form of this a moving Gaussian shell. Here we have the boundary conditions χ = 0 at r = 0 and χ = 0 at r = ∞. This is a wave bump "moving" at a velocity v, starting at a distance a from the origin. C is chosen such that the wavefunction is normalised; that is:
1 = C²√π [1 − exp(−a²/σ² − v²σ²/4)]. (5.21)
We note that C will not exist when exp(−a²/σ² − v²σ²/4) = 1, which is when v = 0 and a = 0, but in this case we have the limiting solution:
rψ = χ = (Ar / (σ² + 2it)^{3/2}) exp(−r²/(2(σ² + 2it))), (5.22)

where A is such that

A² = 2σ³/√π. (5.23)
Note that the solution (5.20) will in the long term tend to zero everywhere, since there is
no gravity or other forces to stop the natural dispersion of the wave equation.
We note that we cannot numerically model the boundary condition at r = ∞ so we
can do the following things to check the numerical calculation of the solution, and also to
approximate the eﬀect of the boundary condition at inﬁnity:
• Check that normalisation is preserved, which should be the case since the numerical
method preserves this, when there are no sponge factors or when there is no probability
moving oﬀ the grid.
• We can set the boundary value at r = R to be the value of the analytical solution at
that point. We note that this will only work for the case where the analytical solution
is known; that is, it can only be done for testing.
• We can set the boundary condition to be χ = 0 at r = R where R is chosen so that it
is large compared to the initial data. This method will cause the wave to reﬂect back
oﬀ the boundary.
• We can use sponge factors, that is, change the equation so that it becomes a type of heat equation, where the heat is absorbed at the boundary. We note that this will reduce the wave reflected from the boundary, but that normalisation will not be preserved.
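As a sanity check on the limiting solution (5.22), we can verify by finite differences that it satisfies iχ_t = −χ_rr; the grid, time and value of σ used here are arbitrary:

```python
import numpy as np

sigma = 2.0
A = np.sqrt(2 * sigma**3 / np.sqrt(np.pi))   # from (5.23)

def chi(r, t):
    """The limiting solution (5.22)."""
    s = sigma**2 + 2j * t
    return A * r * s**-1.5 * np.exp(-r**2 / (2 * s))

dr, dt, t = 0.01, 1e-4, 0.3
r = np.arange(0.5, 6.0, dr)

chi_t = (chi(r, t + dt) - chi(r, t - dt)) / (2 * dt)   # central difference in t
c = chi(r, t)
chi_rr = (c[2:] - 2 * c[1:-1] + c[:-2]) / dr**2        # central difference in r

residual = 1j * chi_t[1:-1] + chi_rr                   # should vanish
print(np.max(np.abs(residual)))
```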
5.6 Schrödinger equation with a trial fixed potential
to time. In this case we know that the energy E is preserved where E is given by
E = T +V, (5.24)
where
T =
∇ψ
2
d
3
x, (5.25)
51
and
V =
φψ
2
d
3
x, (5.26)
with the choice of potential φ =
1
(r+1)
we can work out the bound states, that is we can
solve the eigenvalue problem
∇
2
ψ +φψ = Eψ (5.27)
This gives us eigenvalues E_n, say, with eigenfunctions ψ_n, where the ψ_n are independent of time. We know by general theory that

∫ ψ_n ψ dr = c_n e^{iE_n t}, (5.28)

where ψ is a general wavefunction and c_n is a constant.
In summary we can check in this case,
• Normalisation.
• Energy of the wavefunction.
• The inner products with respect to the bound states, to see that they have constant modulus and that the phase changes correctly.
5.7 Numerical evolution of the SN equations
The method that Bernstein et al [4] use for the numerical evolution of the wavefunction of the SN equations consists of a Crank-Nicolson method for the time-dependent Schrödinger equation and an iterative procedure to obtain the potential at the next time step, the first guess being the current potential. They transform the SN equations in the spherically symmetric case into the simpler form:
i ∂u/∂t = −∂²u/∂r² + φu, (5.29)

d²(rφ)/dr² = |u|²/r, (5.30)
where u = rψ.
Then the following procedure is used to calculate the wavefunction and potential at the
next time step:
1. Choose φ^{n+1} = φ^n as an initial guess for φ at the (n + 1)th time step.
2. Calculate u^{n+1} from the discretised version of the Schrödinger equation, which is

2i (u^{n+1} − u^n)/δt = −D_2(u^{n+1}) − D_2(u^n) + [φ^{n+1}u^{n+1} + φ^n u^n], (5.31)

where this equation is solved on the region r ∈ [0, R] with R large, and where there are no sponge factors. These could be added to stop outgoing waves being reflected from the imposed boundary condition at r = R.
3. Calculate Φ, the solution of the potential equation for u^{n+1}; that is, the Φ that solves:

d²(rΦ)/dr² = |u^{n+1}|²/r. (5.32)
4. Calculate U from the discretised version of the Schrödinger equation with Φ as the potential at the (n + 1)th step, that is:

2i (U − u^n)/δt = −D_2(U) − D_2(u^n) + [ΦU + φ^n u^n], (5.33)

where the boundary conditions are the same as in step 2.
5. Consider ‖U − u^{n+1}‖; if it is less than a certain tolerance, stop; otherwise take φ^{n+1} = Φ and continue with step 2.
This outline of a general method is independent of the Poisson solver and of the numerical method used to evolve the Schrödinger equation; that is, we could use either a finite-difference method or spectral methods. We note that Crank-Nicolson is second-order accurate in time and preserves the normalisation. This is the reason why we need to work out the potential at the (n + 1)th time step, instead of just using an implicit method in the potential.
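A minimal finite-difference sketch of the iteration above might look as follows. The grid size, time step and initial data are illustrative, not the values used in the thesis, and the Poisson step uses the explicit solution of (5.30) with φ → 0 at infinity:

```python
import numpy as np

R, N, dt = 50.0, 300, 0.1
r = np.linspace(R / N, R, N)            # interior points; u(0) = u(R) = 0
dr = r[1] - r[0]

# second-difference approximation D2 to d^2/dr^2
D2 = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dr**2

def poisson(u):
    """Solve d^2(r phi)/dr^2 = |u|^2 / r for phi, with phi -> 0 at infinity."""
    rho = np.abs(u)**2
    inner = np.concatenate(([0.0], np.cumsum((rho[1:] + rho[:-1]) / 2 * dr)))
    g = rho / r
    outer = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * dr)))
    return -inner / r - (outer[-1] - outer)

def step(u, phi, tol=1e-10, maxit=50):
    """One time step of (5.31), iterating the potential (steps 1-5)."""
    phi_guess = phi
    rhs = ((2j / dt) * np.eye(N) - D2 + np.diag(phi)) @ u
    for _ in range(maxit):
        u_new = np.linalg.solve((2j / dt) * np.eye(N) + D2 - np.diag(phi_guess), rhs)
        Phi = poisson(u_new)
        U = np.linalg.solve((2j / dt) * np.eye(N) + D2 - np.diag(Phi), rhs)
        if np.max(np.abs(U - u_new)) < tol:
            break
        phi_guess = Phi
    return U, Phi

u = r * np.exp(-r**2 / 8)               # smooth illustrative initial data
u = u / np.sqrt(np.sum(np.abs(u)**2) * dr)
phi = poisson(u)
norm0 = np.sum(np.abs(u)**2) * dr
for _ in range(5):
    u, phi = step(u, phi)
norm1 = np.sum(np.abs(u)**2) * dr
print(norm1)                             # normalisation should be nearly preserved
```

The same outline works with a Chebyshev differentiation matrix in place of the finite-difference D2, as used in the thesis.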
5.8 Checks on the evolution of SN equations
We recall from section 1.2 that for the SN equations the energy is no longer conserved, but the action I = T + (1/2)V is. We also note that if ψ corresponds to a bound state with energy eigenvalue E_n then

I = (1/3) E_n. (5.34)

Thus it is useful to call 3I the conserved energy E_I. For the SN equations in general we can check:
• Preservation of the normalisation, at least when it is not aﬀected by the boundary
condition due to the sponge factors.
• Preservation of the action I = T + (1/2)V, at least when not affected by the sponge.
• That the change in the potential function satisfies φ_t = i ∫₀^r (ψ̄ψ_r − ψψ̄_r) dr, at least in the case of spherical symmetry.
In the case of evolving about a stationary state we can also check:
• That the phase factor evolves at a constant rate given by the energy of the bound state, at least for a while.
5.9 Mesh dependence of the methods
To be sure that the evolution method produces a valid solution, up to a degree of numerical error, we need to show that as the mesh and time steps decrease the method converges to a solution. In the case of the finite-difference method we expect the errors to be second order in the time and space steps, since the truncation error for the linear Schrödinger equation alone is second order. We also know that the method is unconditionally stable; that is, there is no requirement on the ratio Δt/Δx², so we can refine the mesh in both directions independently and expect convergence. To test for this convergence we can use Richardson extrapolation. Consider a case where we expect the error to behave like

O_h ∼ I + Ah^p, (5.35)
where O_h is the calculated value with space or time step size h, I the actual value and p the order of the error. To obtain a function of p alone we can calculate

(O_{h₁} − O_{h₃}) / (O_{h₂} − O_{h₃}), (5.36)
where h₁, h₂ and h₃ are three different values of the step size. We note that in the case of spectral methods we do not expect the method to be second order in space, but we do still expect it to be second order in time.
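Richardson extrapolation is easily illustrated with the trapezoidal rule, whose error is second order; with step sizes h, h/2 and h/4, the ratio (5.36) should be close to 2² + 1 = 5, from which p can be recovered. The integrand below is an arbitrary example:

```python
import numpy as np

def trap(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

O1 = trap(np.exp, 0.0, 1.0, 10)    # step h
O2 = trap(np.exp, 0.0, 1.0, 20)    # step h/2
O3 = trap(np.exp, 0.0, 1.0, 40)    # step h/4
ratio = (O1 - O3) / (O2 - O3)      # equation (5.36)
p = np.log2(ratio - 1)             # recovered order of the error
print(ratio, p)
```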
5.10 Large time behaviour of solutions
For the Schrödinger equation with a fixed potential, the general solution will consist of a linear combination of bound states and a scattering state. For large times, the scattering state will disperse, leaving the bound states. For the SN equations, we have seen in chapter 4
that the bound states apart from the ground state are all linearly unstable, and we shall see
the same thing for nonlinear instability in chapter 6. Consequently, we might conjecture
that the solution at large values of time for the SN equations with arbitrary initial data
would be a combination of a multiple of the ground state, rescaled as in subsection 1.2.2 to
have total probability less than one, together with a scattering state containing the rest of
the probability but which disperses. We shall see support for this conjecture in chapter 6.
Assuming the truth of this picture, if the conserved energy is negative initially we can
obtain a bound for the probability remaining in the multiple of the ground state.
What we mean by a multiple of the ground state is the following: if ψ_0(r) is the ground state, then consider ψ_{0α}(r) = α²ψ_0(αr). It follows from the transformation rule for the SN equations (subsection 1.2.2) that this is a stationary state, but with

∫₀^∞ |ψ_{0α}|² = α. (5.37)
We also note that

E_{0α} = α³ E_0. (5.38)
So if we start with a negative initial value E_I of the conserved energy, then we claim

E_I = E_S + α³E_0, (5.39)

for some α, where E_S is the scattered energy and α³E_0 is the energy due to the remaining
probability near the origin. Since E_0 is negative, we then have

α³ = (E_S + |E_I|)/|E_0|, (5.40)

and since the scattered energy is positive, this leads to the inequality

α³ > |E_I|/|E_0|. (5.41)
Chapter 6
Results from the numerical
evolution
6.1 Testing the sponges
To test the ability of the sponge factor to absorb an outward-going wave, we use as initial condition an outward-going wave from section 5.5: the solution (5.20) at t = 0,
[Figure: surface plot of |rψ| against r and time, titled "Lump moving outwards"]
Figure 6.1: Testing the sponge factor with wave moving oﬀ the grid
[Figure: surface plot of |rψ| against r and time, titled "Lump moving outwards"]
Figure 6.2: Testing the sponge factor with a wave moving off the grid
rψ = (C√σ / (σ² + 2it)^{1/2}) [ exp(−(r − vt − a)²/(2(σ² + 2it)) + ivr/2 − iv²t/4)
                               − exp(−(r + vt + a)²/(2(σ² + 2it)) − ivr/2 − iv²t/4) ]. (6.1)
Here v is chosen so that this corresponds to an outward-going wave, and is sufficiently large that the wavefunction will scatter; C is the normalisation factor, and a and σ determine the position and shape of the distribution.
In figure 6.1 we plot a Gaussian shell moving off the grid; here the sponge factor is −10 exp(0.5(r − R_end)). This sponge factor gives a gradual increase toward the boundary, but we notice that there is some reflection off the boundary.
In figure 6.2 we plot a Gaussian shell moving off the grid; here the sponge factor is −2 exp(0.2(r − R_end)). We note that this sponge factor works better than the one used for figure 6.1, since it reflects less back from the boundary.
6.2 Evolution of the ground state
The ground state is a stationary state and is linearly stable by chapter 4, and we expect it to be nonlinearly stable for small perturbations, since it has the lowest energy. We note that, due to inaccuracy in the calculation of the ground state and the numerical error that will be
[Figure: surface plot of the phase against r and time, titled "evolution of the angle, starting with the ground state"]
Figure 6.3: The graph of the phase angle of the ground state as it evolves
made in the evolution, the numerical evolution will actually be the evolution of the ground
state with a small perturbation, but this shouldn’t make any diﬀerence to the overall end
result here.
Since the ground state is a solution of the time-independent SN equations, we expect a constant rate of change of phase, corresponding to the energy of the state, while the absolute value of the wavefunction should be constant. We do indeed get a constant phase rate (see figure 6.3), and we can check that it has the correct value by calculating the energy of the wavefunction.
We can also check the conservation of energy and the conservation of normalisation.
The next thing to try is the long-time evolution of the ground state. We evolve the ground state for a long time, about 10⁵ in these time units, which is about 5 × 10⁵ time steps. We find a growing oscillation which reaches a bound and stabilises at a fixed amplitude; see figures 6.4, 6.5.
By adjusting the time step, we can reduce the amplitude of the limiting oscillation. We can also adjust the sponge factor to absorb more at the boundary, which reduces the amplitude of the oscillation. We therefore conclude that this oscillation about the ground state is introduced by numerical error. We also deduce that the errors arise in the long tail of the ground state, where the wavefunction is approximately zero: a small error there has a large effect on the normalisation and the energy, so causing more error close to
[Figure: plot of |rψ| near the origin against time, showing the boundedness of the error when evolving about the ground state]
Figure 6.4: The oscillation about the ground state
[Figure: surface plot of |rψ| against r and time, titled "evolution about ground state"]
Figure 6.5: Evolution of the ground state
[Figure: plot of the eigenvalues of the perturbation equation in the complex plane]
Figure 6.6: The eigenvalue associated with the linear perturbation about the ground state
the origin.
Now we want to consider an evolution of the ground state with deliberately added perturbations, the aim being to see whether we can induce finite oscillations about the ground state. We expect these to be at the frequencies determined by the first-order linear perturbation about the ground state, and the corresponding higher-order linear perturbations, which are all imaginary.
We take a ground state, add to it an exponential distribution, and evolve. Then we can Fourier transform the absolute value of the wavefunction near the origin to obtain the different frequencies that make up the oscillation. We can compare these observed frequencies with those obtained in chapter 4 from the first-order linear perturbation.
We use in our evolution method N = 50 Chebyshev points and a time step dt = 1, obtaining n = 170,000 sampling points. With this number of sampling points and this time step we should be able to obtain the frequencies to an accuracy of 2π/(n dt) ≈ 3.7E−5. In figure 6.7 we plot a section of the oscillation about the ground state at a given point. In figure 6.8 we plot the Fourier transform of the oscillation about the ground state at a given point. In table 6.1 we present the frequencies observed (see figure 6.8) together with those obtained in chapter 4, which are plotted in figure 6.6.
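The frequency-extraction step can be illustrated on synthetic data: a constant plus small oscillations at two of the linearised frequencies from table 6.1 (the amplitudes are invented), recovered by FFT to within the stated resolution:

```python
import numpy as np

n, dt = 170_000, 1.0
t = np.arange(n) * dt
w1, w2 = 0.0341, 0.0603             # two linearised frequencies, table 6.1
signal = 0.258 + 1e-3 * np.cos(w1 * t) + 5e-4 * np.cos(w2 * t)

spec = np.abs(np.fft.rfft(signal - signal.mean()))
omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)   # angular frequencies

peak = omega[np.argmax(spec)]        # dominant observed frequency
print(peak)                           # close to w1
```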
[Figure: plot of |rψ| at r = 3.6 against time, evolving about the ground state]
Figure 6.7: The oscillation about the ground state at a given point
[Figure: plot of log(Fourier transform) against log(frequency) for the oscillation about the ground state]
Figure 6.8: The Fourier transform of the oscillation about the ground state
Linearised frequencies    Observed frequencies    log of size of peak
±0.0341 0.0341 −8.300
±0.0603 0.0602 −7.632
±0.0688 0.0687 −8.265
±0.0731 0.0730 −9.038
±0.0765 0.0764 −11.603
±0.0810 0.0810 −13.431
±0.0867 0.0866 −14.466
±0.0934 0.0943 −13.102
±0.1012 0.1028 −14.377
±0.1099 0.1071 −15.115
±0.1196 0.1204 −16.322
Table 6.1: Linearised eigenvalues of the ﬁrst state and observed frequencies about it, n =
170, 000
6.3 Evolution of the higher states
6.3.1 Evolution of the second state
We note that, whether or not the higher stationary states are nonlinearly stable, they should evolve for some time like stationary states, before numerical errors grow sufficiently to make them decay.
For the evolution of the second state we ﬁnd that we have again the correct phase factor
(see ﬁgure 6.9).
There are a few issues to look at with the evolution of the higher stationary states, when the states are evolved for long enough.
1. Do they decay, when perturbed, into some scatter and some multiple of the ground state? A numerical calculation of a stationary state is only correct up to numerical error, so if the state is nonlinearly unstable it is likely to decay simply from being evolved.
2. How long does the state take to decay?
3. How much probability is left in the ground state?
4. Is this above the estimate of section 5.10, based on the assumption that the scattered energy is positive?
As we can see in ﬁgure 6.10 the second state is decaying, emitting scatter and leaving
something that looks like a multiple of the ground state.
[Figure: surface plot of the phase against r and time, titled "evolution of the angle, starting with the second state"]
Figure 6.9: The graph of the phase angle of the second state as it evolves
[Figure: surface plot of |rψ| against r and time for the long-time evolution of the second state]
Figure 6.10: The long time evolution of the second state in Chebyshev methods
We can see in figure 6.11 the evolution of the second state. This is decaying into something that looks like a multiple of the ground state, but here we have used a lower tolerance in the computation, and the result is that the second state takes longer to decay, corresponding to the error growing at a smaller rate.
As in the case of the ground state we can consider rψ at a particular point to obtain
a time series. The resulting time series divides up into three sections. The ﬁrst section is
the evolution of the second stationary state, that is the wavefunction is just a perturbation
about the second state. The next section is the part where it decays into the ground state
emitting scatter oﬀ the grid. The ﬁnal section corresponds to a “multiple” of the ground
state with a perturbation about it.
The first section has an exponentially growing oscillation, which we expect to be the unstable growing mode obtained in the linear perturbation theory (chapter 4). To confirm that this is the growing mode, we modify the function so that we have just the oscillation with no growing exponential. To remove the growing exponential we consider:
f(t) = (rψ(r_1, t) − rψ(r_1, 0)) / exp(Re(λ)t) + rψ(r_1, 0), (6.2)
where λ is the eigenvalue with nonzero real part obtained in solving (3.24) about the second state, and r_1 is the point, near the origin, at which the time series was obtained. We define A = rψ(r_1, 0). We note that f(t) − A is almost a pure oscillation, and we plot this in figure 6.12.
We then Fourier transform f(t) − A (see figure 6.13) and obtain approximately one frequency, at 0.0095, approximately that of the growing mode in the linear perturbation theory. Here we used n = 6000 sampling points and a time step dt = 1. The growing mode provides information about the time taken for the state to decay, since the perturbation has to be sufficiently large before the nonlinear effects cause the decay of the wavefunction.
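The de-trending and transform described above can be sketched as follows. The time series here is synthetic (the eigenvalue, amplitude and sample value are illustrative, not the thesis data), but it shows how dividing by exp(Re(λ)t) as in (6.2) exposes the oscillation frequency of the growing mode:

```python
import numpy as np

lam = 0.001                      # assumed Re(lambda) of the unstable mode
omega = 2 * np.pi * 0.0095       # oscillation frequency of the growing mode
A = 0.2                          # r*psi at the sample point at t = 0

t = np.arange(0.0, 6000.0, 1.0)  # n = 6000 samples, dt = 1
series = A + 1e-6 * np.exp(lam * t) * np.cos(omega * t)  # synthetic r*psi(r_1, t)

f = (series - A) / np.exp(lam * t) + A                   # equation (6.2)

# Fourier transform f(t) - A and locate the dominant frequency.
spectrum = np.abs(np.fft.rfft(f - A))
freqs = np.fft.rfftfreq(len(t), d=1.0)
peak = freqs[np.argmax(spectrum[1:]) + 1]                # skip the DC bin
print(round(peak, 4))                                    # prints 0.0095
```

Without the division by the exponential envelope, the spectral peak is smeared over many bins, which is why the de-trending step is needed before the transform.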
We shall ignore the second section of the time series, since it is neither a perturbation about the second state nor a perturbation about the ground state.
We expect the third section to be an oscillation about a "multiple" of the ground state, and so we should obtain frequencies of oscillation, though the linear perturbation frequencies should be transformed to account for the difference in normalisation. What we find is that the normalisation is still decreasing, so we are unable to obtain any consistent frequencies.
We plot the resulting probability in the region in figure 6.14, together with the bound on the action from section 5.10. In figure 6.14 the normalisation and the action bound seem to be converging toward the same value.
Figure 6.11: Evolution of the second state, tolerance approximately 1E−9.

Figure 6.12: f(t) − A against time.
Figure 6.13: Fourier transformation of f(t) − A (log–log axes).

Figure 6.14: Decay of the second state: probability (norm and action bound) against time.
Figure 6.15: The short time evolution of the second bound state (|rψ| against r and time).
Although we have found the second state to be unstable as expected, growing initially with an exponential mode, we can obtain more of the linear perturbation modes. We consider the evolution of the second state for a short time with an added perturbation to see if it gives any frequencies, and then check whether these agree with the linear perturbation theory. We note that since we can only evolve for a short time before the state decays, the accuracy will be low. In figure 6.15 we plot the evolution of the second state with an added perturbation, and in figure 6.16 we plot the oscillation about the second state at a fixed radius.
In table 6.2 we present the observed frequencies about the second state compared with
the eigenvalues of the linear perturbation. We note that the accuracy of the observed
frequencies here is 0.002.
In figure 6.17 we plot the Fourier transform of the second state with the added perturbation; note that the graph is not smooth owing to the low accuracy.
6.3.2 Evolution of the third state
Now we consider the evolution of the third stationary state. In figure 6.18 we plot the evolution of the third state; as expected, we see that it is unstable and decays, emitting scatter and leaving a "multiple" of the ground state.
Figure 6.16: Oscillation about the second state at a fixed radius (|rψ| at a fixed point against time).

Figure 6.17: Fourier transform of the frequency oscillation about the second state (log–log axes).
linearised eigenvalues    frequencies
0.0000
0 − 0.0030i
0 − 0.0086i
−0.0014 − 0.0100i
0.0014 + 0.0100i          0.0114
0 − 0.0153i
0 − 0.0211i               0.0190
0 − 0.0276i               0.0266
0 − 0.0351i               0.0342
0 − 0.0434i               0.0416
0 − 0.0526i               0.0530
0 − 0.0627i               0.0643
0 − 0.0737i               0.0726
0 − 0.0855i               0.0853
0 − 0.0982i               0.0987
0 − 0.1118i               0.1123
0 − 0.1263i               0.1247
0 − 0.1417i               0.1419
0 − 0.1579i               0.1577

Table 6.2: Linearised eigenvalues of the second state and the observed frequencies about it.
We can again consider the perturbation about the third state initially, before it decays, as with the second state, and we find that it is exponentially growing. We now transform as in (6.2), but with λ now being the eigenvalue of the linear perturbation about the third state with the maximum real part. In figure 6.19 we plot f(t) − A; we then Fourier transform f(t) − A (see figure 6.20) and obtain two frequencies, at 0.0022 and 0.0050, which correspond to the imaginary parts of the two complex eigenvalues of the linear perturbation.
We plot the resulting probability in the region in figure 6.21, together with the bound on the action from section 5.10. In figure 6.21 the normalisation and the action bound seem to be converging toward the same value.
6.4 Evolution of an arbitrary spherically symmetric Gaussian shell

In this section we consider the evolution of Gaussian shells or lumps (5.20), varying the three associated parameters: the mean velocity v, the rate at which the shell expands outwards (we note the centre of mass is not moving); the mean position a, the radial position of the peak of the shell; and the width σ of the shell. Since varying these parameters produces varied initial conditions, we can perform a mesh analysis and a time step analysis on the evolution
Figure 6.18: Evolution of the third state (|rψ| against r and time).

Figure 6.19: f(t) − A with respect to the third state.
Figure 6.20: Fourier transform of the growing mode about the third state (log–log axes).

Figure 6.21: Probability remaining in the evolution of the third state (norm and action bound against time).
Figure 6.22: Progress of the evolution with different velocities and times: remaining probability against v at times 750–2500 (σ = 6, a = 50, N = 40).
as outlined in section 5.9. We also look at how much of the wavefunction is left at the origin, which we expect to form a "multiple" of the ground state in the sense of section 5.10. We expect that the more energy there is in the system to begin with, the less energy, and therefore probability, will eventually remain near the origin.
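A spherically symmetric Gaussian-shell initial condition in the spirit of (5.20) can be sketched as below. The exact expression (5.20) is not reproduced in this chapter, so the form here is an assumption: a radial Gaussian of width σ centred at r = a, with a phase factor exp(ivr) giving the shell a mean radial velocity v.

```python
import numpy as np

def gaussian_shell(r, dr, a=50.0, sigma=6.0, v=0.0):
    # Assumed shell profile: Gaussian in r about a, radial phase velocity v.
    psi = np.exp(-(r - a) ** 2 / (2 * sigma ** 2)) * np.exp(1j * v * r)
    # Normalise the total probability 4*pi * integral |psi|^2 r^2 dr to 1.
    norm = 4 * np.pi * np.sum(np.abs(psi) ** 2 * r ** 2) * dr
    return psi / np.sqrt(norm)

dr = 0.05
r = np.arange(dr, 150.0, dr)
psi0 = gaussian_shell(r, dr, a=50.0, sigma=6.0, v=0.1)
total = 4 * np.pi * np.sum(np.abs(psi0) ** 2 * r ** 2) * dr
print(round(total, 10))   # prints 1.0
```

Varying a, σ and v in this constructor gives the family of initial conditions whose evolution is surveyed in the figures below.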
6.4.1 Evolution of the shell while changing v
In ﬁgure 6.22 we plot the remaining probability at diﬀerent times and at diﬀerent initial
mean velocities v, while a and σ are constant initially. We notice that the negative values
of v correspond to an outward going wavefunction moving oﬀ the grid and a positive v is
an inward going wavefunction. We also notice that the graph is not symmetric about v = 0
even though the wavefunction energy is symmetric about v = 0.
We compare the difference between the different time steps and the dt = 1 result in figure 6.23 and note that the difference is small. We calculate the "Richardson fraction" for different values of the h_i and plot the different Richardson fractions in figure 6.24.
We also show in table 6.3 the mean calculated value against the values expected for
quadratic convergence in time. Since the values are close to the expected values we are
justiﬁed in supposing that the convergence is quadratic in the time step.
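For a method converging at order p in the time step, the computed value u_h satisfies u_h ≈ u + C h^p, so the fraction (u_{h1} − u_{h3})/(u_{h2} − u_{h3}) tends to (h1^p − h3^p)/(h2^p − h3^p). The sketch below reproduces the expected-value column of table 6.3 for p = 2; the function name is mine, and this particular combination of the three step sizes is inferred from the tabulated values rather than quoted from section 5.9.

```python
def expected_fraction(h1, h2, h3, p=2):
    # Limit of (u_h1 - u_h3)/(u_h2 - u_h3) when the error behaves like C*h**p.
    return (h1 ** p - h3 ** p) / (h2 ** p - h3 ** p)

for h1, h2, h3 in [(5, 2.5, 1), (2.5, 5 / 3, 1), (5 / 3, 1.25, 1)]:
    print(round(expected_fraction(h1, h2, h3), 4))
# prints 4.5714, 2.9531, 3.1605 -- the expected values of table 6.3
```

Comparing the measured fraction against expected_fraction for p = 1 and p = 2 is what distinguishes linear from quadratic convergence in the later chapters.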
We can also plot the variation with the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson extrapolation).
Figure 6.23: Difference in probability of the other time steps (dt = 2.5, 5/3, 5/4) compared with dt = 1 (σ = 6, a = 50, N = 40).
Figure 6.24: Richardson fractions computed with different h_i's: (h_1, h_2, h_3) = (5, 2.5, 1), (2.5, 1.6, 1) and (1.6, 1.25, 1).
h_1     h_2     h_3   expected value   median value calculated
5       2.5     1     4.5714           4.7094
2.5     1.667   1     2.9531           2.9761
1.667   1.25    1     3.1605           3.1652

Table 6.3: Richardson fractions
Figure 6.25: Difference in probability for N = 40 and N = 50 compared with N = 60 (a = 50, σ = 6, time = 2500).
The plot is in ﬁgure 6.25.
6.4.2 Evolution of the shell while changing a
We plot in figure 6.26 the remaining probability at different times with different initial mean positions a, while v = 0 and σ is held constant. We also plot in figure 6.27 the values for the remaining probability with different time steps.
We can again calculate the "Richardson fraction" as in section 5.9 with respect to the time step; in figure 6.28 we plot the "Richardson fraction" at different values of a. In table 6.4 we show the mean calculated value against the expected values, and again we see that we have quadratic convergence in the time step.
We can also plot the variation with the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson extrapolation).
The plot is in ﬁgure 6.29.
Figure 6.26: Evolution of the lump at different a: probability remaining against a at times 500–2500 (v = 0, σ = 6, dt = 0.625, N = 40).
Figure 6.27: Comparing the differences at different time steps (dt = 2.5, 1.25, 0.833) with dt = 0.625 (v = 0, σ = 6, N = 40).
Figure 6.28: Richardson fraction at different values of a, with (h_1, h_2, h_3) = (2.5, 1.25, 0.625) and (1.25, 0.833, 0.625).
Figure 6.29: Difference in probability for N = 20, 40, 60 compared with N = 80 (v = 0, σ = 6, time = 2500).
h_1    h_2     h_3     expected value   median value calculated
2.5    1.25    0.625   5                5.0055
1.25   0.833   0.625   3.8571           3.8588

Table 6.4: Richardson fractions
Figure 6.30: Evolution of the lump at different σ: remaining probability against σ at times 1250–2500 (v = 0, a = 50, N = 40).
6.4.3 Evolution of the shell while changing σ
We plot in ﬁgure 6.30 the remaining probability at diﬀerent times with diﬀerent widths σ
initially, while v = 0 and a = 50. We also plot in ﬁgure 6.31 the values for the remaining
probability at diﬀerent time steps.
We can again calculate the "Richardson fraction", using data obtained at different time steps. In figure 6.32 we plot the "Richardson fraction" at different values of σ. In table 6.5 we show the mean calculated value against the expected values; again we have quadratic convergence.
h_1    h_2     h_3    expected value   median value calculated
0.72   0.625   0.55   2.455            2.458
0.83   0.625   0.55   1.908            1.914

Table 6.5: Richardson fractions
Figure 6.31: Comparing the differences at different time steps (dt = 0.625, 0.72, 0.83) with dt = 0.55 (v = 0, a = 50, N = 40).
Figure 6.32: Richardson fractions against σ, with (h_1, h_2, h_3) = (0.72, 0.625, 0.55) and (0.83, 0.72, 0.55).
Figure 6.33: Difference in probability for N = 40 and N = 50 compared with N = 60 (v = 0, a = 50, time = 2500).
We can also plot the variation with the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson extrapolation). The plot is in figure 6.33.
6.5 Conclusion
In this chapter, we have evolved the SN equations with different initial conditions, using the methods of Chapter 5. We have checked the ability of the sponge factors to absorb outward-going wavefunctions moving off the grid. We have checked that the evolution method converges to the solution by evolving the SN equations with different initial conditions and changing the time steps and the number of Chebyshev points in the grid. To check convergence with respect to the time step we have used Richardson extrapolation and found the method to converge at second order in the time step.
The numerical calculations show that:
• The ground state is stable under full nonlinear evolution.
• Perturbations about the ground state oscillate with the frequencies obtained by the
linear perturbation theory.
• Higher states are unstable and will decay into a “multiple” of the ground state, while
emitting some scatter oﬀ the grid.
• The decay time for the higher states is controlled by the growing linear mode obtained
in linear perturbation theory.
• Perturbations about higher states will oscillate for a while (until they decay) with the frequencies obtained from the linear perturbation theory.
• The evolution of different Gaussian shells indicates that any initial condition appears to decay as predicted in section 5.10; that is, they scatter to infinity and leave a "multiple" of the ground state.
Chapter 7
The axisymmetric SN equations
7.1 The problem
There are two ways of adding another spatial dimension to the problems we are solving. In this chapter, we consider solving the SN equations for an axisymmetric system in 3 dimensions. Then the wavefunction is a function of the spherical polar coordinates r and θ but independent of the azimuthal angle. In chapter 8, we consider the SN equations in Cartesian coordinates with a wavefunction independent of z.
For the axisymmetric problem, we shall first find stationary solutions. This will give some stationary solutions with nonzero total angular momentum, including a dipole-like solution that appears to minimise the energy among wavefunctions that are odd in z. We then consider the time-dependent problem. In particular we evolve the dipole-like state, which turns out to be nonlinearly unstable: the two regions of probability density attract each other and fall together, leaving, as in chapter 6, a multiple of the ground state.
7.2 The axisymmetric equations
We consider the SN equations, which in the general nondimensionalised case are:

iψ_t = −∇²ψ + φψ,  (7.1a)

∇²φ = |ψ|².  (7.1b)
Now consider spherical polar coordinates:

x = r cos α sin θ,  (7.2a)

y = r sin α sin θ,  (7.2b)

z = r cos θ,  (7.2c)

where r ∈ [0, ∞), α ∈ [0, 2π) and θ ∈ [0, π].
The Laplacian becomes:

∇²ψ = (1/r) ∂²(rψ)/∂r² + (1/(r² sin θ)) ∂/∂θ(sin θ ∂ψ/∂θ) + (1/(r² sin²θ)) ∂²ψ/∂α²,  (7.3)
so in the case where ψ depends only on θ and r the SN equations become:

iψ_t = −[(1/r) ∂²(rψ)/∂r² + (1/(r² sin θ)) ∂/∂θ(sin θ ∂ψ/∂θ)] + φψ,  (7.4a)

|ψ|² = (1/r) ∂²(rφ)/∂r² + (1/(r² sin θ)) ∂/∂θ(sin θ ∂φ/∂θ).  (7.4b)
Now setting u = rψ the equations become:

iu_t = −[u_rr + (1/(r² sin θ))(sin θ u_θ)_θ] + φu,  (7.5a)

|u|²/r = φ_rr + (1/(r² sin θ))(sin θ φ_θ)_θ.  (7.5b)
For the boundary conditions we still want ψ to be finite at the origin so that the wavefunction is well-behaved, and to satisfy the normalisation condition we require that ψ → 0
as r → ∞. So we have that u(0, θ) = 0 for all θ ∈ [0, π] and that u(r, θ) → 0 for all θ
as r → ∞. We note that these boundary conditions are spherically symmetric so that the
solutions in the spherically symmetric case are still solutions. We know that ψ_θ = 0 at θ = 0 and θ = π, since otherwise we would have a singularity in the Laplacian.
The stationary SN equations in the general nondimensionalised case are:

E_n ψ = −∇²ψ + φψ,  (7.6a)

∇²φ = |ψ|²,  (7.6b)
so the axially symmetric stationary SN equations are:

E_n u = −[u_rr + (1/(r² sin θ))(sin θ u_θ)_θ] + φu,  (7.7a)

|u|²/r = φ_rr + (1/(r² sin θ))(sin θ φ_θ)_θ.  (7.7b)
7.3 Finding axisymmetric stationary solutions
To find axisymmetric solutions of the stationary SN equations we adapt the procedure of Jones et al [3] as follows:
1. Take as an initial guess for the potential φ = −1/(1 + r).
2. Using the potential φ we now solve the time-independent Schrödinger equation, that is, the eigenvalue problem ∇²ψ − φψ = −E_n ψ, with ψ = 0 on the boundary.
3. Select an eigenfunction in such a way that the procedure will converge to a stationary
state. (This needs care, see below.)
4. Once the eigenfunction has been selected, calculate the potential due to that eigenfunction.
5. Let the new potential be the one obtained from step 4 above, but symmetrised, since we require that φ should be symmetric about θ = π/2 so that numerical errors do not cause the wavefunction to move along the axis (the solution we want should have its centre of gravity at the centre of our grid). Small errors in symmetry will cause the solution to move along the z-axis and off the grid.
6. If the new potential differs from the previous potential by less than a fixed tolerance in the norm of the difference, stop; otherwise continue from step 2.
In step 2 the method we use to solve the eigenvalue problem involves Chebyshev differentiation in two directions, that is, in r and θ.
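The structure of the iteration can be sketched in the simpler spherically symmetric setting, where the Poisson equation can be integrated directly. This is an illustrative reconstruction, not the thesis code: the grid, the finite-difference second derivative (in place of Chebyshev differentiation), the normalisation convention and the fixed number of sweeps are all assumptions.

```python
import numpy as np

N, R = 400, 60.0
r = np.linspace(0.0, R, N + 2)[1:-1]        # interior points; u = 0 at r = 0, R
h = r[1] - r[0]
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / h ** 2

phi = -1.0 / (1.0 + r)                      # step 1: initial guess
for _ in range(5):
    H = -D2 + np.diag(phi)                  # step 2: -u'' + phi u = E u
    E, vecs = np.linalg.eigh(H)
    u = vecs[:, 0]                          # step 3: select the ground state
    q = np.abs(u) ** 2
    q /= q.sum() * h                        # fix integral |u|^2 dr = 1
    # step 4: potential of the density, via the radial Green's function
    # for the Laplacian with phi -> 0 as r -> infinity
    inner = np.cumsum(q) * h                # integral_0^r q ds
    outer = (np.cumsum((q / r)[::-1]) * h)[::-1]   # integral_r^R (q/s) ds
    phi = -(inner / r + outer)              # steps 5-6: new potential, repeat

print(E[0] < 0, np.all(phi < 0))            # bound state, attractive potential
```

The eigenfunction selection of step 3 and the symmetrisation in θ of step 5 only become delicate in the axisymmetric case; here the ground state is simply taken at every sweep.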
In the case of the Hydrogen atom we know the wavefunction explicitly, that is, a solution of the Schrödinger equation with a 1/r potential. In this case the number of zeros in the radial and axial directions follows a characteristic pattern, because the solutions are separable. We can use this in our selection procedure in step 3 to get started, but as the iteration process continues the pattern of zeros won't be fixed and we may need another method. We also note, in the case of the Schrödinger equation with a 1/r potential, that each stationary state in the axially symmetric case has different values of the constants E and J². These values do not change much under iteration of the stationary state problem, and we can use this to obtain the next solution with values of E, J² which differ only by a small amount. Here E is:
E = ∫_{R³} (−ψ̄ ∇²ψ + φ|ψ|²) d³x,  (7.8)

and J² is:

J² = −∫_{R³} ψ̄ [(1/sin θ) ∂/∂θ(sin θ ∂ψ/∂θ) + (1/sin²θ) ∂²ψ/∂α²] d³x.  (7.9)
We use a combination of the above properties to select solutions at step 3.
7.4 Axisymmetric stationary solutions
We present in table 7.2 the first few stationary states of the axially symmetric SN equations, called axp1 to axp8, together with their energy and angular momentum. For comparison we give the energy of the first few spherically symmetric stationary states in table 7.1.
Axp1, in figure 7.1, is the ground state, which from table 7.2 has vanishing J². Axp2, in figure 7.3, is a dipole state. It is odd as a function of z, and minimises the energy among odd functions. It was previously found in [22] and has nonzero J². Axp3 and axp6 are the second and third spherical states. The others are higher multipoles, but we are not able to find a simple classification based on, say, energy and angular momentum as there would be for the states of the Hydrogen atom.
We plot the graphs of the stationary states that appear in table 7.2 in figures 7.1, 7.3, 7.5, 7.7, 7.9, 7.11, 7.13, 7.15 and the contour plots of these stationary states in figures 7.2, 7.4, 7.6, 7.8, 7.10, 7.12, 7.14, 7.16.
Number of state   Energy eigenvalue
0                 0.16276929132192
1                 0.03079656067054
2                 0.01252610801692
3                 0.00674732963038
4                 0.00420903256689

Table 7.1: The first few eigenvalues of the spherically symmetric SN equations
Energy   J²       name
0.1592   zero     axp1
0.0599   5.1853   axp2
0.0358   0.002    axp3
0.0292   2.3548   axp4
0.0263   17.155   axp7
0.0208   3.1178   axp8
0.0162   5.2053   axp5
0.0115   1.9E−6   axp6

Table 7.2: E and J² for the axially symmetric SN equations
7.5 Time-dependent solutions

Now we consider the evolution of the axisymmetric time-dependent SN equations (7.4). We use the same method as in section 5.7, but instead of solving the Schrödinger equation and the Poisson equation in one space dimension we now need to solve them in two space dimensions.
Figure 7.1: Ground state, axp1 (ψ against R and z).

Figure 7.2: Contour plot of the ground state, axp1.
Figure 7.3: "Dipole" state, the next state after the ground state, axp2.

Figure 7.4: Contour plot of the dipole, axp2.
Figure 7.5: Second spherically symmetric state, axp3.

Figure 7.6: Contour plot of the second spherically symmetric state, axp3.
Figure 7.7: Not quite the 2nd spherically symmetric state, axp4.

Figure 7.8: Contour plot of the not quite 2nd spherically symmetric state, axp4.
Figure 7.9: Not quite the 3rd spherically symmetric state, E = −0.0162, axp5.

Figure 7.10: Contour plot of not quite the 3rd spherically symmetric state, E = −0.0162, axp5.
Figure 7.11: 3rd spherically symmetric state, axp6.

Figure 7.12: Contour plot of the 3rd spherically symmetric state, axp6.
Figure 7.13: Axially symmetric state with E = −0.0263, axp7.

Figure 7.14: Contour plot of axp7, a double dipole.
Figure 7.15: State with E = −0.0208 and J = 3.1178, axp8.

Figure 7.16: Contour plot of the state with E = −0.0208 and J = 3.1178, axp8.
We want to use the ADI (alternating direction implicit) method to solve the Schrödinger equation, so we need to split the Laplacian into two parts, one for each coordinate. We take the operator for the r direction to be:

L_1 = (1/r) ∂²/∂r²,  (7.10)
which is the r derivative part of the equation, acting on u = rψ. This leaves the remaining operator:

L_2 = (1/r)[∂²/∂θ² + cot(θ) ∂/∂θ],  (7.11)
which is the θ derivative part of the equation, and which also acts on u. We note that the last operator is not a function of one variable only, but the important point is that it is a differential operator in one variable, which is enough to use an ADI (LOD) method to solve the Schrödinger equation, as used by Guenther [10]. The ADI scheme is:
(1 − iL_1)S = (1 + iL_2)U^n,  (7.12a)

(1 − iL_2)T = (1 + iL_1)S,  (7.12b)

(1 + iφ^{n+1})U^{n+1} = (1 − iφ^n)T.  (7.12c)
Now we also need a similar way to solve the Poisson equation for the potential. Let w = rV; then the discretised equation for the potential is:

−L_1 w_{i,j} − L_2 w_{i,j} = f_{i,j},  (7.13)

where f_{i,j} = |u_{i,j}|²/r_i² is the density distribution. Rather than solving this equation as it stands (that is, an N² × N² matrix problem, where N is the number of grid points), we use a Peaceman–Rachford ADI iteration [1] on the equation. Let φ⁰_{i,j} = 0 be an initial guess for the solution of the equation and define the iteration to be:
−L_1 φ^{n+1}_{i,j} + ρφ^{n+1}_{i,j} = ρφ^n_{i,j} + L_2 φ^n_{i,j} + f_{i,j},  (7.14a)

−L_2 φ^{n+2}_{i,j} + ρφ^{n+2}_{i,j} = ρφ^{n+1}_{i,j} + L_1 φ^{n+1}_{i,j} + f_{i,j},  (7.14b)
where ρ is a small factor chosen such that the iteration converges. We continue with the iteration until it converges.
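The Peaceman–Rachford iteration can be sketched on a small model Poisson problem, with finite differences standing in for the Chebyshev operators; the grid size, ρ and the iteration count are illustrative choices. The result is checked against a direct solve of the full N² × N² system.

```python
import numpy as np

N = 20
h = 1.0 / (N + 1)
main = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
        + np.diag(np.ones(N - 1), -1)) / h ** 2      # 1-D Dirichlet Laplacian
I = np.eye(N)

f = np.random.default_rng(0).standard_normal((N, N)) # right-hand side

rho = 130.0
phi = np.zeros((N, N))
A = np.linalg.inv(rho * I - main)                    # (rho I - L1)^{-1}
for _ in range(200):
    phi = A @ (rho * phi + phi @ main + f)           # (7.14a): implicit in x
    phi = (rho * phi + main @ phi + f) @ A           # (7.14b): implicit in y

L = np.kron(main, I) + np.kron(I, main)              # full 2-D Laplacian
direct = np.linalg.solve(-L, f.reshape(-1)).reshape(N, N)
err = np.max(np.abs(phi - direct))
print(err < 1e-8)
```

Each half-sweep only requires 1-D solves, which is the point of the splitting; for this symmetric model problem any ρ > 0 converges, with the fastest convergence near ρ ≈ √(λ_min λ_max) of the 1-D operator.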
We can now evolve the SN equations using the above method, starting with the dipole (first axially symmetric state) and evolving with the same sponge factors as used in Chapter 6. We find that again the state behaves like a stationary state, that is, evolving with constant phase, before it decays. It decays in the same way as in the
Figure 7.17: Evolution of the dipole (|ψ| against R and z at t = 0, 1400, 2000, 2600).
spherically symmetric states, emitting scatter off the grid and leaving a "multiple" of the ground state behind. One difference here is that the angular momentum is lost with the wave scattering off the grid, to leave a "multiple" of the ground state. In figure 7.17 we plot the result of the evolution.
We can also check the method for convergence, using the Richardson fraction in the time step. We consider three different time steps for the evolution of the dipole, dt = 0.25, 0.5, 1. We plot in figure 7.18 the average Richardson fraction, which should be 0.2 if the convergence is quadratic and 0.33 if it is linear. Here we note that the convergence is linear in time. We also plot the Richardson fraction in the case of dt = 0.125, 0.25, 0.5 in figure 7.19 and note that again this shows that the convergence is linear in time.
We can also check how the method converges when decreasing the mesh size. In figure 7.20 we plot the evolution of the dipole with a different mesh size, N = 30, N2 = 25, instead of N = 25, N2 = 20, where N is the number of Chebyshev points in the r direction and N2 is the number of points in the θ direction. We note that the state still decays into a "multiple" of the ground state.
Figure 7.18: Average Richardson fraction with dt = 0.25, 0.5, 1.

Figure 7.19: Average Richardson fraction with dt = 0.125, 0.25, 0.5.
Figure 7.20: Evolution of the dipole with a different mesh size (|ψ| at t = 0, 1400, 2000, 2600).
Chapter 8

The Two-Dimensional SN equations
8.1 The problem
In this chapter, we consider the SN equations in a plane, that is, in Cartesian coordinates x, y. We shall find a dipole-like stationary solution, and some solutions which are like rigidly rotating dipoles. These rigidly rotating solutions are unstable, however, and will merge, radiating angular momentum.
8.2 The equations
Consider the 2-dimensional case, that is, the case where the Laplacian is just

∇²ψ = (∂²/∂x² + ∂²/∂y²)ψ,  (8.1)

with the boundary conditions of zero at the edge of a square, instead of on the sphere. So the nondimensionalised Schrödinger–Newton equations in 2D are:

iψ_t = −ψ_xx − ψ_yy + φψ,  (8.2a)

φ_xx + φ_yy = |ψ|².  (8.2b)
To evolve the full nonlinear case we use the same method as in section 5.7; that is, we need a method to solve the Schrödinger equation in 2D as well as the Poisson equation. To model the Schrödinger equation on this region we still, as in the one-dimensional case, need a numerical scheme that will preserve the normalisation as it evolves.
We could use a Crank–Nicolson method here, but we need to modify it, since the matrices become large and are no longer tridiagonal in the case of finite differences. Therefore we use an ADI (alternating direction implicit) method to solve the Schrödinger equation.
This works when V = 0. The ADI method approximates Crank–Nicolson to second order. To deal with a nonzero potential we can introduce an extra term in the scheme. We use the same scheme as Guenther [10] to introduce the potential, so we have the scheme:
(1 − iD2_x)S = (1 + iD2_y)U^n,  (8.3a)

(1 − iD2_y)T = (1 + iD2_x)S,  (8.3b)

(1 + iV^{n+1})U^{n+1} = (1 − iV^n)T.  (8.3c)
The last equation (8.3c) deals with the potential, and D2 denotes a Chebyshev differentiation matrix or a finite-difference approximation to the second derivative in the direction indicated by the subscript. We also need a quick way to solve the Poisson equation in 2D. The discretised version of the Poisson equation we want to solve is:

−D2_x φ_{i,j} − D2_y φ_{i,j} = f_{i,j},  (8.4)

where f_{i,j} = |ψ|².
Rather than solving this equation as it stands (that is, an N² × N² matrix problem, where N is the number of grid points), we use a Peaceman–Rachford ADI iteration [1] on the equation, as described below. Let φ⁰_{i,j} = 0 be an initial guess for the solution of the equation, and then we define the iteration to be:

−D2_x φ^{n+1}_{i,j} + ρφ^{n+1}_{i,j} = ρφ^n_{i,j} + D2_y φ^n_{i,j} + f_{i,j},  (8.5a)

−D2_y φ^{n+2}_{i,j} + ρφ^{n+2}_{i,j} = ρφ^{n+1}_{i,j} + D2_x φ^{n+1}_{i,j} + f_{i,j},  (8.5b)

where ρ is a small number chosen to get convergence. We proceed with the iteration until it converges, that is, until ‖φ^{n+2} − φ^n‖ is less than some tolerance.
8.3 Sponge factors on a square grid

Here we consider sponge factors on the square grid in two dimensions. We choose the sponge factor to be

e = min[1, exp(−0.5(√(x² + y²) − 20))],  (8.6)

since we do not want any corners in the sponge factors. We plot a graph of this sponge factor in figure 8.1.
Figure 8.1: Sponge factor for the square grid.
Again, to test the sponge factors we send an exponential lump off the grid, where the initial function is of the form:

ψ = exp(−0.01((x − a)² + (y − b)²)) e^{ivy},  (8.7)

a wavefunction centred initially at x = a and y = b and moving in the y direction with velocity v. We plot in figures 8.2, 8.3 and 8.4 the lump moving off the grid as a test of the sponge factor. We note that most of the lump is absorbed.
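The effect of the sponge factor can be sketched directly: it equals 1 in the interior and falls below 1 outside radius 20, so multiplying the wavefunction by it after each time step damps whatever reaches the edge of the grid. The sign convention in the exponent follows the reading of (8.6) above; the grid size and number of applications are illustrative.

```python
import numpy as np

x = np.linspace(-30, 30, 121)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X ** 2 + Y ** 2)
sponge = np.minimum(1.0, np.exp(-0.5 * (r - 20.0)))   # equation (8.6)

# a lump at the centre of the grid and a lump inside the sponge region
psi = np.exp(-0.1 * (X ** 2 + Y ** 2)) \
    + np.exp(-0.1 * ((X - 25) ** 2 + Y ** 2))
before_centre = np.abs(psi[60, 60])     # grid point at the origin
before_edge = np.abs(psi[110, 60])      # grid point near x = 25, y = 0

for _ in range(20):
    psi = sponge * psi                  # applied after each evolution step

print(np.isclose(np.abs(psi[60, 60]), before_centre),
      np.abs(psi[110, 60]) < 1e-3 * before_edge)
```

The interior value is untouched while the lump sitting in the sponge region is driven to zero, which is the behaviour seen in figures 8.2–8.4.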
8.4 Evolution of a dipole-like state

As in chapters 6 and 7 we can now consider the evolution of a higher state, that is, a state with higher energy than the ground state. We can find such stationary states by using the same method as for the axially symmetric states in section 7.3, but modified to the 2D case by changing the differential operators. In this case we find that the first stationary state after the ground state has two lumps, like those of the dipole (see figure 8.5).
Now we can use the method of section 8.2 to evolve this stationary state with sponge factors. We see that the dipole-like state evolves for a while like a stationary state, that is, rotating at constant phase, and then it decays and becomes just one lump, approximately the ground state (see figure 8.6).
Figure 8.2: Lump moving off the grid with v = 2, N = 30 and dt = 2 (t = 0 to 36).

Figure 8.3: Lump moving off the grid with v = 2, N = 30 and dt = 2 (t = 48 to 84).
100
Figure 8.4: Lump moving off the grid with v = 2, N = 30 and dt = 2 (panels at t = 96, 108, 120, 132)
Figure 8.5: Stationary state for the 2-dim SN equations (surface plot of ψ against x and y)
Figure 8.6: Evolution of a stationary state (surface plots of ψ, panels including t = 0, 4400 and 4800)
8.5 Spinning solutions of the two-dimensional equations
To get a spinning solution we could try setting up two ground states, or lumps, some distance apart and set them moving around each other by choosing the initial velocities of the lumps. We could instead seek a solution which satisfies:
ψ(r, θ, t) = e^{−iEt} ψ(r, θ + ωt, 0), (8.8)
where ω is a real scalar and r and θ are the polar coordinates. This redefines the stationary state so that rotating solutions are considered. In Cartesian coordinates (8.8) becomes:
ψ(x, y, t) = e^{−iEt} ψ(x cos(ωt) + y sin(ωt), −x sin(ωt) + y cos(ωt), 0). (8.9)
Now we let
X = x cos(ωt) + y sin(ωt), (8.10a)
Y = −x sin(ωt) + y cos(ωt), (8.10b)
which is a rotation by ωt. We note that dX/dt = ωY and dY/dt = −ωX. Then i ∂ψ/∂t becomes:
i ∂ψ/∂t = e^{−iEt} [Eψ(X, Y, 0) + iωY ψ_X(X, Y, 0) − iωX ψ_Y(X, Y, 0)], (8.11)
where ψ_X denotes the partial derivative of ψ with respect to its first variable and ψ_Y the partial derivative with respect to its second variable.
So substituting (8.9) into the 2-dimensional SN equations we have:
Eψ + iωY ψ_X − iωX ψ_Y = −∇²ψ + φψ, (8.12a)
∇²φ = |ψ|², (8.12b)
and we note that these equations depend on X and Y only. Note that ψ̄ is a solution with −ω instead of ω, which corresponds to a solution rotating in the other direction. If ψ is a function of r alone then [Y ψ_X − X ψ_Y] vanishes, and we recover the ordinary stationary states in that case regardless of the value of ω.
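The claim about ψ̄ can be checked directly: conjugating (8.12a), with E real and φ real (it is sourced by |ψ|²), gives

```latex
E\bar{\psi} - i\omega Y \bar{\psi}_X + i\omega X \bar{\psi}_Y
  = -\nabla^2 \bar{\psi} + \phi \bar{\psi},
```

which is (8.12a) with ω replaced by −ω, so ψ̄ rotates in the opposite direction.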
We can modify the method used to calculate the stationary state solution in the axially symmetric case (section 7.3) to calculate the numerical solution of (8.12) at a fixed ω. We note that ψ(X, Y, 0) is no longer of just one phase.
Do nontrivial solutions of these equations exist, that is, solutions for which [Y ψ_X − X ψ_Y] does not vanish and which therefore spin? We can find such a solution with ω = 0.005; we plot its real part in figure 8.7 and its imaginary part in figure 8.8.
We can use the time-dependent evolution method of section 8.2, with the initial data being a spinning solution. In figure 8.9 we plot the evolution of the state before it decays, showing that the state does spin about x = 0 and y = 0. In figure 8.10 we plot the section of the evolution in which the state decays, emitting scatter off the grid and leaving a "multiple" of the ground state.
8.6 Conclusion
For the 2-dimensional SN equations we can conclude that:
• They behave like the spherically symmetric and axisymmetric equations: the ground state is stable and the higher states are unstable.
• Spinning solutions do exist, and they decay like the other solutions to a "multiple" of the ground state.
Figure 8.7: Real part of the spinning solution with ω = 0.005
Figure 8.8: Imaginary part of the spinning solution with ω = 0.005
Figure 8.9: Evolution of the spinning solution (surface plots of |ψ| at t = 0, 200, 400, 600)
Figure 8.10: Evolution of the spinning solution (surface plots of |ψ| at t = 5200, 6400, 7600, 8800)
Bibliography
[1] W.F. Ames, Numerical Methods for Partial Differential Equations, 2nd ed. (1977)
[2] E.R. Arriola, J. Soler, Asymptotic behaviour for the 3D Schrödinger-Poisson system in the attractive case with positive energy. App.Math.Lett. 12 (1999) 1-6
[3] D.H. Bernstein, E. Giladi, K.R.W. Jones, Eigenstates of the Gravitational Schrödinger Equation. Modern Physics Letters A13 (1998) 2327-2336
[4] D.H. Bernstein, K.R.W. Jones, Dynamics of the Self-Gravitating Bose-Einstein Condensate. (Draft)
[5] J. Christian, Exactly soluble sector of quantum gravity. Phys.Rev. D56 (1997) 4844
[6] L. Diosi, Gravitation and the quantum-mechanical localization of macro-objects. Phys.Lett. 105A (1984) 199-202
[7] L. Diosi, Permanent State Reduction: Motivations, Results and By-Products. Quantum Communications and Measurement. Plenum (1995) 245-250
[8] L.C. Evans, "Partial Differential Equations". AMS Graduate Studies in Mathematics 19, AMS (1998)
[9] A. Goldberg, H. Schey, J. Schwartz, Computer-generated motion pictures of one-dimensional quantum mechanical transmission and reflection phenomena. Am.J.Phys 35 3 (1967) 177-186
[10] R. Guenther, "A numerical study of the time dependent Schrödinger equation coupled with Newtonian gravity". Doctor of Philosophy thesis, University of Texas at Austin (1995)
[11] H. Lange, B. Toomire, P.F. Zweifel, An overview of Schrödinger-Poisson Problems. Reports on Mathematical Physics 36 (1995) 331-345
[12] H. Lange, B. Toomire, P.F. Zweifel, Time-dependent dissipation in nonlinear Schrödinger systems. Journal of Mathematical Physics 36 (1995) 1274-1283
[13] R. Illner, P.F. Zweifel, H. Lange, Global existence, uniqueness and asymptotic behaviour of solutions of the Wigner-Poisson and Schrödinger-Poisson systems. Math.Meth. in App.Sci. 17 (1994) 349-376
[14] I.M. Moroz, K.P. Tod, An Analytical Approach to the Schrödinger-Newton equations. Nonlinearity 12 (1999) 201-216
[15] I.M. Moroz, R. Penrose, K.P. Tod, Spherically-symmetric solutions of the Schrödinger-Newton equations. Class. Quantum Grav. 15 (1998) 2733-2742
[16] K.W. Morton, D.F. Mayers, Numerical Solution of Partial Differential Equations (1994)
[17] R. Penrose, On gravity's role in quantum state reduction. Gen.Rel.Grav. 28 (1996) 581-600
[18] R. Penrose, Quantum computation, entanglement and state reduction. Phil.Trans.R.Soc. (Lond) A 356 (1998) 1927
[19] D.L. Powers, Boundary Value Problems. Academic Press (1972)
[20] W. Press, S. Teukolsky, W. Vetterling, B. Flannery, Numerical Recipes in Fortran (second edition)
[21] R. Ruffini, S. Bonazzola, Systems of Self-Gravitating Particles in General Relativity and the concept of an Equation of State. Phys.Rev. 187 (1969) 1767
[22] B. Schupp, J.J. van der Bij, An axially-symmetric Newtonian boson star. Physics Letters B 366 (1996) 85-88
[23] H. Schmidt, Is there a gravitational collapse of the wave-packet? (preprint)
[24] H. Stephani, "Differential equations: their solution using symmetries". Cambridge: CUP (1989)
[25] K.P. Tod, The ground state energy of the Schrödinger-Newton equations. Phys.Lett. A 280 (2001) 173-176
[26] Private correspondence with K.P. Tod.
[27] L.N. Trefethen, Spectral Methods in MATLAB. SIAM (2000)
[28] L.N. Trefethen, Lectures on Spectral Methods. (Oxford Lecture Course)
Appendix A
Fortran programs
A.1 Program to calculate the bound states for the SN equations
! integrates the spherically symmetric Schrodinger-Newton equations
! Integrates the system of 4 coupled equations
! dX/dt = FN(X,Y)
! dY/dt = FN(X,Y)
!
! using the NAG library routines D02PVF and D02PCF
! using the NAG library routines X02AJF and X02AMF
!
! Program searches for bound states and the values of
! r for which they blow up
!
Program SN
Implicit None
External D02PVF
External D02PCF
External OUTPUT
External X02AJF
External X02AMF
External FCN
External COUNT
Double precision, dimension(4) :: Y,YP,YMAX,THRES,YS
Double precision, dimension(4) :: PY,PY2
Double Precision, dimension(32*4) :: W
Double Precision :: RATIO,INSIZE,Y1P,INT1,INT2
Double Precision :: X,XEND,STEP,TOL,XE,HSTART
Double Precision :: X02AJF,X02AMF
Double Precision :: Y1,STEP2,XBLOW,XBLOWP
INTEGER :: DR,DR2,DOUT,GOUT,ENCO,ENCO2
INTEGER :: N, I,IFAIL,METHOD , LENWRK
CHARACTER*1 :: TASK
Logical :: ERRAS
! compute a few things !
N = 4
INSIZE = 0.01
Y1P = 1.2
X = 0.0000001
XEND = 2000
XE = X
TOL = 100000000000*X02AJF()
THRES(1:4) = Sqrt(X02AMF())
METHOD = 2
TASK =’U’
ERRAS = .FALSE.
HSTART = 0.0
LENWRK = 32*4
IFAIL = 0
GOUT = 0
! Ask for user input .
! to find root below given value
!
Y(3) = 1.0
Y(2) = 0.0
Y(4) = 0.0
! Ask for upper value
Print*, " TOL ", TOL
Print*, " Enter upper value "
Y1 = 1.09
Y(1) = Y1
PY(1:4) = Y(1:4)
PY2(1:4) = Y(1:4)
! Step size
STEP = 0.0005
OPEN(Unit=10,file='fort.10',Status='new')
OPEN(Unit=11,file='fort.11',Status='new')
DO WHILE ( GOUT == 0 )
DR = 0
!
! D02PVF is initially called to give initial starting values to
! subroutine ,
! One has to determine the initial direction in which our
! first value blows up
Step2 = INSIZE ! Initial trial step in Y(1)
DR2 = 1 ! direction of advance in stepping
! DR is direction of blow up
CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
DO WHILE (DR == 0)
XE = XE + STEP
PY2(1:4) = PY(1:4)
PY(1:4) = Y(1:4)
IF (XE < XEND) THEN
CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
END IF
IF (XE > 3*STEP) THEN
RATIO = (XE-2*STEP)*PY2(1)/((XE-STEP)*PY(1))
RATIO = RATIO - (XE-STEP)*PY(1)/(XE*Y(1))
IF ((RATIO*RATIO) < 1E-15) THEN
RATIO = (XE-STEP)*PY(1)/(XE*Y(1))
IF (RATIO < 1) THEN
IF (RATIO > -1) THEN
IF (Y(1) > 0) THEN
DR = 1
END IF
IF (Y(1) < 0) THEN
DR = -1
END IF
END IF
END IF
END IF
END IF
IF (Y(1) > 2) THEN
DR = 1
ENDIF
IF (Y(1) < -2) THEN
DR = -1
ENDIF
IF (XE > XEND) THEN
DR = 10
ENDIF
ENDDO
! Do while is ended ...
XBLOWP = 0
XBLOW = 0
DO WHILE (DR /= 10)
Y1 = Y1 - STEP2*REAL(DR2)
!Print*, "Y(1) ", Y1
Y(3) = 1.0
Y(2) = 0.0
Y(1) = Y1
Y(4) = 0.0
XBLOWP = XBLOW
XBLOW = 0
ENCO = 0
ENCO2 = 0
DOUT = 0
X = 0.0000001
HSTART = 0.0
XE = X
CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
DO WHILE (DOUT == 0)
XE = XE + STEP
YS = Y
PY2(1:4) = PY(1:4)
PY(1:4) = Y(1:4)
IF (XE < XEND) THEN
CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
END IF
CALL COUNT(YS,Y,ENCO,ENCO2)
IF (Y(1) < 0.0001) THEN
IF (Y(1) > -0.0001) THEN
XBLOW = XE
ENDIF
ENDIF
IF (XE > 3*STEP) THEN
RATIO = (XE-2*STEP)*PY2(1)/((XE-STEP)*PY(1))
RATIO = RATIO - (XE-STEP)*PY(1)/(XE*Y(1))
IF ((RATIO*RATIO) < 1E-15) THEN
RATIO = (XE-STEP)*PY(1)/(XE*Y(1))
IF (RATIO < 1) THEN
IF (RATIO > -1) THEN
IF (Y(1) > 0) THEN
DOUT = 1
!Print*, "eup", XBLOW
IF (DR /= 1) THEN
STEP2 = STEP2/10
DR = 1
DR2 = DR2*(-1)
END IF
END IF
IF (Y(1) < 0) THEN
DOUT = 1
!Print*, "edown", XBLOW
!Print*, "zeros", ENCO
IF (DR /= -1) THEN
STEP2 = STEP2/10
DR = -1
DR2 = DR2*(-1)
END IF
END IF
END IF
END IF
END IF
END IF
IF (Y(1) > 1.5) THEN
DOUT = 1
!Print*, "up", XBLOW
IF (DR /= 1) THEN
STEP2 = STEP2/10
DR = 1
DR2 = DR2*(-1)
ENDIF
ENDIF
IF (Y(1) < -1.5) THEN
DOUT = 1
!Print*, "down " ,XBLOW
!Print*, "zeros ", ENCO
IF (DR /= -1) THEN
STEP2 = STEP2/10
DR = -1
DR2 = DR2*(-1)
ENDIF
ENDIF
IF (XE > XEND) THEN
DR = 10
DOUT = 1
ENDIF
IF (STEP2+Y1 == Y1 ) THEN
DR = 10
DOUT = 1
ENDIF
ENDDO
ENDDO
! obtained value for bound state now write it in file
! And look for another
Print*, " bound state ",Y1
PRINT*, " Number of zeros is ", ENCO
Print*, " Blow up value is ", XBLOW
Write(10,*) Y1, XBLOW, ENCO
! Now consider the integration to get A / B^2 !!
Y(3) = 1.0
Y(2) = 0.0
Y(1) = Y1
Y(4) = 0.0
XBLOWP = XBLOW
XBLOW = 0
ENCO = 0
ENCO2 = 0
DOUT = 0
X = 0.0000001
HSTART = 0.0
XE = X
INT1 = 1
INT2 = 0
CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
DO WHILE (XE < XBLOWP)
XE = XE + STEP
PY2(1:4) = PY(1:4)
PY(1:4) = Y(1:4)
IF (XE < XEND) THEN
CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
END IF
INT1 = INT1 - STEP*XE*Y(1)*Y(1)
INT2 = INT2 + STEP*XE*XE*Y(1)*Y(1)
IF (XE > 3*STEP) THEN
RATIO = (XE-2*STEP)*PY2(1)/((XE-STEP)*PY(1))
RATIO = RATIO - (XE-STEP)*PY(1)/(XE*Y(1))
IF ((RATIO*RATIO) < 1E-15) THEN
RATIO = (XE-STEP)*PY(1)/(XE*Y(1))
IF (RATIO < 1) THEN
IF (RATIO > -1) THEN
Print*, RATIO
IF (Y(1) > 0) THEN
DR = 1
END IF
IF (Y(1) < 0) THEN
DR = -1
END IF
END IF
END IF
END IF
END IF
IF (Y(1) > 2) THEN
DR = 1
ENDIF
IF (Y(1) < -2) THEN
DR = -1
ENDIF
IF (XE > XEND) THEN
DR = 10
ENDIF
ENDDO
Print*, "A is : ", INT1
Print*, "B is : ", INT2
Write(11,*) INT1,INT2
X = 0.0000001
XE = X
INSIZE = (Y1P-Y1)/10
Y1P = Y1
Y1 = Y1 - 0.0000001
Y(1) = Y1
Y(2) = 0.0
Y(3) = 1.0
Y(4) = 0.0
ENDDO
END PROGRAM SN
!
! FUNCTION EVALUATOR
!
SUBROUTINE FCN(S,Y,YP)
Implicit None
Double Precision :: S,Y(4),YP(4)
YP(1) = Y(2)
YP(2) = -2.D0*Y(2)/S - Y(3)*Y(1)
YP(3) = Y(4)
YP(4) = -2.D0*Y(4)/S - Y(1)*Y(1)
RETURN
END
!
! OUTPUT  subroutine
SUBROUTINE OUTPUT(Xsol,y)
Implicit None
Double Precision :: Xsol
Double Precision, DIMENSION(4) :: Y
PRINT*, Xsol,y(1),y(3)
WRITE(2,*) Xsol, y(1), y(3)
END
!
! Subroutine Count to count the bumps in our curve
SUBROUTINE COUNT(Yold,Ynew,Number,way)
Implicit None
Double Precision, DIMENSION(4) :: Yold, Ynew
Integer :: Number, way
If (Yold(1) > 0) THEN
If (Ynew(1) < 0) THEN
! we have just reached a turning point
way = -1
Number = Number + 1
ENDIF
ENDIF
If (Yold(1) < 0) THEN
If (Ynew(1) > 0) THEN
way = 1
Number = Number + 1
ENDIF
ENDIF
END
Appendix B
Matlab programs
B.1 Chapter 2 Programs
B.1.1 Program for asymptotically extending the data of the bound state
calculated by RungeKutta integration
% To work out the asymptotic solution %
% r the point to match to !!
% the nb2 is the solution matching to at the moment !!
% matching with exp(-r)/r
% 1/r
%
b = 5000;
%step = nb2(2,1) - nb2(1,1);
yr = interp1(nb2(:,1),nb2(:,2),r);
yr2 = interp1(nb2(:,1),nb2(:,2),r+1);
d = real(log(yr*r/(yr2*(r+1))));
s = exp(-d*r)/r;
c = yr/s;
vr = interp1(nb2(:,1),nb2(:,3),r);
vr2 = interp1(nb2(:,1),nb2(:,3),r+1);
f = (vr2*(r+1)-vr*r)
e = vr*r - r*f;
% then work out values of the function
for i = 1:b
tail(i,1) = nb2(1,1) + (i-1)*step;
tail(i,2) = c*exp(-d*tail(i,1))/tail(i,1);
tail(i,3) = e/tail(i,1) + f;
end
B.1.2 Programs to calculate stationary state by Jones et al method
function [f,evalue] = cpffsn(L,N,n);
% function [f,evalue] = cpffsn(L,N,n);
%
[D,x] = cheb(N);
x2 = (1+x)*(L/2);
phi = 1./(1+x2);
[f,evalue] = cpfsn(L,N,n,x2,phi);
function [f,evalue] = cpfsn(L,N,n,x,phi)
% [f,evalue] = CPFSN(L,N,n,x,phi)
%
% solve the bounds states of the SchrodingerNewton Equation
% L length of interval
% N number of cheb points
% n the nth bound state
% x the grid points wrt function is required
% phi initial guess for potential
%
[D,xp] = cheb(N);
dist = L/2;
cx = (xp+1)*dist;
D2 = D^2/(dist^2);
D2 = D2(2:N,2:N);
newphi = interp1(x,phi,cx);
oldphi = newphi;
res = 1;
while (res > 1E-13)
[ft,evalue] = cpesn(L,N,n,newphi,cx);
nom = inprod(ft,ft,cx,N);
ft = ft/sqrt(nom);
g = abs(ft(2:end-1)).^2./cx(2:end-1);
pu = D2\g;
pu = [0;pu;0];
newphi = pu(1:N)./cx(1:N);
newphi(N+1) = newphi(N);
res = max(abs(newphi-oldphi))
oldphi = newphi;
end
fx = interp1(cx,ft,x);
f = fx;
%sq = inprod2(fx,fx,x,N);
%f = interp1(sq*x,fx/sq,x);
%sq2 = inprod2(f,f,x,N);
%sq = sq*sq2;
%f = interp1(sq*x,fx/sq,x);
function [f,evalue] = cpesn(L,N,n,phi,x)
% [f,evalue] = CPESN(L,N,n,x)
%
% works out nth eigenvector of schrodinger equation with selfpotential
% initial guess of 1/(r+1) with the length of the interval L ,
% N is number of cheb points
% interpolates to x
%
[V,eigs] = cpsn(L,N,phi,x);
cx(1) = 0;
cx(N+1) = L;
dist = (cx(N+1)-cx(1))/2;
for i = 1:(N+1)
cx(i) = (cos((i-1)*pi/(N)))*dist+dist+cx(1);
end
E = diag(eigs);
E0 = 0*E;
for a = 1:n
[e,i] = max(E-E0);
E0(i) = NaN;
end
ev = [0; V(:,i); 0];
f = interp1(cx,ev,x);
evalue = e;
function [V,eigs] = cpsn(L,N,phi,x)
% [V,eigs] = cpsn(L,N,phi,x)
%
% potential of phi wrt x with usual boundary conditions.
% using cheb differential operators
%
% N = number of cheb points
% L = length of interval
%
cx(1) = 0;
cx(N+1) = L;
dist = (cx(N+1)-cx(1))/2;
for i = 1:(N+1)
cx(i) = (cos((i-1)*pi/(N)))*dist+dist+cx(1);
end
%
% working out the cheb difff matrix
%
D = cheb(N);
D2W = D^2/dist^2;
D2 = D2W(2:N,2:N);
ZE = zeros(N,N);
I = diag(ones(N-1,1));
%
%Matrices C1 and C2
%
% Matrices C1 and C2 to solve
% D2 R + (1/r) R = E R
%
cphi = interp1(x,phi,cx(2:N));
C1 = D2 + diag(cphi);
C2 = I;
%Compute eigenvalues
[V,eigs] = eig(C1,C2);
%cheb.m - (N+1)*(N+1) Chebyshev spectral differentiation
% matrix via explicit formulas
function [D,x] = cheb(N)
if N==0, D=0; x=1; return, end
ii = (0:N)';
x = cos(pi*ii/N);
ci = [2;ones(N-1,1);2];
D = zeros(N+1,N+1);
for j = 0:N %compute one column at a time
cj = 1; if j==0 | j==N, cj = 2; end
denom = cj*(x(ii+1)-x(j+1)); denom(j+1) = 1;
D(ii+1,j+1) = ci.*(-1).^(ii+j)./denom;
if j > 0 & j < N
D(j+1,j+1) = -.5*x(j+1)/(1-x(j+1)^2);
end
end
D(1,1) = (2*N^2+1)/6;
D(N+1,N+1) = -(2*N^2+1)/6;
B.2 Chapter 4 Programs
B.2.1 Program for solving the Schr¨ odinger equation
% schrodinger equation with potential from the bound state
% To create function of nb1
% with cheb point spaces
% cos(i*pi/2k)
% N = number of points
clear bx1 bv1 br1
dist = (nb1(l,1)-nb1(1,1))/2;
for i = 1:(N+1)
bx1(i) = (cos((i-1)*pi/(N)))*dist+dist+nb1(1,1);
br1(i) = interp1(nb1(:,1),nb1(:,2),bx1(i));
bv1(i) = interp1(nb1(:,1),nb1(:,3),bx1(i));
br2(i) = interp1(nb2(:,1),nb2(:,2),bx1(i));
br3(i) = interp1(nb3(:,1),nb3(:,2),bx1(i));
end
% working out the cheb difff matrix
%
D = cheb(N);
D2 = D^2/dist^2;
D2 = D2(2:N,2:N);
D = D(1:N,1:N);
V0 = diag(bv1(2:N)+Estate);
I = diag(ones(N-1,1));
%Matrices C1 and C2
%for the schrodinger equation
%D2 + V0 = e
%
C1 = D2 + V0;
C2 = I;
%
%Compute eigenvalues
[V,eigs] = eig(C1,C2);
E = diag(eigs);
hold off
subplot(1,1,1);
plot(diag(eigs),’r*’);
grid;
pause;
[y,i] = min(E)
subplot(3,2,1);
plot(bx1(2:N),V(1:N-1,i)./bx1(2:N)')
f = V(1:N-1,i).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
int = dist*(D\f);
int(1)
title(’first eigenvector ’);
grid on;
subplot(3,2,2);
ratio = (V(1,i)./bx1(2))/br1(2);
ratio2 = ratio.^2/int(1);
ratio2 = ratio2^(1/3);
error2 = interp1(bx1(2:N),br1(2:N)'*ratio,ratio2*bx1(2:N));
error = error2' - V(1:N-1,i)./bx1(2:N)';
plot(bx1(2:N),error)
title(’error to nb1’)
E(i) = max(E)+1;
[y2,i2] = min(E)
subplot(3,2,3);
plot(bx1(2:N),V(1:N-1,i2)./bx1(2:N)')
f = V(1:N-1,i2).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
int = dist*(D\f);
int(1)
title(’second eigenvector ’);
grid on;
subplot(3,2,4);
ratio = (V(1,i2)./bx1(2))/br2(2);
ratio2 = ratio.^2/int(1);
ratio2 = ratio2^(1/3);
error2 = interp1(nb2(:,1),nb2(:,2)*ratio,ratio2*bx1(2:N));
error = error2' - V(1:N-1,i2)./bx1(2:N)';
plot(bx1(2:N),error)
title(’error to nb2’)
E(i2) = max(E)+1;
[y2,i3] = min(E)
subplot(3,2,5);
plot(bx1(2:N),V(1:N-1,i3)./bx1(2:N)')
f = V(1:N-1,i3).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
int = dist*(D\f);
int(1)
title(’third eigenvector is:’);
grid on;
subplot(3,2,6);
ratio = (V(1,i3)./bx1(2))/br3(2);
ratio2= ratio.^2/int(1);
ratio2 = ratio2^(1/3);
error2 = interp1(nb3(:,1),nb3(:,2)*ratio,ratio2*bx1(2:N));
error = error2' - V(1:N-1,i3)./bx1(2:N)';
plot(bx1(2:N),error)
title(’error to nb3’)
B.2.2 Program for solving the O() perturbation problem
% first order perturbation
% To create function of nb1
% with cheb point spaces
% cos(i*pi/2k)
% N = number of points
clear bx1 bv1 br1
dist = (nb1(l,1)-nb1(1,1))/2;
for i = 1:(N+1)
bx1(i) = (cos((i-1)*pi/(N)))*dist+dist+nb1(1,1);
br1(i) = interp1(nb1(:,1),nb1(:,2),bx1(i));
bv1(i) = interp1(nb1(:,1),nb1(:,3),bx1(i));
end
%
% working out the cheb difff matrix
%
D = cheb(N);
D2W = D^2/dist^2;
D2 = D2W(2:N,2:N);
V0 = diag(bv1(2:N));
R0 = diag(br1(2:N));
ZE = zeros(N-1,N-1);
I = diag(ones(N-1,1));
%Matrices C1 and C2
%for the S_N equations
% subject to the equations
% 2R0A + D2W = 0
% D2B + V0B = eA
% D2A  V0A + R0W = eB
% with boundary conditions
% A = 0 , B =0 W = 0 at endpoints
% DA = 0 at infinity
% DB = 0 at infinity
C1 = [2*R0,ZE,D2;ZE,D2+V0,ZE;D2-V0,ZE,R0];
C2 = [ZE,ZE,ZE;I,ZE,ZE;ZE,I,ZE];
%Compute eigenvalues
[V,eigs] = eig(C1,C2);
j = sqrt(-1);
hold off
E = j*diag(eigs);
Enew = sort(E);
plot(Enew(1:N),’r*’);
grid
title(’Eigenvalues of the perturbed SN equation ’)
B.2.3 Program for the calculation of (4.10)
% abint.m  program to work out integral of A*B
%
D = cheb(N);
D = D(1:N,1:N);
f = abs(V(1:N-1,i)).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
A = dist*(D\f);
g = abs(V(N:2*N-2,i)).^2;
g(2:N) = g(1:N-1);
g(1) = 0;
B = dist*(D\g);
f = V(1:N-1,i);
g = V(N:2*N-2,i);
f = conj(f);
fg = f.*g;
fg(2:N) = fg(1:N-1);
fg(1) = 0;
int = dist*(D\fg);
ratio = sqrt(A(1))*sqrt(B(1))\abs(int(1))
B.2.4 Program for performing the RungeKutta integration on the O()
perturbation problem
% see the solution of the first order perturbation equations
% intial data
%
global nb1;
e = eigenvalue;
global e;
y0 = [0 1 0 1 0 1]’;
y0(2) = initial value for y0(2);
y0(4) = initial value for y0(4);
y0(6) = initial value for y0(6);
X0 = 0.000001;
Xfinal = L  length of the interval;
F = ’yfo’;
Xspan = [X0,Xfinal];
[xp,yp] = ode45(F,Xspan,y0);
subplot(3,1,1)
plot(xp,imag(yp(:,3)./xp))
title(’A/r’)
subplot(3,1,2)
plot(xp,imag(yp(:,1)./xp))
title(’B/r’)
subplot(3,1,3)
plot(xp,imag(yp(:,5)./xp))
title(’W/r’)
function Yprime = yfo(X,Y)
% this is the function for the first order perturbation equations
% where V, R are the values of the potential and wave function
% related to the zeroth order perturbation.
% e is the eigenvalue ( e = i\lambda )
global e
Yprime(1) = Y(2);
Yprime(2) = V(X)*Y(1)+e*Y(3);
Yprime(3) = Y(4);
Yprime(4) = V(X)*Y(3)+e*Y(1)+R(X)*Y(5);
Yprime(5) = Y(6);
Yprime(6) = 2*R(X)*Y(3);
Yprime = Yprime’;
B.3 Chapter 6 Programs
B.3.1 Evolution program
%snevc2.m
%Iteration scheme for the schrodingernewton equations
%using crank nicolson + cheb ,
% Modified 13/3/2001  for perturbation about ground state.
clear i; clear norm; clear Eu;
clear fgp; clear g; clear psi; clear psi1; %clears used data formats
hold off
N = 70;
N2 = N;
R = 350; count = 1;
dt = 0.5; one = 1;
[D,x] = cheb(N);
D2 = D^2;
D2 = D2(2:N,2:N);
x = 0.5*((1-x)*R);
u = cpfsn(R,N2,3,x,1./(x+1)); %calculate stationary solution
x = x(2:N); u = u(2:N);
dist = 0.5*R;
D2 = D2/dist^2;
t = 0; nu = dt;
a = 50; v = 0;%0.1;
av = a+v*t;
vs = (v^2*t)/4;
sigma = 15;
rsig = sqrt(sigma);
brack = 1/(sigma^2);
rbrack = sqrt(brack);
wave = rbrack*rsig*exp(-0.5*brack*(x-av).^2+(0.5*i*v*x)-i*vs);
wave2 = rbrack*rsig*exp(-0.5*brack*(x+av).^2-(0.5*i*v*x)-i*vs);
pert = wave-wave2;
% alternative initial data !
uold = u; %+0.01*pert;
u = uold;
fwe = u;
rho = abs(u(1:end)).^2./x(1:end);
pu = D2\rho;
phi = pu./x;
% solve the potential equation
% sponge factors
e = 2*exp(0.2*(x-x(end)));
%e = zeros(N-1,1);
o = zeros(N-1,1);
Eu(count) = abs(u(5));
fgp(count) = abs(u(1));
fgp2(count) = abs(u(15));
% start of the iteration scheme
for zi2 = 1:100
for zi = 1:1000
count = count +1;
phi1 = phi; % initial guess !
Res = 9E9;
while abs(Res) > 3E-8
u = uold;
for z = 1:N-1
g(z) = -(i+e(z))*nu/2;
psi1(z) = one*(i*dt/2)*(phi1(z)+o(z));
psi(z) = one*(i*dt/2)*(phi(z) +o(z));
end
maindaig = ones(N-1,1)+psi1';
d = (ones(N-1,1)-psi').*u - g.'.*(D2*u);
b = diag(maindaig) + diag(g)*D2;
u = b\d;
unew = u;
% this was the solving of the P.D.E !
% work out potential for u^n+1 %%
rho = abs(u(1:end)).^2./x(1:end);
pu = D2\rho;
phi1 = pu./x;
%then recalculation of the result ! to check
% Now to check the L infinity norm of the residual
u = uold;
for z = 1:N-1
g(z) = -(i+e(z))*nu/2;
psi1(z) = one*(i*dt/2)*(phi1(z)+o(z));
psi(z) = one*(i*dt/2)*(phi(z) +o(z));
end
maindaig = ones(N-1,1)+psi1';
d = (ones(N-1,1)-psi').*u - g.'.*(D2*u);
b = diag(maindaig) + diag(g)*D2;
u = b\d;
Res = max(abs(u-unew));
%Res = max(abs(phi1-phi))
% end of checking the L infinity norm .%
end
Eu(count) = abs(u(5));
fgp(count) = abs(u(1));
fgp2(count) = abs(u(30));
norm(zi2) = inprod(u,u,x,N-1);
pe(zi2) = pesn(u,x,phi,N);
t = t + dt;
%fwe(:,end+1) = u;
uold = unew;
phi = phi1;
%save fp.mat fgp
%save Ep.mat Eu
%save nn.mat norm
end
fwe(:,end+1) = u;
save fp.mat fgp fgp2 %Eu
save nn.mat fwe norm
%tone = t*ones(N1,1);
%plot3(tone,x,abs(u),’b’);
%drawnow
%hold on
%hold on
%plot(x,phi1,’g’);
%plot(nb1(:,1),abs(nb1(:,1).*nb1(:,2)),’r’)
%plot(x,angle(u),’k’);
end
B.3.2 Fourier transformation program
N = max(size(f))/2;
dt = 0.5;
freq = pi/(N*dt);
n = 2*N;
Fn = fft(f);
Fn = [conj(Fn(N+1)) Fn(N+2:end) Fn(1:N+1)]; % rearrange frequency points
Fn = Fn/n; % scale results by fft length
idx = -N:N;
Fn(N+1) = 0;
idxs = idx*freq;
plot(log(idxs(N+1:2*N+1)),log(abs(Fn(N+1:2*N+1))))
xlabel(’log(frequency)’);
ylabel(’log(fourier transform)’);
B.4 Chapter 7 Programs
B.4.1 Programs for solving the axisymmetric stationary state problem
function [f,aphi,evalue,lres,count] = cp2fsn(dis,N,N2,n,l)
% function [f,aphi,evalue] = cp2fsn(dis,N,N2,n,l)
%
%
warning off
[D,x] = cheb(N);
[E,y] = cheb(N2);
dis2 = pi/2;
x2= dis*(x+1); % x is the radius points.
y2 = dis2*(y+1);
x2= x2(2:N);
D = D/dis;
E = E/dis2;
D2 = D^2;
E2 = E^2;
D = D(2:N,2:N);
D2 = D2(2:N,2:N);
I = eye(N2+1);
fac1 = diag(1./x2.^2);
fac2 = diag((cot(y2-eps)+cot(y2+eps))/2);
% initial guess for potential
for i = 1:N2+1
for j = 1:N-1
phi((N-1)*(i-1)+j) = 1./(1000*x2(j)+1);
end
end
aphi = phi;
L = kron(I,D2) + kron(E2,fac1)+ kron(fac2*E,fac1); %Laplacian kind of !
% actually the Laplacian for the axially symmetric case !
newphi = phi;
oldphi = newphi;
res = 1;
mang = 0; mener = 0;
count = 0;
% start of the iteration
while(res > 1E-4)
count = count +1;
[ft,evalue,mener,mang] = cp2esnea(dis,N,N2,mener,mang,newphi,n,l);
% get nth evector corresponding to potential
% next aim to work out the potential corresponding to the evector !
data = ft;
norms
ft = ft/nom;
clear g;
for i = 1:N2+1
for j = 1:N-1
g((N-1)*(i-1)+j) = abs(ft((N-1)*(i-1)+j))^2/x2(j);
end
end
g = g.';
pu = L\g;
for i = 1:N2+1
for j = 1:N-1
nwphi((N-1)*(i-1)+j) = pu((N-1)*(i-1)+j)/x2(j);
end
end
for i = 1:N2+1
for j = 1:N-1
newphi((N-1)*(i-1)+j) = 0.5*(nwphi((N-1)*(i-1)+j)+nwphi((N-1)*(N2+1-i)+j));
end
end
res2 = abs(max(newphi-nwphi));
res = abs(max(newphi-oldphi))/abs(max(newphi))
oldphi = newphi;
data = newphi;
%axpl
%drawnow;
lres = res;
if count > 50
lres = res;
res = 0;
end
end
f = ft;
aphi = newphi;
function [f,evalue,ener,ang] = cp2esnea(dis,N,N2,pener,pang,phi,n,l);
%
%
[V,eigs] = cp2sn2(dis,N,N2,phi);
En = diag(eigs);
[e,ii] = sort(real(En));
W = 0;
b = 0;
vclose = 10E8;
found = 0;
limit = floor(max(ii)/4);
for as = 1:limit
data = V(:,ii(as));
norms2
if (nom > 0.04)
data = data/nom;
enr;
[n2,l2] = fnl(data,N,N2);
close = (pener-ener)^2 + 1E-4*(pang-ang)^2 + 0.1*((n2-n)^2 + 10*(l2-l)^2);
if (close < vclose)
vclose = close;
end
end
end
while(W == 0);
b = b+1;
data = V(:,ii(b));
norms2
if (nom > 0.04)
data = data/nom;
enr;
W = 1;
[n2,l2] = fnl(data,N,N2);
close = (pener-ener)^2 + 1E-4*(pang-ang)^2 + 0.1*((n2-n)^2 + 10*(l2-l)^2);
end
if (W == 1)
W = 0;
if (close < vclose*1.001)
%axpl2
%drawnow;
W = 1;
end
end
%if (W == 1)
%W = input(’correct state’);
%end
end
evalue = En(ii(b));
f = V(:,ii(b));
% find n and l number of the given grid ..
function [n,l] = fnl(data,N,N2);
% where the u is r\phi
%
for i = 1:N2+1
for j = 1:N-1
u(j,i) = data((N-1)*(i-1)+j);
end
end
clear no noo;
j = 0;
for i = 1:N-1
j = j+1;
no(i) = cz2sn(u(i,1:N2+1));
if (no(i) == -1)
j = j - 1;
end
if (j > 0)
noo(j) = no(i);
end
end
l = mean(no);
%if (j > 0)
% l = mean(noo);
%end
clear no2 noo2;
i = 0;
for j = 1:N2+1
i = i + 1;
no2(j) = cz2sn(u(1:N-1,j));
if (no2(j) == -1)
i = i - 1;
end
if (i > 0)
noo2(i) = no2(j);
end
end
n = mean(no2);
%if (i > 0)
%n = mean(noo2);
%end
%l = ceil(l);
%n = ceil(n);
%l = round(l);
%n = round(n);
% to calculate the norm of a
% axially symmetric data set!
%
% Jab = r^2sin \alpha
%
clear ener
clear ang
ener = 0;
ang = 0;
[E,y] = cheb(N2);
dis2 = pi/2;
y2 = pi/2*(y+1);
[D,x] = cheb(N);
x2= dis*(x+1);
x2= x2(2:N);
D = D/dis;
E = E/dis2;
D2 = D^2;
E2 = E^2;
D = D(2:N,2:N);
D2 = D2(2:N,2:N);
I = eye(N2+1);
I2 = eye(N-1);
fac1 = diag(1./x2.^2);
fac2 = diag((cot(y2+eps)+cot(y2-eps))/2);
L = +kron(I,D2)+kron(E2,fac1)+kron(fac2*E,fac1); %Laplacian kinda of !
J = kron(E2,I2) + kron(fac2*E,I2);
%data2 = data;
%for i = 1:N2+1
%for j = 1:N1
%data2((N1)*(i1)+j) = data((N1)*(i1)+j)/x2(j);
%end
130
%end
nab2 = L*data;
aab2 = J*data;
for i = 3:N2+1
for j = 2:N-1
darea = (x2(j)-x2(j-1))*(y2(i)-y2(i-1));
jab = sin(y2(i));
jab2 = sin(y2(i));
ener = ener - jab*abs(data((N-1)*(i-1)+j))^2*darea*phi((N-1)*(i-1)+j);
ener = ener - jab*conj(data((N-1)*(i-1)+j))*nab2((N-1)*(i-1)+j)*darea;
ang = ang - jab*conj(data((N-1)*(i-1)+j))*aab2((N-1)*(i-1)+j)*darea;
end
end
norms2
% to calculate the norm of a
% axially symmetric data set!
%
% Jab = r^2sin \alpha
%
clear nom nom2
% Setup the grid
nom = 0; nom2 = 0;
[E,y] = cheb(N2);
y2 = pi/2*(y+1);
[D,x] = cheb(N);
x2= dis*(x+1); % x is the radius points.
x2= x2(2:N);
for i = 2:N2+1
for j = 2:N-1
darea = (x2(j)-x2(j-1))*(y2(i)-y2(i-1));
jab = sin(y2(i));
nom = nom + jab*abs(data((N-1)*(i-1)+j))^2*darea;
end
end
nom = nom/2;
nom = sqrt(nom);
% cz2sn.m
% function to work out number of crossing given matrix array...
function number = cz2sn(W);
%
a = max(size(W));
count = 0;
posne = 2;
eps2 = 1E4*eps;
if (W(1) > eps2)
posne = 1;
end
if (W(1) < -eps2)
posne = 0;
end
for i = 2:a
if (posne == 1)
if W(i) < -eps2
posne = 0;
count = count+1;
end
end
if (posne == 0)
if W(i) > eps2
posne = 1;
count = count+1;
end
end
if (posne == 2)
if W(i) > eps2
posne = 1;
end
if W(i) < -eps2
posne = 0;
end
end
end
if posne == 2
count = -1;
end
number = count;
function [V,eigs] = cp2sn2(dis,N,N2,phi);
% [V,eigs] = cp2sn2(dis,N,N2,phi);
%
% program to solve the stationary schrodinger in axially symmetric case.
% r,\alpha are the variable in which we solve for at the
% present both has N points !
% boundary condition are different on the \alpha !
% i.e we expect phi to be N1*N1 vector !
%
[E,y] = cheb(N2);
y2 = pi/2*(y+1);
dis2 = pi/2;
[D,x] = cheb(N);
x2= dis*(x+1); % x is the radius points.
x2= x2(2:N);
D = D/dis;
E = E/dis2;
D2 = D^2;
E2 = E^2;
D = D(2:N,2:N);
D2 = D2(2:N,2:N);
I = eye(N2+1);
fac1 = diag(1./x2.^2);
fac2 = diag((cot(y2-eps)+cot(y2+eps))/2);
L = +kron(I,D2)+kron(E2,fac1) + kron(fac2*E,fac1); %Laplacian kinda of !
L = L + diag(phi);
%
[V,eigs] = eig(L);
B.4.2 Evolution programs for the axisymmetric case
% nsax2.m SchrodingerNewton equation
% on an axially symmetric grid 
% full nonlinear method using chebyshev, ADI(LOD) wave equation
% that the wave equation done in chebyshev differention
% matrix in 2d
clear xx2 yy2
load dpole
[DD,x3] =cheb(N);
x3 = dis*(x3+1);
x3 = x3(2:N);
[EE,y3] = cheb(N2);
y3 = pi/2*(y3+1);
clear DD EE
for cc = 1:N-1
for cc2 = 1:N2+1
f2(cc2,cc) = f((cc2-1)*(N-1)+cc);
fred(cc2,cc) = aphi((cc2-1)*(N-1)+cc);
xx2(cc2,cc) = x3(cc)*cos(y3(cc2));
yy2(cc2,cc) = x3(cc)*sin(y3(cc2));
end
end
dis = 50;
grav = 1;
clear xx yy vv
% grid setup %
N = 25;
N2 = 20;
[E,y] = cheb(N2);
dis2 = pi/2;
y2 = pi/2*(y+1);
[D,x] = cheb(N);
x2 = dis*(x+1);
x2 = x2(2:N);
D = D/dis;
E = E/dis2;
% create xx,yy
for i = 1:N2+1
for j = 1:N-1
xx(i,j) = x2(j)*cos(y2(i));
yy(i,j) = x2(j)*sin(y2(i));
end
end
% initial data %
%vv = interp2(xx2,yy2,f2,xx,yy);
vv = f2;
%vv = 0.1*sqrt(xx.^2+yy.^2).*exp(0.01*((xx).^2+(yy).^2));
%vva = 0.9*sqrt(xx.^2+yy.^2).*exp(0.01*((xx+30).^2+(yy).^2));
%vv = vv + vva;
% time step !
dt = 0.5;
% plotting the function on the screen at intervals !
plotgap = round((2000/10)/dt); dt = (2000/10)/plotgap;
vvold = vv;
vvnew = 0*vv;
phi = 0*vv;
pot = phi;
% sponge factors
%e = 0*vv; % i.e. no sponge factors
e = 1*exp(0.3*(sqrt(xx.^2+yy.^2) - 2*dis));
%% calculate of differentiation matrices
D2 = D^2;
D2 = D2(2:N,2:N);
DD = D(2:N,2:N);
E2 = E^2;
I = eye(N2+1);
I2 = eye(N-1);
fac1 = diag(x2);
fac2 = diag((cot(y2+eps)+cot(y2-eps))/2);
fac3 = diag(1./x2);
fac2(1,1) = 1E10;
fac2(end,end) = 1E10;
L1 = E2 + fac2*E;
L2 = D2;
% calculate the starting potential %
u = grav*vv; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
phi = padi(dis,u,v,N,N2); % calculation of the potential
%phi = 0*pot;
time =0;
pic = vv;
% evolution loop begins here ....
for n = 0:30*plotgap
t = n*dt;
if rem(n,plotgap) == 0, %  Plot results at multiples of plotgap
%subplot(2,2,n/plotgap+1),
%[xxx,yyy] = meshgrid(-dis:dis*.02:dis,-dis:dis*.02:dis);
%vvv = interp2(xx,yy,vv,xxx,yyy,'cubic');
%mesh(xxx,yyy,abs(vvv)), % colormap([0 0 0])
%mesh(xx,yy,phi)
time(end+1) =t;
pic(:,:,end+1) =vv;
%view(that);
%hold off
%mesh(xx,yy,abs(vv)./sqrt(xx.^2+yy.^2));
%surf(xx,yy,imag(vv))
%hold on
%surf(xx,yy,abs(vv)) %./sqrt(xx.^2+yy.^2));
%shading interp
%axis([-1 1 -1 1 -1 1]),
%surf(xx,yy,abs(uaa))
%axis([-2*dis 2*dis 0 2*dis -0.2 0.2])
%title(['t = ' num2str(t)]), drawnow
% calculation of the norm..
%cn2d
%cnorm
end
Res = 1;
pot = phi;
npot = phi;
while (abs(Res) > 1E-4) % set up for the iteration ..
% calculation of the partials
urr = zeros(N2+1,N-1);
% for a = 1:N2+1
% urr(a,1:N-1) = (L2*vv(a,1:N-1).').';
% end
uaa = zeros(N2+1,N-1);
for a = 1:N-1
uaa(1:N2+1,a) = (1/x2(a)^2)*L1*vv(1:N2+1,a);
end
%  ADI(LOD)  %
vv2 = 0*vv;
% (1+dr)Un2 = (1-da)Un1
for a = 1:N2+1
sv2 = vv(a,1:N-1);
ds = sqrt(-1)*uaa(a,1:N-1) + e(a,1:N-1).*uaa(a,1:N-1);
d = (sv2 + 0.5*dt*ds).';
bs2 = sqrt(-1)*L2 + diag(e(a,1:N-1))*L2;
b = eye(N-1) - 0.5*dt*bs2;
vv2(a,1:N-1) = (b\d).';
end
% end part1 %
% calculate urr,uaa
% for a = 1:N-1
% uaa(1:N2+1,a) = (1/x2(a)^2)*L1*vv2(1:N2+1,a);
% end
for a = 1:N2+1
urr(a,1:N-1) = (L2*vv2(a,1:N-1).').';
end
% (1+da)Un3 = (1-dr)Un2
vv3 = 0*vv2;
for a = 1:N-1
sv = vv2(1:N2+1,a);
ds = sqrt(-1)*urr(1:N2+1,a) + e(1:N2+1,a).*urr(1:N2+1,a);
d = sv + 0.5*dt*ds;
bs2 = (1/x2(a)^2)*sqrt(-1)*L1 + (1/x2(a)^2)*diag(e(1:N2+1,a))*L1;
b = eye(N2+1) - 0.5*dt*bs2;
vv3(1:N2+1,a) = b\d;
end
% end part2 %
% (1-V)Un4 = (1+V)Un3
vvnew = 0*vv3;
for a = 1:N-1
d = vv3(1:N2+1,a) + 0.5*sqrt(-1)*dt*vv3(1:N2+1,a).*pot(1:N2+1,a);
bpm2 = sqrt(-1)*diag(npot(1:N2+1,a));
b = eye(N2+1) - 0.5*dt*bpm2;
vvnew(1:N2+1,a) = b\d;
end
% end part3 %
% now the calculation of the new potential
% due to the new wavefunction
u = grav*vvnew; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
n2pot = padi(dis,u,v,N,N2); % calculation of the potential
%n2pot = 0*pot;
Res = norm(n2pot-npot);
npot = n2pot; % +0.5*pot;
%pot = npot;
end
phi = n2pot;
vv = vvnew;
%inaxprod(abs(vv),abs(vv),dis,N,N2,1)
save axtest pic time N N2 xx yy dis
end
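Each one-dimensional piece of the LOD splitting above is advanced by a Crank-Nicolson step, (I - dt/2*B)u_new = (I + dt/2*B)u_old. A minimal Python sketch of one such step for the free equation u_t = i*u_xx on a small periodic finite-difference grid (a stand-in for the Chebyshev grid; all sizes and dt are illustrative), checking that the step conserves the norm, as used in the convergence checks of Chapter 5:

```python
import numpy as np

# Crank-Nicolson for u_t = i*u_xx. With B = i*(second difference),
# B is anti-Hermitian, so (I - dt/2*B)^{-1}(I + dt/2*B) is a Cayley
# transform and hence unitary: the discrete norm is preserved exactly.
n, h, dt = 64, 0.1, 0.05
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0            # periodic wrap-around
B = 1j * lap / h**2

x = h * np.arange(n)
u = np.exp(-((x - x.mean()) ** 2)).astype(complex)   # Gaussian initial data
norm0 = np.linalg.norm(u)
I = np.eye(n)
for _ in range(20):
    u = np.linalg.solve(I - 0.5 * dt * B, (I + 0.5 * dt * B) @ u)

print(abs(np.linalg.norm(u) - norm0) < 1e-10)
```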
function phi = padi(dis,u,v,N,N2)
% padi.m
% Aim: to use Peaceman-Rachford ADI iteration
% to find the solution to the Poisson equation in 2D, in the
% axially symmetric case.
%
% u is the wavefunction !
% v is the potential !
% grid setup !
[E,y] = cheb(N2);
y2 = pi/2*(y+1);
dis2 = pi/2;
[D,x] = cheb(N);
x2 = dis*(x+1); % x is the radius points.
x2 = x2(2:N);
D = D/dis;
E = E/dis2;
% calculation of the differentiation matrices
D2 = D^2;
E2 = E^2;
D2 = D2(2:N,2:N);
D = D(2:N,2:N);
I = eye(N2+1);
I2 = eye(N-1);
rho = 0.017;
rho2 = 0.017;
fac1= diag(1./x2);
fac2= diag((cot(y2+eps)+cot(y2-eps))/2);
fac2(1,1) = 1E-5;
fac2(end,end) = 1E-5;
L1 = E2 + fac2*E;
L2 = D2;
% readjust density function !
uu = abs(u).^2*fac1;
umod = uu;
% uu =u;
% start of the iteration
Res = 1E12;
while (abs(Res) > 1E-3)
% calculate wrr,waa
wrr = zeros(N2+1,N-1);
%for a = 1:N2+1
% wrr(a,1:N-1) = (L2*v(a,1:N-1).').';
%end
waa = zeros(N2+1,N-1);
for a = 1:N-1
waa(1:N2+1,a) = (1/x2(a)^2)*L1*v(1:N2+1,a);
end
%  ADI  %
% (-L2+rho)S = rho*Wn + (L1/x^2)*Wn - umod
A = -L2+rho*I2;
vv = 0*v;
for a = 1:N2+1
sv2 = rho*v(a,1:N-1) + waa(a,1:N-1) - umod(a,1:N-1); %abs(uu(a,1:N-1)).^2;
vv(a,1:N-1) = (A\(sv2.')).';
end
% end part1 of ADI
% calculate waa,wrr
%for a = 1:N-1
%waa(1:N2+1,a) = L1*vv(1:N2+1,a);
%end
for a = 1:N2+1
wrr(a,1:N-1) = (L2*vv(a,1:N-1).').';
end
newv = 0*v;
% part2 ADI
%A = -L1+rho2*I;
% (-L1/x^2+rho2)Wn+1 = rho2*S + L2*S - umod
for a = 1:N-1
sv = rho2*vv(1:N2+1,a) + wrr(1:N2+1,a) - umod(1:N2+1,a); %abs(uu(1:N2+1,a)).^2;
A = -L1*(1/x2(a)^2) + rho2*I;
newv(1:N2+1,a) = A\sv;
end
% end of ADI
test = abs(newv-v);
Res = norm(test(1:N2+1,1:N-1));
padires = Res;
v = newv;
end
fac = diag(1./x2);
phi = v*fac;
B.5 Chapter 8 Programs
B.5.1 Evolution programs for the two dimensional SN equations
% ns2d.m - Schrodinger-Newton equations
% on a 2d grid.
% Full nonlinear method using Chebyshev differentiation
% matrices in 2d, with ADI (LOD) splitting for the wave
% equation.
dis = 40;
grav = 1;
speed =0;
% grid setup %
N = 36;
[D,x] = cheb(N);
x = dis*x;
y = x';
D = D*1/dis;
[xx,yy] = meshgrid(x,y);
load twin
[DD,x2] = cheb(N2);
x2 = dis2*x2(2:N2);
y2 = x2';
for cc = 1:N2-1
for cc2 = 1:N2-1
f2(cc,cc2) = f((cc-1)*(N2-1)+cc2);
end
end
%f3 = 0.5*(f2+f2(N2-1:-1:1,:));
[xx2,yy2] = meshgrid(x2,y2);
vv = interp2(xx2,yy2,f2,xx,yy);
for cc = 1:N+1
for cc2 = 1:N+1
if isnan(vv(cc,cc2)) == 1
vv(cc,cc2) = 0;
end
end
end
dt = 1;
% plotting the function on the screen at intervals !
plotgap = round(400/(2*dt)); dt = 400/(2*plotgap);
% initial data %
%vv = 0.07*exp(-0.01*((xx-15).^2+(yy+15).^2))*exp(sqrt(-1)*0.5);
%vva = 0.05*exp(-0.01*((xx+35).^2+(yy-35).^2));
%vv = vv + vva;
% to add initial velocity
%vv = 0.07*exp(-0.02*((xx).^2+yy.^2)).*exp(sqrt(-1)*speed*(yy+xx)/sqrt(2));
%vva = 0.07*exp(-0.01*((xx+10).^2+yy.^2)).*exp(sqrt(-1)*speed*yy);
%vv = vv+vva;
%initial velocity perhaps %.*exp(sqrt(-1)*0.1*xx);
vvold = vv;
vvnew = 0*vv;
phi = 0*vv;
pot = phi;
% sponge factors
e = 0*vv;
e = min(1,1*exp(0.5*(sqrt(xx.^2+yy.^2)-(dis-10))));
% calculation of differentiation matrices
D2 = D^2;
D2 = D2(2:N,2:N);
% calculate the starting potential %
u = grav*vv; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
phi = potadi(dis,u,v,N); % calculation of the potential
time = 0;
pic = vv;
% evolution loop begins here
for n = 0:50*plotgap
t = n*dt;
if rem(n,plotgap) == 0, %  Plot results at multiples of plotgap
%subplot(2,2,n/plotgap+1),
time(end+1) = t;
pic(:,:,end+1) = vv;
%[xxx,yyy] = meshgrid(-dis:dis*.02:dis,-dis:dis*.02:dis);
%vvv = interp2(xx,yy,vv,xxx,yyy,'cubic');
%mesh(xxx,yyy,abs(vvv)), % colormap([0 0 0])
%mesh(xx,yy,phi)
%view(that);
%mesh(xx,yy,imag(vv));
%axis([-1 1 -1 1 -1 1]),
%axis([-dis dis -dis dis -0.03 0.03])
%title(['t = ' num2str(t)]), drawnow
% calculation of the norm..
%cn2d
%cnorm
end
Res = 1;
pot = phi;
npot = phi;
while (abs(Res) > 1E-3) % set up for the iteration ..
% calculate uxx,uyy
uyy = zeros(N+1,N+1);
for a = 2:N
uyy(a,2:N) = (D2*vv(a,2:N).').';
end
uxx = zeros(N+1,N+1);
for a = 2:N
uxx(2:N,a) = D2*vv(2:N,a);
end
%  ADI(LOD)  %
vv2 = 0*vv;
% (1+dy)Un2 = (1-dx)Un1
for a = 2:N
sv2 = vv(a,2:N);
ds = sqrt(-1)*uxx(a,2:N) + e(a,2:N).*uxx(a,2:N);
d = (sv2 + 0.5*dt*ds).';
bs2 = sqrt(-1)*D2 + diag(e(a,2:N))*D2;
b = eye(N-1) - 0.5*dt*bs2;
vv2(a,2:N) = (b\d).';
end
% end part1 %
% calculate uxx,uyy
for a = 2:N
uxx(2:N,a) = D2*vv2(2:N,a);
end
for a = 2:N
uyy(a,2:N) = (D2*vv2(a,2:N).').';
end
% (1+dx)Un3 = (1-dy)Un2
vv3 = 0*vv2;
for a = 2:N
sv = vv2(2:N,a);
ds = sqrt(-1)*uyy(2:N,a) + e(2:N,a).*uyy(2:N,a);
d = sv + 0.5*dt*ds;
bs2 = sqrt(-1)*D2 + diag(e(2:N,a))*D2;
b = eye(N-1) - 0.5*dt*bs2;
vv3(2:N,a) = b\d;
end
% end part2 %
% (1+V)Un4 = (1-V)Un3
vvnew = 0*vv3;
for a = 2:N
d = vv3(2:N,a) - 0.5*sqrt(-1)*dt*vv3(2:N,a).*pot(2:N,a);
bpm2 = sqrt(-1)*diag(npot(2:N,a));
b = eye(N-1) + 0.5*dt*bpm2;
vvnew(2:N,a) = b\d;
end
% end part3 %
% now the calculation of the new potential
% due to the new wavefunction
u = grav*vvnew; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
n2pot = potadi(dis,u,v,N); % calculation of the potential
Res = norm(n2pot-npot);
npot = n2pot; % +0.5*pot;
%pot = npot;
end
phi = n2pot;
%vv = 0.5*(vvnew - vvnew(:,N+1:-1:1));
vv = vvnew;
end
save test3 pic time N xx yy dis
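The sponge factor e in the programs above switches on an absorbing term near the outer boundary: it multiplies the Laplacian, giving the equation a dissipative part where e > 0. A minimal 1-D Python illustration of the effect (finite differences and every parameter value here are illustrative only, not taken from the listings): a wave packet loses norm once it reaches the sponge region, instead of reflecting back.

```python
import numpy as np

# 1-D model of the sponge: u_t = (i + e(x)) u_xx with e(x) rising
# near the right-hand edge, advanced by Crank-Nicolson in time.
n, L, dt = 128, 100.0, 0.4
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2
e = np.minimum(1.0, np.exp(0.5 * (x - (L - 10))))   # sponge factor
B = (1j + e)[:, None] * lap                          # diag(i + e) * Laplacian
I = np.eye(n)

u = np.exp(-0.1 * (x - 30.0) ** 2) * np.exp(2j * x)  # packet moving right
norm0 = np.linalg.norm(u)
for _ in range(200):
    u = np.linalg.solve(I - 0.5 * dt * B, (I + 0.5 * dt * B) @ u)
norm1 = np.linalg.norm(u)
print(norm1 < 0.9 * norm0)   # part of the packet has been absorbed
```

Without the sponge the Crank-Nicolson step is unitary, so any drop in the norm is due entirely to the absorbing region.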
function phi = potadi(dis,u,v,N)
% potadi.m
% Aim: to use Peaceman-Rachford ADI iteration
% to find the solution to the Poisson equation in 2D
%
% u is the wavefunction !
% v is the potential !
% grid setup !
[D,x] = cheb(N);
x = dis*x;
y = x';
[xx,yy] = meshgrid(x,y);
D = D*1/dis;
% calculation of the differentiation matrices
D2 = D^2;
D2 = D2(2:N,2:N);
I = eye(N-1);
rho = +0.01;
% start of the iteration
Res = 1;
while (abs(Res) > 1E-4)
% calculate uxx,uyy
uyy = zeros(N+1,N+1);
for a = 2:N
uyy(a,2:N) = (D2*v(a,2:N).').';
end
uxx = zeros(N+1,N+1);
for a= 2:N
uxx(2:N,a) = D2*v(2:N,a);
end
%  ADI  %
A = -D2+rho*I;
vv = 0*v;
for a = 2:N
sv2 = rho*v(a,2:N) + uxx(a,2:N) - abs(u(a,2:N)).^2;
vv(a,2:N) = (A\(sv2.')).';
end
% end part1 of ADI
% calculate uxx,uyy
for a = 2:N
uxx(2:N,a) = D2*vv(2:N,a);
end
for a = 2:N
uyy(a,2:N) = (D2*vv(a,2:N).').';
end
newv = 0*v;
% part2 ADI
for a = 2:N
sv = rho*vv(2:N,a) + uyy(2:N,a) - abs(u(2:N,a)).^2;
newv(2:N,a) = A\sv;
end
% end of ADI
test = newv-v;
Res = norm(test(2:N,2:N));
v = newv;
end
phi = v;
Abstract

The Schrödinger–Newton (SN) equations were proposed by Penrose [18] as a model for gravitational collapse of the wavefunction. The potential in the Schrödinger equation is the gravity due to the density of |ψ|², where ψ is the wavefunction. As with normal quantum mechanics the probability, momentum and angular momentum are conserved. We first consider the spherically symmetric case; here the stationary solutions have been found numerically by Moroz et al [15] and Jones et al [3]. The ground state, which has the lowest energy, has no zeros, and the higher states are such that the (n+1)th state has n zeros. We consider the linear stability problem for the stationary states, which we numerically solve using spectral methods. The ground state is linearly stable since it has only imaginary eigenvalues. The higher states are linearly unstable, having imaginary eigenvalues except for n quadruples of complex eigenvalues for the (n+1)th state, where a quadruple consists of {λ, λ̄, −λ, −λ̄}. Next we consider the nonlinear evolution, using a method involving an iteration to calculate the potential at the next time step and Crank–Nicolson to evolve the Schrödinger equation. To absorb scatter we use a sponge factor which reduces the reflection back from the outer boundary condition, and we show that the numerical evolution converges for different mesh sizes and time steps. Evolution of the ground state shows it is stable, and added perturbations oscillate at frequencies determined by the linear perturbation theory. The higher states are shown to be unstable, emitting scatter and leaving a rescaled ground state; the rate at which they decay is controlled by the complex eigenvalues of the linear perturbation. Next we consider adding another dimension in two different ways: by considering the axisymmetric case and the 2D equations. The stationary solutions are found, and we modify the evolution method and find that the higher states are unstable. In the 2D case we consider rigidly rotating solutions and show that they exist and are unstable.
Contents

1 Introduction: the Schrödinger–Newton equations
2 Spherically-symmetric stationary solutions
3 Linear stability of the spherically-symmetric solutions
4 Numerical solution of the perturbation equations
5 Numerical methods for the evolution
6 Results from the numerical evolution
7 The axisymmetric SN equations
8 The Two-Dimensional SN equations
A Fortran programs
B Matlab programs
Bibliography
List of Figures

(Figure captions for Chapters 2, 4, 6, 7 and 8.)
˜ 2m ∂t 2 Φ = 4πGmΨ2 . [18]. y.1a) (1. t) = Ψ(x.2a) (1.3a) (1. we also require the potential function to be smooth and zero at large distances. z)e−i Φ(x.1) we consider the substitution Ψ(x. (1. i. z. (1.Chapter 1 Introduction: the Schr¨dingerNewton equations o 1.1 The equations motivated by quantum statereduction The Schr¨dingerNewton equations (abbreviated as SN equations) are : o i ∂Ψ − 2 2 = Ψ + mΦΨ. which gives the timeindependent SN equations. m is the mass of the particle.1b) where is Planck’s constant. 2 ˜ E˜ t . This argument is motivated by a conﬂict between the basic 1 . Φ is the potential. Ψ is the wave function. 2 (1. z). The idea was that a superposition of two or more quantum states. y.2b) − 2 2m ˜ Ψ + mΦΨ = EΨ. z. which have a signiﬁcant amount of mass displacement between the states. One idea for a theory of quantum state reduction was put forward by Penrose [17].e we require the wave function to be smooth for all x ∈ R3 and normalisable. To get the timeindependent form of the SN equations (1. y. ought to be unstable and reduce to one of the states within a ﬁnite time. The boundary conditions for the equations are the same as those for the usual Schr¨dinger o equation and the Poisson equation.3b) Φ = 4πGmΨ2 . y. t) = Φ(x. G is the gravitational constant and t is time.
They have also been considered by Bernstein. t. These equations have also been considered by Moroz. φ = βΦ. Moroz and Tod [14] prove some analytic properties of the SN equations. Bernstein and Jones [4] have started considering a method for the dynamical evolution. φ.] These equations were ﬁrst considered by Ruﬃni and Bonazzola [21] in connection with the theory of selfgravitating bosonstars. superpositions of these states ought to decay within a certain characteristic average time of TG . Giladi and Jones [3] who developed a better way than a shooting method for calculating the stationary solutions in the case of spherical symmetry. Φ. [The SN equations are a ﬁrst approximation we can consider the case where gravity is taken to be Newtonian and spacetime is nonrelativistic. Penrose [18] states that “the phenomenon of quantum state reduction is a gravitational phenomenon. via a transformation (Ψ. Ruﬃni and Bonazzola [21] considered the problem of ﬁnding stationary bosonstars in the nonrelativistic spherically symmetric case which corresponds to ﬁnding stationary spherically symmetric solutions of the SN equations. (ψ.” According to Penrose.principles of quantum mechanics and those of general relativity. r = δR. and that essential changes are needed in the framework of quantum mechanics in order that its principles can be adequately married with the principles of Einstein’s general relativity. There the SN equations are the nonrelativistic limit of the governing KleinGordon equations. (1. where ψ = ψ1 − ψ2 the diﬀerence of the two superposed states 2 R3 mass distributions. where TG = EG and EG is the gravitational selfenergy of the diﬀerence bewteen the mass distributions of the two superposed states. This idea requires that there be a special set of quantum states which collapse no further. r) where: ˜ We can consider the nondimensionalized SN equations. This preferred basis would be the stationary quantum states. 
Our object in this thesis is to study the SN equations in the time dependent and time independent cases. R) → ψ = αΨ. This would be called a preferred basis. That is. t. Penrose and Tod [15] where they computed stationary solutions in the case of spherical symmetry. Bosonstars consist of a large collection of bosons under the gravitational force of their own combined mass. ˜ t = γ t. the 1 selfenergy is  ψ2 dV .4) 2 .
5b) then becomes.6) Φ = α2 Ψ2 . im m Ψ˜ = − 2 γβ t δ β so we then deduce that m = γβ and 2 Ψ + mΨΦ.12b) 2G2 m5 ˜ where E = E . β δ2 from this we deduce that becomes. E − φ = V.13a) (1.12) to 2 2 φ = ψ2 . The (1. (1.5a) (1. ∂t 2 φ = ψ2 .14a) (1. Ψ2 d3 x = 1 and 2 (1.14b) V = −S2 .12) by letting 2 ψ = S.8) 2 m = so.5) the nondimensionalized time independent SN equations are Eψ = − 2 2 ψ + φψ.such that the SN equation become : i ∂ψ = − 2 ψ + φψ.9) and δ= For γ we have γ= 32π 2 G2 m5 3 8πGm3 2 . which reduces (1. (1. (1. i. (1.13b) S = −SV.e gravitation equation (1. (1. (1. δ2β 2m 2 4πGm2 = . 1 α2 δ 2 = 4πGm.12a) (1. ψ2 d3 X = 1 if α2 = δ −3 . δ 2m (1. which was the form of the equations consider in [14].5b) Normalisation is preserved.10) . We can eliminate the E term from (1.7) which becomes. (1. The Schr¨dinger equation o β 4πGmδ iα α Ψt = − 2 ˜ γ δ 2 Ψ + αβΨΦ. thus β = . 3 .11) Now from (1. [15].
0) = χ(x) and regularity properties for ψ and φ. ˆ ˆ Then ψ and φ satisfy the SN equations with ˆ ˆ ψ2 dV = 1. [ ψ· 1 ¯ ψ− 8π ψ(x)2 ψ(y)2 3 i ¯ ¯ d y + (ψ ψt − ψψt )]d3 xdt. Illner et al [13] give 1.2.5) has a unique. 2 2 2φ (1.16b) V We also note that the rescaling of the solutions will also cause the rescaling of the energy eigenvalues (as well as the action). the system (1. x − y 2 = ψ2 . x.1. t) is a solution of the SN equations then (λ2 ψ.2.18) 1. Suppose that ψ2 dV = λ−1 . Alternatively. Given χ(x) ∈ H 2 ((R)3 ) with L2 norm equal to 1. λ−1 x.19) where it is understood that φ is the solution of the Poisson equation with the appropriate Green’s function and consider the Lagrangian. (1.16a) (1. ˆ φ = λ2 φ(λx. strong ψ2 = 1. 4 . λ2 φ. λ2 t).5) can be obtained from the Lagrangian: [ ψ· i ¯ 1 ¯ ¯ ψ + φψ2 + (ψ ψt − ψψt )]d3 xdt. λ−2 t) is also a solution where λ is any constant.2.3 Lagrangian form Note that (1. solution ψ(x. that is: Enew = λE. λ2 t).2 Rescaling Note that if (ψ. φ. (1. t) global in time with ψ(x. (1. one may solve (1.15) V where λ is a real constant.1 Some analytical properties of the SN equations Existence and uniqueness Existence and uniqueness for solutions is established by the following simpliﬁed version of the theorem of Illner et al [13]. and consider ˆ ψ = λ2 ψ(λx.17) (1.20) See Christian [5] and Diosi [7] for details.2 1.
(1. Deﬁne the centre of mass < xi > by < xi >= then ˙ < xi > = P i .5) admits several conserved quantities. (1. it follows that ˙ Li = 0. Sij where ψi = ∂φ ∂ψ and φi = then ∂xi ∂xi ¯ ¯ ¯ ¯ = −ψ ψij − ψψij + ψi ψj + ψi ψj + 2φi φj − δij φk φk .29) 5 . Next deﬁne the total momentum Pi by Pi = Then ˙ Pi = 0. by the symmetry of Sij .22a) (1.i .21c) ¯ ¯ Ji = −i(ψψi − ψ ψi ).j . ˙ Ji = −Sij. the centreofmass follows a straight line.22b) (1.1. ρ = −Ji. where the dot denotes diﬀerentiation with respect to time.25) Ji d3 x. (1.21b) (1.21a) (1.4 Conserved quantities The system (1. (1. Therefore P = ρd3 x = constant. (1. The total angular momentum is Li = ijk xj Jk d 3 x.23) that is. (1. Deﬁne ρ = ψ2 .26) (1.27) ρxi d3 x. all of which are to be expected from linear quantum mechanics. ˙ (1.24) so that.2. as expected. the total probability is conserved.28) and then.
We define the kinetic energy T and potential energy V in the obvious way by

  T = ∫|∇ψ|² d³x,   (1.30a)
  V = ∫φ|ψ|² d³x.   (1.30b)

Note from (1.5b) that

  ∫φ̇ρ d³x = ∫φρ̇ d³x.   (1.31)

It follows from this that the quantity

  E = T + ½V   (1.32)

is conserved. We shall sometimes call this the conserved energy or the action; it is closely related to the action for the time-independent equations. The total energy E = T + V is consequently not conserved.

The identity

  ρφ_i = (φ_iφ_j − ½δ_ij φ_kφ_k)_{,j}   (1.33)

is a consequence of (1.5b), and it follows from this that the averaged ‘self-force’ is zero, i.e. that

  ⟨F_i⟩ = ∫φ_i ρ d³x = 0,   (1.34)

which may be thought of as Newton's Third Law.

We may define a sequence of moments q, p and s by

  q(n)_{i…j} = ∫ρ x_i ⋯ x_j d³x,   (1.35)
  p(n)_{i…jk} = ∫x_{(i} ⋯ x_j J_{k)} d³x,   (1.36a)
  s(n)_{i…jkm} = ∫x_{(i} ⋯ x_j S_{km)} d³x.   (1.36b)

Then it follows from (1.22) that

  q̇(n)_{i…jk} = n p(n−1)_{i…jk},   (1.37a)
  ṗ(n)_{i…jkm} = n s(n−1)_{i…jkm}.   (1.37b)

In particular this means that the ṗ(n)_{i…jkm} are always zero for steady solutions.
1.2.5 Lie point symmetries

Following the general method of Stephani [24] we can find the Lie point symmetries of (1.5). These consist of rotations and translations together with a generalised Galilean transformation, which can be expressed as follows:

  ψ → ψ̂ = ψ(x + P, t) exp[ −(i/2)Ṗ·x + (i/4)∫Ṗ² dt ],   (1.38a)
  φ → φ̂ = φ(x + P, t) + ½ x·P̈,   (1.38b)
  x → x̂ = x + P(t),   (1.38c)

which is a Galilean transformation if and only if P̈ = 0. These are clearly all Lie point symmetries. The analysis leading to the claim that there are no more Lie point symmetries is straightforward but has not been published. The Galilean invariance of (1.5) was noted by Christian [5]. By a Galilean transformation we can reduce the total momentum to zero, and then by a translation we can place the centre-of-mass at the origin.

1.2.6 Dispersion

One of the moments is particularly significant, namely the dispersion:

  ⟨x²⟩ = q(2)_{ii} = ∫x²ρ d³x.   (1.39)

Following Arriola and Soler [2], we find

  q̈(2)_{ii} = 2 (d/dt) ∫x_i J_i d³x = 2∫S_ii d³x = ∫(8|∇ψ|² + 2φψ²) d³x,   (1.40)

so that

  d²⟨x²⟩/dt² = 4E + 4T = 8E − 2V.   (1.41)

Now recall that, as a consequence of the maximum principle (see e.g. [8]), φ is everywhere negative, so that V < 0. Thus

  d²⟨x²⟩/dt² > 8E.   (1.42)

The dispersion grows at least quadratically with time if E is positive, but we cannot conclude this if E is negative.
1.3 Analytic results about the time-independent case

1.3.1 Existence and uniqueness

In Moroz and Tod [14] it was shown that the system (1.12) has infinitely many spherically-symmetric solutions, all with negative energy-eigenvalue and ψ real. (It is easy to see that ψ may always be assumed real in stationary solutions.) These authors did not show, but it is believed, that for each integer n there is a unique (up to sign) real spherically-symmetric solution with n zeroes, and that the energy-eigenvalues increase monotonically in n to zero.

1.3.2 Variational formulation

The system (1.12) can be obtained from a variational problem, by seeking stationary points of the action

  I = ½(E − E) = ½∫[ |∇ψ|² + ½φ|ψ|² − E|ψ|² ] d³x,   (1.43)

subject to ∇²φ = ψ², or by solving (1.12b) with the relevant Green's function and considering

  I = ∫[ ½|∇ψ|² − (1/16π)∫(|ψ(x)|²|ψ(y)|²/|x − y|) d³y − ½E|ψ|² ] d³x.   (1.44)

If we vary ψ → ψ + δψ, φ → φ + δφ, then the first variation of (1.43) is

  δI = ½∫[ δψ̄(−∇²ψ + φψ − Eψ) + c.c. ] d³x,   (1.46)

from which we obtain (1.12a) as the expected Euler–Lagrange equations, while the second variation is

  δ²I = ½∫[ |∇δψ|² + φ|δψ|² − E|δψ|² ] d³x − ¼∫|∇δφ|² d³x,   (1.47)

where δφ solves ∇²δφ = ψδψ̄ + ψ̄δψ. Note that, at the ground state, the second variation (1.47) cannot be negative. By exploiting various standard inequalities, Tod [25] showed that the action is bounded below:

  E ≥ −1/(54π⁴).   (1.48)

One expects that the direct method in the Calculus of Variations should now prove that the infimum of I is attained, that the minimising function is analytic (since the system (1.12) is elliptic), and that the minimising function will be the ground state found numerically by Moroz et al [15] and proved to exist by Moroz and Tod [14].
1.3.3 Negativity of the energy-eigenvalue

We write E₀ for the energy-eigenvalue of the ground state. We now prove that for any stationary state the energy-eigenvalue E is negative. Following Tod [25], we define the tensor

  T_ij = ψ̄_iψ_j + ψ_iψ̄_j + φ_iφ_j − ½δ_ij(ψ̄_kψ_k + ψ_kψ̄_k + φ_kφ_k + φ|ψ|² − E|ψ|²).   (1.49)

Then as a consequence of (1.12) it follows that T_{ij,j} = 0, whence

  0 = ∫(x_iT_ij)_{,j} d³x = ∫T_ii d³x,   (1.50)

so that

  0 = ∫[ −|∇ψ|² − (5/2)φψ² + 3Eψ² ] d³x   (1.51)

and

  3E = T + (5/2)V.   (1.52)

With E = T + V this shows that for a steady solution

  T = −(1/3)E,   (1.53)
  V = (4/3)E,   (1.54)

and hence E = T + ½V = (1/3)E; as T > 0 by definition, it follows that E < 0.

Tod [25] shows that

  0 ≤ (−V) ≤ (4/(3√3 π)) T^{1/2},   (1.55)

so that with E = T + ½V we may solve for T to find the bounds

  T^{1/2} ≤ 1/(3√3 π) + [E + 1/(27π²)]^{1/2},   (1.56a)
  T^{1/2} ≥ 1/(3√3 π) − [E + 1/(27π²)]^{1/2}.   (1.56b)

Arriola and Soler [2] have a stronger result, with a bound involving (−E₀)^{1/2} T^{1/2} on the right-hand side. These results still hold at each instant for the time-dependent case, so that the kinetic and potential energies are separately bounded in terms of the action or conserved energy at all times.
1.4 Plan of thesis

In this thesis we start with a review of the methods which can be used to compute the stationary solutions in the case of spherical symmetry (chapter 2). Then in chapter 3 we consider the linear perturbation about the spherically symmetric stationary solutions, obtaining an eigenvalue problem (3.24) to solve, ready for numerical solution in chapter 4. Also in chapter 3 we obtain restrictions on the eigenvalues analytically, such that either the eigenvalues are purely real or imaginary, or an integral condition on the eigenvectors vanishes (see section 3.4). In chapter 4 we solve the eigenvalue problem using spectral methods for the first few stationary solutions, check the results using Runge–Kutta integration, and confirm the conditions on the eigenvalues.

In chapter 5 we consider the problem of finding a numerical method to evolve the time-dependent SN equations, and we consider the boundary conditions to put on the numerical problem. Also in chapter 5 we consider adding a small heat term, called a sponge factor, to the Schrödinger equation to absorb scattered waves. In chapter 6 we evolve the problem numerically with different initial conditions and check the convergence of the evolution method with different mesh and time-step sizes. We consider the evolution of the stationary states to see if they are stable, and we also look at the stationary states with added perturbations and compare the frequencies of oscillation with the linear perturbation theory. We consider the axisymmetric SN equations in chapter 7 and look at the evolution as well as the time-independent equations. In chapter 8 we consider the 2-dimensional SN equations (8.2), or equivalently the translationally symmetric case; the evolution is considered, as well as the concept of finding two spinning lumps orbiting each other.

1.5 Conclusion

The results obtained from considering the spherically symmetric case are:

• The ground state is linearly stable. (See section 4.2)
• The linear perturbation about the nth excited state, which is to say the (n + 1)th state, has n quadruples of complex eigenvalues as well as pure imaginary pairs. (See sections 4.3 and 4.4)
• The ground state is stable under the full (nonlinear) evolution. (See section 6.3)
• Perturbations about the ground state oscillate with the frequencies obtained by the linear perturbation theory. (See section 6.5)
• The higher states are unstable and will decay into a “multiple” of the ground state, while emitting some scatter off the grid. (See section 6.4)
• The decay time for higher states is controlled by the growing linear mode obtained in the linear perturbation theory. (See section 6.4)
• Perturbations about higher states will oscillate for a while (until they decay) according to the linear oscillation obtained by the linear perturbation theory. (See section 6.5)
• The evolution of different exponential lumps indicates that any initial condition appears to decay, that is, it emits scatter off the grid, leaving a “multiple” of the ground state. (See section 6.4)
• Lumps of probability density attract each other and come together, emitting scatter and leaving a “multiple” of the ground state. (See section 6.4)

The results obtained from considering the axially symmetric case are:

• Stationary solutions that are axisymmetric exist, and the first one is like the dipole of the Hydrogen atom. (See section 7.3)
• Evolution of the dipole-like solution shows that it is unstable in the same way as the higher spherically symmetric stationary solutions are — that is, it scatters and leaves a “multiple” of the ground state — and that lumps of probability density attract each other. (See section 7.4)
• There exist rotating solutions, but these are unstable. (See section 7.5)

The results obtained from considering the 2-dimensional case are:

• Evolution of the higher states is unstable, that is, they scatter and leave a “multiple” of the ground state. (See section 8.3)
• Lumps of probability density attract each other and come together, emitting scatter, and leave a “multiple” of the ground state. (See section 8.4)
• There exist rotating solutions, but these are unstable. (See section 8.5)
Chapter 2

Spherically-symmetric stationary solutions

2.1 The equations

In the case of spherical symmetry and time-independence we can assume without loss of generality that ψ is real. So (1.14) becomes

  (rS)″ = −rSV,   (2.1a)
  (rV)″ = −rS².   (2.1b)

We also note that if (S, V, r) is a solution then so is (λ²S, λ²V, λ⁻¹r). We have the boundary conditions S → 0 as r → ∞ and S′ = 0 = V′ at r = 0. At large r, bounded solutions to (2.1) decay like

  V = A + B/r,   (2.2a)
  S = (C/r) e^{−kr},   (2.2b)

where

  A = V₀ − ∫₀^∞ x S² dx,   (2.3a)
  B = ∫₀^∞ x² S² dx,   (2.3b)

and V₀ is the initial value of the potential V (i.e. V₀ = V(0)).
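The rescaling invariance noted above can be confirmed mechanically: for any smooth trial pair (S, V) — they need not solve the equations — the residuals of (2.1) evaluated on (λ²S(λr), λ²V(λr)) equal λ³ times the original residuals at λr, so solutions map to solutions. A sketch with sympy, using arbitrary trial functions:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
lam = sp.Rational(17, 10)        # arbitrary scaling parameter

# arbitrary smooth trial functions (not solutions of (2.1))
S = sp.exp(-r)
V = 1/(1 + r**2)

def resid_S(Sf, Vf):             # left-hand side of (rS)'' + r S V = 0
    return sp.diff(r*Sf, r, 2) + r*Sf*Vf

def resid_V(Sf, Vf):             # left-hand side of (rV)'' + r S^2 = 0
    return sp.diff(r*Vf, r, 2) + r*Sf**2

Sh = lam**2 * S.subs(r, lam*r)   # rescaled pair
Vh = lam**2 * V.subs(r, lam*r)

dS = sp.simplify(resid_S(Sh, Vh) - lam**3 * resid_S(S, V).subs(r, lam*r))
dV = sp.simplify(resid_V(Sh, Vh) - lam**3 * resid_V(S, V).subs(r, lam*r))
print(dS, dV)   # 0 0
```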
2.2 Computing the spherically-symmetric stationary states

2.2.1 Runge–Kutta integration

Now (2.1) can be rewritten as a system of four first-order ODEs

  Y₁′ = Y₂,   (2.4a)
  Y₂′ = −(2/r)Y₂ − Y₁Y₃,   (2.4b)
  Y₃′ = Y₄,   (2.4c)
  Y₄′ = −(2/r)Y₄ − Y₁²,   (2.4d)

where Y₁ = S and Y₃ = V. The numerical technique used by Moroz et al [15] to obtain the stationary states uses fourth-order Runge–Kutta integration on (2.4), starting at r = 0 and integrating outwards towards infinity. The initial values are picked so that the boundary conditions at r = 0 are satisfied, and the other values are guessed. The states are obtained by refining the initial guess until a solution which tends to zero when r is large is obtained.

There are various methods for obtaining the correct initial values which correspond to stationary states. The method used by Moroz et al [15] was to integrate up to a fixed value in the domain and then modify the initial values such that S is zero at that point; because of the exponential blow-up of solutions, the value of r chosen cannot be too large. Another is a shooting method, which involves integrating until some value of r, looking at the value of S at that point, waiting for the routine to fail or blow up, and refining the guess based on which way the function blows up.

I have used a modified shooting method to avoid the problem which occurs with Runge–Kutta integrations, namely the exponential blow-up of solutions. I have modified the integration in such a way that the program integrates over small steps using a fourth-order Runge–Kutta NAG routine. After each step it terminates if the solution is too large in absolute magnitude, or if the solution for S blows up exponentially, and then plots the solution so far. The problem with this routine is that as the function blows up the step size decreases so that the tolerance remains low; this increases the computation time needed, and the routine will also fail if it takes too long. From information on which side of the axis the solution becomes unbounded we can refine the initial conditions to obtain the eigenfunction.

The normalisation invariance allows either V(0) or S(0) to be set equal to a chosen constant. In the case where V₀ = 1 we say that 1.5 is large, since the first state occurs around 1.088. Using this method we are able to obtain the first 50 values for S₀, or equivalently the first 50 energy levels.
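The shooting procedure described above can be sketched with an off-the-shelf integrator standing in for the NAG routine. The bracket [1.05, 1.12] and the blow-up threshold are assumptions chosen to isolate the ground state with V₀ = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def escape_sign(s0, v0=1.0, r_max=50.0):
    """Integrate (2.4) outward from r ~ 0 and return the sign of S
    when the solution blows up (or at r_max if it has not)."""
    def rhs(r, y):
        y1, y2, y3, y4 = y
        return [y2, -2.0*y2/r - y1*y3, y4, -2.0*y4/r - y1**2]
    def blowup(r, y):
        return abs(y[0]) - 5.0
    blowup.terminal = True
    sol = solve_ivp(rhs, (1e-6, r_max), [s0, 0.0, v0, 0.0],
                    rtol=1e-10, atol=1e-12, events=blowup)
    return np.sign(sol.y[0, -1])

# refine S(0) by bisection on the direction of blow-up
lo, hi = 1.05, 1.12
for _ in range(45):
    mid = 0.5*(lo + hi)
    if escape_sign(mid) == escape_sign(lo):
        lo = mid
    else:
        hi = mid
print(0.5*(lo + hi))   # homes in on the ground-state S0 quoted above
```

Excited states are obtained the same way, with brackets that isolate the desired number of zero crossings.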
For the stationary spherically symmetric case it was shown in Moroz et al [15] that the eigenvalue is

  Ẽ = (2G²m⁵/ℏ²)(A/B²),   (2.5)
where A and B are given by (2.3), and V₀ is the initial value of the potential function V. Using the above method we calculated A, B and 2A/B², which is the energy up to a factor of G²m⁵/ℏ², and which we compare with the first 16 eigenvalues of Jones et al [3]. The first 20 eigenvalues calculated with the above routine are given in Table 2.1. We also plot in figure 2.1 the first four spherically symmetric states, normalised such that ∫ψ² d³x = 4π. Note that the nth state has (n − 1) zeroes or “nodes”.

  Number of zeros   Energy eigenvalue    Jones et al [3] eigenvalues
  0                 0.16276929132192     0.163
  1                 0.03079656067054     0.0308
  2                 0.01252610801692     0.0125
  3                 0.00674732963038     0.00675
  4                 0.00420903256689     0.00421
  5                 0.00287386420271     0.00288
  6                 0.00208619042678     0.00209
  7                 0.00158297244845     0.00158
  8                 0.00124207860434     0.00124
  9                 0.00100051995162     0.00100
  10                0.00082314193054     0.000823
  11                0.00068906850493     0.000689
  12                0.00058527053127     0.000585
  13                0.00050327487416     0.000503
  14                0.00043737620824     0.000437
  15                0.00038362194847     0.000384
  16                0.00033920111442
  17                0.00030207158301
  18                0.00027072080257
  19                0.00024400868816
  20                0.00022106369652

Table 2.1: The first 20 eigenvalues
2.2.2
Alternative method
An alternative method, which we use later on in chapters 4 and 6 to compute the eigenfunction at the Chebyshev points, is given below. Jones et al [3] used an iterative numerical scheme for computing the n-node stationary states, instead of a shooting method. An outline of their method is as follows:
Figure 2.1: First four eigenfunctions

1. Set an outer radius R.

2. Supply an initial guess for uₙ = rψₙ.

3. Solve for Φ in

  ∂²Φ/∂r² + (2/r) ∂Φ/∂r = r⁻² uₙ².   (2.6)

4. Solve for the n-node eigenvalue εₙ and eigenfunction of

  ∂²uₙ/∂r² = 2uₙ(Φ − εₙ).   (2.7)

5. Iterate the previous two steps until the eigenvalue converges sufficiently, a typical criterion being that the change in εₙ from one iteration to the next is less than 10⁻⁹.

6. Iterate the previous five steps, increasing R, until εₙ stops changing with the change in R.
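The steps above can be sketched on a uniform radial grid. The grid, the initial guess, the 50/50 mixing of Φ, and the normalisation ∫u² dr = 1 (corresponding to ∫ψ² d³x = 4π, as used for figure 2.1) are all assumptions of this illustration, not choices made by Jones et al:

```python
import numpy as np
from scipy.linalg import solve_banded, eigh_tridiagonal

R, M = 40.0, 800
h = R / M
r = h * np.arange(1, M)                  # interior grid points

def normalise(u):
    return u / np.sqrt(h*np.sum(u**2))   # assume int u^2 dr = 1

u = normalise(r * np.exp(-r/4.0))        # step 2: initial guess for u0
Phi = np.zeros(M - 1)
eps_old = np.inf
for _ in range(300):
    # step 3: solve (2.6) via w'' = u^2/r for w = r*Phi, with w(0) = 0 and
    # w(R) = -int u^2 dr, so that Phi decays like -const/r at the outer radius
    rhs = h**2 * u**2 / r
    rhs[-1] += h*np.sum(u**2)            # move the boundary value of w to the RHS
    ab = np.zeros((3, M - 1))
    ab[0, 1:] = 1.0; ab[1, :] = -2.0; ab[2, :-1] = 1.0
    Phi = 0.5*Phi + 0.5*(solve_banded((1, 1), ab, rhs) / r)   # mix for stability
    # step 4: lowest (0-node) eigenpair of -u''/2 + Phi*u = eps*u
    d = 1.0/h**2 + Phi
    e = -0.5/h**2 * np.ones(M - 2)
    vals, vecs = eigh_tridiagonal(d, e, select='i', select_range=(0, 0))
    eps = vals[0]
    u = normalise(np.abs(vecs[:, 0]))    # ground state can be taken positive
    # step 5: stop once the eigenvalue has converged
    if abs(eps - eps_old) < 1e-10:
        break
    eps_old = eps
print(eps)   # negative, since the self-consistent ground state is bound
```

Step 6 (increasing R until the eigenvalue is insensitive to it) would wrap this loop in an outer one.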
2.3
Approximations to the energy of the bound states
Jones et al [3] claim that the energy eigenvalues 2A/B² closely follow a least-squares fit of the formula:

  Eₙ = −α/(n + β)^γ,   (2.8)
Figure 2.2: The least squares fit of the energy eigenvalues

where α = 0.096, β = 0.76 and γ = 2.00. This appears to be a very good fit to the first 50 eigenvalues; the eigenvalues and the fit are plotted in figure 2.2. Moroz et al [15] claim that the number of zeros is of the order of exp(V₀²/S₀²). Figure 2.3 shows that exp(V₀²/S₀²) is an over-estimate of the number of nodes, and that the ratio appears to converge as n gets large. The log-log plot in figure 2.4 shows a gradient of −2 as n gets large. This is what we expect, since Eₙ = −α/(n + β)^γ is such a good fit when γ = 2.
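The quoted fit can be reproduced from the tabulated eigenvalues; a minimal sketch with scipy, using the 21 values of Table 2.1 (the thesis fit used 50), with n counting the number of zeros:

```python
import numpy as np
from scipy.optimize import curve_fit

# eigenvalues 2A/B^2 from Table 2.1 (sign dropped), n = 0 is the ground state
E = np.array([0.16276929132192, 0.03079656067054, 0.01252610801692,
              0.00674732963038, 0.00420903256689, 0.00287386420271,
              0.00208619042678, 0.00158297244845, 0.00124207860434,
              0.00100051995162, 0.00082314193054, 0.00068906850493,
              0.00058527053127, 0.00050327487416, 0.00043737620824,
              0.00038362194847, 0.00033920111442, 0.00030207158301,
              0.00027072080257, 0.00024400868816, 0.00022106369652])
n = np.arange(len(E))

def model(n, alpha, beta, gamma):
    return alpha / (n + beta)**gamma     # magnitude of (2.8)

(alpha, beta, gamma), _ = curve_fit(model, n, E, p0=(0.1, 0.8, 2.0))
print(alpha, beta, gamma)                # close to the quoted 0.096, 0.76, 2.00
```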
Figure 2.3: The number of nodes with the Moroz et al [15] prediction of the number of nodes

Figure 2.4: Log-log plot of the energy values with log(n)
Chapter 3

Linear stability of the spherically-symmetric solutions

3.1 Linearising the SN equations

In this chapter we shall set up the linear stability problem, ready for numerical solution in chapter 4. We look for a solution to (1.5) of the form:

  ψ = ψ₀(r, t) + εψ₁(r, t) + ε²ψ₂(r, t) + …,   (3.1a)
  φ = φ₀(r, t) + εφ₁(r, t) + ε²φ₂(r, t) + ….   (3.1b)

Substitution of (3.1) into the SN equations (1.5) gives:

  iψ₀t + ∇²ψ₀ + ε[iψ₁t + ∇²ψ₁] + ε²[iψ₂t + ∇²ψ₂]
   = ψ₀φ₀ + ε(ψ₀φ₁ + ψ₁φ₀) + ε²(ψ₀φ₂ + ψ₁φ₁ + ψ₂φ₀),   (3.2a)
  ∇²φ₀ + ε∇²φ₁ + ε²∇²φ₂ = ψ₀ψ̄₀ + ε(ψ₀ψ̄₁ + ψ̄₀ψ₁) + ε²(ψ̄₀ψ₂ + ψ₁ψ̄₁ + ψ₀ψ̄₂).   (3.2b)

Equating powers of ε, we obtain at O(1):

  iψ₀t + ∇²ψ₀ = ψ₀φ₀,   (3.3a)
  ∇²φ₀ = |ψ₀|²,   (3.3b)

and at O(ε):

  iψ₁t + ∇²ψ₁ = ψ₀φ₁ + ψ₁φ₀,   (3.4a)
  ∇²φ₁ = ψ₀ψ̄₁ + ψ̄₀ψ₁.   (3.4b)
At O(ε²):

  iψ₂t + ∇²ψ₂ = ψ₀φ₂ + ψ₁φ₁ + ψ₂φ₀,   (3.5a)
  ∇²φ₂ = ψ̄₀ψ₂ + ψ₁ψ̄₁ + ψ₀ψ̄₂.   (3.5b)

We consider the case of spherical symmetry, so that

  ∇²f = (1/r²)(r²f_r)_r = (1/r)(rf)_rr.   (3.6)

Then (3.3) becomes

  i(rψ₀)_t + (rψ₀)_rr = rψ₀φ₀,   (3.7a)
  (rφ₀)_rr = rψ₀ψ̄₀.   (3.7b)

Since we are interested in the stability of the stationary problem we take

  ψ₀ = R₀(r)e^{−iEt},  φ₀ = E − V₀(r),   (3.8)

where R₀ is real, so that

  (rR₀)_rr = −rR₀V₀,   (3.9a)
  (rV₀)_rr = −rR₀².   (3.9b)

The O(ε) problem then becomes

  i(rψ₁)_t + (rψ₁)_rr = r(E − V₀)ψ₁ + rφ₁R₀(r)e^{−iEt},   (3.10a)
  (rφ₁)_rr = rR₀(r)e^{−iEt}ψ̄₁ + rR₀(r)e^{iEt}ψ₁.   (3.10b)

To eliminate e^{−iEt}, we seek solutions of the form ψ₁ = R₁(r, t)e^{−iEt}, φ₁ = φ₁(r, t), where R₁ is complex and φ₁ is real, so that (3.10) simplifies to give

  i(rR₁)_t + (rR₁)_rr = R₀(rφ₁) − V₀(rR₁),   (3.11a)
  (rφ₁)_rr = R₀(rR₁) + R₀(rR̄₁).   (3.11b)

For convenience we introduce P = rφ₁ and R = rR₁, so that

  iR_t + R_rr = R₀P − V₀R,   (3.12a)
  P_rr = R₀R + R₀R̄.   (3.12b)

Note that P and R must vanish at the origin.
3.2 Separating the O(ε) equations

We look for a solution of the O(ε) problem in the form

  R = (A + B)e^{λt} + (Ā − B̄)e^{λ̄t},   (3.13a)
  P = W₁e^{λt} + W₂e^{λ̄t},   (3.13b)

where we assume for now that λ is not real, and A, B, W₁ and W₂ are time-independent functions; as we are considering the spherically symmetric case they depend upon r only. Since P is real we note that W₂ = W̄₁, so we can let W = W₁ and W̄ = W₂. Substituting into (3.12a) gives

  i(λ(A + B)e^{λt} + λ̄(Ā − B̄)e^{λ̄t}) + (A_rr + B_rr)e^{λt} + (Ā_rr − B̄_rr)e^{λ̄t}
   = R₀(We^{λt} + W̄e^{λ̄t}) − V₀((A + B)e^{λt} + (Ā − B̄)e^{λ̄t}).   (3.14)

Equating the coefficients of e^{λt} and e^{λ̄t} — noting that we can do this only if λ̄ ≠ λ, so that λ is not real — the coefficient of e^{λt} gives

  R₀W = iλ(A + B) + (A_rr + B_rr) + V₀(A + B),   (3.15)

while the coefficient of e^{λ̄t}, after conjugation, gives

  R₀W = −iλ(A − B) + (A_rr − B_rr) + V₀(A − B).   (3.16)

Adding and subtracting these, we deduce

  R₀W = iλB + A_rr + V₀A,   (3.17)
  B_rr + V₀B = −iλA.   (3.18)

Substituting into (3.12b) we obtain

  W_rr e^{λt} + W̄_rr e^{λ̄t} = R₀((A + B)e^{λt} + (Ā − B̄)e^{λ̄t}) + R₀((A − B)e^{λt} + (Ā + B̄)e^{λ̄t}).   (3.19)

Again equating coefficients of e^{λt}, and since R₀ = R̄₀, we have

  W_rr = 2R₀A,   (3.20)

while the coefficient of e^{λ̄t} gives W̄_rr = 2R₀Ā, which is just the conjugate of the same equation.
If λ is real, we note that (3.13) reduces to

  R = Ae^{λt},  P = We^{λt}.   (3.21)

Substituting into (3.12) gives

  iλAe^{λt} + A_rr e^{λt} = R₀We^{λt} − V₀Ae^{λt},   (3.22)
  W_rr e^{λt} = R₀Ae^{λt} + R₀Āe^{λt},   (3.23)

so that iλA + A_rr = R₀W − V₀A and W_rr = R₀(A + Ā), which is just one equation in each case.

To summarise, the O(ε) problem leads to three coupled linear O.D.E.s for the perturbation:

  W_rr = 2R₀A,   (3.24a)
  A_rr + V₀A = R₀W − iλB,   (3.24b)
  B_rr + V₀B = −iλA,   (3.24c)

and (3.24) covers both cases. If we let A = a + ib, where a and b are real functions of r, and substitute into the real-λ equations, then equating real and imaginary parts gives

  W_rr = 2R₀a,   (3.25a)
  a_rr + V₀a = R₀W + λb,   (3.25b)
  b_rr + V₀b = −λa.   (3.25c)
We note that λ = 0 will be an eigenvalue of (3.24), with a = 0, b ∝ rR₀ and W = 0, but this corresponds to just a rotation in the phase factor.

We now consider the symmetries of the eigenvalues. Taking complex conjugates of (3.24) shows that if (A, B, W) is an eigenvector with eigenvalue λ, then (Ā, B̄, W̄) is an eigenvector with eigenvalue −λ̄; so if λ is an eigenvalue then −λ̄ is an eigenvalue. Also, if (A, B, W) is transformed by (A, B, W) → (A, −B, W), then the equations become

  W_rr = 2R₀A,
  A_rr + V₀A = R₀W + iλB,
  B_rr + V₀B = iλA,

which are the same as (3.24) with λ replaced by −λ; so if λ is an eigenvalue then −λ is an eigenvalue. Combining these, in the case of λ complex the eigenvalues exist in groups of four, {λ, λ̄, −λ, −λ̄}; otherwise they exist in pairs ±λ or the singleton λ = 0. Suppose the eigenvalues are such that A is real; then (3.24) implies that B is imaginary while W is real, and we can therefore set A = a and B = ib, where a and b are real functions of r.

3.3 Boundary Conditions

We now consider the boundary conditions for the O(ε) problem (3.12). Since φ₁ = P/r and R₁ = R/r, we require φ₀ + εφ₁ to be a physically representable potential function and
i. For the potential φ0 + φ1 we require function to be wellbehaved at r = 0.37) coeﬃcient of 1 ∞ 0 ¯ ¯ (R1 R0 + R0 R1 )r2 dr. The integral (3. (3. This implies φ1 .40b) ¯ (RR)dr. ¯ The integral (3. so that P (0) = 0.e the following integral exists: ∞ 0 (R0 + R1 )(R0 + R1 )r2 dr. ∞ 0 (3. ∞ 0 ¯ (R + R)R0 rdr. (3.36) and upon equating coeﬃcients of coeﬃcient of 0 we require the following integrals to exist: ∞ 0 ¯ (R0 R0 )r2 dr.37) requires the wavefunction of the stationary spherically symmetric problem to be normalisable. corresponding to the freedom we have of where to set the zero on 23 .40a) (3.34) The condition that R0 + R1 be a physically representable wavefunction is that it be normalisable. in terms of R. φ1 and R1 must be wellbehaved functions of r at r = 0. We can therefore choose φ0 → 0 and φ1 → 0 as r → ∞. We expect that the solution for which this is not the case will blow up exponentially.R0 + R1 to be a physically representable wavefunction. Hence R → 0 as r → ∞ is a necessary but not a suﬃcient condition for the wavefunction to be normalisable.35) This becomes ∞ 0 ¯ ¯ ¯ (R0 R0 + (R1 R0 + R0 R1 ) + 2 ¯ R1 R1 )r2 dr. so that the boundary condition that R → 0 as r → ∞ should be suﬃcient. We also have an arbitrary scaling the energy scale. ∞ 0 (3.39) The O(1) condition (3. and the remaining two integrals become. R1 → ﬁnite values as r → 0. as R0 behaves like eEr as r → ∞.38) coeﬃcient of 2 ¯ (R1 R1 )r2 dr.40a) exists provided R+R does not grow exponentially with r. (3. R(0) = 0. to the potential function.40b) implies that R2 → 0 as r → ∞. (3. (3.
24) where (R0 .45) ¯ is real if iλ is real or 0 λ ¯ ABdr = 0 which implies that ¯ is real i.S of (3.43) = 0 We note that the R.H. (3.4 Restriction on the possible eigenvalues ∞ 0 We claim that λ2 is real for the perturbation equations unless proof is due to Tod [26]. A(∞) = 0. (3. V0 ) satisfy (rR0 ) (rV0 ) We now consider −iλ ∞ 0 ¯ ABdr = 0. W (0) = 0. W (∞) = 0. Combining the two results we have that: ∞ ¯ −iλ 0 ABdr . W (∞) = 0. (3. 2 (3.44) = 0 We note that the R. Consider (3. 24 . ¯ ¯ ¯ ¯ ¯ When R = (A + B)eλt + (A − B)eλt and P = W eλt + W eλt . W (0) = 0. the boundary conditions on R R(0) = 0 ⇒ P (0) = 0 ⇒ R(∞) = 0 ⇒ (rP )(∞) = 0 ⇒ A(0) = 0. B(∞) = 0. B(∞) = 0. A(∞) = 0. B(0) = 0.42b) ¯ ABdr = 0 ∞ ∞ ¯ ¯ Brr B + V0 B Bdr. 1 V0 A2 − Ar 2 − Wr 2 dr.S of (3. We also consider: −iλ ∞ 0 ¯ ABdr = 0 ∞ ∞ ¯ ¯ ¯ Arr AV0 AA − R0 W Adr.e that λ2 is real .44) is real since V0 is real.41) 3. B(0) = 0.42a) (3. V0 B2 − Br 2 dr. So to summarise the boundary condition on A. = −rR0 V0 .43) is real since V0 is real. ∞ ¯ ¯ iλ 0 ABdr ∞ (3.and φ imply that. The following 2 = −rR0 . Hence either λ2 λ 0 ∞ ¯ ABdr = 0. B and W are A(0) = 0.H.
(3. We choose a normalisation of the perturbation so that A2 = cos2 θ.49) E 3 )4 .46) and then by the Holder and Sobolev inequalities. 25 .46) λR + iλI cos 2θ ≤ 2C1 sin θ cos θ( Now use Sobolev and Section 1. (3. 9π 2 (3. we ﬁnd  with C1 = 23 3π 3 4 4 ¯ R0 B W  ≤ C1 ( A2 ) 2 ( 1 B2 ) 2 ( 1 R0 3 ) 3 .24) it is possible to prove an inequality for the real part of λ . First we obtain i ¯ ¯ ¯ [λAA + λB B]d3 x = − ¯ R0 B W d3 x.3. 2 (3. B2 = sin2 θ.5 An inequality on Re(λ) From the linearised perturbation system (3. 3 (3. 2 (3.51) Note that if the normalisation is diﬀerent to one we need to rescale the E value.2 to ﬁnd 4 R0 3 ≤ C1 (− 3 R0 3 ) 3 . as in Tod [25].48) and set λ = λR + iλI to ﬁnd from (3.50) so ﬁnally λR  ≤ 4 √ −E.47) .
dx dx (4. We let p(x) be the unique polynomial of degree N or less such that p(xi ) = vi for i = 0. by approximating the problem to that of solving a matrix eigenvalue problem. . . N + 1 . . N.3) . vN wN vN + 1 wN + 1 26 . .24) are linear whereas the SN equations are not.Chapter 4 Numerical solution of the perturbation equations 4. N + 1 and N + 1 is the number of Chebyshev points. Deﬁne wi ’s by wi = p (xi ) for i = 0. where the vi ’s are values of a function at the Chebyshev points. where the Chebyshev polynomials are such that pn (x) = cos(nθ). = DN .1) (4. . 1 . N + 1. denoted DN . 1 . with θ = cos−1 (x). . . (4. N. See [27] for more details about spectral methods. N.1 The method The O(1) perturbation equations (3. is deﬁned to be such that v1 w1 v2 w2 . They also satisfy the diﬀerential equation 1 dpn (x) d ((1 − x2 ) 2 ) = n2 pn (x). The diﬀerentiation matrix for polynomials of degree N. We N note that these points correspond to zeros of the Chebyshev polynomials. . . We use Chebyshev polynomial interpolation to get a diﬀerentiation matrix. . We can therefore solve these equations using spectral methods. 1 .2) When using Chebyshev polynomials we sample our data at Chebyshev points that is at iπ xi = cos( ) where i = 0.
=− 6 xi =− 2(1 − x2 ) i ci (−1)i+j cj (xi − xj ) 1 ≤ i ≤ N − 1. 6 2N 2 + 1 . . In this case it is the DN matrix. .1] .j = where ci = 2 for i = 0 or i = N and ci = 1 otherwise. ..which is : D0. . . 0 .L] = D[−1. .4a) Di.. . 2 The second derivative matrix is just DN since the p (x) is the unique polynomial of degree N or less through wi ’s. 0 0 R(Xn−1 ) .. 27 . . 0 . .. . B = . 2 −V ˜ 0 I 0 W W 0 R0 −DN 0 A(X1 ) .. (4. . W B(Xn−1 ) W (X1 ) .. L].. And R0 = A(Xn−1 ) B(X1 ) A . We note that since the interval we are interested in is [0. (4..0 = DN. 0 .N Di.5) 0 DN + V0 0 B = iλ −I 0 0 B . . 0 .. all diﬀerentiation matrix below are on the interval [0.B and W be zero at the boundary is applied by deleting the ﬁrst and last rows and columns of the diﬀerentiation matrix in question. . B(0) = 0 = B(L) 2 and W (0) = 0 = W (L). . 1] we L rescale points by Xi = (1 + xi ). N where For the perturbation equations we therefore obtain the matrix eigenvalue equation ˜2 A −2R0 0 DN 0 0 0 A ˜2 (4. The requirement L that A.. . since A(0) = 0 = A(L). We also need to rescale the diﬀerentiation matrix so 2 1 D[0. . 0 . . (4..6) . . W (Xn−1 ) R0 (X1 ) 0 0 R0 (X2 ) . . .. . L] instead of [−1. .7) . This yields a (N − 1) × (N − 1) ˜ matrix denoted by D2 . ... (i = j).i 2N 2 + 1 .
1: Eigenvalues of the perturbation about the ground state 28 . .. we suspect that the eigenvalues might be inaccurate due to this. The diﬀerence between the calculated values for the eigenvalues turns out to be small so we can use (4. .. . In ﬁgure 4. We therefore rewrite (4. 0 .08100785899036i ±0 − 0.07654635649084i ±0 − 0. .2 (note that the scale is diﬀerent) to see that up to these limits there are no eigenvalues other than imaginary ones. 0 ...We then solve this generalised eigenvalue problem to obtain the eigenvalues and eigenvectors.06882503249714i ±0 − 0. We note that since this generalised matrix eigenvalue problem (4. .1). .5) about the ground state of the spherically symmetric SN equations (1. . . We also plot all the eigenvalues obtained from solving (4..5) as: ˜2 DN ˜2 0 DN + (V0 + V ) 2 )−1 R ˜ + (V0 + V ) − 2R0 (DN 0 0 = iλ I 0 0 I A B A B . we used N = 60 and length of the interval L = 150. (4.5). B. ..8) that is we are inverting or solving the equation for W in terms of A. ±0. .08665500294956i Table 4..5) to obtain (A. 4.. (4.1. .. . .07310346395426i ±0 − 0.. ..9) V0 = V0 (X1 ) 0 0 V0 (X2 ) . 0 .. . . . 0 0 V (Xn−1 ) . which has no zeros.5) is singular. excluding the near zero eigenvalue which does not correspond to a perturbation. 0 .03412557804571i ±0 − 0. .2 The perturbation about the ground state We consider now the results obtained by the solving (4.8) in ﬁgure 4. To compute these results. W ) straight away. 0 .. .. The eigenvalues obtained are presented in table 4.00000011612065 ±0 − 0.1 we plot eigenvalues obtained by solving (4. Recall the eigenvalue for the ground state in the nondimensionalized unit is 0.082.06030198252911i ±0 − 0.
1: The smallest eigenvalues of the perturbation about the ground state 150 Eigenvalues of the perturbation equation 100 50 0 −50 −100 −150 −1 −0.2 0 0.4 −0.08 −1 −0.2: All the computed eigenvalues of the perturbation about the ground state 29 .06 −0.2 0 0.6 −0.4 0.06 0.4 −0.8 1 Figure 4.08 Eigenvalues of the perturbation equation 0.0.6 0.02 −0.8 1 Figure 4.6 −0.4 0.04 −0.2 0.02 0 −0.8 −0.2 0.8 −0.6 0.04 0.
3: The ﬁrst eigenvector of the perturbation about the ground state. that is the perturbations are only oscillatory.4. we can deduce that this solution corresponds to a phase rotation of the background solution. We plot the ﬁrst three eigenvectors for the ground state as 4.10 5 0 −5 2 1. We note that this shows the eigenvalue is converging with N see ﬁgure 4. We can also plot the graph of the eigenvalue 0. up to numerical error. A B r. and again this shows the eigenvalue converging with increasing L see ﬁgure 4.5 1 0.5.r and W r in ﬁgures 4. as the eigenfunction shows. We conclude that the ground state is linearly stable. which was expected (section 3.W are very small compared to B.3 since A. To test convergence of the eigenvalues we plot graphs against increase in N and increase in L.3.6.4. 4.7.0765463562i as a sample eigenvalue with increasing L instead of N . 30 .5 0 0 −5 −10 −15 −20 0 50 0 50 A/r 0 7 x 10 50 B/r 100 150 W/r 100 150 100 150 Figure 4. We note that all the eigenvalues are imaginary and that this agrees with the result obtained in section 3. As an example we plot the graph of the eigenvalue 0. We note that the eigenvalues are symmetric about the real axis. Note the scales We note that the nearzero eigenvalue obtained corresponds to the trivial zeromode.0765463562i as the value of N increases.2). In ﬁgure 4.
5: The third eigenvector of the perturbation about the ground state 31 .4: The second eigenvector of the perturbation about the ground state 10 5 0 −5 4 2 0 −2 5 0 −5 −10 −15 0 50 A/r 0 50 B/r 100 150 0 50 W/r 100 150 100 150 Figure 4.10 5 0 −5 8 6 4 2 0 5 0 −5 −10 −15 0 50 0 50 A/r 0 50 B/r 100 150 W/r 100 150 100 150 Figure 4.
Figure 4.6: The change in the sample eigenvalue with increasing values of N (L = 150)

Figure 4.7: The change in the sample eigenvalue with increasing values of L (N = 60)
4.3 Perturbation about the second state

We now consider the perturbation about the second state, for which the unperturbed wavefunction has one zero and the energy eigenvalue is 0.0308. Using the same method we compute the numerical solutions of the perturbation about the second bound state. This time we obtain some eigenvalues with nonzero real parts. In figure 4.8 we plot the lower eigenvalues for the case where N = 60 and L = 150. We note that the eigenvalues have the symmetries expected from section 4.2: there is one quadruple {λ, λ̄, -λ, -λ̄} with λ = 0.00139326930981 + 0.00859859681740i, while the remaining eigenvalues are purely imaginary, for example ±0.00000003648902i, ±0.00300149174300i, ±0.01004023179300i, ±0.01533675005044i, ±0.02105690867367i and ±0.02761845529494i. The lowest eigenvalues about the second state are presented in table 4.2. We also plot all the computed eigenvalues in figure 4.9, on a different scale, to see that up to these limits there are no other complex ones.

Table 4.2: Eigenvalues of the perturbation about the second state

We have some complex eigenvalues, so if our results are to satisfy the result of section 3.4 we require that ∫_0^∞ A B̄ dr = 0. To see whether this holds up to numerical error we compute

Q = |∫_0^L A B̄ dr| / [(∫_0^L A² dr)^{1/2} (∫_0^L B² dr)^{1/2}].  (4.10)

If Q is much less than one, then we know that ∫_0^∞ A B̄ dr vanishes up to numerical error. In the case where L = 145 and N = 60 we present in table 4.3 the calculated values of Q with the eigenvalues: for the purely imaginary eigenvalues Q lies roughly between 0.22 and 0.89 (for example 0.23476646058070, 0.22194825684122, 0.36801505735057 and 0.89372677914625), while for the eigenvalues of the quadruple Q is of order 10^{-14} (3.533821320897923e-14 and 3.919537745928816e-14).

Table 4.3: Q for different eigenvalues of the perturbation about the second state
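The criterion (4.10) is straightforward to evaluate from discrete eigenvector data. The sketch below uses made-up functions on a uniform grid (not the thesis's eigenfunctions): two modes that are exactly orthogonal on [0, L] give Q near zero, while Q(A, A) = 1.

```python
import numpy as np

def Q(A, B, r):
    """The overlap criterion (4.10): |int_0^L A*conj(B) dr| divided by
    the L2 norms of A and B, via a simple quadrature on the uniform
    grid r."""
    dr = r[1] - r[0]
    num = abs(np.sum(A * np.conj(B)) * dr)
    den = np.sqrt(np.sum(abs(A) ** 2) * dr * np.sum(abs(B) ** 2) * dr)
    return num / den

# Illustrative data: sin(pi r / L) and sin(2 pi r / L) are orthogonal
# on [0, L], so Q should be tiny; Q of a function with itself is 1.
Lg = 145.0
r = np.linspace(0.0, Lg, 4001)
A = np.sin(np.pi * r / Lg)
B = np.sin(2 * np.pi * r / Lg)
print(Q(A, B, r), Q(A, A, r))
```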
Figure 4.8: The lowest eigenvalues of the perturbation about the second bound state

Figure 4.9: All the computed eigenvalues of the perturbation about the second bound state
Figure 4.10: The first eigenvector of the perturbation about the second bound state. Note the scales

Figure 4.11: The second eigenvector of the perturbation about the second bound state
Figure 4.12: The third eigenvector of the perturbation about the second bound state

We note that for the eigenvalues with nonzero real part Q is almost zero, so the results of section 3.4 are confirmed. In figure 4.10 we plot the eigenvector corresponding to the near-zero eigenvalue; note that this is approximately B = R0, hence this eigenvector corresponds to a phase rotation, i.e. a trivial zero-mode. In figures 4.11 and 4.12 we plot the next two eigenfunctions, corresponding to the next two eigenvalues.

4.4 Perturbation about the higher order states

We now apply the numerical method to the third state, the state with just two zeros in the unperturbed wavefunction, with N = 60 and L = 450. In figure 4.13 we plot the eigenvalues of the perturbation about the third state; note again that the first near-zero eigenvalue just corresponds to a phase rotation, i.e. a trivial zero-mode. In figure 4.14 we plot the eigenfunction corresponding to the next eigenvalue after the near-zero one. In figures 4.15 and 4.16 we plot the eigenfunction of a complex eigenvalue: in figure 4.15 the real part is plotted and in figure 4.16 the imaginary part. The eigenvalues are presented in table 4.4, and we note that the result of section 3.4 can again be confirmed. For the fourth bound state, with L = 700 and N = 100, the eigenvalues obtained are presented in table 4.5.
Figure 4.13: The eigenvalues of the perturbation about the third bound state

Figure 4.14: The second eigenvector of the perturbation about the third bound state
Figure 4.15: The third eigenvector of the perturbation about the third bound state (real part)

Figure 4.16: The third eigenvector of the perturbation about the third bound state (imaginary part)
Table 4.4: Eigenvalues of the perturbation about the third state (two quadruples, with real parts ±0.00051994798783 and ±0.00039265148039, together with purely imaginary pairs)

Table 4.5: Eigenvalues of the perturbation about the fourth state (three quadruples, with real parts ±0.00022482647750, ±0.00017460656919 and ±0.00015985257765, together with purely imaginary pairs)

There we notice that there are three quadruples.

4.5 Bound on real part of the eigenvalues

From section 3.5 we have a bound on the real part of the eigenvalues of the perturbation equations. In table 4.6 we compare this bound with the values obtained, and see that it is satisfactorily confirmed.

4.6 Testing the numerical method by using Runge-Kutta integration

The results obtained via the Chebyshev numerical methods can be verified by using Runge-Kutta integration as in section 2.1; that is, we convert (3.24) into a set of first-order O.D.E.s.
The system, where Y1 = A, Y3 = B, Y5 = W and λ is an eigenvalue, is:

Y1' = Y2,  (4.11a)
Y2' = -V0 Y1 - iλY1 + R0 Y5,  (4.11b)
Y3' = Y4,  (4.11c)
Y4' = -V0 Y3 - iλY3,  (4.11d)
Y5' = Y6,  (4.11e)
Y6' = 2R0 Y3.  (4.11f)

The boundary conditions at the origin are Y1(0) = 0, Y3(0) = 0 and Y5(0) = 0.

Figure 4.17: The eigenvalues of the perturbation about the fourth bound state

State   Numerical maximum real part   Bound
1       0                             0.00362396540325
2       0.0013932693098               0.00157632209179
3       0.0005199479878               0.00100532361160
4       0.0002248264775               0.00073784267376
5       0.0001143754114               0.00058275894209
6       0.0000652878367               0.00048153805083

Table 4.6: Bound of the real part of the eigenvalues
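The reduction to a first-order system and the subsequent integration can be sketched with a generic classical Runge-Kutta driver. Since the profiles R0 and V0 are not reproduced here, the sketch is tested on the model problem Y'' = -Y written as a first-order system, the same reduction used for (4.11).

```python
import numpy as np

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta for a (possibly complex)
    first-order system y' = f(t, y); returns the solution at t1."""
    y = np.asarray(y0, dtype=complex)
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Sanity check on Y'' = -Y as the system Y1' = Y2, Y2' = -Y1,
# integrated from (1, 0) over half a period.
f = lambda t, y: np.array([y[1], -y[0]])
y = rk4(f, [1.0, 0.0], 0.0, np.pi, 200)
print(y)
```

For (4.11) itself one would supply f(t, y) built from tabulated R0 and V0 and shoot from the origin with the initial derivatives taken from the spectral eigenvectors, as described in the text.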
For the initial conditions on Y6, Y4 and Y2 we can use the values obtained from the eigenvectors found when we solved the matrix eigenvalue problem. In figures 4.18 and 4.19 we plot the results of doing a Runge-Kutta integration on (4.11) for the first two eigenvalues of (3.24), with R0 and V0 corresponding to the first bound state; the first two eigenvalues are those worked out in section 4.2. We note that, except for the blowing up near the ends, which we expect since the Runge-Kutta method is sensitive to inaccuracies in the initial data, the solutions obtained by Runge-Kutta integration correspond to the eigenvectors obtained by solving (4.5). Proceeding to do Runge-Kutta integration on the results obtained for the perturbation about the second bound state in section 4.3, we plot the results in figures 4.20 and 4.21. To test the perturbation about the third bound state we proceed in the same way: figure 4.22 shows the second eigenvector of the perturbation about the third bound state, and figures 4.23 and 4.24 show the real and imaginary parts of the third eigenvector.

4.7 Conclusion

In this chapter we have analysed the linear stability of the stationary spherically-symmetric states using a spectral method, and have checked the results with a Runge-Kutta method.

Figure 4.18: The first eigenvector of the perturbation about the ground state, using Runge-Kutta integration; compare with figure 4.3
Figure 4.19: The second eigenvector of the perturbation about the ground state, using Runge-Kutta integration; compare with figure 4.4

Figure 4.20: The first eigenvector of the perturbation about the second bound state, using Runge-Kutta integration; compare with figure 4.10
Figure 4.21: The second eigenvector of the perturbation about the second bound state, using Runge-Kutta integration; compare with figure 4.11

Figure 4.22: The second eigenvector of the perturbation about the third bound state, using Runge-Kutta integration; compare with figure 4.14
Figure 4.23: The real part of the third eigenvector of the perturbation about the third bound state, using Runge-Kutta integration; compare with figure 4.15

Figure 4.24: The imaginary part of the third eigenvector of the perturbation about the third bound state, using Runge-Kutta integration; compare with figure 4.16
We have checked convergence of the eigenvalues obtained by the spectral method under increase of the number N of Chebyshev points and the radius L of the spherical grid. In the numerical calculation, we find that:

• the ground state is linearly stable, in that its eigenvalues are purely imaginary;

• the nth excited state, which is to say the (n+1)th state, has n quadruples of complex eigenvalues as well as pure-imaginary pairs;

• no purely real eigenvalues appear, and in particular no zero-modes occur.

The calculation of section 3.3 showed that complex eigenvalues necessarily occur in quadruples {λ, λ̄, -λ, -λ̄}, while purely imaginary ones occur in pairs {λ, -λ}. In section 3.4 we found that complex eigenvalues could only arise if a certain integral of the perturbed wavefunction was zero, and this is satisfactorily verified by the numerical solutions found here.

Thus only the ground state, which we saw in section 1.2 is the absolute minimiser of the conserved energy, is linearly stable; all other spherically-symmetric stationary states are linearly unstable. We must now turn to the nonlinear stability of the ground state, which requires a numerical evolution of the nonlinear equations.
Chapter 5

Numerical methods for the evolution

5.1 The problem

In the next two chapters, the aim is to find and test a numerical technique for solving (1.5) with the restriction of spherical symmetry. We want to evolve an initial ψ long enough to see the dispersive effect of the Schrödinger equation and the concentrating effect of the gravity. In particular, we want to see how far the linearised analysis of chapter 3 and chapter 4 gives an accurate picture. On the basis of the linearised calculations of chapter 3 and chapter 4, we expect the ground state to be the only stable state, since it has the lowest energy. Thus we have a preliminary picture of the evolution: any initial condition will disperse to large distances (i.e. off the grid), leaving a remnant at the origin consisting of a ground state rescaled as in subsection 1.2.2 and with total probability less than one.

In this chapter, we consider what the problems are in this programme, and methods for dealing with them. Since the problem is nonlinear, we need a numerical method for evolution, and a technique for dealing with the boundary that will allow the wavefunction to escape from the grid, keeping reflection from the boundary to a minimum. In sections 5.2 to 5.4 we first consider the simpler problem of solving the time-dependent Schrödinger equation in a fixed potential. In section 5.5 we consider how to test the method against an explicit solution of the zero-potential Schrödinger equation, and what we can check with a nonzero but fixed potential. Finally, in section 5.7, we face the problem of evolving the full SN equations, which is to say evolving the potential as well as the wavefunction; we describe an iteration for this and list some checks which we can make on the reliability and convergence of the method. Chapter 6 contains the numerical results. In section 5.10 we introduce the notion of residual probability, which we may describe as follows.
If the initial condition is a state with negative energy and this preliminary picture is sound, then we can estimate the residual probability remaining in this rescaled ground state.

5.2 Numerical methods for the Schrödinger equation with arbitrary time-independent potential

We consider numerical methods that solve the 1-dimensional Schrödinger equation with given initial data, that is, solving

i ∂ψ/∂t = -∂²ψ/∂x² + ψφ.  (5.1)

We note that Numerical Recipes in Fortran [20] and Goldberg et al [9] use a numerical method to solve the Schrödinger equation which, they say, needs to preserve the Hamiltonian structure of the equation, or equivalently the numerical method must be time-reversible. To see why, consider the case of an explicit method given by

i (ψ^{n+1} - ψ^n)/δt = D2(ψ^n) + φψ^n,  (5.2)

where ψ^n is the wavefunction at the nth time step and D2 is the approximation to the second-derivative operator; examples of D2 are a finite-difference or a Chebyshev differentiation matrix. Now consider what happens to the mode e^{ikx}, for which D2(e^{ikx}) = -k²e^{ikx}. Writing ψ^n = e^{ikx} and ψ^{n+1} = λψ^n, substitution into (5.2) gives

λ = 1 - iδt(k² + φ).  (5.3)

So here we have an amplification factor λ which, provided k² + φ ≠ 0, has |λ| > 1, and this method leads to a growth in the normalisation of the mode. If we used an implicit method instead, that is

i (ψ^{n+1} - ψ^n)/δt = D2(ψ^{n+1}) + φψ^{n+1},  (5.4)

we get an amplification factor of

λ = 1/(1 + iδt(k² + φ)),  (5.5)

so that

|λ| = (1 + δt²(k² + φ)²)^{-1/2} < 1  (5.6)

provided k² + φ ≠ 0, and this method leads to decaying modes. At each time step the modes get amplified or damped by different amounts, so the ratios between different modes change. We could consider renormalising the wavefunction as the numerical method progresses, but this would have several drawbacks.
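The two amplification factors (5.3) and (5.5) can be checked directly. A minimal sketch (parameter values are illustrative):

```python
import numpy as np

# Amplification factors for the mode e^{ikx} under the explicit and
# implicit Euler discretizations, following (5.3) and (5.5).
def lam_explicit(dt, k, phi):
    return 1 - 1j * dt * (k**2 + phi)

def lam_implicit(dt, k, phi):
    return 1 / (1 + 1j * dt * (k**2 + phi))

dt, phi = 0.01, 0.5
for k in (1.0, 5.0, 10.0):
    print(k, abs(lam_explicit(dt, k, phi)), abs(lam_implicit(dt, k, phi)))
```

The explicit factor exceeds 1 in modulus and the implicit factor falls below 1, with the deviation growing with k; note also that the two moduli are exact reciprocals of each other, which is why the symmetric average (Crank-Nicolson, below) is unitary.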
. 1 + 1 iδt(k 2 + φ) 2 (5. 5. With this method the discretization of the Schr¨dinger equation is: o i 1 φ ψ n+1 − ψ n = − (D2 (ψ n+1 ) + D2 (ψ n )) + (ψ n+1 + ψ n ).o. that is Chebyshev diﬀerentation matrix. The phase 48 .t 2 6 k 6 δt3 12 (5. t) = e−ik t eikx .9) which is just that of the factor obtained in (5. . 2 2 4 2 1 2 4 iδt3 k 6 = 1 − iδk 2 t − δt k − + .10) = 1 − iδtk 2 − δt2 k 4 ik 6 δt3 + + h. Suppose we consider the case of the linear Schr¨dinger equation in one o dimension with φ = 0 which has solutions of the form ψk (x.11) So to make the other terms small we can require factor is correct but only to this order. To get a numerical method that will preserve the constants of the evolution of the equation (namely the normalisation) we need to use a CrankNicholson method.. Expanding λ in powers of δt we have: i 1 i i λ = (1 − δtk 2 − δt2 k 4 + δt3 k 6 + . but that does not mean that the phase is correct.7) which has an amplifying factor of λ= 1 1 − 2 iδt(k 2 + φ) . as in [9]. δt 2 2 (5. 2 2 while the actual phase factor should be : e−ik 2 δt (5. Also for the SN equation renormalisation would require rescaling the space axis which would need a interpolation at each step..diﬀerent amounts so the ratio in diﬀerent modes will change. Now the wave evolves with a change in phase factor (λ) of: λ= 1 1 − 2 iδt(k 2 ) . to be small. 1 + 1 iδt(k 2 ) 2 2 (5.3 Conditions on the time and space steps We note that the CrankNicholson method might be unitary. is seen to give better results for less computation time compared to ﬁnite diﬀerence method..)(1 − δtk 2 ). Since the boundary conditions are straightforward a spectral method.8) which is unitary (λ = 1) since it is in the form of a number over its complex conjugate.8).
Now suppose that ψ is a stationary-state solution of the Schrödinger equation with potential, that is, for any N,

-D2(ψ^N) + φψ^N = Eψ^N.  (5.12)

We expect the solution to evolve like ψ^n = e^{-iEt}ψ^0, where E is the energy of the stationary solution. The discretized method for the Schrödinger equation is

i (ψ^{n+1} - ψ^n)/δt = (1/2)[-D2(ψ^{n+1}) - D2(ψ^n) + φ(ψ^{n+1} + ψ^n)].  (5.13)

Since ψ^n and ψ^{n+1} are stationary states, substitution of (5.12) into the method gives

i(ψ^{n+1} - ψ^n) = (δtE/2)ψ^{n+1} + (δtE/2)ψ^n,  (5.14)

and so we obtain

ψ^{n+1} = (1 + iδtE/2)^{-1}(1 - iδtE/2)ψ^n.  (5.15)

Expanding out with |δtE| < 1 we have

ψ^{n+1} = [1 - iδtE + (-iδtE)²/2! + (-iδtE)³/4 + O(δt⁴)]ψ^n.  (5.16)

In order that the phase is calculated correctly we require that the error due to the term δt³E³/12 be small.

5.4 Boundary conditions and sponges

The boundary conditions which we want to impose on the Schrödinger equation are that the wavefunction must remain finite as r tends to zero, and that the wavefunction must tend to zero as r tends to infinity, since otherwise the wavefunction would not be normalisable. To impose these boundary conditions numerically we can solve the Schrödinger equation in the variable rψ = χ, in which case it becomes

iχ_t = -χ_rr + φχ.  (5.17)

The boundary condition at r = 0 is just a matter of setting χ = 0 for the first point in the domain. The other boundary condition can be approximated in one of the following ways:

• We can, at a given r = R say, set χ = rψ where ψ is the actual analytic solution. The problem with this is that only in a certain few cases will we know the actual solution.
• We can set the condition χ = 0 at r = R for R large. The problem with this boundary condition is that it will reflect all the outward-going probability and send it back towards the origin.

• The other thing we can do, to reduce the effect of probability bouncing back, is to introduce sponge factors at r = R where R is large. That is, instead of the Schrödinger equation we consider the equation

(i + e(r))χ_t = -χ_rr + φχ,  (5.18)

where the function e(r) is strictly negative (of one sign), so that the equation acts like a heat equation, reducing the probability which is assumed to be heading off the grid. For example we take

e(r) = -exp(0.1(r - R)),  (5.19)

so as to give a smooth function which only has an effect near the boundary, since for small r the sponge factor effectively vanishes. There will still be some reflection off the sponge, but this is a smaller effect. There will also be a reduction in the normalisation integral.

5.5 Solution of the Schrödinger equation with zero potential

To check that our numerical method is functioning correctly, we aim to check it in the case of spherical symmetry, with the same boundary conditions (that is, χ = 0 at r = 0 and χ → 0 as r → ∞), where a solution is known both analytically and numerically. In the spherically symmetric case with zero potential a solution of the Schrödinger equation is

rψ = χ = C√σ/(σ² + 2it)^{1/2} [exp(-(r - vt - a)²/(2(σ² + 2it)) + ivr/2 - iv²t/4)
      - exp(-(r + vt + a)²/(2(σ² + 2it)) - ivr/2 - iv²t/4)];  (5.20)

we shall call this a moving Gaussian shell. This is a wave bump "moving" at a velocity v, starting at a distance a from the origin, and C is chosen such that the wavefunction is normalised when t = 0, that is

1 = C²√π [1 - exp(-a²/σ² - v²σ²/4)].  (5.21)

We note that C will not exist when exp(-a²/σ² - v²σ²/4) = 1, which is when v = 0 and a = 0.
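The moving Gaussian shell (5.20) with the constant fixed by (5.21) can be evaluated directly, checking that χ vanishes at the origin for all t and that the norm stays equal to one as the shell moves outward. The parameter values below are arbitrary test values, not those used later in the runs.

```python
import numpy as np

def shell(r, t, v=1.0, a=10.0, sigma=2.0):
    """Moving Gaussian shell (5.20); C fixed by the normalisation (5.21)."""
    C = 1.0 / np.sqrt(np.sqrt(np.pi)
                      * (1 - np.exp(-a**2 / sigma**2 - v**2 * sigma**2 / 4)))
    s = sigma**2 + 2j * t
    up = np.exp(-(r - v * t - a)**2 / (2 * s) + 1j * v * r / 2
                - 1j * v**2 * t / 4)
    dn = np.exp(-(r + v * t + a)**2 / (2 * s) - 1j * v * r / 2
                - 1j * v**2 * t / 4)
    return C * np.sqrt(sigma) / np.sqrt(s) * (up - dn)

r = np.linspace(0.0, 80.0, 8001)
dr = r[1] - r[0]
for t in (0.0, 5.0):
    chi = shell(r, t)
    print(t, abs(chi[0]), dr * np.sum(abs(chi)**2))
```

The two exponentials are mirror-image free wavepackets, so their difference is odd in r; this is what enforces χ(0, t) = 0 and makes the half-line norm constant in time.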
In this case we have the limiting solution

rψ = χ = Ar/(σ² + 2it)^{3/2} exp(-r²/(2(σ² + 2it))),  (5.22)

where A is such that

A² = 2σ³/√π.  (5.23)

Note that the solution (5.20) will in the long term tend to zero everywhere, since there is no gravity or other force to stop the natural dispersion of the wave equation.

We note that we cannot numerically model the boundary condition at r = ∞, so we can do the following things to check the numerical calculation of the solution, and also to approximate the effect of the boundary condition at infinity:

• Check that the normalisation is preserved, which should be the case since the numerical method preserves it, that is, when there are no sponge factors or when there is no probability moving off the grid.

• We can set the boundary condition χ = 0 at r = R, where R is chosen to be large compared to the initial data. This method will cause the wave to reflect back off the boundary.

• We can set the boundary value at r = R to be the value of the analytical solution at that point. This also gives a check on the boundary conditions, but it will only work where the analytical solution is known, so it can only be done for testing.

• We can use sponge factors, that is, change the equation so that it becomes a type of heat equation, where the heat is absorbed at the boundary. We note that this will reduce the wave which is reflected from the boundary, but the normalisation will not be preserved.

5.6 Schrödinger equation with a trial fixed potential

We can consider the Schrödinger equation with a potential which is constant with respect to time. In this case we know that the energy E is preserved, where E is given by

E = T + V,  (5.24)

with

T = ∫ |∇ψ|² d³x,  (5.25)
and

V = ∫ φ|ψ|² d³x.  (5.26)

With the choice of potential φ = -2/(r + 1) we can work out the bound states, that is, we can solve the eigenvalue problem

-∇²ψ + φψ = Eψ.  (5.27)

This gives us eigenvalues E_n, say, with eigenfunctions ψ_n, where the ψ_n are independent of time. We know by general theory that

∫ ψ̄_n ψ dr = c_n e^{iE_n t},  (5.28)

where ψ is a general wavefunction and c_n is a constant. In summary, in this case we can check:

• Normalisation.

• Energy of the wavefunction.

• The inner products with respect to the bound states, to see that they are of constant modulus and that the phase changes correctly.

5.7 Numerical evolution of the SN equations

The method that Bernstein et al [4] use for the numerical evolution of the wavefunction of the SN equations consists of a Crank-Nicolson method for the time-dependent Schrödinger equation, and an iterative procedure to obtain the potential at the next time step, the first guess being the current potential. They transform the SN equations in the spherically symmetric case into the simpler form

i ∂u/∂t = -∂²u/∂r² + φu,  (5.29)
d²(rφ)/dr² = |u|²/r,  (5.30)

where u = rψ. Then the following procedure is used to calculate the wavefunction and potential at the next time step:

1. Choose φ^{n+1} = φ^n to be an initial guess for φ at the (n+1)th time step.
2. Calculate u^{n+1} from the discretized version of the Schrödinger equation, which is

2i (u^{n+1} - u^n)/δt = -D2(u^{n+1}) - D2(u^n) + [φ^{n+1}u^{n+1} + φ^n u^n],  (5.31)

where this equation is solved on the region r ∈ [0, R], R large, and where there are no sponge factors. (These could be added to stop outgoing waves being reflected from the imposed boundary condition at r = R.)

3. Calculate the Φ which is the solution of the potential equation for this u^{n+1}, that is, the Φ that solves

d²(rΦ)/dr² = |u^{n+1}|²/r.  (5.32)

4. Calculate U by the discretized version of the Schrödinger equation with Φ as the potential at the (n+1)th step, that is

2i (U - u^n)/δt = -D2(U) - D2(u^n) + [ΦU + φ^n u^n],  (5.33)

where the boundary conditions are the same as in step 2.

5. Consider ||U - u^{n+1}||: if it is less than a certain tolerance, stop; else take φ^{n+1} = Φ and continue from step 2.

We note that Crank-Nicolson is second-order accurate with respect to time, and is such that it preserves the normalisation. This outline of a general method is independent of the Poisson solver and of the numerical method used to evolve the Schrödinger equation; that is, we could use either a finite-difference method or spectral methods. The nonlinearity is the reason why we need to work out the potential at the (n+1)th time step by iteration, instead of just using an implicit method in the potential.

5.8 Checks on the evolution of the SN equations

We recall, as we saw in section 1.2.2, that for the SN equations the energy is no longer conserved, but the action I = T + (1/2)V is. We also note that if ψ corresponds to a bound state with energy eigenvalue E_n then

I = (1/3)E_n,  (5.34)

so it is useful to call 3I the conserved energy E_I. For the SN equations in general we can check:
• Preservation of the normalisation.

• Preservation of the action I = T + (1/2)V, at least when it is not affected by the boundary condition due to the sponge factors.

• That the change in the potential function satisfies φ_t = (i/r) ∫_0^r (ψψ̄_r - ψ̄ψ_r) dr, at least in the case of spherical symmetry.

In the case of evolving about a stationary state we can also check that the phase factor evolves at a constant rate, corresponding to the energy of the bound state, at least for a while.

5.9 Mesh dependence of the methods

To be sure that the evolution method produces a valid solution up to a degree of numerical error, we need to show that, as the mesh and time steps decrease, the method converges to a solution. In the case of the finite-difference method we expect the errors to be second order in the time and space steps; we expect this since the truncation error in the case of just solving the linear Schrödinger equation is second order. We also know that the method is unconditionally stable, that is, there is no requirement on the ratio δt/δx², so we can refine the mesh in both directions independently and expect convergence. In the case of the spectral methods we do not expect the method to be second order in space, but we do still expect it to be second order in time.

To test for this convergence we can use Richardson extrapolation. In a case where we expect the error to behave like

O_h ~ I + Ah^p,  (5.35)

where O_h is the calculated value with space or time step size h, I the actual value and p the order of the error, we can eliminate I and A and obtain a function of p alone by calculating

(O_{h1} - O_{h3}) / (O_{h2} - O_{h3}),  (5.36)

where h1, h2 and h3 are three different values of the step size.

5.10 Large time behaviour of solutions

For the Schrödinger equation with a fixed potential, the general solution will consist of a linear combination of bound states and a scattering state. For large times, the scattering state will disperse, leaving the bound states. For the SN equations, we have seen in chapter 4 that all the bound states apart from the ground state are linearly unstable.
We shall see the same thing for nonlinear instability in chapter 6. Consequently, we might conjecture that the solution at large values of time for the SN equations, with arbitrary initial data, would be a combination of a multiple of the ground state, rescaled as in subsection 1.2.2 to have total probability less than one, together with a scattering state containing the rest of the probability, which disperses. We shall see support for this conjecture in chapter 6.

What we mean by a multiple of the ground state is the following: if ψ_0(r) is the ground state, then consider ψ_0α(r) = α²ψ_0(αr). It follows from the transformation rule for the SN equations (subsection 1.2.2) that this is again a stationary state, but with

∫_0^∞ |ψ_0α|² = α.  (5.37)

We also note that

E_0α = α³E_0.  (5.38)

Assuming the truth of this picture, if the conserved energy is negative initially then we can obtain a bound on the probability remaining in the multiple of the ground state. If we start with an initial value E_I of the action which is negative, then we claim

E_I = E_S + α³E_0,  (5.39)

for some α, where E_S is the scattered energy and α³E_0 is the energy due to the remaining probability near the origin. So then we have that

α³ = (E_I - E_S)/E_0,  (5.40)

and, since the scattered energy E_S is positive and E_0 is negative, this leads to the inequality

α³ > E_I/E_0.  (5.41)
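Before turning to the results, the time-stepping iteration of section 5.7, together with the normalisation check of section 5.8, can be sketched in a few dozen lines. This is a minimal finite-difference sketch under several assumptions of ours: the grid size, time step and tolerance are illustrative, and the integration constants of the Poisson solve are chosen so that φ ~ -M/r at large r (the attractive convention); it is not the thesis's code, and a Chebyshev differentiation matrix could replace the matrix L below.

```python
import numpy as np

J, R, dt = 400, 40.0, 0.01
h = R / (J + 1)
r = h * np.arange(1, J + 1)        # interior points; u = 0 at r = 0 and r = R
L = (np.diag(np.ones(J - 1), 1) - 2 * np.eye(J)
     + np.diag(np.ones(J - 1), -1)) / h**2    # finite-difference d^2/dr^2

def potential(u):
    # Solve d^2(r phi)/dr^2 = |u|^2/r, taking the attractive solution
    # phi(r) = -M(r)/r - int_r^R |u|^2/s ds with M(r) = int_0^r |u|^2 ds.
    # This choice of integration constants is our assumption.
    rho = np.abs(u)**2
    M = np.concatenate([[0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * h)])
    g = rho / r
    Jc = np.concatenate([[0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * h)])
    return -M / r - (Jc[-1] - Jc)

def step(u, phi, tol=1e-10):
    # One time step of the section 5.7 iteration: Crank-Nicolson (5.31)
    # with a guessed new potential, Poisson update (5.32), repeated until
    # the new wavefunction stops changing.
    b = (2j / dt) * u - L @ u + phi * u
    u_new, phi_new = u, phi
    for _ in range(50):
        A = (2j / dt) * np.eye(J) + L - np.diag(phi_new)
        U = np.linalg.solve(A, b)
        if np.linalg.norm(U - u_new) < tol:
            return U, phi_new
        u_new, phi_new = U, potential(U)
    return u_new, phi_new

u = np.sqrt(r) * np.exp(-((r - 8.0) / 3.0)**2) + 0j   # arbitrary initial data
u /= np.sqrt(h * np.sum(np.abs(u)**2))
phi = potential(u)
n0 = h * np.sum(np.abs(u)**2)
for _ in range(10):
    u, phi = step(u, phi)
print(n0, h * np.sum(np.abs(u)**2))   # normalisation approximately preserved
```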
Chapter 6

Results from the numerical evolution

6.1 Testing the sponges

To test the ability of the sponge factor to absorb an outward-going wave, we use as initial condition the outward-going wave of (5.20) at t = 0, and for the sponge we use the exponential function of (5.19).

Figure 6.1: Testing the sponge factor with a wave moving off the grid
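The sponge test can be mimicked in a minimal 1-D sketch: evolve an outward-moving packet under Crank-Nicolson for (i + e(r))χ_t = -χ_rr, once with e = 0 and once with an exponential sponge. The packet, grid and sponge strength below are illustrative assumptions, not the values used for the figures.

```python
import numpy as np

J, R, dt, steps = 400, 40.0, 0.05, 600
h = R / (J + 1)
r = h * np.arange(1, J + 1)
L = (np.diag(np.ones(J - 1), 1) - 2 * np.eye(J)
     + np.diag(np.ones(J - 1), -1)) / h**2   # d^2/dr^2, chi = 0 at both ends

def evolve(e):
    # Crank-Nicolson for (i + e(r)) chi_t = -chi_rr; an outward packet
    # (centre 10, velocity 2) runs into the region where e(r) acts.
    chi = np.exp(-((r - 10.0) / 2.0)**2 + 1j * r).astype(complex)
    chi /= np.sqrt(h * np.sum(np.abs(chi)**2))
    W = np.diag((1j + e) / dt)
    Ainv = np.linalg.inv(W + 0.5 * L)
    B = W - 0.5 * L
    for _ in range(steps):
        chi = Ainv @ (B @ chi)
    return h * np.sum(np.abs(chi)**2)

no_sponge = evolve(np.zeros(J))
with_sponge = evolve(-2.0 * np.exp(0.2 * (r - R)))
print(no_sponge, with_sponge)
```

Without the sponge the norm is conserved exactly and the wave reflects from r = R; with the sponge most of the outgoing probability is absorbed near the boundary, with only a small reflected remnant.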
The wave used is

rψ = C√σ/(σ² + 2it)^{1/2} [exp(-(r - vt - a)²/(2(σ² + 2it)) + ivr/2 - iv²t/4)
      - exp(-(r + vt + a)²/(2(σ² + 2it)) - ivr/2 - iv²t/4)].  (6.1)

Here v is chosen so that this corresponds to an outward-going wave, and is sufficiently large so that the wavefunction will scatter; C is the normalisation factor, and a and σ determine the position and shape of the distribution. In figure 6.1 we plot a Gaussian shell moving off the grid; here the sponge factor is -2 exp(0.2(r - R_end)), and we notice that there is some reflection off the boundary. In figure 6.2 we plot a Gaussian shell moving off the grid with the sponge factor -10 exp(0.5(r - R_end)); this sponge factor gives a gradual increase toward the boundary, and works better than the one used for figure 6.1 since it reflects less back from the boundary.

Figure 6.2: Testing the sponge factor with a wave moving off the grid

6.2 Evolution of the ground state

The ground state is a stationary state and is linearly stable by chapter 4, and we expect it to be nonlinearly stable for small perturbations, since it has the lowest energy. We note that, due to inaccuracy in the calculation of the ground state and the numerical error made in the evolution, the numerical evolution is actually that of the ground state with a small perturbation.

Since the ground state is a solution of the time-independent SN equations, we expect a constant change in phase, corresponding to the energy of the state, while the absolute value of the wavefunction remains constant. Here we do get a constant phase rate (see figure 6.3), and we can check that it is of the correct value by calculating the energy of the wavefunction. We can also check the conservation of the energy and the conservation of the normalisation.

Figure 6.3: The graph of the phase angle of the ground state as it evolves

The next thing to try is the long-time evolution of the ground state. We evolve the ground state for a long time, about 1E5 in the time scale, which is about 5E5 time steps. We find a growing oscillation which reaches a bound and stabilises at a fixed oscillation; see figures 6.4 and 6.5. We therefore know that this oscillation about the ground state is introduced by numerical error. By adjusting the time step we can reduce the amplitude of the limiting oscillation; we can also adjust the sponge factor to absorb more at the boundary, to reduce the amplitude of the oscillation, but this shouldn't make any difference to the overall end result here. We also deduce that the errors occur due to the long tail of the ground state, at which the wavefunction is approximately zero; a small error here has a great effect on the normalisation and the energy, so causing more error close to the origin.
5 time 2 2.046 0.5 0.5 30 25 3 20 15 r 10 5 3.5 time 1 x 10 4 rψ 0.2 0 Figure 6.5 x 10 3 4 Figure 6.038 0 0.04 0.048 abs wave−function near origin 0.4: The oscillation about the ground state evolution about ground state 0 0.4 1.044 0.5 2 0 50 45 40 35 2.052 graph to show the boundness of the error when evolving around ground state 0.5: Evolution of the ground state 59 .042 0.05 0.0.5 1 1.
the origin.

Figure 6.6: The eigenvalues associated with the linear perturbation about the ground state.

Now we want to consider an evolution of the ground state with deliberately added perturbations, the aim being to see if we can induce a finite perturbation around the ground state. We take a ground state, add on to it an exponential distribution, and evolve. We can then Fourier transform the absolute value of the wavefunction near the origin to obtain the different frequencies that make up the oscillation. We expect these to be at the frequencies determined by the first-order linear perturbation about the ground state and the corresponding higher-order linear perturbations. In figure 6.7 we plot a section of the oscillation about the ground state at a given point, and in figure 6.8 we plot the Fourier transform of the oscillation about the ground state at that point.

We can compare these observed frequencies to those obtained in chapter 4 from the first-order linear perturbation, which are all imaginary and are plotted in figure 6.6. In our evolution method we use N = 50 Chebyshev points and dt = 1 as the time step, to obtain n = 170,000 sampling points. With this number of sampling points and this time step we should be able to obtain the frequencies to an accuracy of π/(n dt) = 3.6E−5. In table 6.1 we present the frequencies observed (see figure 6.8) together with those obtained in chapter 4.
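The frequency extraction described above can be sketched as follows. The two-mode signal below is a synthetic stand-in for |rψ| sampled near the origin (the frequencies 0.0341 and 0.0810 are taken from table 6.1; the amplitudes are assumptions), and the bin width of the transform sets the accuracy quoted in the text.

```python
import numpy as np

# Sample a signal at time step dt for n steps, Fourier transform it and
# read off the peak angular frequencies.  The resolution is one frequency
# bin, 2*pi/(n*dt), so a peak is located to within about half a bin.
dt = 1.0
n = 170_000
t = dt * np.arange(n)
signal = 0.04 + 1e-3 * np.cos(0.0341 * t) + 5e-4 * np.cos(0.0810 * t)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
omega = 2 * np.pi * np.fft.rfftfreq(n, d=dt)   # angular frequency of each bin

# Locate the two dominant peaks, masking the neighbourhood of the first
# so that spectral leakage from it is not mistaken for the second peak.
i1 = int(np.argmax(spectrum))
masked = spectrum.copy()
masked[max(0, i1 - 10):i1 + 10] = 0.0
i2 = int(np.argmax(masked))
peaks = sorted([omega[i1], omega[i2]])
```

With n = 170,000 and dt = 1 the bin width is about 3.7E−5, so both input frequencies are recovered to roughly the accuracy quoted in the text.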
Figure 6.7: The oscillation about the ground state at a given point.

Figure 6.8: The Fourier transform of the oscillation about the ground state.
Table 6.1: Linearised eigenvalues of the first state and the observed frequencies about it, with the log of the size of each peak; n = 170,000.

6.3 Evolution of the higher states

6.3.1 Evolution of the second state

We note that whether or not the higher stationary states are nonlinearly stable, they should evolve for some time like stationary states before numerical errors grow sufficiently to make them decay. Numerical calculation of a stationary state is only correct up to numerical errors, so if the state is nonlinearly unstable it is likely that it will decay just from being evolved. There are a few issues to look at with the evolution of the higher stationary states when they are evolved for long enough:

1. Do they decay, when perturbed, into some scatter and some multiple of the ground state?

2. How long does the state take to decay?

3. What is the amount of probability left in the ground state?

4. Is this above the estimate of section 5.10, based on the assumption that the energy that is scattered is positive?

For the evolution of the second state we find that we again have the correct phase factor (see figure 6.9). As we can see in figure 6.10, the second state is decaying, emitting scatter and leaving something that looks like a multiple of the ground state.

Figure 6.9: The graph of the phase angle of the second state as it evolves.
Figure 6.10: The long time evolution of the second state.
In ﬁgure 6. since it is neither perturbation about the second state nor perturbation about the ground state. 64 .14 the normalisation and the action bound seem to be converging toward the same value. The resulting time series divides up into three sections. What we ﬁnd is the normalisation is still decreasing. t) − rψ(r1 . We expect the third section to be an oscillation about a “multiple” of the ground state and so we should get frequencies of oscillation but the linear perturbation frequencies should be transformed to make up for the diﬀerence in normalisation. origin.0095. 0) + rψ(r1 .11 the evolution of the second state. We deﬁne A to be such that A = rψ(r1 . This is decaying into something that looks like a multiple of the ground state.2) As in the case of the ground state we can consider rψ at a particular point to obtain where λ is the eigenvalue with nonzero real part obtained in solving (3. but here we have used in a lower tolerance in the computation and the result is that the second state takes longer to decay which would correspond to the error growing at a smaller rate. with bound on the action from section 5.We can see in ﬁgure 6. The ﬁrst section has an exponentially growing oscillation which we expect to be the unstable growing mode obtained in the linear perturbation theory (chapter 4). The growing mode here provides information about the time taken for the state to decay.14. 0) exp(real(λ)t) (6. To conﬁrm the expectation that this is the growing mode we consider modifying the function so that we would just have the oscillation with no growing exponential.24) about the second state and r1 is the point at which the time series has been obtained. The ﬁrst section is the evolution of the second stationary state. so we are unable to obtain any consistent frequencies. 0). that is the wavefunction is just a perturbation about the second state. Here we used n = 6000 sampling points and dt = 1 was the time step. 
The final section corresponds to a "multiple" of the ground state with a perturbation about it; we shall ignore the second section of the time series. We note that f(t) − A, with f as in (6.2), is almost just an oscillation, and we plot this in figure 6.12. We then Fourier transform f(t) − A (see figure 6.13) and obtain approximately one frequency, at 0.0095, which is approximately that of the growing mode in the linear perturbation theory. We plot the resulting probability in the region in figure 6.14, together with the bound on the action from section 5.10; the normalisation and the action bound seem to be converging toward the same value. We note that the perturbation has to grow significantly large before the nonlinear effects cause the decay of the wavefunction.
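The transform (6.2) can be sketched on synthetic data; the growth rate, amplitude and sampling below are assumptions, while the frequency 0.0095 is the one observed in the text.

```python
import numpy as np

# A time series that grows like exp(Re(lambda) t) times an oscillation is
# flattened by scaling its deviation from the initial value A by
# exp(-Re(lambda) t); f - A is then (almost) a pure oscillation.
re_lam = 5e-4               # assumed growth rate Re(lambda)
omega = 0.0095              # frequency of the growing mode from the text
dt, n = 1.0, 12_000
t = dt * np.arange(n)

A = 0.2                     # stands in for r*psi(r1, 0)
series = A + 1e-6 * np.exp(re_lam * t) * np.sin(omega * t)

f = A + (series - A) * np.exp(-re_lam * t)   # remove the growing exponential
residual = f - A
```

For this synthetic series the residual is exactly the underlying oscillation; for real data it is only approximately so, which is why the thesis describes f(t) − A as "almost just an oscillation".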
2 0.1 0.12: f (t) − A 65 .05 0 10 8 6 x 10 4 4 2 150 time 0 200 r 100 0 50 Figure 6.15 rψ 0.Evolution of the second state 0.25 0.11: Evolution of second state tolerance E9 approximately 3 x 10 −9 f(t)−A 2 1 0 −1 −2 −3 0 2000 4000 6000 time 8000 10000 12000 Figure 6.
Figure 6.13: Fourier transform of f(t) − A.
Figure 6.14: Decay of the second state, showing the norm and the action bound.
Figure 6.15: The short time evolution of the second bound state.

Although we have found the second state to be unstable as expected, growing initially with an exponential mode, we can obtain more of the linear perturbation modes. We consider the evolution of the second state for a short time with an added perturbation, to see if it gives any frequencies, and then check whether these agree with the linear perturbation theory. We note that since we can only evolve for a short time before the state decays, the accuracy will be low. In figure 6.15 we plot the evolution of the second state with an added perturbation, and in figure 6.16 we plot the oscillation about the second state at a fixed radius. In table 6.2 we present the observed frequencies about the second state compared with the eigenvalues of the linear perturbation; we note that the accuracy of the observed frequencies here is 0.002. In figure 6.17 we plot the Fourier transform of the oscillation about the second state with the added perturbation; note that the graph is not smooth, due to the low accuracy.
6.3.2 Evolution of the third state
Now we consider the evolution of the third stationary state. In figure 6.18 we plot the evolution of the third state; as expected, we see that it is unstable and decays, emitting scatter and leaving a "multiple" of the ground state.
Figure 6.16: Oscillation about second state at a ﬁxed radius
Figure 6.17: Fourier transform of the oscillation about the second state
Table 6.2: Linearised eigenvalues of the second state and the observed frequencies about it. The linearised eigenvalues are the quadruple ±0.0014 ± 0.0100i together with the imaginary eigenvalues −0.0030i, −0.0086i, −0.0153i, −0.0211i, −0.0276i, −0.0351i, −0.0434i, −0.0526i, −0.0627i, −0.0737i, −0.0855i, −0.0982i, −0.1118i, −0.1263i, −0.1417i and −0.1579i; the observed frequencies are 0.0000, 0.0100, 0.0114, 0.0190, 0.0266, 0.0342, 0.0416, 0.0530, 0.0643, 0.0726, 0.0853, 0.0987, 0.1123, 0.1247, 0.1419 and 0.1577.

We can again consider the perturbation about the third state initially, as with the second state, before it decays; we find that it is exponentially growing. We transform as in (6.2), but with λ now being the eigenvalue of the linear perturbation about the third state with the maximum real part, and in figure 6.19 we plot f(t) − A. Fourier transforming f(t) − A (see figure 6.20), we obtain two frequencies, at 0.0022 and 0.0050, which correspond to the imaginary parts of the two complex eigenvalues of the linear perturbation. We plot the resulting probability in the region in figure 6.21, together with the bound on the action from section 5.10; the normalisation and the action bound seem to be converging toward the same value.

6.4 Evolution of an arbitrary spherically symmetric Gaussian shell

In this section we consider the evolution of Gaussian shells or lumps (5.20), varying the three associated parameters: the mean position a, the radial position of the peak of the shell; the mean velocity v, the rate at which the shell expands outwards (we note that the centre of mass is not moving); and the width σ of the shell. Since varying these factors will produce varied initial conditions, we can perform a mesh analysis and a time step analysis on the evolution
Figure 6.18: Evolution of the third state.

Figure 6.19: f(t) − A with respect to the third state.
Figure 6.20: Fourier transform of the growing mode about the third state.

Figure 6.21: Probability remaining in the evolution of the third state.
as outlined in section 5.9.

6.4.1 Evolution of the shell while changing v

In figure 6.22 we plot the remaining probability at different times and at different initial mean velocities v, while a and σ are initially held constant. We notice that negative values of v correspond to an outward-going wavefunction moving off the grid, and positive v to an inward-going wavefunction. We look at how much of the wavefunction is left near the origin, which we expect to form a "multiple" of the ground state in the sense of section 5.10; we expect that the more energy there is in the system to begin with, the less energy, and therefore probability, will eventually remain near the origin. We also notice that the graph is not symmetric about v = 0, even though the wavefunction energy is symmetric about v = 0.

Figure 6.22: Progress of the evolution with different velocities and times (a = 50, σ = 6, N = 40).

We compare the results at the other time steps with the result for dt = 1 in figure 6.23 and note that the difference is small. We calculate the "Richardson fraction" for different values of the hᵢ and plot the different Richardson fractions in figure 6.24. We also show in table 6.3 the median calculated value against the values expected for quadratic convergence in time; since the calculated values are close to the expected values, we are justified in supposing that the convergence is quadratic in the time step. We can also plot the variation with the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson extrapolation).
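The Richardson-fraction check used here can be sketched as follows. The precise definition is given in section 5.9, which is not reproduced above, so the formula below is one standard form and should be read as an assumption.

```python
# If a computed quantity behaves as P(h) = P0 + C*h^p in the step size h,
# then (P(h1) - P(h2)) / (P(h2) - P(h3)) -> (h1^p - h2^p) / (h2^p - h3^p),
# independently of P0 and C.  Comparing the computed fraction with the
# value expected for p = 2 therefore tests for quadratic convergence.

def richardson_fraction(P, h1, h2, h3):
    return (P(h1) - P(h2)) / (P(h2) - P(h3))

def expected_fraction(p, h1, h2, h3):
    return (h1**p - h2**p) / (h2**p - h3**p)

# A model quantity that converges quadratically in the time step:
P = lambda h: 0.8 + 0.03 * h**2
frac = richardson_fraction(P, 5.0, 2.5, 1.0)
```

For the steps h = 5, 2.5, 1 the expected value for quadratic convergence is 18.75/5.25 ≈ 3.57; a computed fraction close to the expected value is what justifies the claim of quadratic convergence.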
Figure 6.23: Difference of the other time steps compared with dt = 1.

Figure 6.24: Richardson fractions computed with different hᵢ.
Table 6.3: Richardson fractions for the time step: expected values for quadratic convergence against the median calculated values, for three choices of (h₁, h₂, h₃).

The plot is in figure 6.25.

Figure 6.25: Difference in N value.

6.4.2 Evolution of the shell while changing a

We plot in figure 6.26 the remaining probability at different times with different initial mean positions a, while v = 0 and σ is initially held constant. We also plot in figure 6.27 the values for the remaining probability with different time steps. We can again calculate the "Richardson fraction" as in section 5.9 in the time steps, and in figure 6.28 we plot the "Richardson fraction" at different values of a. In table 6.4 we show the median calculated value against the expected values, and again we see that we have quadratic convergence in the time step. We can also plot the variation with the number of Chebyshev points to show convergence with increasing N (again it does not make sense to do Richardson extrapolation); the plot is in figure 6.29.
Figure 6.26: Evolution of the lump at different a.

Figure 6.27: Comparing differences with different time steps.
Figure 6.28: Richardson fraction with different hᵢ.

Figure 6.29: Difference in N value.
Table 6.4: Richardson fractions: expected values against median calculated values, showing quadratic convergence in the time step.

6.4.3 Evolution of the shell while changing σ

We plot in figure 6.30 the remaining probability at different times with different initial widths σ, while v = 0 and a = 50. We also plot in figure 6.31 the values for the remaining probability at different time steps. We can again calculate the "Richardson fraction", using data obtained at different time steps, and in figure 6.32 we plot the "Richardson fraction" at different values of σ. In table 6.5 we show the median calculated value against the expected values; again we have quadratic convergence.

Figure 6.30: Evolution of the lump at different σ.

Table 6.5: Richardson fractions: expected values against median calculated values.
Figure 6.31: Comparing diﬀerences with diﬀerent time steps
Figure 6.32: Richardson fractions
We can also plot the variation with the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson extrapolation). The plot is in figure 6.33.

Figure 6.33: Difference in N value.
6.5 Conclusion
In this chapter we have evolved the SN equations using different initial conditions, using the methods of chapter 5. We have checked the ability of the sponge factors to absorb outward-going wavefunctions moving off the grid. We have checked that the evolution method converges to the solution by evolving the SN equations with different initial conditions while changing the time steps and the number of Chebyshev points in the grid. To check convergence with respect to the time step we have used Richardson extrapolation and found the method to converge at second order in the time step. The numerical calculations show that:

• The ground state is stable under full nonlinear evolution.

• Perturbations about the ground state oscillate with the frequencies obtained by the linear perturbation theory.
• Higher states are unstable and will decay into a "multiple" of the ground state, while emitting some scatter off the grid.

• The decay time for the higher states is controlled by the growing linear mode obtained in linear perturbation theory.

• Perturbations about higher states will oscillate for a while (until they decay) according to the linear oscillations obtained by the linear perturbation theory.

• The evolution of different Gaussian shells indicates that any initial condition appears to decay as predicted in section 5.10; that is, they scatter to infinity and leave a "multiple" of the ground state.
Chapter 7

The axisymmetric SN equations

7.1 The problem

There are two ways of adding another spatial dimension to the problems we are solving. In this chapter, we consider solving the SN equations for an axisymmetric system in 3 dimensions; then the wavefunction is a function of the polar coordinates r and θ but independent of the polar angle. In chapter 8, we consider the SN equations in Cartesian coordinates with a wavefunction independent of z.

For the axisymmetric problem, we shall first find stationary solutions. This will give some stationary solutions with nonzero total angular momentum, including a dipole-like solution that appears to minimise the energy among wavefunctions that are odd in z. We then consider the time-dependent problem. In particular we evolve the dipole-like state, which turns out to be nonlinearly unstable: the two regions of probability density attract each other and fall together, leaving, as in chapter 6, a multiple of the ground state.

7.2 The axisymmetric equations

We consider the SN equations, which in the general nondimensionalised case are:

iψₜ = −∇²ψ + φψ,   (7.1a)
∇²φ = |ψ|².   (7.1b)

Now consider spherical polar coordinates:

x = r cos α sin θ,   (7.2a)
y = r sin α sin θ,   (7.2b)
z = r cos θ,   (7.2c)
where r ∈ [0, ∞), θ ∈ [0, π] and α ∈ [0, 2π). The Laplacian becomes:

∇²ψ = (1/r) ∂²(rψ)/∂r² + (1/(r² sin θ)) ∂/∂θ(sin θ ∂ψ/∂θ) + (1/(r² sin²θ)) ∂²ψ/∂α²,   (7.3)

so in the case where ψ depends only on r and θ the SN equations become:

iψₜ = −(1/r) ∂²(rψ)/∂r² − (1/(r² sin θ)) ∂/∂θ(sin θ ∂ψ/∂θ) + φψ,   (7.4a)
|ψ|² = (1/r) ∂²(rφ)/∂r² + (1/(r² sin θ)) ∂/∂θ(sin θ ∂φ/∂θ).   (7.4b)

Now setting u = rψ, the equations become:

iuₜ = −u_rr − (1/(r² sin θ))(sin θ u_θ)_θ + φu,   (7.5a)
|u|²/r = (rφ)_rr + (1/(r² sin θ))(sin θ (rφ)_θ)_θ.   (7.5b)

For the boundary conditions, we still want ψ to be finite at the origin so that the wavefunction is well-behaved, and to satisfy the normalisation condition we require that ψ → 0 as r → ∞. We know that ψ_θ = 0 at θ = 0 and θ = π, since otherwise we would have a singularity in the Laplacian. So we have that u(0, θ) = 0 for all θ ∈ [0, π], and that u(r, θ) → 0 for all θ as r → ∞. We note that these boundary conditions are spherically symmetric, so that the solutions in the spherically symmetric case are still solutions.

The stationary SN equations in the general nondimensionalised case are:

Eₙψ = −∇²ψ + φψ,   (7.6a)
∇²φ = |ψ|²,   (7.6b)

so the axially symmetric stationary SN equations are:

Eₙu = −u_rr − (1/(r² sin θ))(sin θ u_θ)_θ + φu,   (7.7a)
|u|²/r = (rφ)_rr + (1/(r² sin θ))(sin θ (rφ)_θ)_θ.   (7.7b)

7.3 Finding axisymmetric stationary solutions

To find axisymmetric solutions of the stationary SN equations we proceed by adapting the method of Jones et al [3] as follows.

1. Take as an initial guess for the potential φ = −1/(1 + r).

2. Using the potential φ, solve the time-independent Schrödinger equation, that is, the eigenvalue problem ∇²ψ − φψ = −Eₙψ with ψ = 0 on the boundary.
In step 2, the method we use to solve the eigenvalue problem involves Chebyshev differentiation in two directions, that is, in r and θ.

3. Select an eigenfunction in such a way that the procedure will converge to a stationary state. (This needs care; see below.)

4. Once the eigenfunction has been selected, calculate the potential due to that eigenfunction.

To guide the selection in step 3, we note that in the case of the Hydrogen atom we know the wavefunction explicitly, because the solutions are separable; the number of zeros in the radial direction and the axial direction follows a characteristic pattern. We can use this in our selection procedure in step 3 to get started, but as the iteration process continues the pattern of zeros won't be fixed and we may need another method. We also note, in the case of the Schrödinger equation with a 1/r potential, that each stationary state in the axially symmetric case has different values of the constants E and J². Here E is:

E = ∫_{ℝ³} (−ψ̄ ∇²ψ + φ|ψ|²) d³x,   (7.8)

and J² is:

J² = −∫_{ℝ³} ψ̄ [(1/sin θ) ∂/∂θ(sin θ ∂ψ/∂θ) + (1/sin²θ) ∂²ψ/∂α²] d³x.   (7.9)

These values do not change much under iteration of the stationary state problem, and we can use this to obtain the next iterate with values of E and J² which differ only by a small amount. We use a combination of the above properties to select solutions at step 3. The iteration stops when the new potential differs from the previous potential by less than a fixed tolerance in the norm of the difference; otherwise we continue from step 2. Small errors in symmetry will cause the solution to move along the z-axis and off the grid.
5. Let the new potential be the one obtained from step 4 above, but symmetrised: we require that φ should be symmetric about θ = π/2, so that numerical errors do not cause the wavefunction to move along the axis (the solution we want should have its centre of gravity at the centre of our grid).

For
7. Axp1 in ﬁgure 7.16276929132192 0.comparison we give the energy of the ﬁrst few spherically symmetric stationary states in table 7. Axp3 and axp6 are the second and third spherical states. The others are higher multipoles.2: Table for E and J 2 of the axially symmetric SN equations 7. but we are not able to ﬁnd a simple classiﬁcation based on.1178 5.5 Timedependent solutions Now we consider the evolution of the axisymmetric timedependent SN equations (7. energy and angular momentum as there would be for the states of the Hydrogen atom.03079656067054 0. and minimises the energy among odd functions. 7.16. 84 . We plot the graphs the stationary states that appear in table 7. 7.0115 J zero 5.01252610801692 0.7. but instead of solving the Schr¨dinger equation o and the Poisson equation in one space dimension we now need to solve them in two space dimensions.1592 0. 7.1: The ﬁrst few eigenvalues of the spherically symmetric SN equations Energy 0.3548 17.1.14. in ﬁgures 7.0162 0.15 and the contour plots of these stationary states in ﬁgures 7.0208 0.9. 7.4). 7.13.0292 0.00674732963038 0. 7. We use the same method as in section 5.4.0599 0.5. Number of state 0 1 2 3 4 Energy Eigenvalue 0.8. 7.6. say. 7.3. It is odd as a function of z.1 is the ground state which is from table 7. 7.10. 7.7.0263 0. Axp2 in ﬁgure 7.155 3.11.2. has vanishing J 2 . 7.12. It was previously found in [22] and has nonzero J 2 . 7.1.2.0358 0.2053 1.1853 0.2.00420903256689 Table 7.9E6 name axp1 axp2 axp3 axp4 axp7 axp8 axp5 axp6 Table 7.3 is a dipole state.002 2. 7.
Figure 7.1: Ground state, axp1.

Figure 7.2: Contour plot of the ground state, axp1.
Figure 7.3: "Dipole" state, the next state after the ground state, axp2.

Figure 7.4: Contour plot of the dipole, axp2.
Figure 7.5: Second spherically symmetric state, axp3.

Figure 7.6: Contour plot of the second spherically symmetric state, axp3.
Figure 7.7: Not quite the 2nd spherically symmetric state, axp4.

Figure 7.8: Contour plot of the not quite 2nd spherically symmetric state, axp4.
Figure 7.9: Not quite the 3rd spherically symmetric state, E = −0.0162, axp5.

Figure 7.10: Contour plot of the not quite 3rd spherically symmetric state, E = −0.0162, axp5.
Figure 7.11: 3rd spherically symmetric state, axp6.

Figure 7.12: Contour plot of the 3rd spherically symmetric state, axp6.
Figure 7.13: Axially symmetric state with E = −0.0263, axp7.

Figure 7.14: Contour plot of axp7, a double dipole.
Figure 7.15: State with E = −0.0208 and J = 3.1178, axp8.

Figure 7.16: Contour plot of the state with E = −0.0208 and J = 3.1178, axp8.
We want to use an ADI (alternating direction implicit) method to solve the Schrödinger equation, so we need to split the Laplacian into two parts, each involving derivatives in one coordinate only. We take the operator for the r direction to be:

L₁ = ∂²/∂r²,   (7.10)

which is the r-derivative part of the equation acting on u = rψ. This leaves the remaining operator:

L₂ = (1/r²)[∂²/∂θ² + cot θ ∂/∂θ],   (7.11)

which is the θ-derivative part of the equation and also acts on u. We note that this last operator is not a function of just one variable, but the important point is that its differential operators are in one variable, which is enough to use an ADI (LOD) method to solve the Schrödinger equation, as used by Guenther [10]. The ADI scheme is:

(1 − iL₁)S = (1 + iL₂)Uⁿ,   (7.12a)
(1 − iL₂)T = (1 + iL₁)S,   (7.12b)
(1 + iφⁿ⁺¹)Uⁿ⁺¹ = (1 − iφⁿ)T.   (7.12c)

Now we also need a similar way to solve the Poisson equation for the potential. Let u_{i,j} = rψ_{i,j} and w = rV; then the discretised equation for the potential is:

−L₁w_{i,j} − L₂w_{i,j} = f_{i,j},   (7.13)

where f_{i,j} = |u_{i,j}|²/r_i is the density distribution. Rather than solving this equation as it stands (that is, an N² × N² matrix problem, where N is the number of grid points), we use a Peaceman–Rachford ADI iteration [1] on the equation. Let φ⁰ = 0 be an initial guess for the solution of the equation, and define the iteration to be:

−L₁φⁿ⁺¹ + ρφⁿ⁺¹ = ρφⁿ + L₂φⁿ + f_{i,j},   (7.14a)
−L₂φⁿ⁺² + ρφⁿ⁺² = ρφⁿ⁺¹ + L₁φⁿ⁺¹ + f_{i,j},   (7.14b)

where ρ is a small factor chosen such that the iteration converges. We continue with the iteration until it converges.

We can now evolve the SN equations using the above method, starting with the dipole (the first axially symmetric state) and evolving with the same sponge factors as used in chapter 6. We find that again the state behaves like a stationary state, that is, evolving with constant phase, before it decays. It decays in the same way as in the
spherically symmetric states, emitting scatter off the grid and leaving a "multiple" of the ground state behind. One difference here is that the angular momentum is lost with the wave scattering off the grid. In figure 7.17 we plot the result of the evolution.

Figure 7.17: Evolution of the dipole (t = 0, 1400, 2000, 2600).

In this we can also check the method for convergence. We consider three different time steps for the evolution of the dipole, dt = 0.5, 0.25, 0.125, and plot in figure 7.18 the average Richardson fraction in the time step, which should be 0.2 if the convergence is quadratic and 0.33 if it is linear. Here we note that the convergence is linear in time. We also plot the Richardson fraction in the case of dt = 1, 0.5, 0.25 in figure 7.19, and note that again this shows that the convergence is linear in time.

We can also check how the method converges with decreasing mesh size. In figure 7.20 we plot the evolution of the dipole with the different mesh size N = 30, N₂ = 25 instead of N = 25, N₂ = 20, where N is the number of Chebyshev points in the r direction and N₂ is the number of points in the θ direction. We note that the state still decays into a "multiple" of the ground state.
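The expected values 0.2 and 0.33 quoted above can be checked directly. The particular normalised form of the fraction used below, (P(h₂) − P(h₃))/(P(h₁) − P(h₃)) for twice-halved steps, is inferred from those two values rather than taken from the thesis, so it is an assumption.

```python
# For step sizes halved twice, h = 0.5, 0.25, 0.125, a quantity with
# P(h) = P0 + C*h^p gives (P(h2)-P(h3))/(P(h1)-P(h3)) = 1/3 for p = 1
# (linear convergence) and 0.2 for p = 2 (quadratic convergence),
# matching the values 0.33 and 0.2 quoted in the text.
h1, h2, h3 = 0.5, 0.25, 0.125

def fraction(P):
    return (P(h2) - P(h3)) / (P(h1) - P(h3))

linear_frac = fraction(lambda h: 1.0 + 0.1 * h)
quadratic_frac = fraction(lambda h: 1.0 + 0.1 * h**2)
```

A measured fraction near 0.33 rather than 0.2 is thus what identifies the time convergence as linear here.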
Figure 7.18: Average Richardson fraction with dt = 0.5, 0.25, 0.125.

Figure 7.19: Average Richardson fraction with dt = 1, 0.5, 0.25.
Figure 7.20: Evolution of the dipole with a different mesh size (|ψ| over R and z at t = 0, 1400, 2000, 2600).
Chapter 8

The Two-Dimensional SN equations

8.1 The problem

In this chapter, we consider the SN equations in a plane, instead of on the sphere; that is, the case where the Laplacian is just

    ∇²ψ = (∂²/∂x² + ∂²/∂y²)ψ,                                         (8.1)

with the boundary conditions of zero at the edge of a square. We shall find a dipole-like stationary solution, and some solutions which are like rigidly rotating dipoles. These rigidly rotating solutions are unstable, and will merge, radiating angular momentum.

8.2 The equations

Consider the 2-dimensional case, that is, in Cartesian coordinates x, y. The nondimensionalized Schrödinger–Newton equations in 2D are then:

    iψt = −ψxx − ψyy + φψ,                                            (8.2a)
    φxx + φyy = |ψ|².                                                 (8.2b)

To evolve the full nonlinear case we require the same method as in section 5.7: as in the one-dimensional case, we still need a numerical scheme that will preserve the normalisation as it evolves. That is, to model the Schrödinger equation on this region, we require a method to solve the Schrödinger equation in 2D as well as the Poisson equation. We could use a Crank–Nicolson method here, but we need to modify this since the matrices become large and are no longer tridiagonal in the case of finite differences. Therefore we use an ADI (alternating direction implicit) method to solve the Schrödinger equation.
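The normalisation requirement can be seen already in one dimension: a Crank–Nicolson update (1 + iΔtH/2)ψ^{n+1} = (1 − iΔtH/2)ψ^n is exactly unitary for a Hermitian discretised H, so the norm is preserved to rounding error. A Python sketch, not the thesis code — the free-particle H, grid sizes and wavepacket are arbitrary illustrative choices:

```python
import numpy as np

n, h, dt = 50, 0.1, 0.01
# Discrete free-particle Hamiltonian H = -d^2/dx^2, zero Dirichlet ends.
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2
H = -lap
A = np.eye(n) + 0.5j * dt * H      # implicit (left-hand) operator
B = np.eye(n) - 0.5j * dt * H      # explicit (right-hand) operator
x = h * np.arange(n)
psi = np.exp(-(x - 2.5)**2).astype(complex)   # arbitrary wavepacket
psi /= np.sqrt(h * np.sum(np.abs(psi)**2))    # normalise to 1
for _ in range(100):
    psi = np.linalg.solve(A, B @ psi)         # one Crank-Nicolson step
norm = h * np.sum(np.abs(psi)**2)             # stays 1 to rounding error
```

Because A and B commute and B = A* for real symmetric H, the one-step propagator A⁻¹B is unitary; this is the property the ADI splitting is designed to retain in 2D.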
The ADI method approximates Crank–Nicolson to second order. This works when V = 0; to deal with a nonzero potential we can introduce an extra term into the scheme. We use the same scheme as used in Guenther [10] to introduce the potential, so we have:

    (1 − iD2x)S = (1 + iD2y)U^n,                                      (8.3a)
    (1 − iD2y)T = (1 + iD2x)S,                                        (8.3b)
    (1 + iV^{n+1})U^{n+1} = (1 − iV^n)T.                              (8.3c)

The last equation (8.3c) deals with the potential. Here D2x (and similarly D2y) denotes a Chebyshev differentiation matrix or a finite-difference method for approximating the second derivative in the direction indicated by the subscript.

We also need a quicker way to solve the Poisson equation in 2D. The discretised version of the Poisson equation we want to solve is

    −D2x φ_{i,j} − D2y φ_{i,j} = f_{i,j},                             (8.4)

where f_{i,j} = |ψ_{i,j}|². Rather than solving this equation as it stands (that is, an N² × N² matrix problem, where N is the number of grid points), we use a Peaceman–Rachford ADI iteration [1] on the equation, as described below. Let φ⁰_{i,j} = 0 be an initial guess for the solution of the equation, and then define the iteration to be

    −D2x φ^{n+1}_{i,j} + ρφ^{n+1}_{i,j} = ρφ^n_{i,j} + D2y φ^n_{i,j} + f_{i,j},          (8.5a)
    −D2y φ^{n+2}_{i,j} + ρφ^{n+2}_{i,j} = ρφ^{n+1}_{i,j} + D2x φ^{n+1}_{i,j} + f_{i,j},  (8.5b)

where ρ is a small number chosen to get convergence. We proceed with the iteration until it converges, that is, until ‖φ^{n+2} − φ^n‖ is less than some tolerance.

8.3 Sponge factors on a square grid

Here we consider sponge factors on the square grid in two dimensions. Since we do not want any corners in the sponge factors, we choose the sponge factor to be

    e = min[1, e^{0.5(√(x²+y²) − 20)}].                               (8.6)

We plot a graph of this sponge factor in figure 8.1.
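The sponge factor (8.6) is cheap to tabulate; a Python sketch for illustration only (the thesis code is Matlab, Appendix B):

```python
import numpy as np

def sponge(x, y):
    # Eq. (8.6): essentially zero in the interior, rising to its cap of 1
    # near radius 20, and radially symmetric so that the corners of the
    # square grid introduce no anisotropy into the absorbing layer.
    r = np.sqrt(np.asarray(x, float)**2 + np.asarray(y, float)**2)
    return np.minimum(1.0, np.exp(0.5 * (r - 20.0)))
```

At the centre the factor is exp(−10) ≈ 5·10⁻⁵, so the interior evolution is essentially undamped, while everything beyond radius 20 is damped at the full rate.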
8 0. Now we can use the method of section 8. We can ﬁnd such stationary states by using the same method as for axially symmetric state section 7. but modiﬁed to the 2D case by changing the diﬀerential operators.4 the lump moving oﬀ the grid as a test of the sponge factor.3 and 8. that is rotating at constant phase.1: Sponge factor Again to test the sponge factors we send an exponential lump oﬀ the grid. In this case we ﬁnd that the ﬁrst stationary state after the ground state has two lumps like that of the dipole (see ﬁgure 8.2 0 30 20 10 0 −10 x −20 −30 30 20 10 0 −10 −20 −30 y Figure 8. We note that most of the lump is absorbed. We see that the dipolelike state evolves for a while like a stationary state. 8.6). approximately the ground state (see ﬁgure 8.6 0. We plot in ﬁgures 8.4 0.2 to evolve this stationary state with sponge factors.2.sponge factor for square grid 1 0.01((x−a) 2 +(y−b)2 )) eivy . (8. 8. 99 . a state with higher energy than the ground state.5). that is.3. and then it decays and becomes just one lump.7) a wavefunction centred initially at x = a and y = b and moving in the y direction with velocity v.4 Evolution of dipolelike state As in chapter 6 and chapter 7 we can consider now the evolution of a higher state. where the initial function is of the form: ψ = e−0.
Figure 8.2: Lump moving off the grid with v = 2, N = 30 and dt = 2 (t = 0, 12, 24, 36).

Figure 8.3: Lump moving off the grid with v = 2, N = 30 and dt = 2 (t = 48, 60, 72, 84).
Figure 8.4: Lump moving off the grid with v = 2, N = 30 and dt = 2 (t = 96, 108, 120, 132).

Figure 8.5: Stationary state for the 2-dim SN equations.
1 0. θ + ωt.t=0 t = 4400 0.05 ψ 0 40 ψ 20 0 −20 −40 y −40 −20 x 0 20 40 0 40 20 0 −20 −40 y −40 −20 x 0 20 40 t = 4800 0.5 Spinning Solution of the twodimensional equations To get a spinning solution we could try setting up two ground states or lumps distance apart and set them moving around each other by choosing the initial velocities of the lumps.1 0. −x sin(ωt) + y cos(ωt).6: Evolution of a stationary state 8.8) where ω is a real scalar and r and θ are the polar coordinates.1 0. This is redeﬁning the stationary state so that rotating solution are considered. Now we let.05 ψ 0 40 20 0 −20 −40 y −40 −20 x 0 20 40 0 40 20 0 −20 −40 −40 −20 0 20 40 Figure 8.9) (8.10b) 102 . (8. X = x cos(ωt) + y sin(ωt). Y = −x sin(ωt) + y cos(ωt).10a) (8. t) = e−iEt ψ(x cos(ωt) + y sin(ωt). t) = e−iEt ψ(r. 0). y. 0).05 0. (8. We could instead try to seek a solution. which satisﬁes: ψ(r. θ.1 0. In Cartesian coordinates (8.8) becomes: ψ(x.05 0.
which is a rotation by ωt. We note that dX/dt = ωY and dY/dt = −ωX. Then i ∂ψ/∂t becomes

    i ∂ψ/∂t = e^{−iEt} [ Eψ(X, Y, 0) + iωY ψ_X(X, Y, 0) − iωX ψ_Y(X, Y, 0) ],   (8.11)

where ψ_X denotes partial differentiation of ψ in the first variable and ψ_Y in the second variable. So, substituting (8.9) into the 2-dimensional SN equations, we have

    Eψ + iωY ψ_X − iωX ψ_Y = −∇²ψ + φψ,                               (8.12a)
    ∇²φ = |ψ|²;                                                       (8.12b)

we note that these equations are dependent on X and Y only. If ψ is a function of r then [Y ψ_X − X ψ_Y] vanishes and we get the ordinary stationary states in that case, regardless of what value ω takes. Note that ψ̄ is a solution with −ω instead of ω, which corresponds to a solution rotating in the other direction. Do nontrivial solutions of these equations exist, that is, a solution such that [Y ψ_X − X ψ_Y] does not vanish, hence a solution that will spin? We can modify the methods used to calculate the stationary state solution in the axially symmetric case (section 7.3) to calculate the numerical solution of (8.12) at a fixed ω. We can find such a solution with ω = 0.005, of which we plot the real part in figure 8.7 and the imaginary part in figure 8.8. We note that ψ(X, Y, 0) is no longer of just one phase.

We can use the time-dependent evolution method as in section 8.2, with the initial data being a spinning solution. In figure 8.9 we plot the evolution of the state before it decays, showing that the state does spin about x = 0 and y = 0. In figure 8.10 we plot the section of the evolution showing that the state does decay, emitting scatter off the grid and leaving a "multiple" of the ground state.

8.6 Conclusion

For the 2-dimensional SN equations we can conclude that:

• They behave as the spherically symmetric and axisymmetric equations, that is, the ground state is stable and the higher states are unstable.

• Spinning solutions do exist, and they decay like the other solutions to a "multiple" of the ground state.
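The rotating-frame identities dX/dt = ωY and dY/dt = −ωX behind (8.11) are easy to confirm numerically. A Python sketch; the test point, ω and the finite-difference step are arbitrary choices:

```python
import numpy as np

def rotated(x, y, omega, t):
    # X, Y of eqs. (8.10a)-(8.10b): the point (x, y) seen in a frame
    # rotating at constant rate omega.
    X = x * np.cos(omega * t) + y * np.sin(omega * t)
    Y = -x * np.sin(omega * t) + y * np.cos(omega * t)
    return X, Y

# Central-difference check of dX/dt = omega*Y and dY/dt = -omega*X.
x, y, omega, t, step = 1.3, -0.7, 0.005, 2.0, 1e-5
Xp, Yp = rotated(x, y, omega, t + step)
Xm, Ym = rotated(x, y, omega, t - step)
X, Y = rotated(x, y, omega, t)
dXdt = (Xp - Xm) / (2 * step)
dYdt = (Yp - Ym) / (2 * step)
```

These two identities are exactly what converts the time derivative of the ansatz (8.9) into the advection terms iωYψ_X − iωXψ_Y of (8.11).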
Figure 8.7: Real part of the spinning solution, ω = 0.005.

Figure 8.8: Imaginary part of the spinning solution, ω = 0.005.
Figure 8.9: Evolution of the spinning solution (t = 0, 200, 400, 600).

Figure 8.10: Evolution of the spinning solution (t = 5200, 6400, 7600, 8800).
Bibliography

[1] W.F. Ames, Numerical Methods for Partial Differential Equations, 2nd ed. (1977).

[2] E. Ruíz Arriola, J. Soler, Asymptotic behaviour for the 3D Schrödinger–Poisson system in the attractive case with positive energy, Appl. Math. Lett. 12 (1999) 1–6.

[3] D. Bernstein, E. Giladi, K.R.W. Jones, Eigenstates of the gravitational Schrödinger equation, Modern Physics Letters A13 (1998) 2327–2336.

[4] D., Dynamics of the Self-Gravitating Bose–Einstein Condensate. (Draft)

[5] J. Christian, Exactly soluble sector of quantum gravity, Phys. Rev. D56 (1997) 4844.

[6] L. Diósi, Gravitation and the quantum-mechanical localization of macro-objects, Phys. Lett. 105A (1984) 199–202.

[7] L. Diósi, Permanent state reduction: motivations, results and by-products, in Quantum Communications and Measurement, Plenum (1995) 245–250.

[8] L.C. Evans, Partial Differential Equations, AMS Graduate Studies in Mathematics 19, AMS (1998).

[9] A. Goldberg, H.M. Schey, J.L. Schwartz, Computer-generated motion pictures of one-dimensional quantum mechanical transmission and reflection phenomena, Am. J. Phys. 35(3) (1967) 177–186.

[10] R. Guenther, A numerical study of the time dependent Schrödinger equation coupled with Newtonian gravity, Doctor of Philosophy thesis, University of Texas at Austin (1995).

[11] H. Lange, B. Toomire, P.F. Zweifel, An overview of Schrödinger–Poisson problems, Reports on Mathematical Physics 36 (1995) 331–345.

[12] H. Lange, B. Toomire, P.F. Zweifel, Time-dependent dissipation in nonlinear Schrödinger systems, Journal of Mathematical Physics 36 (1995) 1274–1283.
[13] R. Illner, P.F. Zweifel, H. Lange, Global existence, uniqueness and asymptotic behaviour of solutions of the Wigner–Poisson and Schrödinger–Poisson systems, Math. Meth. in App. Sci. 17 (1994) 349–376.

[14] I.M. Moroz, K.P. Tod, An analytical approach to the Schrödinger–Newton equations, Nonlinearity 12 (1999) 201–216.

[15] I.M. Moroz, R. Penrose, K.P. Tod, Spherically-symmetric solutions of the Schrödinger–Newton equations, Class. Quantum Grav. 15 (1998) 2733–2742.

[16] K.W. Morton, D.F. Mayers, Numerical Solution of Partial Differential Equations (1994).

[17] R. Penrose, On gravity's role in quantum state reduction, Gen. Rel. Grav. 28 (1996) 581–600.

[18] R. Penrose, Quantum computation, entanglement and state reduction, Phil. Trans. R. Soc. (Lond.) A 356 (1998) 1927.

[19] D. Powers, Boundary Value Problems, Academic Press (1972).

[20] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in Fortran, 2nd ed.

[21] R. Ruffini, S. Bonazzola, Systems of self-gravitating particles in general relativity and the concept of an equation of state, Phys. Rev. 187 (1969) 1767.

[22] B. Schupp, J.J. van der Bij, An axially-symmetric Newtonian boson star, Physics Letters B 366 (1996) 85–88.

[23] H., Is there a gravitational collapse of the wave-packet? (preprint)

[24] H. Stephani, Differential Equations: Their Solution Using Symmetries, Cambridge: CUP (1989).

[25] K.P. Tod, The ground state energy of the Schrödinger–Newton equations, Phys. Lett. A 280 (2001) 173–176.

[26] Private correspondence with K.P. Tod.

[27] L.N. Trefethen, Spectral Methods in MATLAB, SIAM (2000).

[28] L.N. Trefethen, Lectures on Spectral Methods. (Oxford Lecture Course)
Appendix A

Fortran programs

A.1 Program to calculate the bound states for the SN equations

!
! Program to calculate the bound states for the SN equations:
! integrates the spherically symmetric Schrodinger-Newton equations,
! a system of 4 coupled equations
!    dX/dt = FN(X,Y)
!    dY/dt = FN(X,Y)
! using the NAG library routines D02PVF, D02PCF, X02AJF and X02AMF.
! The program searches for bound states and the values of r for which
! they blow up.
!
Program SN
Implicit None
External D02PVF
External D02PCF
External OUTPUT
External X02AJF
External X02AMF
External FCN
External COUNT
Double Precision, dimension(4) :: Y, YP, YS
Double Precision, dimension(4) :: PY, PY2
Double Precision, dimension(4) :: THRES
Double Precision, dimension(32*4) :: W
Double Precision :: X, XE, XEND, TOL, HSTART
Double Precision :: X02AJF, X02AMF
Double Precision :: Y1, Y1P, YMAX
Double Precision :: RATIO, STEP, STEP2, INSIZE, XBLOW, XBLOWP, INT1, INT2
INTEGER :: N, I, DR, DR2, IFAIL, GOUT, DOUT, ENCO, ENCO2, METHOD, LENWRK
CHARACTER*1 :: TASK
Logical :: ERRAS
! compute a few things
N = 4
INSIZE = 0.01
Y1P = 1.2
X = 0.0000001
XEND = 2000
XE = X
TOL = 100000000000*X02AJF()
THRES(1:4) = Sqrt(X02AMF())
METHOD = 2
TASK = 'U'
ERRAS = .FALSE.
HSTART = 0.0
LENWRK = 32*4
IFAIL = 0
GOUT = 0
! Ask for user input, to find root below given value
Y(3) = 1.0
Y(2) = 0.0
Y(4) = 0.0
! Ask for upper value
Print*, " TOL ", TOL
Print*, " Enter upper value "
Y1 = 1.09
Y(1) = Y1
PY(1:4) = Y(1:4)
PY2(1:4) = Y(1:4)
! Step size
STEP = 0.0005
OPEN(Unit=10, file='fort.10', Status='new')
OPEN(Unit=11, file='fort.11', Status='new')
DO WHILE ( GOUT == 0 )
   DR = 0
   ! D02PVF is initially called to give initial starting values to the
   ! subroutine; one has to determine the initial direction in which our
   ! first value blows up
   Step2 = INSIZE   ! Initial trial step in Y(1)
   DR2 = 1          ! direction of advance in stepping; DR is direction of blow up
   CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
   DO WHILE (DR == 0)
      XE = XE + STEP
      PY2(1:4) = PY(1:4)
      PY(1:4) = Y(1:4)
      IF (XE < XEND) THEN
         CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
      END IF
      IF (XE > 3*STEP) THEN
         RATIO = (XE-2*STEP)*PY2(1)/((XE-STEP)*PY(1))
         RATIO = RATIO - (XE-STEP)*PY(1)/(XE*Y(1))
         IF ((RATIO*RATIO) < 1E-15) THEN
            RATIO = (XE-STEP)*PY(1)/(XE*Y(1))
            IF (RATIO < 1) THEN
               IF (RATIO > -1) THEN
                  IF (Y(1) > 0) THEN
                     DR = 1
                  END IF
                  IF (Y(1) < 0) THEN
                     DR = -1
                  END IF
               END IF
            END IF
         END IF
      END IF
      IF (Y(1) > 2) THEN
         DR = 1
      ENDIF
      IF (Y(1) < -2) THEN
         DR = -1
      ENDIF
      IF (XE > XEND) THEN
         DR = 10
      ENDIF
   ENDDO
   ! Do while is ended ...
   XBLOWP = 0
   XBLOW = 0
   DO WHILE (DR /= 10)
      Y1 = Y1 - STEP2*REAL(DR2)
      !Print*, "Y(1) ", Y1
      Y(3) = 1.0
      Y(2) = 0.0
      Y(1) = Y1
      Y(4) = 0.0
      XBLOWP = XBLOW
      XBLOW = 0
      ENCO = 0
      ENCO2 = 0
      DOUT = 0
      X = 0.0000001
      HSTART = 0.0
      XE = X
      CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
      DO WHILE (DOUT == 0)
         XE = XE + STEP
         YS = Y
         PY2(1:4) = PY(1:4)
         PY(1:4) = Y(1:4)
         IF (XE < XEND) THEN
            CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
         END IF
         CALL COUNT(YS,Y,ENCO,ENCO2)
         IF (Y(1) < 0.0001) THEN
            IF (Y(1) > -0.0001) THEN
               XBLOW = XE
            ENDIF
         ENDIF
         IF (XE > 3*STEP) THEN
            RATIO = (XE-2*STEP)*PY2(1)/((XE-STEP)*PY(1))
            RATIO = RATIO - (XE-STEP)*PY(1)/(XE*Y(1))
            IF ((RATIO*RATIO) < 1E-15) THEN
               RATIO = (XE-STEP)*PY(1)/(XE*Y(1))
               IF (RATIO < 1) THEN
                  IF (RATIO > -1) THEN
                     IF (Y(1) > 0) THEN
                        DOUT = 1
                        !Print*, "eup", XBLOW
                        IF (DR /= 1) THEN
                           STEP2 = STEP2/10
                           DR = 1
                           DR2 = DR2*(-1)
                        END IF
                     END IF
                     IF (Y(1) < 0) THEN
                        DOUT = 1
                        !Print*, "edown", XBLOW
                        !Print*, "zeros", ENCO
                        IF (DR /= -1) THEN
                           STEP2 = STEP2/10
                           DR = -1
                           DR2 = DR2*(-1)
                        END IF
                     END IF
                  END IF
               END IF
            END IF
         END IF
         IF (Y(1) > 1.5) THEN
            DOUT = 1
            !Print*, "up", XBLOW
" Blow up value is ". " bound state ".5) THEN DOUT = 1 !Print*. "down " .ENCO ! Now consider the intergation to get A / B^2 !! Y(3) = 1. " Number of zeros is ". ENCO IF (DR /= 1) THEN STEP2 = STEP2/10 DR = 1 DR2 = DR2*(1) ENDIF ENDIF IF (XE > XEND) THEN DR = 10 DOUT = 1 ENDIF IF (STEP2+Y1 == Y1 ) THEN DR = 10 DOUT = 1 ENDIF ENDDO ENDDO ! obtained value for bound state now write it in file ! And look for another Print*.0 Y(1) = Y1 Y(4) = 0.0000001 HSTART = 0. XBLOW Write(10.IF (DR /= 1) THEN STEP2 = STEP2/10 DR = 1 DR2 = DR2*(1) ENDIF ENDIF IF (Y(1) < 1.XBLOW.0 XBLOWP = XBLOW XBLOW = 0 ENCO = 0 ENCO2 = 0 DOUT = 0 X = 0.Y1 PRINT*.0 XE = X INT1 = 1 INT2 = 0 112 . "zeros ".*). ENCO Print*.XBLOW !Print*.0 Y(2) = 0.Y1 .
   CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
   DO WHILE (XE < XBLOWP)
      XE = XE + STEP
      PY2(1:4) = PY(1:4)
      PY(1:4) = Y(1:4)
      IF (XE < XEND) THEN
         CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
      END IF
      INT1 = INT1 - STEP*XE*Y(1)*Y(1)
      INT2 = INT2 + STEP*XE*XE*Y(1)*Y(1)
      IF (XE > 3*STEP) THEN
         RATIO = (XE-2*STEP)*PY2(1)/((XE-STEP)*PY(1))
         RATIO = RATIO - (XE-STEP)*PY(1)/(XE*Y(1))
         IF ((RATIO*RATIO) < 1E-15) THEN
            RATIO = (XE-STEP)*PY(1)/(XE*Y(1))
            IF (RATIO < 1) THEN
               IF (RATIO > -1) THEN
                  Print*, RATIO
                  IF (Y(1) > 0) THEN
                     DR = 1
                  END IF
                  IF (Y(1) < 0) THEN
                     DR = -1
                  END IF
               END IF
            END IF
         END IF
      END IF
      IF (Y(1) > 2) THEN
         DR = 1
      ENDIF
      IF (Y(1) < -2) THEN
         DR = -1
      ENDIF
      IF (XE > XEND) THEN
         DR = 10
      ENDIF
   ENDDO
   Print*, "A is : ", INT1
   Print*, "B is : ", INT2
   Write(11,*) INT1, INT2
   X = 0.0000001
   XE = X
   INSIZE = (Y1P-Y1)/10
   Y1P = Y1
   Y1 = Y1 - 0.0000001
   Y(1) = Y1
   Y(2) = 0.0
   Y(3) = 1.0
   Y(4) = 0.0
ENDDO
END PROGRAM SN
!
! FUNCTION EVALUATOR
!
SUBROUTINE FCN(S,Y,YP)
Implicit None
Double Precision :: S, Y(4), YP(4)
YP(1) = Y(2)
YP(2) = -2.D0*Y(2)/S - Y(3)*Y(1)
YP(3) = Y(4)
YP(4) = -2.D0*Y(4)/S - Y(1)*Y(1)
RETURN
END
!
! OUTPUT subroutine
!
SUBROUTINE OUTPUT(Xsol,y)
Implicit None
Double Precision :: Xsol
Double Precision, DIMENSION(4) :: y
PRINT*, Xsol, y(1), y(3)
WRITE(2,*) Xsol, y(1), y(3)
END
!
! Subroutine COUNT to count the bumps in our curve
!
SUBROUTINE COUNT(Yold,Ynew,Number,way)
Implicit None
Double Precision, DIMENSION(4) :: Yold, Ynew
Integer :: Number, way
If (Yold(1) > 0) THEN
   If (Ynew(1) < 0) THEN
      ! we have just reached a turning point
      way = -1
      Number = Number + 1
   ENDIF
ENDIF
If (Yold(1) < 0) THEN
   If (Ynew(1) > 0) THEN
      way = 1
      Number = Number + 1
   ENDIF
ENDIF
END
Appendix B

Matlab programs

B.1 Chapter 2 Programs

B.1.1 Program for asymptotically extending the data of the bound state calculated by Runge-Kutta integration

% To work out the asymptotic solution
%
% r is the point to match to;
% nb2 is the solution being matched to at the moment;
% matching with exp(-r)/r and 1/r
%
b = 5000;
yr = interp1(nb2(:,1),nb2(:,2),r);
yr2 = interp1(nb2(:,1),nb2(:,2),r+1);
vr = interp1(nb2(:,1),nb2(:,3),r);
vr2 = interp1(nb2(:,1),nb2(:,3),r+1);
d = real(log(yr*r/(yr2*(r+1))));
s = exp(-d*r)/r;
c = yr/s;
f = (vr2*(r+1) - vr*r);
e = vr*r - r*f;
%step = nb2(2,1) - nb2(1,1);
% then work out values of the function
for i = 1:b
   tail(i,1) = nb2(1,1) + (i-1)*step;
   tail(i,2) = c*exp(-d*tail(i,1))/tail(i,1);
   tail(i,3) = e/tail(i,1) + f;
end

B.1.2 Programs to calculate stationary state by Jones et al method

function [f,evalue] = cpffsn(L,N,n)
% [f,evalue] = cpffsn(L,N,n)
%
[D,xp] = cheb(N);
dist = L/2;
cx = (xp+1)*dist;
x2 = (1+x)*(L/2);
D2 = D^2/(dist^2);
D2 = D2(2:N,2:N);
phi = 1./(1+x2);
oldphi = phi;
res = 1;
while (res > 1E-13)
   [ft,evalue] = cpesn(L,N,n,x,oldphi);
   nom = inprod(ft,ft,cx);
   ft = ft/sqrt(nom);
   g = abs(ft(2:end-1)).^2./cx(2:end-1);
   pu = D2\g;
   pu = [0; pu; 0];
   newphi = pu(1:N)./cx(1:N);
   newphi(N+1) = newphi(N);
   res = max(abs(newphi-oldphi));
   oldphi = newphi;
end
fx = interp1(cx,ft,x);
%sq = inprod2(fx,fx,x);
%sq2 = inprod2(f,f,x);
%sq = sq*sq2;
%f = interp1(sq*x,fx/sq,x);
f = fx;

function [f,evalue] = cpfsn(L,N,n,x)
% [f,evalue] = cpfsn(L,N,n,x)
%
% works out the nth eigenvector of the Schrodinger equation with
% self-potential; initial guess of 1/(r+1), with L the length of the
% interval
[D,xp] = cheb(N);

function [f,evalue] = cpesn(L,N,n,x,phi)
% [f,evalue] = cpesn(L,N,n,x,phi)
%
% solves the bound states of the Schrodinger-Newton equation
% L    length of interval
% N    number of cheb points
% n    the nth bound state
% x    the grid points wrt which the function is required
% phi  initial guess for potential
% potential of phi wrt x with the usual boundary conditions;
% interpolates to x
cx(1) = 0;
cx(N+1) = L;
dist = (cx(N+1)-cx(1))/2;
for i = 1:(N+1)
   cx(i) = (cos((i-1)*pi/(N)))*dist + dist + cx(1);
end
[V,eigs] = cpsn(L,N,phi,x);
E = diag(eigs);
E0 = 0*E;
for a = 1:n
   [e,i] = max(E-E0);
   E0(i) = NaN;
end
ev = [0; V(:,i); 0];
f = interp1(cx,ev,x);
evalue = e;

function [V,eigs] = cpsn(L,N,phi,x)
% [V,eigs] = cpsn(L,N,phi,x)
% using cheb differential operators
%
% N = number of cheb points
% L = length of interval
%
cx(1) = 0;
cx(N+1) = L;
dist = (cx(N+1)-cx(1))/2;
for i = 1:(N+1)
   cx(i) = (cos((i-1)*pi/(N)))*dist + dist + cx(1);
end
I = diag(ones(N-1,1));
ZE = zeros(N,N);
%
% working out the cheb diff matrix
%
D = cheb(N);
D2W = D^2/dist^2;
D2 = D2W(2:N,2:N);
%
% Matrices C1 and C2 to solve
%    D2 R + (1/r) R = E R
%
cphi = interp1(x,phi,cx(2:N));
C1 = D2 + diag(cphi);
C2 = I;
% Compute eigenvalues
[V,eigs] = eig(C1,C2);

% cheb.m - (N+1)*(N+1) chebyshev spectral differentiation
% matrix via explicit formulas
function [D,x] = cheb(N)
if N==0, D=0; x=1; return, end
ii = (0:N)';
x = cos(pi*ii/N);
D = zeros(N+1,N+1);
ci = [2; ones(N-1,1); 2].*(-1).^ii;
for j = 0:N
   % compute one column at a time
   cj = 1;
   if j==0 | j==N
      cj = 2;
   end
   denom = cj*(x(ii+1)-x(j+1));
   denom(j+1) = 1;
   D(ii+1,j+1) = ci./denom;
   if j > 0 & j < N
      D(j+1,j+1) = -.5*x(j+1)/(1-x(j+1)^2);
   end
end
D(1,1) = (2*N^2+1)/6;
D(N+1,N+1) = -(2*N^2+1)/6;

B.2 Chapter 4 Programs

B.2.1 Program for solving the Schrödinger equation

% schrodinger equation with potential from the bound state
% To create function of nb1 with cheb point spacing cos(i*pi/2k)
% N = number of points
clear bx1 bv1 br1
dist = (nb1(l,1)-nb1(1,1))/2;
for i = 1:(N+1)
   bx1(i) = (cos((i-1)*pi/(N)))*dist + dist + nb1(1,1);
   bv1(i) = interp1(nb1(:,1),nb1(:,2),bx1(i));
   br1(i) = interp1(nb1(:,1),nb1(:,3),bx1(i));
   br2(i) = interp1(nb2(:,1),nb2(:,2),bx1(i));
   br3(i) = interp1(nb3(:,1),nb3(:,2),bx1(i));
end
% working out the cheb diff matrix
D = cheb(N);
D2 = D^2/dist^2;
D2 = D2(2:N,2:N);
D = D(1:N,1:N);
I = diag(ones(N-1,1));
V0 = diag(bv1(2:N)+Estate);
%
% Matrices C1 and C2 for the schrodinger equation
%    D2 + V0 = e
%
C1 = D2 + V0;
C2 = I;
% Compute eigenvalues
[V,eigs] = eig(C1,C2);
hold off
plot(diag(eigs),'r*');
grid;
pause;
E = diag(eigs);
[y,i] = min(E)
subplot(3,2,1);
plot(bx1(2:N),V(1:N-1,i)./bx1(2:N)')
title('first eigenvector ');
grid on;
f = V(1:N-1,i).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
int = dist*(D\f);
int(1)
ratio = (V(1,i)/bx1(2))/br1(2);
subplot(3,2,2);
error2 = interp1(bx1(2:N),br1(2:N)'*ratio,bx1(2:N));
error = error2' - V(1:N-1,i)./bx1(2:N)';
plot(bx1(2:N),error)
title('error to nb1')
E(i) = max(E)+1;
[y2,i2] = min(E)
subplot(3,2,3);
plot(bx1(2:N),V(1:N-1,i2)./bx1(2:N)')
title('second eigenvector ');
grid on;
f = V(1:N-1,i2).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
int = dist*(D\f);
int(1)
ratio = (V(1,i2)/bx1(2))/br2(2);
ratio2 = ratio.^2/int(1);
ratio2 = ratio2^(1/3);
subplot(3,2,4);
error2 = interp1(nb2(:,1),nb2(:,2)*ratio,ratio2*bx1(2:N));
error = error2' - V(1:N-1,i2)./bx1(2:N)';
plot(bx1(2:N),error)
title('error to nb2')
E(i2) = max(E)+1;
[y2,i3] = min(E)
subplot(3,2,5);
plot(bx1(2:N),V(1:N-1,i3)./bx1(2:N)')
title('third eigenvector is:');
grid on;
f = V(1:N-1,i3).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
int = dist*(D\f);
int(1)
ratio = (V(1,i3)/bx1(2))/br3(2);
ratio2 = ratio.^2/int(1);
ratio2 = ratio2^(1/3);
subplot(3,2,6);
error2 = interp1(nb3(:,1),nb3(:,2)*ratio,ratio2*bx1(2:N));
error = error2' - V(1:N-1,i3)./bx1(2:N)';
plot(bx1(2:N),error)
title('error to nb3')

B.2.2 Program for solving the O(ε) perturbation problem

% first order perturbation
% To create function of nb1 with cheb point spacing cos(i*pi/2k)
% N = number of points
clear bx1 bv1 br1
dist = (nb1(l,1)-nb1(1,1))/2;
for i = 1:(N+1)
   bx1(i) = (cos((i-1)*pi/(N)))*dist + dist + nb1(1,1);
   bv1(i) = interp1(nb1(:,1),nb1(:,2),bx1(i));
   br1(i) = interp1(nb1(:,1),nb1(:,3),bx1(i));
end
% working out the cheb diff matrix
D = cheb(N);
D2W = D^2/dist^2;
D2 = D2W(2:N,2:N);
ZE = zeros(N-1,N-1);
I = diag(ones(N-1,1));
V0 = diag(bv1(2:N));
R0 = diag(br1(2:N));
%
% Matrices C1 and C2 for the S_N equations, subject to the equations
%    2 R0 A + D2 W = 0
%    D2 B + V0 B = e A
%    D2 A + V0 A + R0 W = e B
% with boundary conditions
%    A = 0, B = 0, W = 0 at the endpoints
%    DA = 0 at infinity
%    DB = 0 at infinity
%
C1 = [2*R0, ZE, D2; ZE, D2+V0, ZE; D2+V0, ZE, R0];
C2 = [ZE, ZE, ZE; I, ZE, ZE; ZE, I, ZE];
% Compute eigenvalues
[V,eigs] = eig(C1,C2);
j = sqrt(-1);
E = j*diag(eigs);
Enew = sort(E);
hold off
plot(Enew(1:N),'r*');
grid
title('Eigenvalues of the perturbed SN equation ')

B.2.3 Program for the calculation of (4.10)

% abint.m - program to work out the integral of A*B
%
D = cheb(N);
D = D(1:N,1:N);
f = abs(V(1:N-1,i)).^2;
f(2:N) = f(1:N-1);
f(1) = 0;
A = dist*(D\f);
g = abs(V(N:2*N-2,i)).^2;
g(2:N) = g(1:N-1);
g(1) = 0;
B = dist*(D\g);
f = V(1:N-1,i);
f = conj(f);
g = V(N:2*N-2,i);
fg = f.*g;
fg(2:N) = fg(1:N-1);
fg(1) = 0;
int = dist*(D\fg);
ratio = sqrt(A(1))*sqrt(B(1))\abs(int(1))

B.2.4 Program for performing the Runge-Kutta integration on the O(ε) perturbation problem

% see the solution of the first order perturbation equations
% initial data
%
global nb1;
global e;
% e = eigenvalue, L = length of the interval
F = 'yfo';
y0 = [0 1 0 1 0 1]';
% y0(2) = initial value for y0(2)
% y0(4) = initial value for y0(4)
% y0(6) = initial value for y0(6)
X0 = 0.000001;
Xfinal = L;
Xspan = [X0,Xfinal];
[xp,yp] = ode45(F,Xspan,y0);
subplot(3,1,1)
plot(xp,imag(yp(:,1))./xp)
title('A/r')
subplot(3,1,2)
plot(xp,imag(yp(:,3))./xp)
title('B/r')
subplot(3,1,3)
plot(xp,imag(yp(:,5))./xp)
title('W/r')

function Yprime = yfo(X,Y)
% this is the function for the first order pert equations,
% where V, R are the values of the potential and wave function
% related to the zeroth order perturbation.
% e is the eigenvalue ( e = i*lambda )
global e
Yprime(1) = Y(2);
Yprime(2) = V(X)*Y(1)+e*Y(3);
Yprime(3) = Y(4);
Yprime(4) = V(X)*Y(3)+e*Y(1)+R(X)*Y(5);
Yprime(5) = Y(6);
Yprime(6) = 2*R(X)*Y(3);
Yprime = Yprime';

B.3 Chapter 6 Programs

B.3.1 Evolution program

%snevc2.m
%Iteration scheme for the schrodinger-newton equations
%using crank nicolson + cheb.
% Modified 13/3/2001 - for perturbation about ground state.
%clears used data formats
hold off
clear i; clear g; clear Eu; clear norm;
clear psi; clear psi1; clear fgp;
N = 70;
R = 350;
dist = 0.5*R;
dt = 0.5;
[D,x] = cheb(N);
D2 = D^2;
D2 = D2(2:N,2:N);
D2 = D2/dist^2;
x = x(2:N);
x = 0.5*((1-x)*R);
% gaussian wavepacket parameters
sigma = 15;
a = 50;
v = 0.3;
t = 0;
brack = 1/(sigma^2);
rbrack = sqrt(brack);
rsig = sqrt(sigma);
av = a + v*t;
vs = (v^2*t)/4;
wave  = rbrack*rsig*exp(-0.5*brack*(x-av).^2 + (0.5*i*v*x) - i*vs);
wave2 = rbrack*rsig*exp(-0.5*brack*(x+av).^2 - (0.5*i*v*x) - i*vs);
pert = wave - wave2;
%calculate stationary solution
u = cpfsn(R,N,1,x);   %+0.01*pert;
u = u(2:N);
uold = u;
fwe = u;
% sponge factors
e = 2*exp(0.2*(x-x(end)));
%e = zeros(N-1,1);
o = zeros(N-1,1);
one = 1;
nu = dt;
count = 1;
Eu(count) = abs(u(5));
fgp(count) = abs(u(1));
fgp2(count) = abs(u(15));
% solve the potential equation
rho = abs(u(1:end)).^2./x(1:end);
pu = D2\rho;
phi = pu./x;
% start of the iteration scheme
for zi2 = 1:100
   for zi = 1:1000
      count = count + 1;
      u = uold;
      phi1 = phi;   % initial guess
      Res = 9E9;
      while abs(Res) > 3E-8
         for z = 1:N-1
            g(z) = (i+e(z))*nu/2;
            psi(z) = one*(i*dt/2)*(phi(z)+o(z));
            psi1(z) = one*(i*dt/2)*(phi1(z)+o(z));
         end
         maindaig = ones(N-1,1)+psi1';
         b = diag(maindaig) + diag(g)*D2;
         d = (ones(N-1,1)-psi').*u - g'.*(D2*u);
         u = b\d;
         % this was the solving of the P.D.E.
         % work out potential for u^n+1
         rho = abs(u(1:end)).^2./x(1:end);
         pu = D2\rho;
         phi1 = pu./x;
         % then recalculation of the result, to check:
         % now check the L infinity norm of the residual
         u = uold;
         for z = 1:N-1
            psi1(z) = one*(i*dt/2)*(phi1(z)+o(z));
         end
         maindaig = ones(N-1,1)+psi1';
         b = diag(maindaig) + diag(g)*D2;
         d = (ones(N-1,1)-psi').*u - g'.*(D2*u);
         unew = b\d;
         Res = max(abs(u-unew));
         %Res = max(abs(phi1-phi))
         % end of checking the L infinity norm
         uold = unew;
         u = unew;
      end
      phi = phi1;
      t = t + dt;
      Eu(count) = abs(u(5));
      fgp(count) = abs(u(1));
      fgp2(count) = abs(u(30));
   end
   norm(zi2) = inprod(u,u,x);
   pe(zi2) = pesn(u,phi,x,D,N);
   %fwe(:,end+1) = u;
   %save fp.mat fgp
   %save Ep.mat Eu
   %save nn.mat norm
end
save fp.mat fgp fgp2 %Eu
save nn.mat fwe norm
%tone = t*ones(N-1,1);
%plot3(tone,x,fwe(:,end),'k');
%plot(nb1(:,1),abs(nb1(:,2)),'r')
%plot(x,abs(u),'b');
%plot(x,angle(u),'g');
%drawnow
%hold on

B.3.2 Fourier transformation program

N = max(size(f))/2;
n = 2*N;
dt = 0.5;
Fn = fft(f);
Fn = Fn/n;                                     % scale results by fft length
Fn = [conj(Fn(N+1)) Fn(N+2:end) Fn(1:N+1)];    % rearrange frequency points
Fn(N+1) = 0;
idx = -N:N;
freq = pi/(N*dt);
idxs = idx*freq;
plot(log(idxs(N+1:2*N+1)),log(abs(Fn(N+1:2*N+1))))
xlabel('log(frequency)');
ylabel('log(fourier transform)');

B.4 Chapter 7 Programs

B.4.1 Programs for solving the axisymmetric stationary state problem

function [f,evalue,lres,count] = cp2fsn(dis,N,N2,n,l,aphi)
% [f,evalue] = cp2fsn(dis,N,N2,n,l,aphi)
%
warning off
[D,x] = cheb(N);
% x is the radius points
x2 = dis*(x+1);
x2 = x2(2:N);
D = D/dis;
D2 = D^2;
[E,y] = cheb(N2);
dis2 = pi/2;
y2 = dis2*(y+1);
E = E/dis2;
E2 = E^2;
fac1 = diag(1./x2.^2);
fac2 = diag((cot(y2+eps)+cot(y2-eps))/2);
D = D(2:N,2:N);
D2 = D2(2:N,2:N);
I = eye(N2+1);
% the Laplacian for the axially symmetric case
L = kron(I,D2) + kron(E2,fac1) + kron(fac2*E,fac1);
% initial guess for potential
for i = 1:N2+1
   for j = 1:N-1
      phi((N-1)*(i-1)+j) = 1./(1000*x2(j)+1);
   end
end
aphi = phi;
newphi = phi;
oldphi = newphi;
res = 1;
count = 0;
mener = 0;
mang = 0;
% start of the iteration
while (res > 1E-4)
   count = count + 1;
   % get nth evector corresponding to the potential
   [ft,evalue,mener,mang] = cp2esnea(dis,N,N2,n,newphi,l);
   norms
   ft = ft/nom;
   % next aim to work out the potential
   clear g;
   for i = 1:N2+1
      for j = 1:N-1
         g((N-1)*(i-1)+j) = abs(ft((N-1)*(i-1)+j))^2/x2(j);
      end
   end
   g = g';
   pu = L\g;
   for i = 1:N2+1
      for j = 1:N-1
         nwphi((N-1)*(i-1)+j) = pu((N-1)*(i-1)+j)/x2(j);
      end
   end
   for i = 1:N2+1
      for j = 1:N-1
         newphi((N-1)*(i-1)+j) = 0.5*(nwphi((N-1)*(i-1)+j)+nwphi((N-1)*(N2+1-i)+j));
      end
   end
   res2 = abs(max(newphi-nwphi));
   res = abs(max(newphi-oldphi))/abs(max(newphi))
   oldphi = newphi;
   if count > 50
      lres = res;
      res = 0;
   end
end
lres = res;
f = ft;
aphi = newphi;

function [f,evalue,ener,ang] = cp2esnea(dis,N,N2,n,phi,l)
%
%
[V,eigs] = cp2sn2(dis,N,N2,phi);
En = diag(eigs);
[e,ii] = sort(real(En));
limit = floor(max(ii)/4);
vclose = 10E8;
found = 0;
W = 0;
b = 0;
for as = 1:limit
   data = V(:,ii(as));
   norms2
   if (nom > 0.04)
      data = data/nom;
      [n2,l2] = fnl(data,N,N2);
      close = (pener-ener)^2 + 1E-4*(pang-ang)^2 + 0.1*((n2-n)^2 + 10*(l2-l)^2);
      if (close < vclose)
         vclose = close;
         b = as;
      end
   end
end
while (W == 0)
   b = b + 1;
   data = V(:,ii(b));
   norms2
   if (nom > 0.04)
      data = data/nom;
      [n2,l2] = fnl(data,N,N2);
      close = (pener-ener)^2 + 1E-4*(pang-ang)^2 + 0.1*((n2-n)^2 + 10*(l2-l)^2);
      if (close < vclose*1.001)
         W = 1;
      end
   end
end
%if (W == 1)
%   W = input('correct state');
%end
evalue = En(ii(b));
f = V(:,ii(b));
%axpl
%drawnow

function [n,l] = fnl(data,N,N2)
% find the n and l number of the given grid,
% where u is r*phi
for i = 1:N2+1
   for j = 1:N-1
      u(j,i) = data((N-1)*(i-1)+j);
   end
end
clear no noo;
i = 0;
j = 0;
for i = 1:N-1
   j = j + 1;
   no(i) = cz2sn(u(i,1:N2+1));
   if (no(i) == 1)
      j = j - 1;
   end
   if (j > 0)
      noo(j) = no(i);
   end
end
l = mean(no);
clear no2 noo2;
i = 0;
j = 0;
for j = 1:N2+1
   i = i + 1;
   no2(j) = cz2sn(u(1:N-1,j));
   if (no2(j) == 1)
      i = i - 1;
   end
%l = round(l).I2) + kron(fac2*E.^2). x2= x2(2:N). [D./x2. dis2 = pi/2. %if (i > 0) %n = mean(noo2). end end n = mean(no2). [E. %n = round(n). %n = ceil(n). %Laplacian kinda of ! J = kron(E2. L = +kron(I. D2 = D2(2:N. D = D/dis. ang = 0. E2 = E^2.end if (i > 0) noo2(i) = no2(j). D2 = D^2. %for i = 1:N2+1 %for j = 1:N1 %data2((N1)*(i1)+j) = data((N1)*(i1)+j)/x2(j). %data2 = data. I = eye(N2+1).2:N). %end 130 .x] = cheb(N). D = D(2:N.2:N). I2 = eye(N1).fac1)+kron(fac2*E. % to calculate the norm of a % axially sysmetric data set! % % Jab = r^2sin \alpha % clear ener clear ang ener = 0. fac1 = diag(1. E = E/dis2.I2).fac1). %end %l = ceil(l). y2 = pi/2*(y+1).D2)+kron(E2.y] = cheb(N2). x2= dis*(x+1). fac2 = diag((cot(y2+eps)+cot(y2eps))/2).
% a = max(size(W)). posne = 2.jab*conj(data((N1)*(i1)+j))*nab2((N1)*(i1)+j)*darea.%end nab2 = L*data. [E. % cz2sn. ener = ener . end end norms2 % to calculate the norm of a % axially sysmetric data set! % % Jab = r^2sin \alpha % clear nom nom2 % Setup the grid nom = 0. for i = 3:N2+1 for j = 2:N1 darea = (x2(j)x2(j1))*(y2(i)y2(i1)). % x is the radius points.y] = cheb(N2). eps2 = 1E4*eps. x2= dis*(x+1). nom = nom + jab*abs(data((N1)*(i1)+j))^2*darea. end end nom = nom/2.jab*conj(data((N1)*(i1)+j))*aab2((N1)*(i1)+j)*darea. aab2 = J*data. if (W(1) > eps2) posne = 1. jab2 = sin(y2(i)). ang = ang . count = 0.m % function to work out number of crossing given matrix array.. [D.x] = cheb(N). function number = cz2sn(W). for i = 2:N2+1 for j = 2:N1 darea = (x2(j)x2(j1))*(y2(i)y2(i1)). y2 = pi/2*(y+1). x2= x2(2:N).. ener = ener . nom = sqrt(nom). 131 . jab = sin(y2(i)).jab*abs(data((N1)*(i1)+j))^2*darea*phi((N1)*(i1)+j). nom2 = 0. jab = sin(y2(i)).
eigs] = cp2sn2(dis. count = count+1.phi).\alpha are the variable in which we solve for at the % present both has N points ! % boundary condition are different on the \alpha ! % i. end end if (posne == 0) if W(i) > eps2 posne = 1. count = count+1. 132 .eigs] = cp2sn2(dis.end if (W(1) < eps2) posne = 0. end end if (posne == 2) if W(i) > eps2 posne = 1. % r.phi). end end end if posne == 2 count = 1. dis2 = pi/2.e we expect phi to be N1*N1 vector ! % [E. % [V. x2= x2(2:N). x2= dis*(x+1). E = E/dis2.N2. [D. y2 = pi/2*(y+1). D = D/dis.N. % x is the radius points. function [V. end for i = 2:a if (posne == 1) if W(i) < eps2 posne = 0. end number = count. % % program to solve the stationary schrodinger in axially symmetric case.x] = cheb(N).y] = cheb(N2). end if W(i) < eps2 posne = 0.N.N2.
fac1). x2 = dis*(x+1).D2)+kron(E2.4.cc) = x3(cc)*cos(y3(cc2)). fac1 = diag(1. L = +kron(I.cc) = f((cc21)*(N1)+cc). grav = 1. clear xx yy vv % grid setup % N = 25. end end dis = 50.2 Evolution programs for the axisymmetric case % nsax2. xx2(cc2. [EE. x3 = x3(2:N). % [V. E2 = E^2.cc) = x3(cc)*sin(y3(cc2)). clear DD EE for cc = 1:N1 for cc2 = 1:N2+1 f2(cc2. ADI(LOD) wave equation % that the wave equation done in chebyshev differention % matrix in 2d clear xx2 yy2 load dpole [DD. N2 = 20.fac1) + kron(fac2*E. x3 = dis*(x3+1)./x2. 133 . B.x3] =cheb(N).2:N). D = D(2:N.y] = cheb(N2).m SchrodingerNewton equation % on an axially symmetric grid % full nonlinear method using chebyshev. %Laplacian kinda of ! L = L + diag(phi).2:N). D2 = D2(2:N.x] = cheb(N). I = eye(N2+1).cc) = aphi((cc21)*(N1)+cc). fred(cc2.D2 = D^2. y3 = pi/2*(y3+1).y3] = cheb(N2). fac2 = diag((cot(y2eps)+cot(y2+eps))/2). dis2 = pi/2. y2 = pi/2*(y+1).eigs] = eig(L). [E. [D. yy2(cc2.^2).
*exp(0.^2) . %vv = 0.01*((xx). % time step ! dt = 0.2*dis)).j) = x2(j)*sin(y2(i)). L2 = D2.9*sqrt(xx. % wavefunction v = 0*pot.e no sponge factors %% calculate of differentiation matrices D2 = D^2.yy for i = 1:N2+1 for j = 1:N1 xx(i. pot = phi. phi = 0*vv.f2. % plotting the function on the screen at intervals ! plotgap = round((2000/10)/dt). L1 = E2 + fac2*E. e = 1*exp(0.3*(sqrt(xx.1*sqrt(xx.v.^2+(yy). vvnew = 0*vv.01*((xx+30).yy). % create xx.^2+yy.N. time =0. fac2(end. I2 = eye(N1).^2). 134 . end end % initial data % %vv = interp2(xx2.2:N). yy(i.^2). pic = vv. DD = D(2:N.^2)). E2 = E^2. D2 = D2(2:N. %vva = 0. D = D/dis.x2 = x2(2:N).u. % calculation of the potential %phi = 0*pot. vvold = vv.*exp(0.N2).yy2. E = E/dis2. fac1 = diag(x2).1) = 1E10.^2+(yy).2:N).^2)).xx.5.^2+yy.end) = 1E10.^2+yy. % sponge factors %e = 0*vv. vv = f2. fac3 = diag(1. I = eye(N2+1). fac2(1. % calculate the starting potential % u = grav*vv. %vv = vv + vva. fac2 = diag((cot(y2+eps)+cot(y2eps))/2). dt = (2000/10)/plotgap. % initial guess of new pot for potadi phi = padi(dis.j) = x2(j)*cos(y2(i)). %i./x2).
yy. % for a = 1:N2+1 % urr(a. for n = 0:30*plotgap t = n*dt.02:dis). %mesh(xxx.:.. % colormap([0 0 0]) %mesh(xx.^2+yy.a).phi) time(end+1) =t.yy.02:dis.% evolution loop begin here .yy.ADI(LOD) .yyy.’cubic’).N1). % calculate of the partials urr = zeros(N2+1.. for a = 1:N1 uaa(1:N2+1.abs(uaa)) %axis([2*dis 2*dis 0 2*dis 0.2 0.^2)). pic(:.imag(vv)) %hold on %surf(xx.2.a) = (1/x2(a)^2)*L1*vv(1:N2+1.yyy.end+1) =vv.abs(vvv)). %cn2d %cnorm end Res = 1.plotgap) == 0.n/plotgap+1).. d = (sv2 + 0.’..xxx. %shading interp %axis([1 1 1 1 1 1]).1:N1) + e(a.yyy] = meshgrid(dis:dis*.abs(vv). 135 . %vvv = interp2(xx.1:N1).N1). % .1:N1)../sqrt(xx.1:N1).% vv2 = 0*vv. %surf(xx. ds = sqrt(1)*uaa(a.abs(vv)) %. % end uaa = zeros(N2+1.dis:dis*.1:N1)./sqrt(xx.vv.Plot results at multiples of plotgap %subplot(2. drawnow % calculate of the norm. %view(that).’).yy.^2+yy. %hold off %mesh(xx.1:N1) = (L2*vv(a.yy. %surf(xx.’.^2)). npot = phi. %[xxx. if rem(n.5*dt*ds).yy. while (abs(Res) > 1E4) % set up for the iteration .2]) %title([’t = ’ num2str(t)]).*uaa(a. pot = phi. end % . % (1+dr)Un2 = (1da)Un1 for a = 1:N2+1 sv2 = vv(a.
b = eye(N1) .a))*L1.0.1:N1))*L2.uaa % for a = 1:N1 % uaa(1:N2+1.1:N1). %inaxprod(abs(vv). vv = vvnew.v.1) save axtest pic time N N2 xx yy dis end 136 .5*dt*bpm2. end % end part3 % % now the calculate of the potential of the new potential % due to the new wavefuntion u = grav*vvnew.’.5*dt*ds.N2. vvnew(1:N2+1. d = sv + 0. npot = n2pot.bs2 = sqrt(1)*L2 + diag(e(a.5*dt*bs2.0. end % end part1 % % caculate urr.u.N. vv3(1:N2+1. bs2 = (1/x2(a)^2)*sqrt(1)*L1 + (1/x2(a)^2)*diag(e(1:N2+1. for a = 1:N1 sv = vv2(1:N2+1.a) + 0. end % end part2 % % (1V)Un4 = (1+V)Un3 vvnew = 0*vv3. %pot = npot. % calculation of the potential %n2pot = 0*pot.5*dt*bs2.N2).*urr(1:N2+1.1:N1) = (b\d).5*pot.N.a)). b = eye(N2+1) . % +0.a) + e(1:N2+1. bpm2 = sqrt(1)*diag(npot(1:N2+1.a) = b\d.a).a). % end for a = 1:N2+1 urr(a. end phi = n2pot.’).a). % initial guess of new pot for potadi n2pot = padi(dis.a) = (1/x2(a)^2)*L1*vv2(1:N2+1. ds = sqrt(1)*urr(1:N2+1. end % (1+da)Un3 = (1dr)Un2 vv3 = 0*vv2.a).’.1:N1) = (L2*vv2(a.5*sqrt(1)*dt*vv3(1:N2+1. vv2(a. for a = 1:N1 d = vv3(1:N2+1. b = eye(N2+1) .a) = b\d.0.a).dis.*pot(1:N2+1. Res = norm(n2potnpot).a).abs(vv). % wavefunction v = 0*pot.
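The ADI(LOD) update in nsax2.m factorises one Crank-Nicolson step into one-dimensional implicit solves, one per coordinate direction plus one for the potential factor. The norm-preserving character of each factor is easiest to see in a single space dimension. The sketch below is not the thesis code: it is a Python/NumPy analogue on a uniform finite-difference grid with zero Dirichlet ends, and the grid size, time step and (zero) potential are illustrative assumptions.

```python
import numpy as np

def crank_nicolson_step(psi, D2, V, dt):
    # One Crank-Nicolson step for i*psi_t = -psi_xx + V*psi,
    # written as psi_t = M*psi with M = 1j*(D2 - diag(V)).
    n = psi.size
    M = 1j * (D2 - np.diag(V))
    A = np.eye(n) - 0.5 * dt * M   # implicit half of the step
    B = np.eye(n) + 0.5 * dt * M   # explicit half of the step
    return np.linalg.solve(A, B @ psi)

# uniform-grid second-difference matrix with zero Dirichlet ends
n, h, dt = 64, 0.1, 0.05
off = np.ones(n - 1)
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2
V = np.zeros(n)                        # free particle, for the demo only
x = h * np.arange(n)
psi = np.exp(-(x - x.mean())**2).astype(complex)
psi /= np.linalg.norm(psi)

for _ in range(100):
    psi = crank_nicolson_step(psi, D2, V, dt)

print(np.linalg.norm(psi))  # stays at 1 up to rounding
```

Because M is anti-Hermitian (D2 and diag(V) are real symmetric), the Crank-Nicolson map is a Cayley transform and hence unitary; this is the property that keeps the probability norm conserved in the evolution runs, up to the deliberate damping introduced by the sponge factors.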
% padi.m
% Aim: to use Peaceman-Rachford ADI iteration
% to find the solution to the poisson equation in 2D, with the
% axially symmetric case.
%
% u is the wavefunction!
% v is the potential!
function phi = padi(dis,u,v,N,N2)
% grid setup!
% x is the radius points
[D,x] = cheb(N);
[E,y] = cheb(N2);
dis2 = pi/2;
D = D/dis;
E = E/dis2;
x2 = dis*(x+1);
x2 = x2(2:N);
y2 = pi/2*(y+1);
% calculation of the differentiation matrices
D2 = D^2;
D2 = D2(2:N,2:N);
D = D(2:N,2:N);
E2 = E^2;
I = eye(N2+1);
I2 = eye(N-1);
fac1 = diag(1./x2);
fac2 = diag((cot(y2+eps)+cot(y2-eps))/2);
fac2(1,1) = 1E5;
fac2(end,end) = 1E5;
L1 = E2 + fac2*E;
L2 = D2;
rho = 0.017;
rho2 = 0.017;
% readjust density function!
uu = abs(u);
umod = uu.^2*fac1;
% start of the iteration
Res = 1E12;
wrr = zeros(N2+1,N-1);
waa = zeros(N2+1,N-1);
while (abs(Res) > 1E-3)
  % calculate waa
  %for a = 1:N2+1
  %  wrr(a,1:N-1) = (L2*(v(a,1:N-1).')).';
  %end
  for a = 1:N-1
    waa(1:N2+1,a) = (1/x2(a)^2)*L1*v(1:N2+1,a);
  end
  % part1 of ADI: (rho - L2)S = rho*Wn + waa - umod
  A = rho*I2 - L2;
  vv = 0*v;
  for a = 1:N2+1
    sv2 = rho*v(a,1:N-1) + waa(a,1:N-1) - umod(a,1:N-1);
    %abs(uu(a,1:N-1)).^2
    vv(a,1:N-1) = (A\(sv2.')).';
  end
  % end part1 of ADI
  % calculate wrr
  for a = 1:N2+1
    wrr(a,1:N-1) = (L2*(vv(a,1:N-1).')).';
  end
  % part2 of ADI: (rho2 - L1/x2^2)Wn+1 = rho2*S + wrr - umod
  newv = 0*v;
  for a = 1:N-1
    sv = rho2*vv(1:N2+1,a) + wrr(1:N2+1,a) - umod(1:N2+1,a);
    %abs(uu(1:N2+1,a)).^2
    A = rho2*I - L1*(1/x2(a)^2);
    newv(1:N2+1,a) = A\sv;
  end
  % end of ADI
  test = abs(newv - v);
  Res = norm(test(1:N2+1,1:N-1));
  v = newv;
end
padires = Res;
fac = diag(1./x2);
phi = v*fac;

B.5 Chapter 8 Programs

B.5.1 Evolution programs for the two dimensional SN equations

% ns2d.m Schrodinger-Newton equation
% on a 2d grid
% full nonlinear method using chebyshev, ADI(LOD) wave equation,
% i.e. the wave equation done with the chebyshev differentiation
% matrix in 2d
dis = 40;
grav = 1;
speed = 0.5;
load twin
[DD,x2] = cheb(N2);
x2 = dis2*x2(2:N2);
y2 = x2';
[xx2,yy2] = meshgrid(x2,y2);
for cc = 1:N2-1
  for cc2 = 1:N2-1
    f2(cc,cc2) = f((cc-1)*(N2-1)+cc2);
  end
end
%f3 = 0.5*(f2+f2(N2-1:-1:1,:));

% grid setup
N = 36;
[D,x] = cheb(N);
D = D*1/dis;
x = dis*x;
y = x';
[xx,yy] = meshgrid(x,y);

% initial data
vv = interp2(xx2,yy2,f2,xx,yy);
for cc = 1:N+1
  for cc2 = 1:N+1
    if isnan(vv(cc,cc2)) == 1
      vv(cc,cc2) = 0;
    end
  end
end
%vv = 0.07*exp(-0.01*((xx+10).^2+yy.^2));
%vva = 0.07*exp(-0.01*((xx-15).^2+(yy+15).^2)).*exp(sqrt(-1)*speed*(yy+xx)/sqrt(2));
%vv = vv + vva;
%vva = 0.07*exp(-0.01*((xx+35).^2+(yy-35).^2)).*exp(sqrt(-1)*speed*yy);
%vv = vv+vva;
% to add initial velocity, perhaps
%vv = 0.05*exp(-0.02*((xx).^2+yy.^2)).*exp(sqrt(-1)*0.1*xx);
%vv = 0.1*exp(-0.01*((xx).^2+(yy).^2)).*exp(sqrt(-1)*0.5*sqrt(xx.^2+yy.^2));
vvold = vv;
vvnew = 0*vv;
pic = vv;

% sponge factors
%e = 0*vv;
e = min(1,0.07*exp(0.5*(sqrt(xx.^2+yy.^2)-(dis-10))));

% time step!
dt = 1;
% plotting the function on the screen at intervals!
plotgap = round(400/(2*dt));
dt = 400/(2*plotgap);

%% calculation of differentiation matrices
D2 = D^2;
D2 = D2(2:N,2:N);

% calculate the starting potential
u = grav*vv;
phi = 0*vv;
pot = phi;
v = 0*pot;
% initial guess of new pot for potadi
phi = potadi(dis,u,v,N);
pot = phi;
npot = phi;
time = 0;

% evolution loop begins here
for n = 0:50*plotgap
  t = n*dt;
  % plot results at multiples of plotgap
  if rem(n,plotgap) == 0
    pic(:,:,n/plotgap+1) = vv;
    time(end+1) = t;
    %subplot(2,2,n/plotgap+1);
    mesh(xx,yy,abs(vv))
    %mesh(xx,yy,imag(vv));
    %[xxx,yyy] = meshgrid(-dis:dis*.02:dis,-dis:dis*.02:dis);
    %vvv = interp2(xx,yy,vv,xxx,yyy,'cubic');
    %mesh(xxx,yyy,abs(vvv));
    %axis([-dis dis -dis dis -0.03 0.03])
    %title(['t = ' num2str(t)]);
    %colormap([0 0 0])
    drawnow
    % calculate of the norm
    %cn2d
    %cnorm
  end
  Res = 1;
  while (abs(Res) > 1E-3)
    % set up for the iteration
    pot = phi;
    npot = phi;
    % calculate uxx, uyy
    uyy = zeros(N+1,N+1);
    uxx = zeros(N+1,N+1);
    for a = 2:N
      uyy(a,2:N) = (D2*(vv(a,2:N).')).';
    end
    for a = 2:N
      uxx(2:N,a) = D2*vv(2:N,a);
    end
    % (1+dy)Un2 = (1-dx)Un1
    for a = 2:N
      sv2 = vv(a,2:N);
      ds = sqrt(-1)*uxx(a,2:N) + e(a,2:N).*uxx(a,2:N);
      d = (sv2 + 0.5*dt*ds);
      bs2 = sqrt(-1)*D2 + diag(e(a,2:N))*D2;
      b = eye(N-1) - 0.5*dt*bs2;
      vv2(a,2:N) = (b\(d.')).';
    end
    % end part1
    % calculate uxx, uyy
    for a = 2:N
      uxx(2:N,a) = D2*vv2(2:N,a);
    end
    for a = 2:N
      uyy(a,2:N) = (D2*(vv2(a,2:N).')).';
    end
    % (1+dx)Un3 = (1-dy)Un2
    vv3 = 0*vv2;
    for a = 2:N
      sv = vv2(2:N,a);
      ds = sqrt(-1)*uyy(2:N,a) + e(2:N,a).*uyy(2:N,a);
      d = sv + 0.5*dt*ds;
      bs2 = sqrt(-1)*D2 + diag(e(2:N,a))*D2;
      b = eye(N-1) - 0.5*dt*bs2;
      vv3(2:N,a) = b\d;
    end
    % end part2
    % (1-V)Un4 = (1+V)Un3
    vvnew = 0*vv3;
    for a = 2:N
      d = vv3(2:N,a) + 0.5*sqrt(-1)*dt*vv3(2:N,a).*pot(2:N,a);
      bpm2 = sqrt(-1)*diag(npot(2:N,a)); % +0.5*pot
      b = eye(N-1) - 0.5*dt*bpm2;
      vvnew(2:N,a) = b\d;
    end
    % end part3
    % now the calculation of the potential due to the new wavefunction
    u = grav*vvnew;
    v = 0*pot;
    % initial guess of new pot for potadi
    n2pot = potadi(dis,u,v,N);
    Res = norm(n2pot-npot);
    npot = n2pot;
  end
  %vv = 0.5*(vvnew - vvnew(:,N+1:-1:1));
  vv = vvnew;
  phi = n2pot;
end
save test3 pic time N xx yy dis

% potadi.m
% Aim: to use Peaceman-Rachford ADI iteration
% to find the solution to the poisson equation in 2D
%
% u is the wavefunction!
% v is the potential!
function phi = potadi(dis,u,v,N)
% grid setup!
[D,x] = cheb(N);
D = D*1/dis;
x = dis*x;
y = x';
[xx,yy] = meshgrid(x,y);
% calculation of the differentiation matrices
D2 = D^2;
D2 = D2(2:N,2:N);
I = eye(N-1);
rho = 0.01;
A = rho*I - D2;
% start of the iteration
Res = 1;
uyy = zeros(N+1,N+1);
uxx = zeros(N+1,N+1);
while (abs(Res) > 1E-4)
  % calculate uxx, uyy
  for a = 2:N
    uxx(2:N,a) = D2*v(2:N,a);
  end
  for a = 2:N
    uyy(a,2:N) = (D2*(v(a,2:N).')).';
  end
  % part1 of ADI: (rho - dyy)S = rho*Wn + uxx - |u|^2
  vv = 0*v;
  for a = 2:N
    sv2 = rho*v(a,2:N) + uxx(a,2:N) - abs(u(a,2:N)).^2;
    vv(a,2:N) = (A\(sv2.')).';
  end
  % end part1 of ADI
  % recalculate uyy
  for a = 2:N
    uyy(a,2:N) = (D2*(vv(a,2:N).')).';
  end
  % part2 of ADI: (rho - dxx)Wn+1 = rho*S + uyy - |u|^2
  newv = 0*v;
  for a = 2:N
    sv = rho*vv(2:N,a) + uyy(2:N,a) - abs(u(2:N,a)).^2;
    newv(2:N,a) = A\sv;
  end
  % end of ADI
  test = newv - v;
  Res = norm(test(2:N,2:N));
  v = newv;
end
phi = v;
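padi.m and potadi.m both implement the Peaceman-Rachford alternating-direction iteration for the Poisson equation: each half-sweep is implicit in one coordinate direction and explicit in the other, so only one-dimensional systems are ever solved. The sketch below is a Python/NumPy analogue, not the thesis code: the Chebyshev grid, the axisymmetric metric factors and the particular rho values of the MATLAB routines are replaced by a uniform grid with zero Dirichlet data and an illustrative rho.

```python
import numpy as np

def second_difference(n, h):
    # second-difference matrix with zero Dirichlet boundary values
    off = np.ones(n - 1)
    return (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2

def peaceman_rachford_poisson(g, h, rho, tol=1e-12, max_iter=2000):
    # Solve v_xx + v_yy = g by Peaceman-Rachford ADI:
    #   (rho*I - Dxx) v*    = (rho*I + Dyy) v  - g
    #   (rho*I - Dyy) v_new = (rho*I + Dxx) v* - g
    n = g.shape[0]
    D2 = second_difference(n, h)
    A = rho * np.eye(n) - D2
    v = np.zeros_like(g)
    for _ in range(max_iter):
        # sweep 1: implicit along axis 0, explicit along axis 1
        vstar = np.linalg.solve(A, rho * v + v @ D2 - g)
        # sweep 2: implicit along axis 1, explicit along axis 0
        v = np.linalg.solve(A, (rho * vstar + D2 @ vstar - g).T).T
        if np.linalg.norm(D2 @ v + v @ D2 - g) < tol * np.linalg.norm(g):
            break
    return v

# manufactured problem: recover a known field from its discrete Laplacian
n = 16
h = 1.0 / (n + 1)
rng = np.random.default_rng(0)
v_exact = rng.standard_normal((n, n))
D2 = second_difference(n, h)
g = D2 @ v_exact + v_exact @ D2
v = peaceman_rachford_poisson(g, h, rho=100.0)
print(np.max(np.abs(v - v_exact)))
```

Each double sweep damps an error mode with eigenvalues (lam, mu) by |(rho-|lam|)(rho-|mu|)| / ((rho+|lam|)(rho+|mu|)), which is below one for any rho > 0; taking rho near sqrt(|lam_min*lam_max|) of the one-dimensional operator (about 100 for this grid) roughly equalises the damping across modes, whereas a poorly chosen fixed rho, like the small values hard-coded in the MATLAB routines, can make the iteration converge very slowly.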