Are you sure?
This action might not be possible to undo. Are you sure you want to continue?
Schr¨ odingerNewton equations
Richard Harrison
St.Peter’s College
University of Oxford
A thesis submitted for the degree of
Doctor of Philosophy
Trinity 2001
Acknowledgements
I would like to thank my supervisors Paul Tod and Irene Moroz, Roger Penrose
for the idea behind my thesis, Nick Trefethen for his help in use of spectral
methods and his excellent lectures, Ian Sobey for his helpful comments and
Lionel Mason for getting me started with this project. I would also like to
thank the EPSRC for their ﬁnancial support. Last but not least I would like
thank my mother, father and sister for their support.
Abstract
The Schr¨ odingerNewton (SN) equations were proposed by Penrose [18] as a model for
gravitational collapse of the wavefunction. The potential in the Schr¨ odinger equation is
the gravity due to the density of ψ
2
, where ψ is the wavefunction. As with normal
Quantum Mechanics the probability, momentum and angular momentum are conserved.
We ﬁrst consider the spherically symmetric case, here the stationary solutions have been
found numerically by Moroz et al [15] and Jones et al [3]. The ground state which has the
lowest energy has no zeros. The higher states are such that the (n+1)th state has n zeros.
We consider the linear stability problem for the stationary states, which we numerically
solve using spectral methods. The ground state is linearly stable since it has only imaginary
eigenvalues. The higher states are linearly unstable having imaginary eigenvalues except
for n quadruples of complex eigenvalues for the (n +1)th state, where a quadruple consists
of {λ,
¯
λ, −λ, −
¯
λ}. Next we consider the nonlinear evolution, using a method involving an
iteration to calculate the potential at the next time step and CrankNicolson to evolve the
Schr¨ odinger equation. To absorb scatter we use a sponge factor which reduces the reﬂection
back from the outer boundary condition and we show that the numerical evolution converges
for diﬀerent mesh sizes and time steps. Evolution of the ground state shows it is stable
and added perturbations oscillate at frequencies determined by the linear perturbation
theory. The higher states are shown to be unstable, emitting scatter and leaving a rescaled
ground state. The rate at which they decay is controlled by the complex eigenvalues of
the linear perturbation. Next we consider adding another dimension in two diﬀerent ways:
by considering the axisymmetric case and the 2D equations. The stationary solutions are
found. We modify the evolution method and ﬁnd that the higher states are unstable. In
2D case we consider rigidly rotationing solutions and show they exist and are unstable.
Contents
1 Introduction: the Schr¨ odingerNewton equations 1
1.1 The equations motivated by quantum statereduction . . . . . . . . . . . . . 1
1.2 Some analytical properties of the SN equations . . . . . . . . . . . . . . . . 4
1.2.1 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Rescaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 Lagrangian form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Conserved quantities . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.5 Lie point symmetries . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.6 Dispersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Analytic results about the timeindependent case . . . . . . . . . . . . . . . 8
1.3.1 Existence and uniqueness . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Variational formulation . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.3 Negativity of the energyeigenvalue . . . . . . . . . . . . . . . . . . . 9
1.4 Plan of thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Sphericallysymmetric stationary solutions 12
2.1 The equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Computing the sphericallysymmetric stationary states . . . . . . . . . . . . 13
2.2.1 RungeKutta integration . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 Alternative method . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Approximations to the energy of the bound states . . . . . . . . . . . . . . 15
3 Linear stability of the sphericallysymmetric solutions 18
3.1 Linearising the SN equations . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Separating the O() equations . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 Restriction on the possible eigenvalues . . . . . . . . . . . . . . . . . . . . . 24
3.5 An inequality on Re(λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
i
4 Numerical solution of the perturbation equations 26
4.1 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.2 The perturbation about the ground state . . . . . . . . . . . . . . . . . . . 28
4.3 Perturbation about the second state . . . . . . . . . . . . . . . . . . . . . . 33
4.4 Perturbation about the higher order states . . . . . . . . . . . . . . . . . . 36
4.5 Bound on real part of the eigenvalues . . . . . . . . . . . . . . . . . . . . . . 39
4.6 Testing the numerical method by using RungeKutta integration . . . . . . 39
4.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5 Numerical methods for the evolution 46
5.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2 Numerical methods for the Schr¨ odinger equation with arbitrary time inde
pendent potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3 Conditions on the time and space steps . . . . . . . . . . . . . . . . . . . . 48
5.4 Boundary conditions and sponges . . . . . . . . . . . . . . . . . . . . . . . . 49
5.5 Solution of the Schr¨ odinger equation with zero potential . . . . . . . . . . . 50
5.6 Schr¨ odinger equation with a trial ﬁxed potential . . . . . . . . . . . . . . . 51
5.7 Numerical evolution of the SN equations . . . . . . . . . . . . . . . . . . . 52
5.8 Checks on the evolution of SN equations . . . . . . . . . . . . . . . . . . . 53
5.9 Mesh dependence of the methods . . . . . . . . . . . . . . . . . . . . . . . . 54
5.10 Large time behaviour of solutions . . . . . . . . . . . . . . . . . . . . . . . 54
6 Results from the numerical evolution 56
6.1 Testing the sponges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.2 Evolution of the ground state . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.3 Evolution of the higher states . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6.3.1 Evolution of the second state . . . . . . . . . . . . . . . . . . . . . . 62
6.3.2 Evolution of the third state . . . . . . . . . . . . . . . . . . . . . . . 67
6.4 Evolution of an arbitrary spherically symmetric Gaussian shell . . . . . . . 69
6.4.1 Evolution of the shell while changing v . . . . . . . . . . . . . . . . . 72
6.4.2 Evolution of the shell while changing a . . . . . . . . . . . . . . . . . 74
6.4.3 Evolution of the shell while changing σ . . . . . . . . . . . . . . . . 77
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
ii
7 The axisymmetric SN equations 81
7.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.2 The axisymmetric equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.3 Finding axisymmetric stationary solutions . . . . . . . . . . . . . . . . . . . 82
7.4 Axisymmetric stationary solutions . . . . . . . . . . . . . . . . . . . . . . . 83
7.5 Timedependent solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
8 The TwoDimensional SN equations 97
8.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.2 The equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.3 Sponge factors on a square grid . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.4 Evolution of dipolelike state . . . . . . . . . . . . . . . . . . . . . . . . . . 99
8.5 Spinning Solution of the twodimensional equations . . . . . . . . . . . . . . 102
8.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
A Fortran programs 108
A.1 Program to calculate the bound states for the SN equations . . . . . . . . 108
B Matlab programs 116
B.1 Chapter 2 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
B.1.1 Program for asymptotically extending the data of the bound state
calculated by RungeKutta integration . . . . . . . . . . . . . . . . . 116
B.1.2 Programs to calculate stationary state by Jones et al method . . . . 116
B.2 Chapter 4 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
B.2.1 Program for solving the Schr¨ odinger equation . . . . . . . . . . . . . 119
B.2.2 Program for solving the O() perturbation problem . . . . . . . . . . 121
B.2.3 Program for the calculation of (4.10) . . . . . . . . . . . . . . . . . . 122
B.2.4 Program for performing the RungeKutta integration on the O()
perturbation problem . . . . . . . . . . . . . . . . . . . . . . . . . . 123
B.3 Chapter 6 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
B.3.1 Evolution program . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
B.3.2 Fourier transformation program . . . . . . . . . . . . . . . . . . . . . 126
B.4 Chapter 7 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
B.4.1 Programs for solving the axisymmetric stationary state problem . . 126
B.4.2 Evolution programs for the axisymmetric case . . . . . . . . . . . . . 133
B.5 Chapter 8 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
B.5.1 Evolution programs for the two dimensional SN equations . . . . . 138
iii
Bibliography 143
iv
List of Figures
2.1 First four eigenfunctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 The least squares ﬁt of the energy eigenvalues . . . . . . . . . . . . . . . . . 16
2.3 The number of nodes with the Moroz et al [15] prediction of the number of
nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Loglog plot of the energy values with log(n) . . . . . . . . . . . . . . . . . 17
4.1 The smallest eigenvalues of the perturbation about the ground state . . . . 29
4.2 All the computed eigenvalues of the perturbation about the ground state . . 29
4.3 The ﬁrst eigenvector of the perturbation about the ground state. Note the
scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4 The second eigenvector of the perturbation about the ground state . . . . . 31
4.5 The third eigenvector of the perturbation about the ground state . . . . . . 31
4.6 The change in the sample eigenvalue with increasing values of N (L = 150) 32
4.7 The change in the sample eigenvalue with increasing values of L (N = 60) . 32
4.8 The lowest eigenvalues of the perturbation about the second bound state . . 34
4.9 All the computed eigenvalues of the perturbation about the second bound state 34
4.10 The ﬁrst eigenvector of the perturbation about the second bound state, Note
the scales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.11 The second eigenvector of the perturbation about the second bound state . 35
4.12 The third eigenvector of the perturbation about the second bound state . . 36
4.13 The eigenvalues of the perturbation about the third bound state . . . . . . 37
4.14 The second eigenvector of the perturbation about the third bound state . . 37
4.15 The third eigenvector of the perturbation about the third bound state . . . 38
4.16 The third eigenvector of the perturbation about the third bound state . . . 38
4.17 The eigenvalues of the perturbation about the fourth bound state . . . . . . 40
4.18 The ﬁrst eigenvector of the perturbation about the ground state, using Runge
Kutta integration; compare with ﬁgure 4.3 . . . . . . . . . . . . . . . . . . . 41
4.19 The second eigenvector of the perturbation about the ground state, using
RungeKutta integration; compare with ﬁgure 4.4 . . . . . . . . . . . . . . . 42
v
4.20 The ﬁrst eigenvector of the perturbation about the second bound state, using
RungeKutta integration; compare with ﬁgure 4.10 . . . . . . . . . . . . . . 42
4.21 The second eigenvector of the perturbation about the second bound state,
using RungeKutta integration; compare with ﬁgure 4.11 . . . . . . . . . . . 43
4.22 The second eigenvector of the perturbation about the third bound state,
using RungeKutta integration; compare with ﬁgure 4.14 . . . . . . . . . . . 43
4.23 The real part of third eigenvector of the perturbation about the third bound
state, using RungeKutta integration; compare ﬁgure 4.15 . . . . . . . . . . 44
4.24 The imaginary part of third eigenvector of the perturbation about the third
bound state, using RungeKutta integration; compare with ﬁgure 4.16 . . . 44
6.1 Testing the sponge factor with wave moving oﬀ the grid . . . . . . . . . . . 56
6.2 Testing sponge factor with wave moving oﬀ the the grid . . . . . . . . . . . 57
6.3 The graph of the phase angle of the ground state as it evolves . . . . . . . . 58
6.4 The oscillation about the ground state . . . . . . . . . . . . . . . . . . . . . 59
6.5 Evolution of the ground state . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.6 The eigenvalue associated with the linear perturbation about the ground state 60
6.7 The oscillation about the ground state at a given point . . . . . . . . . . . . 61
6.8 The Fourier transform of the oscillation about the ground state . . . . . . . 61
6.9 The graph of the phase angle of the second state as it evolves . . . . . . . . 63
6.10 The long time evolution of the second state in Chebyshev methods . . . . . 63
6.11 Evolution of second state tolerance E9 approximately . . . . . . . . . . . . 65
6.12 f(t) −A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.13 Fourier transformation of f(t) −A . . . . . . . . . . . . . . . . . . . . . . . 66
6.14 Decay of the second state . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.15 The short time evolution of the second bound state . . . . . . . . . . . . . . 67
6.16 Oscillation about second state at a ﬁxed radius . . . . . . . . . . . . . . . . 68
6.17 Fourier transform of the oscillation about the second state . . . . . . . . . . 68
6.18 Evolution of third state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.19 f(t) −A with respect to the third state . . . . . . . . . . . . . . . . . . . . 70
6.20 Fourier transform of growing mode about third state . . . . . . . . . . . . . 71
6.21 Probability remaining in the evolution of the third state . . . . . . . . . . . 71
6.22 Progress of evolution with diﬀerent velocities and times . . . . . . . . . . . 72
6.23 Diﬀerence of the other time steps compared with dt = 1 . . . . . . . . . . . 73
6.24 Richardson fraction done with diﬀerent h
i
’s . . . . . . . . . . . . . . . . . . 73
6.25 Diﬀerence in N value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
vi
6.26 Evolution of lump at diﬀerent a . . . . . . . . . . . . . . . . . . . . . . . . . 75
6.27 Comparing diﬀerences with diﬀerent time steps . . . . . . . . . . . . . . . . 75
6.28 Richardson fraction with diﬀerent h
i
’s . . . . . . . . . . . . . . . . . . . . . 76
6.29 Diﬀerence in N value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.30 Evolution of lump at diﬀerent σ . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.31 Comparing diﬀerences with diﬀerent time steps . . . . . . . . . . . . . . . . 78
6.32 Richardson fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.33 Diﬀerence in N value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.1 Ground state, axp1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.2 Contour plot of the ground state, axp1 . . . . . . . . . . . . . . . . . . . . . 85
7.3 “Dipole” state next state after ground state , axp2 . . . . . . . . . . . . . . 86
7.4 Contour Plot of the dipole, axp2 . . . . . . . . . . . . . . . . . . . . . . . . 86
7.5 Second spherically symmetric state, axp3 . . . . . . . . . . . . . . . . . . . 87
7.6 Contour Plot of the second spherically symmetric state, axp3 . . . . . . . . 87
7.7 Not quite the 2nd spherically symmetric state ,axp4 . . . . . . . . . . . . . 88
7.8 Contour plot of the not quite 2nd spherically symmetric state, axp4 . . . . 88
7.9 Not quite the 3rd spherically symmetric state E = −0.0162, axp5 . . . . . . 89
7.10 Contour plot of Not quite the 3rd spherically symmetric state E = −0.0162,
axp5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.11 3rd spherically symmetric sate ,axp6 . . . . . . . . . . . . . . . . . . . . . . 90
7.12 Contour plot of 3rd spherically symmetric state ,axp6 . . . . . . . . . . . . 90
7.13 Axially symmetric state with E = −0.0263, axp7 . . . . . . . . . . . . . . . 91
7.14 Contour plot of axp7  double dipole . . . . . . . . . . . . . . . . . . . . . . 91
7.15 State with E = −0.0208 and J = 3.1178 ,axp8 . . . . . . . . . . . . . . . . . 92
7.16 Contour plot of the state with E = −0.0208 and J = 3.1178 ,axp8 . . . . . 92
7.17 Evolution of the dipole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
7.18 Average Richardson fraction with dt = 0.25, 0.5, 1 . . . . . . . . . . . . . . 95
7.19 Average Richardson fraction with dt = 0.125, 0.25, 0.5 . . . . . . . . . . . . 95
7.20 Evolution of the dipole with diﬀerent mesh size . . . . . . . . . . . . . . . . 96
8.1 Sponge factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
8.2 Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2 . . . . . . . . . . 100
8.3 Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2 . . . . . . . . . . 100
8.4 Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2 . . . . . . . . . . 101
8.5 Stationary state for the 2dim SN equations . . . . . . . . . . . . . . . . . 101
8.6 Evolution of a stationary state . . . . . . . . . . . . . . . . . . . . . . . . . 102
vii
8.7 Real part of spinning solution . . . . . . . . . . . . . . . . . . . . . . . . . . 104
8.8 Imaginary part of spinning solution . . . . . . . . . . . . . . . . . . . . . . . 104
8.9 Evolution of spinning solution . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.10 Evolution of spinning solution . . . . . . . . . . . . . . . . . . . . . . . . . . 105
viii
Chapter 1
Introduction: the
Schr¨ odingerNewton equations
1.1 The equations motivated by quantum statereduction
The Schr¨ odingerNewton equations (abbreviated as SN equations) are :
i
∂Ψ
∂
˜
t
=
−
2
2m
∇
2
Ψ +mΦΨ, (1.1a)
∇
2
Φ = 4πGmΨ
2
, (1.1b)
where is Planck’s constant, m is the mass of the particle, Ψ is the wave function, Φ is
the potential, G is the gravitational constant and t is time.
To get the timeindependent form of the SN equations (1.1) we consider the substitution
Ψ(x, y, z, t) = Ψ(x, y, z)e
−i
˜
E
˜
t
, (1.2a)
Φ(x, y, z, t) = Φ(x, y, z), (1.2b)
which gives the timeindependent SN equations,
−
2
2m
∇
2
Ψ +mΦΨ =
˜
EΨ, (1.3a)
∇
2
Φ = 4πGmΨ
2
. (1.3b)
The boundary conditions for the equations are the same as those for the usual Schr¨ odinger
equation and the Poisson equation, i.e we require the wave function to be smooth for all
x ∈ R
3
and normalisable, we also require the potential function to be smooth and zero at
large distances.
One idea for a theory of quantum state reduction was put forward by Penrose [17], [18].
The idea was that a superposition of two or more quantum states, which have a signiﬁcant
amount of mass displacement between the states, ought to be unstable and reduce to one of
the states within a ﬁnite time. This argument is motivated by a conﬂict between the basic
1
principles of quantum mechanics and those of general relativity. This idea requires that
there be a special set of quantum states which collapse no further. This would be called a
preferred basis. This preferred basis would be the stationary quantum states. Penrose [18]
states that “the phenomenon of quantum state reduction is a gravitational phenomenon,
and that essential changes are needed in the framework of quantum mechanics in order that
its principles can be adequately married with the principles of Einstein’s general relativ
ity.” According to Penrose, superpositions of these states ought to decay within a certain
characteristic average time of T
G
, where T
G
=
E
G
and E
G
is the gravitational selfenergy
of the diﬀerence bewteen the mass distributions of the two superposed states. That is, the
selfenergy is
1
2
R
3
∇ψ
2
dV , where ψ = ψ
1
−ψ
2
the diﬀerence of the two superposed states
mass distributions. [The SN equations are a ﬁrst approximation we can consider the case
where gravity is taken to be Newtonian and spacetime is nonrelativistic.]
These equations were ﬁrst considered by Ruﬃni and Bonazzola [21] in connection with
the theory of selfgravitating bosonstars. There the SN equations are the nonrelativistic
limit of the governing KleinGordon equations. Bosonstars consist of a large collection of
bosons under the gravitational force of their own combined mass. Ruﬃni and Bonazzola [21]
considered the problem of ﬁnding stationary bosonstars in the nonrelativistic spherically
symmetric case which corresponds to ﬁnding stationary spherically symmetric solutions of
the SN equations. These equations have also been considered by Moroz, Penrose and Tod
[15] where they computed stationary solutions in the case of spherical symmetry. Moroz
and Tod [14] prove some analytic properties of the SN equations. They have also been
considered by Bernstein, Giladi and Jones [3] who developed a better way than a shooting
method for calculating the stationary solutions in the case of spherical symmetry. Bernstein
and Jones [4] have started considering a method for the dynamical evolution.
Our object in this thesis is to study the SN equations in the time dependent and time
independent cases.
We can consider the nondimensionalized SN equations, via a transformation (Ψ, Φ,
˜
t, R) →
(ψ, φ, t, r) where:
ψ = αΨ,
φ = βΦ,
t = γ
˜
t,
r = δR, (1.4)
2
such that the SN equation become :
i
∂ψ
∂t
= −∇
2
ψ +φψ, (1.5a)
∇
2
φ = ψ
2
, (1.5b)
Normalisation is preserved, i.e
Ψ
2
d
3
x = 1 and
ψ
2
d
3
X = 1 if α
2
= δ
−3
. The
gravitation equation (1.5b) then becomes,
β
δ
2
∇
2
Φ = α
2
Ψ
2
, (1.6)
from this we deduce that
α
2
δ
2
β
= 4πGm, thus β =
1
4πGmδ
. The Schr¨ odinger equation
becomes,
iα
γ
Ψ
˜
t
= −
α
δ
2
∇
2
Ψ +αβΨΦ, (1.7)
which becomes,
im
γβ
Ψ
˜
t
= −
m
δ
2
β
∇
2
Ψ +mΨΦ, (1.8)
so we then deduce that
m
γβ
= and
m
δ
2
β
=
2
2m
so,
4πGm
2
δ
=
2
2m
, (1.9)
and
δ =
8πGm
3
2
. (1.10)
For γ we have
γ =
32π
2
G
2
m
5
3
. (1.11)
Now from (1.5) the nondimensionalized time independent SN equations are
Eψ = −∇
2
ψ +φψ, (1.12a)
∇
2
φ = ψ
2
, (1.12b)
where
˜
E = E
2G
2
m
5
2
. We can eliminate the E term from (1.12) by letting
ψ = S, (1.13a)
E −φ = V, (1.13b)
which reduces (1.12) to
∇
2
S = −SV, (1.14a)
∇
2
V = −S
2
, (1.14b)
which was the form of the equations consider in [14], [15].
3
1.2 Some analytical properties of the SN equations
1.2.1 Existence and uniqueness
Existence and uniqueness for solutions is established by the following simpliﬁed version of
the theorem of Illner et al [13].
Given χ(x) ∈ H
2
((R)
3
) with L
2
norm equal to 1, the system (1.5) has a unique, strong
solution ψ(x, t) global in time with ψ(x, 0) = χ(x) and
ψ
2
= 1. Illner et al [13] give
regularity properties for ψ and φ.
1.2.2 Rescaling
Note that if (ψ, φ, x, t) is a solution of the SN equations then
(λ
2
ψ, λ
2
φ, λ
−1
x, λ
−2
t) is also a solution where λ is any constant. Suppose that
V
ψ
2
dV = λ
−1
, (1.15)
where λ is a real constant, and consider
ˆ
ψ = λ
2
ψ(λx, λ
2
t), (1.16a)
ˆ
φ = λ
2
φ(λx, λ
2
t). (1.16b)
Then
ˆ
ψ and
ˆ
φ satisfy the SN equations with
V

ˆ
ψ
2
d
ˆ
V = 1. (1.17)
We also note that the rescaling of the solutions will also cause the rescaling of the energy
eigenvalues (as well as the action). that is:
E
new
= λE. (1.18)
1.2.3 Lagrangian form
Note that (1.5) can be obtained from the Lagrangian:
[∇ψ · ∇
¯
ψ +
1
2
φψ
2
+
i
2
(ψ
¯
ψ
t
−
¯
ψψ
t
)]d
3
xdt, (1.19)
where it is understood that φ is the solution of ∇
2
φ = ψ
2
. Alternatively, one may solve
the Poisson equation with the appropriate Green’s function and consider the Lagrangian.
[∇ψ · ∇
¯
ψ −
1
8π
ψ(x)
2
ψ(y)
2
x −y
d
3
y +
i
2
(ψ
¯
ψ
t
−
¯
ψψ
t
)]d
3
xdt. (1.20)
See Christian [5] and Diosi [7] for details.
4
1.2.4 Conserved quantities
The system (1.5) admits several conserved quantities, all of which are to be expected from
linear quantum mechanics. Deﬁne
ρ = ψ
2
, (1.21a)
J
i
= −i(
¯
ψψ
i
−ψ
¯
ψ
i
), (1.21b)
S
ij
= −ψ
¯
ψ
ij
−
¯
ψψ
ij
+
¯
ψ
i
ψ
j
+ψ
i
¯
ψ
j
+ 2φ
i
φ
j
−δ
ij
φ
k
φ
k
, (1.21c)
where ψ
i
=
∂ψ
∂x
i
and φ
i
=
∂φ
∂x
i
then
˙ ρ = −J
i,i
, (1.22a)
˙
J
i
= −S
ij,j
, (1.22b)
where the dot denotes diﬀerentiation with respect to time. Therefore
P =
ρd
3
x = constant, (1.23)
that is, the total probability is conserved. Next deﬁne the total momentum P
i
by
P
i
=
J
i
d
3
x. (1.24)
Then
˙
P
i
= 0. (1.25)
Deﬁne the centre of mass < x
i
> by
< x
i
>=
ρx
i
d
3
x, (1.26)
then
˙
< x
i
> = P
i
, (1.27)
so that, as expected, the centreofmass follows a straight line. The total angular momentum
is
L
i
=
ijk
x
j
J
k
d
3
x, (1.28)
and then, by the symmetry of S
ij
, it follows that
˙
L
i
= 0. (1.29)
5
We deﬁne the kinetic energy T and potential energy V in the obvious way by
T =
∇ψ
2
d
3
x, (1.30a)
V =
φψ
2
d
3
x. (1.30b)
Note from (1.5b) that
φ ˙ ρd
3
x =
˙
φρd
3
x. (1.31)
It follows from this that the quantity
E = T +
1
2
V, (1.32)
is conserved. We shall sometimes call this the conserved energy or the action. It is closely
related to the action for the timeindependent equations. The total energy E = T + V is
consequently not conserved. The identity
ρφ
i
= (φ
i
φ
j
−
1
2
δ
ij
φ
k
φ
k
)
i
(1.33)
is a consequence of (1.5b) and it follows from this that the averaged ‘selfforce’ is zero, i.e
that
< F
i
>=
φ
i
ρd
3
x = 0, (1.34)
which may be thought of as Newton’s Third Law.
We may deﬁne a sequence of moments q
(n)
i...j
by
q
(n)
i...j
=
ρ x
i
. . . x
j
. .. .
n
d
3
x, (1.35)
and other tensors p
(n)
i...jk
and s
(n)
i...jkm
by
p
(n)
i...jk
=
x
(i
. . . x
j
. .. .
n
J
k)
d
3
x, (1.36a)
s
(n)
i...jkm
=
x
(i
. . . x
j
. .. .
n
S
km)
d
3
x. (1.36b)
Then it follows from (1.22b) that
˙ q
(n)
i...j
= np
(n−1)
i...jk
, (1.37a)
˙ p
(n)
i...jk
= ns
(n−1)
i...jkm
. (1.37b)
In particular this means that p
(n)
i...jk
and s
(n)
i...jkm
are always zero for steady solutions.
6
1.2.5 Lie point symmetries
Following the general method of Stephani [24] we can ﬁnd the Lie point symmetries of
(1.5). These consist of rotations and translations together, with a generalised Galilean
transformation which can be expressed as follows:
x → ˆ x = x +P(t), (1.38a)
ψ →
ˆ
ψ = ψ(x +P, t) exp[
−i
2
˙
P.x +
i
4

˙
P
2
dt], (1.38b)
φ →
ˆ
φ = φ(x +P, t) +
1
2
x.
¨
P, (1.38c)
which is a Galilean transformation if and only if
¨
P = 0. By a Galilean transformation,
we can reduce the total momentum to zero, and then by a translation we can place the
centreofmass at the origin.
These clearly are all Lie point symmetries. The Galilean invariance of (1.5) was noted by
Christian [5]. The analysis leading to the claim that there are no more Lie pont symmetries
is straightforward but has not been published.
1.2.6 Dispersion
One of the moments is particularly signiﬁcant, namely the dispersion:
< x
2
>= q
(2)
ii
=
x
2
ρd
3
x. (1.39)
Following Arriola and Soler [2], we ﬁnd
¨ q
(2)
ii
= 2(
x
i
˙
J
i
d
3
x) = 2
S
ii
d
3
x =
(8∇ψ
2
+ 2φψ
2
)d
3
x, (1.40)
so that
d
2
< x
2
>
dt
2
= 4E + 4T = 8E −2V. (1.41)
Now recall that, as a consequence of the maximum principle (see e.g. [8]) φ is everywhere
negative. Thus
d
2
< x
2
>
dt
2
> 8E. (1.42)
The dispersion grows at least quadratically with time if E is positive, but we cannot conclude
this if E is negative.
7
1.3 Analytic results about the timeindependent case
1.3.1 Existence and uniqueness
In Moroz and Tod [14] it was shown that the system (1.12) has inﬁnitely many spherically
symmetric solutions, all with negative energyeigenvalue and ψ real. (It is easy to see that
ψ may always be assumed real in stationary solutions.) These authors did not show, but it
is believed that, for each integer n there is a unique (up to sign) real sphericallysymmetric
solution with n zeroes, and the energyeigenvalues increase monotonically in n to zero.
1.3.2 Variational formulation
The system (1.12) can be obtained from a variational problem, by seeking stationary points
of the action
I =
1
2
(E −E) =
[
1
2
∇ψ
2
+
1
4
φψ
2
−
1
2
Eψ
2
]d
3
x, (1.43)
subject to
∇
2
φ = ψ
2
, (1.44)
or by solving (1.12b) with the relevant Green’s function and considering,
I =
[
1
2
∇ψ
2
−
1
16π
ψ(x)
2
ψ(y)
2
x −y
d
3
y −
1
2
Eψ
2
]d
3
x. (1.45)
If we vary ψ →ψ +δψ, φ →φ +δφ then the ﬁrst variation of (1.43) is
δI =
1
2
δ
¯
ψ(−∇
2
ψ +φψ −Eψ) +c.c]d
3
x, (1.46)
from which we obtain (1.12a) as the expected EulerLagrange equations, while the second
variation, subject to (1.12b), is
δ
2
I =
[∇δψ
2
+φδψ
2
−
1
2
∇δψ
2
−Eδψ
2
]d
3
x. (1.47)
By exploiting various standard inequalities (Tod [25]) would show that the action is bounded
below:
E ≥ −
1
54π
4
. (1.48)
One expects that the direct method in the Calculus of Variations should now prove that the
inﬁmum of I is attained, that the minimising function is analytic (since the system (1.12)
is elliptic) and that the minimising function will be the ground state found numerically by
Moroz et al [15] and proved to exist by Moroz and Tod [14]. Note that, at the ground state,
the second variation (1.47) cannot be negative.
8
1.3.3 Negativity of the energyeigenvalue
We write E
0
for the energyeigenvalue of the ground state. We now prove that for any
stationary state the E energyeigenvalue is negative. Following Tod [25], we deﬁne the
tensor
T
ij
= ψ
i
¯
ψ
j
+
¯
ψ
i
ψ
j
+φ
i
φ
j
−δ
ij
(ψ
k
¯
ψ
k
+
1
2
φ
k
φ
k
+φψ
2
−Eψ
2
). (1.49)
Then as a consequence of (1.12) it follows that
T
ij,j
= 0, (1.50)
whence
0 =
(x
i
T
ij
)
,j
d
3
x =
T
ii
d
3
x, (1.51)
so that
0 =
[−∇ψ
2
−
5
2
φψ
2
+ 3Eψ
2
]d
3
x. (1.52)
and
3E = T +
5
2
V. (1.53)
With E = T +V this shows that for a steady solution
T = −
1
3
E, (1.54a)
V =
4
3
E, (1.54b)
E =
1
3
E, (1.54c)
as T > 0 by deﬁnition it follows that E < 0.
Tod [25] shows that
0 ≤ (−V ) ≤
4
3
√
3π
2
T
1
2
. (1.55)
Arriola and Soler [2] have a stronger result, with 4
−E
0
3
T
1
2
on the right hand side. Since
E = T +
1
2
V we may solve for T to ﬁnd the bounds
T
1
2
≤
1
3
√
3π
2
+ [E +
1
27π
4
]
1
2
, (1.56a)
≥
1
3
√
3π
2
−[E +
1
27π
4
]
1
2
. (1.56b)
These results still hold at each instant for the timedependent case, so that the kinetic and
potential energies are separately bounded in terms of the action or conserved energy at all
times.
9
1.4 Plan of thesis
In this thesis we start with a review of the methods which can be used to compute the
stationary solutions in the case of spherical symmetry (chapter 2). Then in chapter 3 we
consider the case of the linear perturbation about the spherically symmetric stationary
solutions, obtaining an eigenvalue problem (3.24) to solve. Also in chapter 3 we obtain
restrictions on the eigenvalues analytically such that the eigenvalues are purely real or
imaginary or an integral condition on the eigenvectors vanishes (see section 3.4). In chapter 4
we solve the eigenvalue problem using spectral methods for the ﬁrst few stationary solutions
and check the results using RungeKutta integration as well as conﬁrming the conditions
on the eigenvalues. In chapter 5 we consider the problem of ﬁnding a numerical method to
evolve the timedependent SN equations and we consider the boundary conditions to put
on the numerical problem. Also in chapter 5 we consider adding a small heat term, called a
sponge factor to the Schr¨ odinger equation to absorb scatter waves. In chapter 6 we evolve
numerically the problem with diﬀerent initial conditions and check the convergence of the
evolution method with diﬀerent mesh and time step sizes. We consider the evolution of the
stationary states to see if they are stable and we also look at the stationary states with
added perturbation and compare the frequencies of oscillation with the linear perturbation
theory. We consider the axisymmetric SN equations in chapter 7 and look at the evolution
as well as the timeindependent equations. In chapter 8 we consider the 2dimensional
SN equations (8.2) or equivalently the translationally symmetric case. The evolution is
considered as well as the concept of ﬁnding two spinning lumps orbiting each other.
1.5 Conclusion
The results obtained from considering the spherically symmetric case are:
• The ground state is linearly stable.(See section 4.2)
• The linear perturbation about the nth excited state, which is to say the (n + 1)th
state, has n quadruples of complex eigenvalues as well as pure imaginary pairs. (See
sections 4.3, 4.4)
• The ground state is stable under the full (nonlinear) evolution. (See section 6.5)
• Perturbations about the ground state oscillate with the frequencies obtained by the
linear perturbation theory. (See section 6.5)
• The higher states are unstable and will decay into a “multiple” of the ground state,
while emitting some scatter oﬀ the grid. (See section 6.3)
10
• The decay time for higher states is controlled by the growing linear mode obtained in
the linear perturbation theory. (See section 6.3)
• Perturbations about higher states will oscillate for a while (until they decay) according
to the linear oscillation obtained by the linear perturbation theory. (See section 6.3)
• The evolution of diﬀerent exponential lumps indicates that any initial condition ap
pears to decay, that is they scatter and leave a “multiple” of the ground state. (See
section 6.4)
The results obtained from considering the axially symmetric case are:
• Stationary solutions that are axisymmetric exist and the ﬁrst one is like the dipole of
the Hydrogen atom. (See section 7.4)
• Evolution of the dipolelike solution shows that it is unstable in the same way as the
spherically symmetric stationary solutions are, that is it emits scatter oﬀ the grid
leaving a “multiple” of the ground state and that lumps of probability density attract
each other. (See section 7.5)
The results obtained from considering the 2dimensional case are:
• Evolution of the higher states are unstable, emitting scatter and leaving a “multiple”
of the ground state. (See section 8.4)
• There exist rotating solutions, but these are unstable. (See section 8.5)
• Lumps of probability density attract each other and come together emitting scatter
and leave a “multiple” of the ground state. (See section 8.4)
11
Chapter 2
Sphericallysymmetric stationary
solutions
2.1 The equations
In the case of spherical symmetry and timeindependence we can assume without loss of
generality ψ is real. So (1.14) becomes
(rS)
= −rSV, (2.1a)
(rV )
= −rS
2
. (2.1b)
We have the boundary conditions S → 0 as r → ∞ and S
= 0 = V
at r = 0. We also
note that if (S, V, r) is a solution then so is (λ
2
S, λ
2
V, λ
−1
r). At large r bounded solutions
to (2.1) decay like
V = A−
B
r
, (2.2a)
S =
C
r
e
−kr
, (2.2b)
where
A = V
0
−
∞
0
xS
2
dx, (2.3a)
B =
∞
0
x
2
S
2
dx, (2.3b)
where V
0
is the initial value of the potential V , (i.e. V
0
= V (0)).
12
2.2 Computing the sphericallysymmetric stationary states
2.2.1 RungeKutta integration
Now (2.1) can be rewritten as a system of four ﬁrst order ODEs
Y
1
= Y
2
, (2.4a)
Y
2
=
−2Y
2
r
−Y
2
Y
4
, (2.4b)
Y
3
= Y
4
, (2.4c)
Y
4
=
−2Y
4
r
−Y
2
4
, (2.4d)
where Y
1
= S and Y
3
= V .
The numerical technique used by Moroz et al [15] to obtain the stationary states uses
fourthorder RungeKutta integration on (2.4), starting at r = 0 and integrating outwards
towards inﬁnity. The initial values are picked so that the boundary conditions at r = 0 are
satisﬁed and then other values are guessed. The normalisation invariance allows for either
V (0), S(0) be set equal to a chosen constant. The states are obtained by reﬁning the initial
guess so that a solution which tends to zero when r is large is obtained.
There are various methods for obtaining the correct initial values which correspond to
stationary states. The method used by Moroz et al [15] was to integrate up to a ﬁxed value
in the domain, waiting for the routine to fail or blow up, then plotting the solution so far,
and reﬁne the guess based on which way the function blows up.
Another is a shooting method, which involves integration until some value for r and
looking at the value of S at that point, then modifying the initial values such that the S
at that point is zero. The problem with this routine is that as the function blows up the
step size decreases so that the tolerance remains low. This increases the computation time
needed. Also the routine will fail if the routine takes to long, so the value for r cannot be
too large.
To calculate the stationary states, I have used a modiﬁed shooting method, to avoid the
problem which occurs with RungeKutta integrations, which is the exponential blow up of
solutions. I have modiﬁed the integration in such way that the program will integrate over
small steps using a 4th order RungeKutta NAG routine. After each step it will terminate
if the solution is too large in absolute magnitude. In the case where V
0
= 1 we say that
1.5 is large since the ﬁrst state occurs around 1.088. It will also terminate if the solution
for S blows up exponentially. From information on which side of the axis the solution
becomes unbounded we can reﬁne the initial conditions to obtain the eigenfunction. Using
this method we are able to obtain the ﬁrst 50 values for S
0
or equivalently the ﬁrst 50
energy levels.
13
For the stationary spherically symmetric case it was shown in Moroz et al [15] that the
eigenvalue is
˜
E =
2G
2
m
5
2
A
B
2
, (2.5)
where A and B are given by (2.3), and V
0
is the initial value of the potential function V .
Using the above method we calculated A, B and
2A
B
which is the energy up to a factor
of
G
2
m
5
2
and which compare with the ﬁrst 16 eigenvalues of Jones et al [3]. The ﬁrst
20 eigenvalues calculated with the above routine are given in Table 2.1. We also plot in
ﬁgure 2.1 the ﬁrst four spherically symmetric states normalised such that
ψ
2
d
3
x = 4π.
Note that the nth state has (n −1) zeroes or “nodes”.
Number of zeros Energy Eigenvalue Jones et al [3] Eigenvalues
0 0.16276929132192 0.163
1 0.03079656067054 0.0308
2 0.01252610801692 0.0125
3 0.00674732963038 0.00675
4 0.00420903256689 0.00421
5 0.00287386420271 0.00288
6 0.00208619042678 0.00209
7 0.00158297244845 0.00158
8 0.00124207860434 0.00124
9 0.00100051995162 0.00100
10 0.00082314193054 0.000823
11 0.00068906850493 0.000689
12 0.00058527053127 0.000585
13 0.00050327487416 0.000503
14 0.00043737620824 0.000437
15 0.00038362194847 0.000384
16 0.00033920111442
17 0.00030207158301
18 0.00027072080257
19 0.00024400868816
20 0.00022106369652
Table 2.1: The ﬁrst 20 eigenvalues
2.2.2 Alternative method
An alternative method, which we use later on in chapters 4 and 6 to compute the eigen
function at the Chebyshev points is given below. Jones et al [3] used an iterative numerical
scheme for computing the nnode stationary states, instead of using a shooting method. An
outline of their method is as follows:
14
0 10 20 30 40
0
0.02
0.04
0.06
0.08
0.1
r
ψ
first state
0 20 40 60 80 100 120
−5
0
5
10
15
20
x 10
−3
r
ψ
second state
0 50 100 150 200
−2
0
2
4
6
8
x 10
−3
r
ψ
third state
0 100 200 300
−2
0
2
4
6
x 10
−3
r
ψ
fourth state
Figure 2.1: First four eigenfunctions
1. Set an outer radius R.
2. Supply an initial guess for u
n
= rψ
n
.
3. Solve for Φ in
∂
2
Φ
∂r
2
+
2
r
∂Φ
∂r
= r
−2
u
2
n
. (2.6)
4. Solve for the nnode eigenvalue and eigenfunction of
∂
2
u
n
∂r
2
= 2u
n
(Φ −). (2.7)
5. Iterate the previous two steps until the eigenvalue converges suﬃciently, a typical
criterion being that the change in
n
from one iteration to the next is less than 10
−9
.
6. Iterate the previous ﬁve steps, increasing R until
n
stops changing with the change
in R.
2.3 Approximations to the energy of the bound states
Jones et al [3] claim, that the energy eigenvalues
2A
B
2
closely follow a leastsquares ﬁt of the
formula:
E
n
= −
α
(n +β)
γ
, (2.8)
15
0 5 10 15 20 25 30 35 40 45 50
−5
−4.5
−4
−3.5
−3
−2.5
−2
−1.5
−1
−0.5
0
x 10
−3
2A/B
2
number of nodes
E
n
e
r
g
y
least squares fit
Figure 2.2: The least squares ﬁt of the energy eigenvalues
where α = 0.096, β = 0.76 and γ = 2.00. This appears to be a very good ﬁt of the ﬁrst
50 eigenvalues. The eigenvalues and the ﬁt are plotted in ﬁgure 2.2. Moroz et al [15] claim
that the the number of zeros is of the order of exp(
V
2
0
S
2
0
). Figure 2.3 shows that exp(
V
2
0
S
2
0
) is
an over estimate for the number of nodes, which appears to converge as n gets large.
The loglog plot ﬁgure 2.4 shows that we have a gradient of 2 as n gets large. This is
what we expect since E
n
=
−α
(n +β)
γ
is such a good ﬁt when γ = 2.
16
0 10 20 30 40 50 60
0
10
20
30
40
50
60
number of nodes
e
x
p
(
V
0 2
/
S
0 2
)
Figure 2.3: The number of nodes with the Moroz et al [15] prediction of the number of
nodes
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5
−11
−10
−9
−8
−7
−6
−5
−4
−3
−2
−1
log n
l
o
g
(
−
E
n
)
Figure 2.4: Loglog plot of the energy values with log(n)
17
Chapter 3
Linear stability of the
sphericallysymmetric solutions
3.1 Linearising the SN equations
In this chapter 3 we shall set up the linear stability problem, ready for numerical solution
in chapter 4.
We look for a solution to (1.5) of the form :
ψ = ψ
0
(r, t) +ψ
1
(r, t) +
2
ψ
2
(r, t) +. . . , (3.1a)
φ = φ
0
(r, t) +φ
1
(r, t) +
2
φ
2
(r, t) +. . . , (3.1b)
where 1. Substitution of (3.1) into the SN equations (1.5) gives:
iψ
0t
+∇
2
ψ
0
+[iψ
1t
+∇
2
ψ
1
] +
2
[iψ
2t
+∇
2
ψ
2
] =
ψ
0
φ
0
+(ψ
0
φ
1
+ψ
1
φ
0
) +
2
(ψ
0
φ
2
+ψ
1
φ
1
+ψ
2
φ
0
), (3.2a)
∇
2
φ
0
+∇
2
φ
1
+
2
∇
2
φ
2
=
ψ
2
0
+(ψ
0
¯
ψ
1
+
¯
ψ
0
ψ
1
) +
2
(ψ
0
¯
ψ
2
+ψ
1
¯
ψ
1
+
¯
ψ
0
ψ
2
). (3.2b)
Finally equating the powers of we obtain
0
:
iψ
0t
+∇
2
ψ
0
= ψ
0
φ
0
, (3.3a)
∇
2
φ
0
= ψ
0

2
, (3.3b)
1
:
iψ
1t
+∇
2
ψ
1
= ψ
0
φ
1
+ψ
1
φ
0
, (3.4a)
∇
2
φ
1
= ψ
0
¯
ψ
1
+
¯
ψ
0
ψ
1
, (3.4b)
18
2
:
iψ
2t
+∇
2
ψ
2
= ψ
0
φ
2
+ψ
1
φ
1
+ψ
2
φ
0
, (3.5a)
∇
2
φ
2
= ψ
0
¯
ψ
2
+ψ
1
¯
ψ
1
+
¯
ψ
0
ψ
2
. (3.5b)
We consider the case of spherical symmetry so that
∇
2
f =
1
r
2
(r
2
f
r
)
r
=
1
r
(rf)
rr
, then (3.3) becomes
i(rψ
0
)
t
+ (rψ
0
)
rr
= rψ
0
φ
0
, (3.6a)
(rφ
0
)
rr
= rψ
0
¯
ψ
0
. (3.6b)
Since we are interested in the stability of the stationary problem we take
ψ
0
= R
0
(r)e
−iEt
, (3.7a)
φ
0
= E −V
0
(r), (3.7b)
where R
0
is real, so that
(rR
0
)
rr
= −rR
0
V
0
, (3.8a)
(rV
0
)
rr
= −rR
2
0
. (3.8b)
Substituting into (3.4), the O() problem we have
i(rψ
1
)
t
+ (rψ
1
)
rr
= r(E −V
0
)ψ
1
+rφ
1
R
0
(r)e
−iEt
, (3.9a)
(rφ
1
)
rr
= rR
0
(r)e
−iEt
¯
ψ
1
+rR
0
(r)e
iEt
ψ
1
. (3.9b)
To eliminate e
−iEt
, we seek solutions of the form
ψ
1
= R
1
(r, t)e
−iEt
, (3.10a)
φ
1
= φ
1
, (3.10b)
where R
1
is complex and φ
1
is real so that (3.9) simpliﬁes to give
i(rR
1
)
t
+ (rR
1
)
rr
= R
0
(rφ
1
) −V
0
(rR
1
), (3.11a)
(rφ
1
)
rr
= R
0
(r
¯
R
1
) +
¯
R
0
(rR
1
). (3.11b)
For convenience we introduce P = rφ
1
, R = rR
1
. Note that P and R must vanish at the
origin. The O() problem then becomes,
iR
t
+R
rr
= R
0
P −V
0
R, (3.12a)
¶
rr
= R
0
¯
R +
¯
R
0
R. (3.12b)
19
3.2 Separating the O() equations
We look for a solution of the O() problem in the form
R = (A+B)e
λt
+ (
¯
A−
¯
B)e
¯
λt
, (3.13a)
P = W
1
e
λt
+W
2
e
¯
λt
, (3.13b)
where we assume for now that λ is not real and A, B, W
1
and W
2
are functions which are
time independent. As we are considering the spherical symmetric case they depend upon r
only. Now since P is real we note that W
1
=
¯
W
2
, so we can let W = W
1
=
¯
W
2
. Substituting
into (3.12a) gives,
i(λ(A+B)e
λt
+
¯
λ(
¯
A−
¯
B)e
¯
λt
) + (A
rr
+B
rr
)e
λt
+ (
¯
A
rr
−
¯
B
rr
)e
¯
λt
=
R
0
(We
λt
+
¯
We
¯
λt
) −V
0
((A+B)e
λt
+ (
¯
A−
¯
B)e
¯
λt
). (3.14)
Equating the coeﬃcients of e
λt
, e
¯
λt
and noting that we can do this only if
¯
λ = λ so that λ
is not real, the coeﬃcient of e
λt
gives
iλ(A+B) + (A
rr
+B
rr
) = R
0
W −V
0
(A+B), (3.15)
while the coeﬃcient of e
¯
λt
gives
i
¯
λ(
¯
A−
¯
B) + (
¯
A
rr
−
¯
B
rr
) = R
0
¯
W −V
0
(
¯
A−
¯
B). (3.16)
Substituting into (3.12b) we obtain
W
rr
e
λt
+
¯
W
rr
e
¯
λt
= R
0
((
¯
A+
¯
B)e
¯
λt
+ (A−B)e
λt
) +
¯
R
0
((A+B)e
λt
+ (
¯
A−
¯
B)e
¯
λt
). (3.17)
Again equating coeﬃcients of e
λt
, e
¯
λt
gives,
W
rr
= R
0
(A−B) +
¯
R
0
(A+B), (3.18)
and
¯
W
rr
= R
0
(
¯
A+
¯
B) +
¯
R
0
(
¯
A−
¯
B). (3.19)
Since R
0
=
¯
R
0
we have
W
rr
= 2R
0
A, (3.20a)
¯
W
rr
= 2R
0
¯
A, (3.20b)
20
which is just one equation, so we have,
R
0
W = iλ(A+B) + (A
rr
+B
rr
) +V
0
(A+B), (3.21a)
(R
0
¯
W) = R
0
W = −iλ(A−B) + (A
rr
−B
rr
) +V
0
(A−B). (3.21b)
Therefore
B
rr
+V
0
B = −iλA, (3.22)
and
R
0
W = iλB +A
rr
+V
0
A. (3.23)
To summarise, the O() problem leads to three coupled linear O.D.E’s for the perturbation:
W
rr
= 2R
0
A, (3.24a)
B
rr
+V
0
B = −iλA, (3.24b)
A
rr
+V
0
A = R
0
W −iλB. (3.24c)
If λ is real, we note that (3.13b) reduces to,
R = Ae
λt
, (3.25a)
P = We
λt
. (3.25b)
Substituting into the O() equations (3.12) gives,
iλAe
λt
+A
rr
e
λt
= R
0
We
λt
−V
0
Ae
λt
, (3.26a)
W
rr
e
λt
= R
0
¯
Ae
λt
+R
0
Ae
λt
. (3.26b)
Equating coeﬃcients of e
λt
we obtain
iλA+A
rr
= R
0
W −V
0
A, (3.27a)
W
rr
= R
0
(A+
¯
A). (3.27b)
If we let A = a + ib where a and b are real functions of r, and substitute into (3.27) we
obtain
iλ(a + ib) + (a
r
r + ib
rr
) = R
0
W −V
0
(a + ib), (3.28a)
W
rr
= 2R
0
a, (3.28b)
so that equating real and imaginary parts gives
W
rr
= 2R
0
a, (3.29a)
b
rr
+V
0
b = −λa, (3.29b)
a
rr
+V
0
a = R
0
W −λb. (3.29c)
21
We consider the real eigenvalues of
W
rr
= 2R
0
A, (3.30a)
B
rr
+V
0
B = −iλA, (3.30b)
A
rr
+V
0
A = R
0
W −iλB. (3.30c)
Suppose that the eigenvalues are such that A is real. This implies that B is imaginary,
while W is real. We can therefore set A = a and B = ib where a,b are real functions of r.
Then (3.30) become
W
rr
= 2R
0
a, (3.31a)
b
rr
+V
0
b = −λa, (3.31b)
a
rr
+V
0
a = R
0
W −λb, (3.31c)
which are the same as (3.28), so (3.24) covers both.
We note that λ = 0 will be an eigenvalue of (3.28), with a = 0, b = iR
0
and W = 0, but
this corresponds to just a rotation in the phase factor.
We note that (3.24) transformed by (A, B, W) →(A, −B, W) becomes,
W
rr
= 2R
0
A, (3.32a)
B
rr
+V
0
B = iλA, (3.32b)
A
rr
+V
0
A = R
0
W + iλB, (3.32c)
so if λ is eigenvalue then −λ is a eigenvalue. Also if of (A, B, W) → (
¯
A,
¯
B,
¯
W) then the
equations become
W
rr
= 2R
0
A, (3.33a)
B
rr
+V
0
B = −i
¯
λA, (3.33b)
A
rr
+V
0
A = R
0
W −i
¯
λB, (3.33c)
and if λ is an eigenvalue then −
¯
λ is an eigenvalue. So if λ is an eigenvalue then so are
¯
λ,−λ
and −
¯
λ. That is in the case of λ complex, eigenvalues exist in groups of four, otherwise
they exist in pairs or the singleton λ = 0.
3.3 Boundary Conditions
We now consider the boundary conditions for the O() problem (3.12). Since φ
1
=
1
r
P and
R
1
=
1
r
R and we require φ
0
+ φ
1
to be a physically representable potential function and
22
R
0
+ R
1
to be a physically representable wavefunction, φ
1
and R
1
must be wellbehaved
functions of r at r = 0. This implies φ
1
, R
1
→ ﬁnite values as r →0, so that
P(0) = 0, R(0) = 0. (3.34)
The condition that R
0
+ R
1
be a physically representable wavefunction is that it be
normalisable, i.e the following integral exists:
∞
0
(R
0
+R
1
)(R
0
+R
1
)r
2
dr. (3.35)
This becomes
∞
0
(R
0
¯
R
0
+(
¯
R
1
R
0
+
¯
R
0
R
1
) +
2
R
1
¯
R
1
)r
2
dr, (3.36)
and upon equating coeﬃcients of we require the following integrals to exist:
coeﬃcient of
0
∞
0
(R
0
¯
R
0
)r
2
dr, (3.37)
coeﬃcient of
1
∞
0
(
¯
R
1
R
0
+
¯
R
0
R
1
)r
2
dr, (3.38)
coeﬃcient of
2
∞
0
(R
1
¯
R
1
)r
2
dr. (3.39)
The O(1) condition (3.37) requires the wavefunction of the stationary spherically symmetric
problem to be normalisable, and the remaining two integrals become, in terms of R,
∞
0
(
¯
R +R)R
0
rdr, (3.40a)
∞
0
(R
¯
R)dr. (3.40b)
The integral (3.40a) exists provided
¯
R+R does not grow exponentially with r, as R
0
behaves
like e
Er
as r →∞. The integral (3.40b) implies that R
2
→0 as r →∞. Hence R →0 as
r →∞is a necessary but not a suﬃcient condition for the wavefunction to be normalisable.
We expect that the solution for which this is not the case will blow up exponentially, so
that the boundary condition that R → 0 as r → ∞ should be suﬃcient. For the potential
φ
0
+φ
1
we require function to be wellbehaved at r = 0. We also have an arbitrary scaling
to the potential function, corresponding to the freedom we have of where to set the zero on
the energy scale. We can therefore choose φ
0
→0 and φ
1
→0 as r →∞.
23
When R = (A+B)e
λt
+ (
¯
A−
¯
B)e
¯
λt
and P = We
λt
+
¯
We
¯
λt
, the boundary conditions on R
and φ imply that,
R(0) = 0 ⇒ A(0) = 0, B(0) = 0,
P(0) = 0 ⇒ W(0) = 0,
R(∞) = 0 ⇒ A(∞) = 0, B(∞) = 0,
(rP)(∞) = 0 ⇒ W(∞) = 0.
So to summarise the boundary condition on A, B and W are
A(0) = 0, B(0) = 0, W(0) = 0, A(∞) = 0, B(∞) = 0, W(∞) = 0. (3.41)
3.4 Restriction on the possible eigenvalues
We claim that λ
2
is real for the perturbation equations unless
∞
0
¯
ABdr = 0. The following
proof is due to Tod [26]. Consider (3.24) where (R
0
, V
0
) satisfy
(rR
0
)
= −rR
0
V
0
, (3.42a)
(rV
0
)
= −rR
2
0
. (3.42b)
We now consider
−iλ
∞
0
A
¯
Bdr =
∞
0
B
rr
¯
B +V
0
B
¯
Bdr,
=
∞
0
V
0
B
2
−B
r

2
dr. (3.43)
We note that the R.H.S of (3.43) is real since V
0
is real. We also consider:
−iλ
∞
0
¯
ABdr =
∞
0
A
rr
¯
AV
0
A
¯
A −R
0
W
¯
Adr,
=
∞
0
V
0
A
2
−A
r

2
−
1
2
W
r

2
dr. (3.44)
We note that the R.H.S of (3.44) is real since V
0
is real. Combining the two results we have
that:
−iλ
∞
0
A
¯
Bdr
i
¯
λ
∞
0
A
¯
Bdr
, (3.45)
is real if i
¯
λ
∞
0
A
¯
Bdr = 0 which implies that
λ
¯
λ
is real i.e that λ
2
is real . Hence either λ
2
is real or
∞
0
A
¯
Bdr = 0.
24
3.5 An inequality on Re(λ)
From the linearised perturbation system (3.24) it is possible to prove an inequality for the
real part of λ . First we obtain
i
[λA
¯
A+
¯
λB
¯
B]d
3
x = −
R
0
B
¯
Wd
3
x, (3.46)
and then by the Holder and Sobolev inequalities, as in Tod [25], we ﬁnd

R
0
B
¯
W ≤ C
1
(
A
2
)
1
2
(
B
2
)
1
2
(
R
0

3
)
2
3
, (3.47)
with C
1
=
2
4
3
3π
4
3
. We choose a normalisation of the perturbation so that
A
2
= cos
2
θ,
B
2
= sin
2
θ, (3.48)
and set λ = λ
R
+ iλ
I
to ﬁnd from (3.46)
λ
R
+ iλ
I
cos 2θ ≤ 2C
1
sinθ cos θ(
R
0

3
)
2
3
. (3.49)
Now use Sobolev and Section 1.2 to ﬁnd
R
0

3
≤ C
3
4
1
(−
E
3
)
3
4
, (3.50)
so ﬁnally
λ
R
 ≤
4
9π
2
√
−E. (3.51)
Note that if the normalisation is diﬀerent to one we need to rescale the E value.
25
Chapter 4
Numerical solution of the
perturbation equations
4.1 The method
The O(1) perturbation equations (3.24) are linear whereas the SN equations are not. We
can therefore solve these equations using spectral methods, by approximating the problem
to that of solving a matrix eigenvalue problem. See [27] for more details about spectral
methods. We use Chebyshev polynomial interpolation to get a diﬀerentiation matrix, where
the Chebyshev polynomials are such that
p
n
(x) = cos(nθ), (4.1)
with θ = cos
−1
(x). They also satisfy the diﬀerential equation
d
dx
((1 −x
2
)
1
2
dp
n
(x)
dx
) = n
2
p
n
(x). (4.2)
When using Chebyshev polynomials we sample our data at Chebyshev points that is at
x
i
= cos(
iπ
N
) where i = 0, 1 . . . N, N +1 and N +1 is the number of Chebyshev points. We
note that these points correspond to zeros of the Chebyshev polynomials.
We let p(x) be the unique polynomial of degree N or less such that p(x
i
) = v
i
for
i = 0, 1 . . . N, N+1 , where the v
i
’s are values of a function at the Chebyshev points. Deﬁne
w
i
’s by w
i
= p
(x
i
) for i = 0, 1 . . . N, N + 1. The diﬀerentiation matrix for polynomials of
degree N, denoted D
N
, is deﬁned to be such that
¸
¸
¸
¸
¸
¸
w
1
w
2
.
.
.
w
N
w
N
+ 1
¸
= D
N
¸
¸
¸
¸
¸
¸
v
1
v
2
.
.
.
v
N
v
N
+ 1
¸
, (4.3)
26
which is :
D
0,0
=
2N
2
+ 1
6
,
D
N,N
= −
2N
2
+ 1
6
,
D
i,i
= −
x
i
2(1 −x
2
i
)
1 ≤ i ≤ N −1,
D
i,j
=
c
i
c
j
(−1)
i+j
(x
i
−x
j
)
(i = j), (4.4a)
where c
i
= 2 for i = 0 or i = N and c
i
= 1 otherwise.
The second derivative matrix is just D
2
N
since the p
(x) is the unique polynomial of
degree N or less through w
i
’s.
We note that since the interval we are interested in is [0, L] instead of [−1, 1] we
rescale points by X
i
=
L
2
(1 +x
i
). We also need to rescale the diﬀerentiation matrix so
D
[0,L]
=
1
L
D
[−1,1]
, all diﬀerentiation matrix below are on the interval [0, L]. The requirement
that A,B and W be zero at the boundary is applied by deleting the ﬁrst and last rows and
columns of the diﬀerentiation matrix in question, since A(0) = 0 = A(L), B(0) = 0 = B(L)
and W(0) = 0 = W(L). In this case it is the D
2
N
matrix. This yields a (N −1) ×(N −1)
matrix denoted by
˜
D
2
N
.
For the perturbation equations we therefore obtain the matrix eigenvalue equation
¸
−2R
0
0
˜
D
2
N
0
˜
D
2
N
+V
0
0
−
˜
D
2
N
−V
0
0 R
0
¸
¸
A
B
W
¸
= iλ
¸
0 0 0
−I 0 0
0 I 0
¸
¸
A
B
W
¸
, (4.5)
where
¸
A
B
W
¸
=
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
¸
A(X
1
)
.
.
.
A(X
n−1
)
B(X
1
)
.
.
.
B(X
n−1
)
W(X
1
)
.
.
.
W(X
n−1
)
¸
, (4.6)
And
R
0
=
¸
¸
¸
¸
¸
¸
¸
R
0
(X
1
) 0 . . . . . . 0
0 R
0
(X
2
) 0 . . . 0
.
.
. 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0
0 . . . . . . 0 R(X
n−1
)
¸
, (4.7)
27
V
0
=
¸
¸
¸
¸
¸
¸
¸
V
0
(X
1
) 0 . . . . . . 0
0 V
0
(X
2
) 0 . . . 0
.
.
. 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0
0 . . . . . . 0 V (X
n−1
)
¸
, (4.8)
We then solve this generalised eigenvalue problem to obtain the eigenvalues and eigenvectors.
We note that since this generalised matrix eigenvalue problem (4.5) is singular, we suspect
that the eigenvalues might be inaccurate due to this. We therefore rewrite (4.5) as:
0
˜
D
2
N
+ (V
0
+V )
˜
D
2
N
+ (V
0
+V ) −2R
0
(
˜
D
2
N
)
−1
R
0
0
A
B
= iλ
I 0
0 I
A
B
, (4.9)
that is we are inverting or solving the equation for W in terms of A.
The diﬀerence between the calculated values for the eigenvalues turns out to be small
so we can use (4.5) to obtain (A, B, W) straight away.
4.2 The perturbation about the ground state
We consider now the results obtained by the solving (4.5) about the ground state of the
spherically symmetric SN equations (1.1), which has no zeros. Recall the eigenvalue for
the ground state in the nondimensionalized unit is 0.082.
In ﬁgure 4.1 we plot eigenvalues obtained by solving (4.5), excluding the near zero
eigenvalue which does not correspond to a perturbation. To compute these results, we used
N = 60 and length of the interval L = 150. We also plot all the eigenvalues obtained from
solving (4.8) in ﬁgure 4.2 (note that the scale is diﬀerent) to see that up to these limits
there are no eigenvalues other than imaginary ones. The eigenvalues obtained are presented
in table 4.1.
±0.00000011612065
±0 −0.03412557804571i
±0 −0.06030198252911i
±0 −0.06882503249714i
±0 −0.07310346395426i
±0 −0.07654635649084i
±0 −0.08100785899036i
±0 −0.08665500294956i
Table 4.1: Eigenvalues of the perturbation about the ground state
28
−1 −0.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8 1
−0.08
−0.06
−0.04
−0.02
0
0.02
0.04
0.06
0.08
Eigenvalues of the perturbation equation
Figure 4.1: The smallest eigenvalues of the perturbation about the ground state
−1 −0.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8 1
−150
−100
−50
0
50
100
150
Eigenvalues of the perturbation equation
Figure 4.2: All the computed eigenvalues of the perturbation about the ground state
29
0 50 100 150
−5
0
5
10
A/r
0 50 100 150
0
0.5
1
1.5
2
x 10
7
B/r
0 50 100 150
−20
−15
−10
−5
0
W/r
Figure 4.3: The ﬁrst eigenvector of the perturbation about the ground state. Note the
scales
We note that the nearzero eigenvalue obtained corresponds to the trivial zeromode,
as the eigenfunction shows. In ﬁgure 4.3 since A,W are very small compared to B, we can
deduce that this solution corresponds to a phase rotation of the background solution, up to
numerical error.
We note that all the eigenvalues are imaginary and that this agrees with the result
obtained in section 3.4. We conclude that the ground state is linearly stable, that is the
perturbations are only oscillatory.
We note that the eigenvalues are symmetric about the real axis, which was expected
(section 3.2).
We plot the ﬁrst three eigenvectors for the ground state as
A
r
,
B
r
and
W
r
in ﬁgures 4.3,
4.4, 4.5.
To test convergence of the eigenvalues we plot graphs against increase in N and increase
in L. As an example we plot the graph of the eigenvalue 0.0765463562i as the value of N
increases. We note that this shows the eigenvalue is converging with N see ﬁgure 4.6.
We can also plot the graph of the eigenvalue 0.0765463562i as a sample eigenvalue with
increasing L instead of N, and again this shows the eigenvalue converging with increasing
L see ﬁgure 4.7.
30
0 50 100 150
−5
0
5
10
A/r
0 50 100 150
0
2
4
6
8
B/r
0 50 100 150
−15
−10
−5
0
5
W/r
Figure 4.4: The second eigenvector of the perturbation about the ground state
0 50 100 150
−5
0
5
10
A/r
0 50 100 150
−2
0
2
4
B/r
0 50 100 150
−15
−10
−5
0
5
W/r
Figure 4.5: The third eigenvector of the perturbation about the ground state
31
30 40 50 60 70 80 90
−2
0
2
4
6
8
10
12
14
16
18
x 10
−7
d
i
f
f
e
r
e
n
c
e
i
n
e
i
g
e
n
v
a
l
u
e
N
Figure 4.6: The change in the sample eigenvalue with increasing values of N (L = 150)
100 110 120 130 140 150 160 170 180 190
−0.01
−0.009
−0.008
−0.007
−0.006
−0.005
−0.004
−0.003
−0.002
−0.001
0
d
i
f
f
e
r
e
n
c
e
i
n
e
i
g
e
n
v
a
l
u
e
L
Figure 4.7: The change in the sample eigenvalue with increasing values of L (N = 60)
32
4.3 Perturbation about the second state
We now consider perturbation about the second state, when the unperturbed wavefunction
has one zero and the energy eigenvalue is 0.0308.
Using the same method we compute the numerical solutions of the perturbation about
the second bound state. This time we obtain some eigenvalues with nonzero real parts.
We note that the eigenvalues have the symmetries expected from section 4.2, which we
expected from the equations. In ﬁgure 4.8 we plot the lower eigenvalues for the case where
N = 60 and L = 150. We also plot all the eigenvalues obtained in ﬁgure 4.9 on a diﬀerent
scale, to see that up to these limits there are no other complex ones. The lowest eigenvalues
about the second state are presented in table 4.2.
±0 −0.00000003648902i
±0 −0.00300149174300i
±0 −0.00859859681740i
±0.00139326930981 −0.01004023179300i
±0.00139326930981 +0.01004023179300i
±0 −0.01533675005044i
±0 −0.02105690867367i
±0 −0.02761845529494i
Table 4.2: Eigenvalues of the perturbation about the second state
We have some complex eigenvalues so if our results are to satisfy the result in section 3.4
we require that
∞
0
¯
ABdr = 0. To see whether this is the case up to numerical error we
compute
Q =

L
0
¯
ABdr
(
L
0
A
2
dr)
1
2
(
L
0
B
2
dr)
1
2
, (4.10)
If this much less than one then we know that
∞
0
¯
ABdr = 0. In the case where L = 145
and N = 60 we present in table 4.3 the calculated values of Q with the eigenvalues.
λ Q
0 −0.00000003648902i 0.22194825684122
0 −0.00300149174300i 0.23476646058070
0 −0.00859859681740i 0.36801505735057
−0.00139326930981 −0.01004023179300i 3.533821320897923e −14
0.00139326930981 + 0.01004023179300i 3.919537745928816e −14
0 −0.01533675005044i 0.89372677914625
Table 4.3: Q for diﬀerent eigenvalues of the perturbation about the second state
33
−1.5 −1 −0.5 0 0.5 1 1.5
x 10
−3
−0.04
−0.03
−0.02
−0.01
0
0.01
0.02
0.03
0.04
Eigenvalues of the perturbation equation
Figure 4.8: The lowest eigenvalues of the perturbation about the second bound state
−1.5 −1 −0.5 0 0.5 1 1.5
x 10
−3
−150
−100
−50
0
50
100
150
Eigenvalues of the perturbation equation
Figure 4.9: All the computed eigenvalues of the perturbation about the second bound state
34
0 50 100 150
−5
0
5
10
A/r
0 50 100 150
−5
0
5
10
x 10
6
i
m
a
g
i
n
a
r
y
p
a
r
t
B/r
0 50 100 150
0
5
10
15
20
W/r
Figure 4.10: The ﬁrst eigenvector of the perturbation about the second bound state, Note
the scales
0 50 100 150
−5
0
5
10
A/r
0 50 100 150
−5
0
5
10
15
B/r
0 50 100 150
0
5
10
15
20
W/r
Figure 4.11: The second eigenvector of the perturbation about the second bound state
35
0 50 100 150
−5
0
5
10
A/r
0 50 100 150
−60
−40
−20
0
20
B/r
0 50 100 150
−10
0
10
20
W/r
Figure 4.12: The third eigenvector of the perturbation about the second bound state
We note that for the eigenvalues with nonzero real part Q is almost zero, so the results
of section 3.4 are conﬁrmed. In ﬁgure 4.10 we plot the eigenvector corresponding to the near
zero eigenvalue. Note that this is approximately B = R
0
hence this eigenvector corresponds
to a phase rotation, i.e a trivial zeromode.
In ﬁgures 4.11, 4.12 we plot the next two eigenfunctions corresponding to the next two
eigenvalues.
4.4 Perturbation about the higher order states
We now consider the numerical method about the third state, with N = 60 and L = 450.
In ﬁgure 4.13 we have the eigenvalues of the perturbation about the third state, the state
with just two zeros in the unperturbed wavefunction. The eigenvalues are presented in
table 4.4.
Note again that the ﬁrst near zero eigenvalue just corresponds to a phase rotation. In
ﬁgure 4.14 we plot the eigenfunction corresponding to the next eigenvalue after the near
zero one. We note that the result of section 3.4 can be conﬁrmed.
In ﬁgure 4.15, 4.16 we plot the eigenfunction of a complex eigenvalue. In ﬁgure 4.15 the
real part is plotted and in ﬁgure 4.16 the imaginary part.
Now for the fourth bound state with L = 700 and N = 100 we have eigenvalues which
are presented in table 4.5.
36
−6 −4 −2 0 2 4 6
x 10
−4
−8
−6
−4
−2
0
2
4
6
8
x 10
−3
Eigenvalues of the perturbation equation
Figure 4.13: The eigenvalues of the perturbation about the third bound state
0 50 100 150 200 250 300 350 400 450
−2
0
2
4
A/r
0 50 100 150 200 250 300 350 400 450
−10
−5
0
5
B/r
0 50 100 150 200 250 300 350 400 450
−2
0
2
4
6
W/r
Figure 4.14: The second eigenvector of the perturbation about the third bound state
37
0 50 100 150 200 250 300 350 400 450
−2
0
2
4
A/r
0 50 100 150 200 250 300 350 400 450
−2
0
2
4
B/r
0 50 100 150 200 250 300 350 400 450
−2
0
2
4
6
W/r
Figure 4.15: The third eigenvector of the perturbation about the third bound state
0 50 100 150 200 250 300 350 400 450
−2
0
2
4
A/r
0 50 100 150 200 250 300 350 400 450
−10
−5
0
5
i
m
a
g
i
n
a
r
y
p
a
r
t
B/r
0 50 100 150 200 250 300 350 400 450
−2
0
2
4
6
W/r
Figure 4.16: The third eigenvector of the perturbation about the third bound state
38
±0.00000001236655
±0 −0.00078451579204i
±0.00051994798783 −0.00225911281180i
±0.00051994798783 +0.00225911281180i
±0 −0.00297504982576i
±0 −0.00368240791656i
±0 −0.00431823282039i
±0.00039265148039 −0.00504446459212i
±0.00039265148039 +0.00504446459212i
±0 −0.00557035273905i
Table 4.4: Eigenvalues of the perturbation about the third state
±0.00000000975044
±0 −0.00031089278959i
±0.00022482647750 −0.00088225206795i
±0.00022482647750 +0.00088225206795i
±0 −0.00134992819716i
±0.00017460656919 −0.00168871696256i
±0.00017460656919 +0.00168871696256i
±0 −0.00185003885932i
±0 −0.00220985235805i
±0 −0.00260936445117i
±0.00015985257765 −0.00308656938411i
±0.00015985257765 +0.00308656938411i
±0 −0.00340215999936i
Table 4.5: Eigenvalues of perturbation about the fourth state
There we notice that there are three quadruples.
4.5 Bound on real part of the eigenvalues
From section 3.5 we have a bound on the real part of the eigenvalues of the perturbation
equations. In table 4.6 we compare this bound with the values obtained, and see that it is
satisfactorily conﬁrmed.
4.6 Testing the numerical method by using RungeKutta in
tegration
The result obtained via the Chebyshev numerical methods can be veriﬁed by using Runge
Kutta integration as in section 2.2.1, that is we convert (3.24) into a set of ﬁrst order
39
−2.5 −2 −1.5 −1 −0.5 0 0.5 1 1.5 2 2.5
x 10
−4
−5
−4
−3
−2
−1
0
1
2
3
4
5
x 10
−3
Eigenvalues of the perturbation equation
Figure 4.17: The eigenvalues of the perturbation about the fourth bound state
State Numerical maximum real part Bound
1 0 0.00362396540325
2 0.0013932693098 0.00157632209179
3 0.0005199479878 0.00100532361160
4 0.0002248264775 0.00073784267376
5 0.0001143754114 0.00058275894209
6 0.0000652878367 0.00048153805083
Table 4.6: Bound of the real part of the eigenvalues
O.D.E’s,
Y
1
= Y
2
, (4.11a)
Y
2
= −V
0
Y
1
−iλY
1
+R
0
Y
5
, (4.11b)
Y
3
= Y
4
, (4.11c)
Y
4
= −V
0
Y
3
−iλY
3
, (4.11d)
Y
5
= Y
6
, (4.11e)
Y
6
= 2R
0
Y
3
, (4.11f)
where Y
1
= A, Y
3
= B, Y
5
= W and λ is an eigenvalue. The boundary conditions at the
origin are Y
1
(0) = 0, Y
3
(0) = 0 and Y
5
(0) = 0. For the initial conditions on Y
6
, Y
4
and
40
0 5 10 15 20 25 30 35 40
−5
0
5
10
A/r
0 5 10 15 20 25 30 35 40
−1
0
1
2
x 10
7
B/r
0 5 10 15 20 25 30 35 40
−20
−15
−10
−5
0
W/r
Figure 4.18: The ﬁrst eigenvector of the perturbation about the ground state, using Runge
Kutta integration; compare with ﬁgure 4.3
Y
2
we can use the values obtained from the eigenvectors obtain when we solved the matrix
eigenvalue problem.
In ﬁgures 4.18, 4.19 we plot the result doing a RungeKutta integration on (4.11) for
the ﬁrst two eigenvalues of (3.24) with R
0
and V
0
corresponding to the ﬁrst bound state.
The ﬁrst two eigenvalues are those worked out in section 4.2. We note that except for the
blowing up near the ends which we expect since the RungeKutta method is sensitive to
inaccuracies in the initial data, the solutions obtain by RungeKutta integration correspond
to the eigenvectors obtain by solving (4.5).
Proceeding to do RungeKutta integration on the results obtained with the perturbation
about the second bound state in section 4.3, we plot the results obtained in ﬁgures 4.20,
4.21.
To test the perturbation about the third bound state, we proceed in the same way. The
result plotted ﬁgure 4.22 show the ﬁrst eigenvector of the perturbation about the third
bound state. Figures 4.23, 4.24 show the second eigenvector with real and imaginary parts.
4.7 Conclusion
In this section, we have analysed the linear stability of the stationary sphericallysymmetric
states using a spectral method, and have checked the results with a RungeKutta method.
41
0 5 10 15 20 25 30 35 40 45 50
−5
0
5
10
15
A/r
0 5 10 15 20 25 30 35 40 45 50
0
5
10
15
B/r
0 5 10 15 20 25 30 35 40 45 50
−15
−10
−5
0
W/r
Figure 4.19: The second eigenvector of the perturbation about the ground state, using
RungeKutta integration; compare with ﬁgure 4.4
0 5 10 15 20 25 30 35 40 45 50
−40
−20
0
20
A/r
0 5 10 15 20 25 30 35 40 45 50
−2
0
2
4
6
x 10
8
B/r
0 5 10 15 20 25 30 35 40 45 50
−60
−40
−20
0
W/r
Figure 4.20: The ﬁrst eigenvector of the perturbation about the second bound state, using
RungeKutta integration; compare with ﬁgure 4.10
42
0 50 100 150
−5
0
5
10
A/r
0 50 100 150
−5
0
5
10
15
B/r
0 50 100 150
0
5
10
15
20
W/r
Figure 4.21: The second eigenvector of the perturbation about the second bound state,
using RungeKutta integration; compare with ﬁgure 4.11
0 20 40 60 80 100 120 140 160 180 200
−10
−5
0
5
A/r
0 20 40 60 80 100 120 140 160 180 200
−10
−5
0
5
B/r
0 20 40 60 80 100 120 140 160 180 200
0
2
4
6
8
W/r
Figure 4.22: The second eigenvector of the perturbation about the third bound state, using
RungeKutta integration; compare with ﬁgure 4.14
43
0 20 40 60 80 100 120 140 160 180 200
−15
−10
−5
0
5
A/r
0 20 40 60 80 100 120 140 160 180 200
−10
0
10
20
B/r
0 20 40 60 80 100 120 140 160 180 200
0
2
4
6
W/r
Figure 4.23: The real part of third eigenvector of the perturbation about the third bound
state, using RungeKutta integration; compare ﬁgure 4.15
0 20 40 60 80 100 120 140 160 180 200
−20
0
20
40
i
m
a
g
i
n
a
r
y
p
a
r
t
A/r
0 20 40 60 80 100 120 140 160 180 200
−30
−20
−10
0
10
i
m
a
g
i
n
a
r
y
p
a
r
t
B/r
0 20 40 60 80 100 120 140 160 180 200
−10
−5
0
5
W/r
i
m
a
g
i
n
a
r
y
p
a
r
t
Figure 4.24: The imaginary part of third eigenvector of the perturbation about the third
bound state, using RungeKutta integration; compare with ﬁgure 4.16
44
We have checked convergence of the eigenvalues obtained by the spectral method under
increase of the number N of Chebyshev points and the radius L of the spherical grid. The
calculation of section 4.3 showed that complex eigenvalues necessarily occur in quadruples
(λ,−λ,
¯
λ and −
¯
λ) while real or complex ones occur in pairs. In the numerical calculation,
we ﬁnd that:
• the ground state is linearly stable, in that the eigenvalues are purely imaginary;
• the nth excited state, which is to say the (n+1)th state, has n quadruples of complex
eigenvalues as well as pureimaginary pairs;
• no purely real eigenvalues appear, and in particular no zeromodes occur.
Thus only the ground state, which we saw in section 1.2 is the absolute minimiser of
the conserved energy, is linearly stable. All other sphericallysymmetric stationary states
are linearly unstable.
In section 3.4, we found that complex eigenvalues could only arise if a certain integral
of the perturbed wavefunction was zero, and this is satisfactorily veriﬁed by numerical
solutions found here.
We must now turn to the nonlinear stability of the ground state, which requires a
numerical evolution of the nonlinear equations.
45
Chapter 5
Numerical methods for the
evolution
5.1 The problem
In the next two chapters, the aim is to ﬁnd and test a numerical technique for solving (1.5)
with the restriction of spherical symmetry. We want to evolve an initial ψ long enough to see
the dispersive eﬀect of the Schr¨ odinger equation and the concentrating eﬀect of the gravity.
In particular, we want to see how far the linearised analysis of chapter 3 and chapter 4 is
an accurate picture.
In this chapter, we consider what the problems are in this programme, and methods for
dealing with them. Chapter 6 contains the numerical results.
Since the problem is nonlinear, we ﬁrst consider the simpler problem of solving the
timedependent Schr¨ odinger equation in a ﬁxed potential. We need a numerical method for
evolution, and a technique for dealing with the boundary that will allow the wave function
to escape from the grid, keeping reﬂection from the boundary to a minimum. This is the
content of section 5.2 to section 5.4 below. In section 5.5, we consider how to test the
method against an explicit solution of the zeropotential Schr¨ odinger equation, and what
we can check with a nonzero but ﬁxed potential. In section 5.7, we face the problem
of evolving the full SN equations, which is to say evolving the potential as well as the
wavefunction. We describe an iteration for this and list some checks which we can make
on the reliability and convergence of the method. Finally, in section 5.10 we introduce
the notion of residual probability which we may describe as follows. On the basis of the
linearised calculations of chapter 3 and chapter 4, we expect the ground state to be the only
stable state. Thus we have a preliminary picture of the evolution: any initial condition will
disperse to large distances (i.e. oﬀ the grid) leaving a remnant at the origin consisting of a
ground state rescaled as in subsection 1.2.2 and with total probability less than one. If the
46
initial condition is a state with negative energy and this preliminary picture is sound then
we can estimate the residual probability remaining in this rescaled ground state.
5.2 Numerical methods for the Schr¨ odinger equation with
arbitrary time independent potential
We consider numerical methods that solve the 1dimensional Schr¨ odinger equation with
given initial data, that is solving,
i
∂ψ
∂t
= −
∂
2
ψ
∂x
2
+ψφ. (5.1)
We note that Numerical Recipes on Fortran [20] and Goldberg et al [9] use a numerical
method to solve the Schr¨ odinger equation which they say needs to preserve the Hamiltonian
structure of the equation or equivalently the numerical method is time reversible. To see
why this is consider the case of an explicit method given by:
i
ψ
n+1
−ψ
n
δt
= D
2
(ψ
n
) +φψ
n
, (5.2)
where ψ
n
is the wavefunction at the nth time step and D
2
is the approximation for the
diﬀerentiation operator. Examples of what D
2
can be are ﬁnite diﬀerence or a Chebyshev
diﬀerentation matrix. Now we consider what happens to the mode e
ikx
, when
D
2
(e
ikx
) = −k
2
e
ikx
. (5.3)
Letting ψ
n+1
= λe
ikx
and substituting into (5.2) we get
λ = 1 −iδt(k
2
+φ). (5.4)
So here we have an ampliﬁcation factor of λ which provided k
2
+ φ = 0 has λ > 1 and
this method leads to a growth in the normalisation of the mode. If we used an implicit
method instead, that is:
i
ψ
n+1
−ψ
n
δt
= D
2
(ψ
n+1
) +φψ
n+1
. (5.5)
We get an amplifying factor of
λ =
1
1 + iδt(k
2
+φ)
. (5.6)
So in this case we note that λ < 1 provided k
2
+ φ = 0, that is, this method leads to
decaying modes.
We could consider renormalising the wavefunction as the numerical method progresses,
but this would have several draw backs. At each time step the modes get ampliﬁed by
47
diﬀerent amounts so the ratio in diﬀerent modes will change. Also for the SN equation
renormalisation would require rescaling the space axis which would need a interpolation at
each step.
To get a numerical method that will preserve the constants of the evolution of the
equation (namely the normalisation) we need to use a CrankNicholson method. With this
method the discretization of the Schr¨ odinger equation is:
i
ψ
n+1
−ψ
n
δt
= −
1
2
(D
2
(ψ
n+1
) +D
2
(ψ
n
)) +
φ
2
(ψ
n+1
+ψ
n
), (5.7)
which has an amplifying factor of
λ =
1 −
1
2
iδt(k
2
+φ)
1 +
1
2
iδt(k
2
+φ)
, (5.8)
which is unitary (λ = 1) since it is in the form of a number over its complex conjugate.
Since the boundary conditions are straightforward a spectral method, that is Chebyshev
diﬀerentation matrix, is seen to give better results for less computation time compared to
ﬁnite diﬀerence method.
5.3 Conditions on the time and space steps
We note that the CrankNicholson method might be unitary, but that does not mean that
the phase is correct. Suppose we consider the case of the linear Schr¨ odinger equation in one
dimension with φ = 0 which has solutions of the form ψ
k
(x, t) = e
−ik
2
t
e
ikx
. Now the wave
evolves with a change in phase factor (λ) of:
λ =
1 −
1
2
iδt(k
2
)
1 +
1
2
iδt(k
2
)
, (5.9)
which is just that of the factor obtained in (5.8). Expanding λ in powers of δt we have:
λ = (1 −
i
2
δtk
2
−
1
2
δt
2
k
4
+
i
4
δt
3
k
6
+. . .)(1 −
i
2
δtk
2
),
= 1 −iδk
2
t −
1
2
δt
2
k
4
−
iδt
3
k
6
2
+. . . . (5.10)
while the actual phase factor should be :
e
−ik
2
δt
= 1 −iδtk
2
−
δt
2
k
4
2
+
ik
6
δt
3
6
+h.o.t (5.11)
So to make the other terms small we can require
k
6
δt
3
12
to be small, as in [9]. The phase
factor is correct but only to this order.
48
Now suppose that ψ is a stationary state solution of the Schr¨ odinger equation with
potential. We expect the solution to evolve like ψ
n
= e
(−iEt)
ψ
0
where E is the energy of
the stationary solution. The discretized method in the Schr¨ odinger equation is :
i
ψ
n+1
+ψ
n
δt
=
1
2
[−D
2
(ψ
n+1
) −D
2
(ψ
n
) +φ(ψ
n+1
+ψ
n
)]. (5.12)
Now since ψ
n
and ψ
n+1
are stationary states we know that,
−D
2
(ψ
N
) +φψ
N
= Eψ
N
, (5.13)
for any N. Substitution into the method gives,
i(ψ
n+1
−ψ
n
) =
δtE
2
ψ
n+1
+
δtE
2
ψ
n
, (5.14)
and so we obtain,
ψ
n+1
i
= (1 +
iδtE
2
)
−1
(1 −
iδtE
2
)ψ
n
. (5.15)
Expanding out with 
δtE
2
 < 1 we have,
ψ
n+1
= [1 −δtE +
(−iδtE)
2
2!
+
(−iδtE)
3
4
+O(δt
4
)]ψ
n
. (5.16)
In order that the phase is calculated correctly we require that the error due to the term
δt
3
E
3
12
be small.
5.4 Boundary conditions and sponges
The boundary conditions which we want to impose on the Schr¨ odinger equation are that
the wavefunction must remain ﬁnite as r tends to zero and the wavefunction must tend to
zero as r tends to inﬁnity, since otherwise the wavefunction would not be normalisable. To
try to impose these boundary conditions numerically we can solve the Schr¨ odinger equation
in rψ = χ in which case it becomes:
iχ
t
= −χ
rr
+φχ. (5.17)
The boundary condition at r = 0 is just a matter of setting χ = 0 for the ﬁrst point in the
domain. The other boundary condition can be approximated in one of the following ways:
• We can at a given r = R say set that χ =
ψ
r
where ψ is the actual analytic solution.
The problem with this is that only in a certain few cases will we know the actual
solution.
49
• We set the condition that χ = 0 at r = R for R large so that the other boundary
condition is approximated. The problem with this boundary condition is that it will
reﬂect all the outward going probability and send it back towards the origin.
• The other thing we can do to reduce the eﬀect of probability bouncing back is to
introduce sponge factors at r = R were R is large. That is, we consider instead of the
Schr¨ odinger equation the equation,
(i +e(r))χ
t
= −χ
rr
+φχ, (5.18)
where the function e(r) is a strictly negative (one sign) so that the equation acts like
a heat equation to reduce the probability which is assumed to be heading oﬀ the grid.
There will also be a reduction in the normalisation integral. For example we take
e(r) = e
(0.1(r−R))
, (5.19)
so as give a smooth function which only has an eﬀect at the boundary since for small
r the sponge factor eﬀectively vanishes. There will still be some reﬂection oﬀ the
sponge but this is a smaller eﬀect.
5.5 Solution of the Schr¨ odinger equation with zero potential
To check that our numerical method is functioning correctly we aim to check it in the case
of spherical symmetry with the same boundary condition where a solution is known both
analytically and numerically. This also gives a check on the boundary conditions.
In the spherically symmetric case with zero potential a solution to the Schr¨ odinger
equation is:
rψ = χ =
C
√
σ
(σ
2
+ 2)
1
2
[exp(−
(r −vt −a)
2
2(σ
2
+ 2it)
+
ivr
2
−
iv
2
t
4
)
−exp(−
(r +vt +a)
2
2(σ
2
+ 2it)
−
ivr
2
−
iv
2
t
4
)], (5.20)
we shall call this a moving Gaussian shell, when t = 0. Here we have the boundary condition
that χ = 0 at r = 0 and χ = 0 at r = ∞. This is a wave bump “moving” at a velocity
v starting at a distance a from the origin. C is chosen such that the wavefunction is
normalised. That is:
1 = C
2
√
π[1 −exp(
−a
2
σ
2
−
v
2
σ
2
4
)]. (5.21)
We note that C will not exist when exp(−
a
2
σ
2
−
v
2
σ
2
4
) = 1 which is when v = 0 and a = 0,
but in this case we will have the limiting solution:
50
rψ = χ =
Ar
(σ
2
+ 2it)
3
2
exp(−
r
2
2(σ
2
+ 2it)
), (5.22)
where A is such that
A
2
=
2σ
3
√
π
. (5.23)
Note that the solution (5.20) will in the long term tend to zero everywhere, since there is
no gravity or other forces to stop the natural dispersion of the wave equation.
We note that we cannot numerically model the boundary condition at r = ∞ so we
can do the following things to check the numerical calculation of the solution, and also to
approximate the eﬀect of the boundary condition at inﬁnity:
• Check that normalisation is preserved, which should be the case since the numerical
method preserves this, when there are no sponge factors or when there is no probability
moving oﬀ the grid.
• We can set the boundary value at r = R to be the value of the analytical solution at
that point. We note that this will only work for the case where the analytical solution
is known; that is, it can only be done for testing.
• We can set the boundary condition to be χ = 0 at r = R where R is chosen so that it
is large compared to the initial data. This method will cause the wave to reﬂect back
oﬀ the boundary.
• We can use sponge factors, that is, change the equation so that is becomes a type of
heat equation, where the heat is absorbed at the boundary. We note that this will
reduce the wave which is reﬂected from the boundary, but that normalisation will not
be preserved.
5.6 Schr¨ odinger equation with a trial ﬁxed potential
We can consider the Schr¨ odinger equation with a potential which is constant with respect
to time. In this case we know that the energy E is preserved where E is given by
E = T +V, (5.24)
where
T =
∇ψ
2
d
3
x, (5.25)
51
and
V =
φψ
2
d
3
x, (5.26)
with the choice of potential φ =
1
(r+1)
we can work out the bound states, that is we can
solve the eigenvalue problem
∇
2
ψ +φψ = Eψ (5.27)
This gives us eigenvalues E
n
say with eigenfunctions ψ
n
where the ψ
n
are independent of
time. We know by general theory that
ψ
n
ψdr = c
n
e
iE
n
t
. (5.28)
where ψ is a general wavefunction and c
n
is a constant.
In summary we can check in this case,
• Normalisation.
• Energy of the wavefunction.
• The inner products with respect to the bound state to see that they are of constant
modulus and the phase change correctly
5.7 Numerical evolution of the SN equations
The method that Bernstein et al [4] use for numerical evolution of the wavefunction of the
SN equations consists of a CrankNicholson method for the timedependent Schr¨ odinger
equation and an iterative procedure to obtain the potential at the next time step, the
ﬁrst guess being that of the current potential. They transform the SN equations in the
spherically symmetric case into a simpler form which is:
i
∂u
∂t
= −
∂
2
u
∂r
2
+φu, (5.29)
d
2
(rφ)
dr
2
=
u
2
r
, (5.30)
where u = rψ.
Then the following procedure is used to calculate the wavefunction and potential at the
next time step:
1. Choose φ
n+1
= φ
n
to be an initial guess for φ at the (n + 1)th time step.
52
2. Calculate u
n+1
from the discretized version of the Schr¨ odinger equation, which is
2i
u
n+1
−u
n
δt
= −D
2
(u
n+1
) −D
2
(u
n
)
+[φ
n+1
u
n+1
+φ
n
u
n
], (5.31)
where this equation is solved on the region r ∈ [0, R] and R is large, where there are
no sponge factors. These could be added to stop outgoing waves being reﬂected from
the imposed boundary condition at r = R.
3. Calculate the Φ which is the solution of the potential equation for the u
n+1
. That is
Φ that solves:
d
2
(rΦ)
dr
2
=
u
n+1

2
r
. (5.32)
4. Calculate U by the discretized version of the Schr¨ odinger equation with Φ being the
potential at the (n + 1) step, that is:
2i
U −u
n
δt
= −D
2
(U) −D
2
(u
n
)
+[ΦU +φ
n
u
n
], (5.33)
where the boundary conditions are the same as in step 2.
5. Consider U − u
n+1
 if less than a certain tolerance stop else take φ
n+1
= Φ and
continue with step 2.
This outline of a general method is independent of the Poisson solver and numerical
method used to evolve the Schr¨ odinger equation. That is, we could use either a ﬁnite
diﬀerence method or spectral methods. We note that CrankNicholson is second order
accurate with respect to time and is such that it preserves the normalisation. This is the
reason why we need to work out the potential at the n + 1 time step instead of just using
an implicit method in the potential.
5.8 Checks on the evolution of SN equations
We recall that as we saw in section 1.2 for the SN equations the energy is no longer
conserved, but the action I = T +
1
2
V is. We also note that if ψ corresponds to a bound
state with energy eigenvalue E
n
then
I =
1
3
E
n
. (5.34)
Thus it is useful to call 3I the conserved energy E
I
. In the SN equation in general we can
check:
53
• Preservation of the normalisation, at least when it is not aﬀected by the boundary
condition due to the sponge factors.
• Preservation of the action I = T +
1
2
V , at least when not aﬀected by the sponge.
• That the change in the potential function satisﬁes φ
t
= i
r
0
(
¯
ψψ
r
−ψ
¯
ψ
r
)dr ,at least in
the case of spherical symmetry.
In the case of evolving about a stationary state we can check
• That the phase factor should evolve at a constant rate at the corresponding energy of
the bound state at least for a while.
5.9 Mesh dependence of the methods
To be sure that the evolution method produces a valid solution up to a degree of numerical
error, we need to show that as the mesh and time steps decrease that the method converges
to a solution. In the case of the ﬁnite diﬀerence method we expect errors to be second
order in time and space steps. We expect this since the truncation error in the case of just
solving the linear Schr¨ odinger equation is second order. We also know that the method is
unconditionally stable, that is there is no requirement on the ratio
∆t
∆x
2
, so we can reﬁne the
mesh in both direction independently and expect convergence. To test for this convergence
we can use “Richardson Extrapolation”. In a case where we expect the error to be like
O
h
∼ I +Ah
p
, (5.35)
where O
h
is the calculated value with space or time step size h, I actual value and p the
order of the error. To get just a function of p we can calculate:
O
h
1
−O
h
3
O
h
2
−O
h
3
, (5.36)
where h
1
,h
2
and h
3
are three diﬀerent values of the step size. We note in the case of the
spectral methods we do not expect the method to be second order in space, but we do still
expect it to be second order in time.
5.10 Large time behaviour of solutions
For the Schr¨ odinger equation with a ﬁxed potential, the general solution will consist of a
linear combination of bound states and a scattering state. For large times, the scattering
state will disperse leaving the bound states. For the SN equations, we have seen in chapter 4
54
that the bound states apart from the ground state are all linearly unstable, and we shall see
the same thing for nonlinear instability in chapter 6. Consequently, we might conjecture
that the solution at large values of time for the SN equations with arbitrary initial data
would be a combination of a multiple of the ground state, rescaled as in subsection 1.2.2 to
have total probability less than one, together with a scattering state containing the rest of
the probability but which disperses. We shall see support for this conjecture in chapter 6.
Assuming the truth of this picture, if the conserved energy is negative initially we can
obtain a bound for the probability remaining in the multiple of the ground state.
What we mean by a multiple of the ground state is the following: if ψ
0
(r) is the ground
state then consider ψ
0α
(r) = α
2
ψ
0
(
r
α
). It follows from the transformation rule for the SN
equations (subsection 1.2.2), that this is a stationary state but with:
∞
0
ψ
0α

2
= α, (5.37)
We also note that
E
0α
= α
3
E
0
. (5.38)
So if we start with an initial value E
I
of the action which is negative then we claim
E
I
= E
S
+α
3
E
0
, (5.39)
for some α, where E
S
is the scattered action and α
3
E
0
is the energy due to the remaining
probability near the origin. So then we have that
α
3
=
(E
S
+E
I
)
E
0

, (5.40)
since the scattered energy is positive, which leads to the inequality
α
3
>
E
I
E
0
. (5.41)
55
Chapter 6
Results from the numerical
evolution
6.1 Testing the sponges
To test the ability of the sponge factor to absorb an outward going wave, we use the initial
condition of an outward going wave from (5.5). We can use the exponential function of
(5.20) at t = 0,
0
50
100
150
200
0
50
100
150
200
0
0.05
0.1
0.15
0.2
0.25
0.3
0.35
time
Lump moving outwards
r

r
ψ

Figure 6.1: Testing the sponge factor with wave moving oﬀ the grid
56
0
50
100
150
200
0
50
100
150
200
0
0.05
0.1
0.15
0.2
0.25
0.3
0.35
time
Lump moving outwards
r

r
ψ

Figure 6.2: Testing sponge factor with wave moving oﬀ the the grid
rψ =
C
√
σ
(σ
2
+ 2it)
1
2
[exp(−
(r −vt −a)
2
2(σ
2
+ 2it)
+
ivr
2
−
iv
2
t
4
)
−exp(−
(r +vt +a)
2
2(σ
2
+ 2it)
−
ivr
2
−
iv
2
t
4
)] (6.1)
Here v is chosen so that this corresponds to an outward going wave and is suﬃciently
large so that the wavefunction will scatter, C is the normalisation factor and a and σ
determine the position and shape of the distribution.
In ﬁgure 6.1 we plot a Gaussian shell moving oﬀ the grid, here the sponge factor is
−10 exp(0.5(r − R
end
)). This sponge factor gives a gradual increase toward the boundary,
but we notice that there is some reﬂection oﬀ the boundary.
In ﬁgure 6.2 we plot an Gaussian shell moving oﬀ the grid, here the sponge factor is
−2 exp(0.2 ∗ (r − R
end
)). We note that this sponge factor work betters than one used for
ﬁgure 6.1 since it reﬂects less back from the boundary.
6.2 Evolution of the ground state
The ground state is a stationary state and is linearly stable by chapter 4 and we expect it to
be nonlinearly stable for small perturbations, since it has the lowest energy. We note that
due to inaccuracy in the calculation of the ground state and the numerical error that will be
57
0
20
40
60
80
100
120
140
0
5
10
15
20
25
30
−4
−2
0
2
4
r
time
evolution of the angle , starting with the ground state
p
h
a
s
e
Figure 6.3: The graph of the phase angle of the ground state as it evolves
made in the evolution, the numerical evolution will actually be the evolution of the ground
state with a small perturbation, but this shouldn’t make any diﬀerence to the overall end
result here.
Since the ground state is a solution to the timeindependent SN equations we expect
that there should be a constant change in phase corresponding to the energy of the state
and the absolute value of the wavefunction should be constant.
Here we do get a constant phase (see ﬁgure 6.3) and we can check that it is of the correct
value by calculating the energy of the wavefunction.
We can also check the conservation of energy and the conservation of normalisation.
The next thing to try is the long time evolution of the ground state. Now we evolve the
ground state for a long time, about 1E5 in the time scale which is about 5E5 time steps.
We ﬁnd a growing oscillation which reaches a bound and stabilises at a ﬁxed oscillation, see
ﬁgures 6.4, 6.5.
By adjusting the time step, we can reduce the amplitude of the limiting oscillation. We
also can adjust the sponge factor to absorb more at the boundary to reduce the amplitude
of the boundary. So therefore we know that this oscillation about the ground state is
introduced by numerical error. We also deduce that the errors occur due to the long tail
of the ground state at which the wavefunction is approximately zero and small error here
will have a great eﬀect on the normalisation and the energy so causing more error close to
58
0 0.5 1 1.5 2 2.5 3
x 10
4
0.038
0.04
0.042
0.044
0.046
0.048
0.05
0.052
graph to show the boundness of the error when evolving around ground state
time
a
b
s
w
a
v
e
−
f
u
n
c
t
i
o
n
n
e
a
r
o
r
i
g
i
n
Figure 6.4: The oscillation about the ground state
0
5
10
15
20
25
30
35
40
45
50
0
0.5
1
1.5
2
2.5
3
3.5
x 10
4
0
0.2
0.4
time
r
evolution about ground state
r

ψ

Figure 6.5: Evolution of the ground state
59
−1 −0.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8 1
−0.08
−0.06
−0.04
−0.02
0
0.02
0.04
0.06
0.08
Eigenvalues of the perturbation equation
Figure 6.6: The eigenvalue associated with the linear perturbation about the ground state
the origin.
Now we want to consider an evolution of the ground state with deliberately added
perturbations. The aim here being to see if we can induce ﬁnite perturbation around the
ground state. We expect that these would be at the frequencies determined by the ﬁrst
order linear perturbation about the ground state and the corresponding higher order linear
perturbations, which are all imaginary.
We take a ground state, add on to it an exponential distribution and evolve. Then we
can Fourier transform the absolute value of the wavefunction near the origin to obtain the
diﬀerent frequencies that make up the oscillation. We can compare these observed frequen
cies to those which were obtained in chapter 4 doing the ﬁrst order linear perturbation.
We use in our evolution method, N = 50 Chebyshev points and dt = 1 as the time step
to obtain n = 170, 000 sampling points. With this amount of sampling points and time
step we should be able to obtain the frequencies to an accuracy of
π
ndt
= 3.6E − 5. In
ﬁgure 6.7 we plot a section of the oscillation about the ground state at a given point. In
ﬁgure 6.8 we plot the Fourier transform of the oscillation about the ground state at a given
point. In table 6.1 we present the frequencies observed (see ﬁgure 6.8) with those obtained
in chapter 4, which are plotted in ﬁgure 6.6.
60
0 500 1000 1500 2000 2500 3000 3500 4000
0.256
0.257
0.258
0.259
0.26
0.261
graph rψ near the origin, evolving about the ground state
time

r
ψ

a
t
r
=
3
.
6
Figure 6.7: The oscillation about the ground state at a given point
−3.6 −3.4 −3.2 −3 −2.8 −2.6 −2.4 −2.2 −2
−16
−15
−14
−13
−12
−11
−10
−9
−8
−7
log(frequency)
l
o
g
(
f
o
u
r
i
e
r
t
r
a
n
s
f
o
r
m
)
Fourier transform of oscillation about the ground state
Figure 6.8: The Fourier transform of the oscillation about the ground state
61
linearised frequencies frequencies log of size of peak .
±0.0341 0.0341 −8.300
±0.0603 0.0602 −7.632
±0.0688 0.0687 −8.265
±0.0731 0.0730 −9.038
±0.0765 0.0764 −11.603
±0.0810 0.0810 −13.431
±0.0867 0.0866 −14.466
±0.0934 0.0943 −13.102
±0.1012 0.1028 −14.377
±0.1099 0.1071 −15.115
±0.1196 0.1204 −16.322
Table 6.1: Linearised eigenvalues of the ﬁrst state and observed frequencies about it, n =
170, 000
6.3 Evolution of the higher states
6.3.1 Evolution of the second state
We note that whether or not the higher stationary states are nonlinearly stable they should
evolve for some time like stationary states before numerical errors grow suﬃciently to make
them decay.
For the evolution of the second state we ﬁnd that we have again the correct phase factor
(see ﬁgure 6.9).
There are a few issues to look at with the evolution of the higher stationary states when
the state are evolved for long enough.
1. Do they decay when perturbed into some scatter and some multiple of the ground
state? Numerical calculation of a stationary state is only correct up to numerical
errors, so if the state is nonlinearly unstable it is likely that it will decay from just
evolving.
2. How long does the state take to decay?
3. What is the amount of probability left in the ground state?
4. Is this above the estimate of section 5.10 based on the assumption that the energy
that is scattered is positive?
As we can see in ﬁgure 6.10 the second state is decaying, emitting scatter and leaving
something that looks like a multiple of the ground state.
62
0
20
40
60
80
100
0
10
20
30
40
50
−3
−2
−1
0
1
2
r
evolution of the angle , starting with the second state
time
p
h
a
s
e
Figure 6.9: The graph of the phase angle of the second state as it evolves
0
50
100
150
0
1000
2000
3000
4000
5000
6000
7000
8000
9000
10000
0
0.1
0.2
0.3
0.4
Figure 6.10: The long time evolution of the second state in Chebyshev methods
63
We can see in ﬁgure 6.11 the evolution of the second state. This is decaying into
something that looks like a multiple of the ground state, but here we have used in a lower
tolerance in the computation and the result is that the second state takes longer to decay
which would correspond to the error growing at a smaller rate.
As in the case of the ground state we can consider rψ at a particular point to obtain
a time series. The resulting time series divides up into three sections. The ﬁrst section is
the evolution of the second stationary state, that is the wavefunction is just a perturbation
about the second state. The next section is the part where it decays into the ground state
emitting scatter oﬀ the grid. The ﬁnal section corresponds to a “multiple” of the ground
state with a perturbation about it.
The ﬁrst section has an exponentially growing oscillation which we expect to be the
unstable growing mode obtained in the linear perturbation theory (chapter 4). To conﬁrm
the expectation that this is the growing mode we consider modifying the function so that
we would just have the oscillation with no growing exponential. To remove the growing
exponential we consider:
f(t) =
rψ(r
1
, t) −rψ(r
1
, 0)
exp(real(λ)t)
+rψ(r
1
, 0) (6.2)
where λ is the eigenvalue with nonzero real part obtained in solving (3.24) about the second
state and r
1
is the point at which the time series has been obtained, which is near the
origin. We deﬁne A to be such that A = rψ(r
1
, 0). We note that f(t) − A is almost just
an oscillation and we plot this in ﬁgure 6.12.
We then Fourier transform f(t) −A (see ﬁgure 6.13) and we obtain approximately one
frequency which is at 0.0095, which is approximately that of the growing mode in the linear
perturbation theory. Here we used n = 6000 sampling points and dt = 1 was the time step.
The growing mode here provides information about the time taken for the state to decay,
since the perturbation has to be signiﬁcantly large before the nonlinear eﬀects will cause
the decay of the wavefunction.
The second section of the time series we shall ignore, since it is neither perturbation
about the second state nor perturbation about the ground state.
We expect the third section to be an oscillation about a “multiple” of the ground state
and so we should get frequencies of oscillation but the linear perturbation frequencies should
be transformed to make up for the diﬀerence in normalisation. What we ﬁnd is the nor
malisation is still decreasing, so we are unable to obtain any consistent frequencies.
We can plot the resulting probability in the region in ﬁgure 6.14, with bound on the
action from section 5.10. In ﬁgure 6.14 the normalisation and the action bound seem to be
converging toward the same value.
64
0
2
4
6
8
10
x 10
4
0
50
100
150
200
0
0.05
0.1
0.15
0.2
0.25
r
Evolution of the second state
time

r
ψ

Figure 6.11: Evolution of second state tolerance E9 approximately
0 2000 4000 6000 8000 10000 12000
−3
−2
−1
0
1
2
3
x 10
−9
f(t)−A
time
Figure 6.12: f(t) −A
65
−7 −6 −5 −4 −3 −2 −1 0 1 2
−30
−29
−28
−27
−26
−25
−24
−23
−22
−21
log(frequency)
l
o
g
(
f
o
u
r
i
e
r
t
r
a
n
s
f
o
r
m
)
Fourier transform of f(t)−A
Figure 6.13: Fourier transformation of f(t) −A
0 1 2 3 4 5 6 7 8 9 10
x 10
4
0.4
0.5
0.6
0.7
0.8
0.9
1
1.1
norm
action bound
time
Decay of second state
p
r
o
b
a
b
i
l
i
t
y
Figure 6.14: Decay of the second state
66
0
500
1000
1500
2000
0
50
100
150
0
0.05
0.1
0.15
0.2
time
r
Evolution of second bound state

r
ψ

Figure 6.15: The short time evolution of the second bound state
Although we have found the second state to be unstable as expected and the second state
grows initially with an exponential mode, we can obtain more of the linear perturbation
modes. We consider the evolution of the second state for a short time with an added
perturbation to see if it does give any frequencies, and then check whether these agree with
the linear perturbation theory. We note that since we can only evolve for a short time
before the state decay the accuracy will be low. In ﬁgure 6.15 we plot the evolution of the
second state with an added perturbation, and in ﬁgure 6.16 we plot the oscillation about
the second state at a ﬁxed radius.
In table 6.2 we present the observed frequencies about the second state compared with
the eigenvalues of the linear perturbation. We note that the accuracy of the observed
frequencies here is 0.002.
In ﬁgure 6.17 we plot the Fourier transform of the second state with added perturbation,
note that the graph is not smooth due to the low accuracy.
6.3.2 Evolution of the third state
Now we consider the evolution of the third stationary state. In ﬁgure 6.18 we plot the
evolution of the third state, as expected we see that it is unstable and decays, emitting
scatter and leaving a “multiple” of the ground state.
67
0 50 100 150 200 250 300 350 400 450
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2
2.1
2.2
x 10
−3
section of the section bound state frequency oscillation

r
ψ

a
t
f
i
x
e
d
p
o
i
n
t
time
Figure 6.16: Oscillation about second state at a ﬁxed radius
−6 −5.5 −5 −4.5 −4 −3.5 −3
−14
−13.5
−13
−12.5
−12
−11.5
−11
−10.5
−10
log(frequency)
l
o
g
(
f
o
u
r
i
e
r
t
r
a
n
s
f
o
r
m
)
fourier transform of freqency oscillation about second state
Figure 6.17: Fourier transform of the oscillation about the second state
68
linearised eigenvalues frequencies
0.0000
0 −0.0030i
0 −0.0086i
−0.0014 −0.0100i
0.0014 + 0.0100i 0.0114
0 −0.0153i
0 −0.0211i 0.0190
0 −0.0276i 0.0266
0 −0.0351i 0.0342
0 −0.0434i 0.0416
0 −0.0526i 0.0530
0 −0.0627i 0.0643
0 −0.0737i 0.0726
0 −0.0855i 0.0853
0 −0.0982i 0.0987
0 −0.1118i 0.1123
0 −0.1263i 0.1247
0 −0.1417i 0.1419
0 −0.1579i 0.1577
Table 6.2: Linearised eigenvalue of the second state and frequencies about it.
We can again consider the perturbation about the third state initially as with the second
state before it decays, we ﬁnd that it is exponentially growing. We now transform as in
(6.2) but with λ now being the eigenvalue of the linear perturbation about the third state
with the maximum real part. In ﬁgure 6.19 we plot f(t) − A, now we Fourier transform
f(t) −A (see ﬁgure 6.20) and obtain two frequencies at 0.0022 and 0.0050 which correspond
to the imaginary parts of the two complex eigenvalues of the linear perturbation.
We plot the resulting probability in the region in ﬁgure 6.21, with bound on the ac
tion from section 5.10. In ﬁgure 6.21 the normalisation and the action bound seem to be
converging toward the same value.
6.4 Evolution of an arbitrary spherically symmetric Gaus
sian shell
In this section we consider the evolution of Gaussian shells or lumps (5.20) varying the
three associated parameters. The mean velocity v the rate the shell is expanding outwards,
we note the centre of mass is not moving. The mean position a, the radial position of the
peak of the shell. The width σ of the shell. Since varying these factors will produce varied
initial conditions, we can perform a mesh analysis and a time step analysis on the evolution
69
0
0.5
1
1.5
2
2.5
3
3.5
4
4.5
5
x 10
4
0
50
100
150
200
250
300
350
0
0.1
0.2
0.3
0.4
r
Evolution of third state
time

r
ψ

Figure 6.18: Evolution of third state
0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2
x 10
4
−3
−2
−1
0
1
2
3
x 10
−9
time
f
(
t
)
−
A
Figure 6.19: f(t) −A with respect to the third state
70
−10 −8 −6 −4 −2 0 2
−34
−32
−30
−28
−26
−24
−22
−20
log(frequency)
l
o
g
(
f
o
u
r
i
e
r
t
r
a
n
s
f
o
r
m
)
Figure 6.20: Fourier transform of growing mode about third state
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
x 10
4
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
1.1
time
Decay of third state
p
r
o
b
a
b
i
l
i
t
y
norm
action bound
Figure 6.21: Probability remaining in the evolution of the third state
71
−0.5 −0.4 −0.3 −0.2 −0.1 0 0.1 0.2 0.3 0.4 0.5
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
v
r
e
m
a
i
n
i
n
g
p
r
o
b
a
b
i
l
i
t
y
time = 2500
time = 750
time = 1250
time = 2000
time = 2250
Evolution of lump with speed v, σ = 6,a = 50, N =40
Figure 6.22: Progress of evolution with diﬀerent velocities and times
as outlined in section 5.9. We also look at how much of the wavefunction is left at the
origin which we expect to form a “multiple” of the ground state in the sense of section 5.10.
We expect that the more energy in the system to begin with the less energy and therefore
probability eventually will remain near the origin.
6.4.1 Evolution of the shell while changing v
In ﬁgure 6.22 we plot the remaining probability at diﬀerent times and at diﬀerent initial
mean velocities v, while a and σ are constant initially. We notice that the negative values
of v correspond to an outward going wavefunction moving oﬀ the grid and a positive v is
an inward going wavefunction. We also notice that the graph is not symmetric about v = 0
even though the wavefunction energy is symmetric about v = 0.
We compare the diﬀerence in diﬀerent time steps to the result in ﬁgure 6.23 and note that
the diﬀerence is small. We calculate the “Richardson fraction” for diﬀerent values of h
i
’s
we plot the diﬀerent Richardson fractions in ﬁgure 6.24.
We also show in table 6.3 the mean calculated value against the values expected for
quadratic convergence in time. Since the values are close to the expected values we are
justiﬁed in supposing that the convergence is quadratic in the time step.
We can also plot the variation with in the number of Chebyshev points to show conver
gence with increasing N (this time it does not make sense to do Richardson Extrapolation).
72
−0.5 −0.4 −0.3 −0.2 −0.1 0 0.1 0.2 0.3 0.4 0.5
−2.5
−2
−1.5
−1
−0.5
0
0.5
1
1.5
2
x 10
−3
dt = 2.5
dt = 5/3
dt = 5/4
v
d
i
f
f
e
r
e
n
c
e
i
n
p
r
o
b
a
b
i
l
i
t
y
t
o
d
t
=
1
Evolution of lump with speed v, σ = 6,a = 50, N =40
Figure 6.23: Diﬀerence of the other time steps compared with dt = 1
−0.5 −0.4 −0.3 −0.2 −0.1 0 0.1 0.2 0.3 0.4 0.5
2
2.5
3
3.5
4
4.5
5
5.5
v
R
i
c
h
a
r
d
s
o
n
f
r
a
c
t
i
o
n
three different Richardson fractions
h
1
= 5,h
2
= 2.5,h
3
= 1
h
1
= 2.5,h
2
= 1.6,h
3
= 1
h
1
= 1.6,h
2
= 1.25,h
3
= 1
Figure 6.24: Richardson fraction done with diﬀerent h
i
’s
73
h
1
h
2
h
3
expected value median value calculated
5 2.5 1 4.5714 4.7094
2.5 1.667 1 2.9531 2.9761
1.667 1.25 1 3.1605 3.1652
Table 6.3: Richardson fractions
−0.5 −0.4 −0.3 −0.2 −0.1 0 0.1 0.2 0.3 0.4 0.5
−3
−2
−1
0
1
2
3
x 10
−4
v
N=40
N=50
Evolution with a=50, σ=6, time = 2500
D
i
f
f
e
r
e
n
c
e
i
n
p
r
o
b
a
b
i
l
i
t
y
t
o
N
=
6
0
Figure 6.25: Diﬀerence in N value
The plot is in ﬁgure 6.25.
6.4.2 Evolution of the shell while changing a
We plot in ﬁgure 6.26 the remaining probability at diﬀerent times with diﬀerent mean
positions a initially, while v = 0 and σ is constant initially. We also plot in ﬁgure 6.27 the
values for the remaining probability with diﬀerent time steps.
We can again calculate the “Richardson fraction” as in section 5.9 in the time steps,
in ﬁgure 6.28 we plot the “Richardson fraction” at diﬀerent values of a. In table 6.4 we
show the mean calculated value against the expected values and again we see that we have
quadratic convergence in the time step.
We can also plot the variation with in the number of Chebyshev points to show conver
gence with increasing N (this time it does not make sense to do Richardson Extrapolation).
The plot is in ﬁgure 6.29.
74
0 20 40 60 80 100 120 140 160 180
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
a
p
r
o
b
a
b
i
l
i
t
y
r
e
m
a
i
n
i
n
g
Evolution of lump with v=0, σ = 6, dt = 0.625, N= 40
time = 2500
time = 2000
time = 1500
time = 1250
time = 750
time = 500
Figure 6.26: Evolution of lump at diﬀerent a
0 20 40 60 80 100 120 140 160 180
−0.01
0
0.01
0.02
a
Evolution of lump with v=0, σ = 6, N= 40
d
i
f
f
e
r
e
n
c
e
i
n
r
e
m
a
i
n
i
n
g
p
r
o
b
a
b
i
l
i
t
y
w
i
t
h
d
t
=
0
.
6
2
5
dt = 2.5
dt = 1.25
dt = 0.833
Figure 6.27: Comparing diﬀerences with diﬀerent time steps
75
0 50 100 150
3
3.5
4
4.5
5
5.5
6
a
R
i
c
h
a
r
d
s
o
n
f
r
a
c
t
i
o
n
Richardson fraction at different values
h
1
= 2.5, h
2
= 1.25, h
3
=0.625
h
1
= 1.25, h
2
= 0.833, h
3
=0.625
Figure 6.28: Richardson fraction with diﬀerent h
i
’s
0 20 40 60 80 100 120 140 160
−5
−4
−3
−2
−1
0
1
2
3
4
5
x 10
−3
a
D
i
f
f
e
r
e
n
c
e
i
n
p
r
o
b
a
b
i
l
i
t
y
t
o
N
=
8
0
Evolution with v=0,σ =6 time =2500
N=20
N=40
N=60
Figure 6.29: Diﬀerence in N value
76
h
1
h
2
h
3
expected value median value calculated
2.5 1.25 0.625 5 5.0055
1.25 0.833 0.625 3.8571 3.8588
Table 6.4: Richardson fractions
5 6 7 8 9 10 11 12 13 14 15
0.4
0.45
0.5
0.55
0.6
0.65
0.7
0.75
σ
r
e
m
a
i
n
i
n
g
p
r
o
b
a
b
i
l
i
t
y
Evolution of lump with σ,v = 0,a = 50, N =40
time = 2500
time = 2250
time = 2000
time = 1250
Figure 6.30: Evolution of lump at diﬀerent σ
6.4.3 Evolution of the shell while changing σ
We plot in ﬁgure 6.30 the remaining probability at diﬀerent times with diﬀerent widths σ
initially, while v = 0 and a = 50. We also plot in ﬁgure 6.31 the values for the remaining
probability at diﬀerent time steps.
We can again calculate the “Richardson fraction”, using data obtain at diﬀerent time
steps. In ﬁgure 6.32 we plot the “Richardson fraction” at diﬀerent values of σ. In table 6.5
we show the mean calculated value against the expected values again we have quadratic
convergence.
h
1
h
2
h
3
expected value median value calculated
0.72 0.625 0.55 2.455 2.458
0.83 0.625 0.55 1.908 1.914
Table 6.5: Richardson fractions
77
5 6 7 8 9 10 11 12 13 14 15
−2
−1.5
−1
−0.5
0
0.5
1
1.5
x 10
−4
σ
d
i
f
f
e
r
e
n
c
e
i
n
p
r
o
b
a
b
i
l
i
t
y
t
o
d
t
=
0
.
5
5
Evolution of lump with σ,v = 0,a = 50, N =40
dt = 0.625
dt = 0.72
dt = 0.83
Figure 6.31: Comparing diﬀerences with diﬀerent time steps
5 6 7 8 9 10 11 12 13 14 15
1.5
2
2.5
3
3.5
4
4.5
5
5.5
h
1
= 0.72,h
2
= 0.625,h
3
= 0.55
h
1
= 0.83,h
2
= 0.72,h
3
= 0.55
Richardson fractions
σ
R
i
c
h
a
r
d
s
o
n
f
r
a
c
t
i
o
n
Figure 6.32: Richardson fractions
78
5 6 7 8 9 10 11 12 13 14 15
−6
−4
−2
0
2
4
6
8
x 10
−4
σ
Evolution with v=0,a=50, time = 2500
D
i
f
f
e
r
e
n
c
e
i
n
p
r
o
b
a
b
i
l
i
t
y
t
o
N
=
6
0
N=50
N=40
Figure 6.33: Diﬀerence in N value
We can also plot the variation with in the number of Chebyshev points to show conver
gence with increasing N (this time it does not make sense to do Richardson Extrapolation).
The plot is in ﬁgure 6.33.
6.5 Conclusion
In this chapter, we have evolved the SN equations using diﬀerent initial condition, using the
methods of Chapter 5. We have checked the ability of the sponge factors to absorb outward
going wavefunctions moving oﬀ the grid. We have checked that the evolution method
converges to the solution by evolving the SN equations with diﬀerent initial conditions
and changing the time steps and the number of Chebyshev points in the grid. To check
convergence with respect to the time step we have used Richardson extrapolation and found
it to be converging like the second order in the time steps.
The numerical calculations show that:
• The ground state is stable under full nonlinear evolution.
• Perturbations about the ground state oscillate with the frequencies obtained by the
linear perturbation theory.
79
• Higher states are unstable and will decay into a “multiple” of the ground state, while
emitting some scatter oﬀ the grid.
• The decay time for the higher states is controlled by the growing linear mode obtained
in linear perturbation theory.
• Perturbations about higher states will oscillate for a little while (until they decay)
according to the linear oscillation obtained by the linear perturbation theory.
• The evolution of diﬀerent Gaussian shells indicates that any initial condition appears
to decay as predicted in section 5.10, that is they scatter to inﬁnity and leave a
“multiple” of the ground state.
80
Chapter 7
The axisymmetric SN equations
7.1 The problem
There are two ways of adding another spatial dimension to the problems we are solving.
In this chapter, we consider solving the SN equations for an axisymmetric system in 3
dimensions. Then the wavefunction is a function of the polar coordinates r and θ but
independent of the polar angle. In chapter 8, we consider the SN equations in Cartesian
coordinates with a wavefunction independent of z.
For the axisymmetric problem, we shall ﬁrst ﬁnd stationary solutions. This will give
some stationary solutions with nonzero total angular momentum including a dipolelike
solution that appears to minimise the energy among wave functions that are odd in z. We
then consider the timedependent problem. In particular we evolve the dipolelike state,
which turns out to be nonlinearly unstable  the two regions of probability density attract
each other and fall together leaving as in chapter 6 a multiple of the ground state.
7.2 The axisymmetric equations
We consider the SN equations which in the general nondimensionalised case are:
iψ
t
= −∇
2
ψ +φψ, (7.1a)
∇
2
φ = ψ
2
. (7.1b)
Now consider spherical polar coordinates:
x = r cos αsinθ, (7.2a)
y = r sinαsinθ, (7.2b)
z = r cos θ, (7.2c)
81
where r ∈ [0, ∞) and α ∈ [0, 2π) , θ ∈ [0, π].
The Laplacian becomes:
∇
2
ψ =
1
r
∂
2
(rψ)
∂r
2
+
1
r
2
sinθ
∂(sinθ
∂ψ
∂θ
)
∂θ
+
1
r
2
sin
2
θ
∂
2
ψ
∂α
2
, (7.3)
so in the case where ψ is only dependent on θ and r the SN equations become:
iψ
t
= −
1
r
∂
2
(rψ)
∂r
2
+
1
r
2
sinθ
∂(sinθ
∂ψ
∂θ
)
∂θ
+φψ, (7.4a)
ψ
2
=
1
r
∂
2
(rφ)
∂r
2
+
1
r
2
sinθ
∂(sinθ
∂V
∂θ
)
∂θ
. (7.4b)
Now setting u = rψ the equations become:
iu
t
= −u
rr
+
1
r
2
sinθ
(sinθu
θ
)
θ
+φu, (7.5a)
u
2
r
= φ
rr
+
1
r
2
sinθ
(sinθφ
θ
)
θ
. (7.5b)
For the boundary conditions we still want ψ to be ﬁnite at the origin so that the wave
function is wellbehaved, and to satisfy the normalisation condition we require that ψ →0
as r → ∞. So we have that u(0, θ) = 0 for all θ ∈ [0, π] and that u(r, θ) → 0 for all θ
as r → ∞. We note that these boundary conditions are spherically symmetric so that the
solutions in the spherically symmetric case are still solutions. We know that ψ
θ
= 0 at
θ = 0 and θ = π since otherwise we would have a singularity in the Laplacian.
The stationary SN equations are in the general nondimensionalised case:
E
n
ψ = −∇
2
ψ +φψ, (7.6a)
∇
2
φ = ψ
2
, (7.6b)
so the axially symmetric stationary SN equation are:
E
n
u = −u
rr
+
1
r
2
sinθ
(sinθu
θ
)
θ
+φu, (7.7a)
u
2
r
= φ
rr
+
1
r
2
sinθ
(sinθφ
θ
)
θ
. (7.7b)
7.3 Finding axisymmetric stationary solutions
To ﬁnd axisymmetric solutions of the stationary SN equations we proceed to adapt Jones
et al [3] as follows,
1. Take as an initial guess for the potential φ =
−1
(1 +r)
.
2. Using the potential φ we now solve the timeindependent Schr¨ odinger equation that
is the eigenvalue problem ∇
2
ψ −φψ = −E
n
ψ with ψ = 0 on the boundary.
82
3. Select an eigenfunction in such a way that the procedure will converge to a stationary
state. (This needs care, see below.)
4. Once the eigenfunction has been selected calculate the potential due to that eigen
function.
5. Let the new potential be the one obtained from step 4 above but symmetrise potential
since we require that φ should be symmetric around the α =
π
2
so that numerical errors
do not cause the wavefunction to move along the axis (the solution we want should
have its centre of gravity at the centre of our grid.) Small errors in symmetry will
cause the solution to move along the zaxis and oﬀ the grid.
6. Now provided that the new potential does not diﬀer from the previous potential by a
ﬁxed tolerance in the norm of the diﬀerence, stop else continue from step 2.
In step 2 the method we use to solve the eigenvalue problem involves Chebyshev diﬀer
entiation in two directions that is in r and θ.
In the case of the Hydrogen atom we know explicitly the wavefunction, that is a solution
of Schr¨ odinger equation with a
1
r
potential. In this case the number of zeros in the radial
direction and the axial directions follow a characteristic pattern, because the solutions are
separable. We can use this in our selection procedure in step 3 to get started, but as the
iteration process continues the pattern of zeros wont be ﬁxed and we may need another
method. We also note in the case of the Schr¨ odinger equation with a
1
r
potential that each
stationary state in the axially symmetric case has diﬀerent values of constants E and J
2
.
These values do not change much under iteration of the stationary state problem and we
can use this to obtain the next solution with values of E, J
2
which diﬀer only by a small
amount. Here E is :
E =
R
3
−
¯
ψ∇
2
ψ +φψ
2
d
3
x, (7.8)
and J
2
is :
J
2
=
R
3
¯
ψ[
1
sinθ
∂(sinθ
∂ψ
∂θ
)
∂θ
+
1
sin
2
θ
∂
2
ψ
∂α
2
]d
3
x. (7.9)
We use a combination of the above properties to select solutions at step 3.
7.4 Axisymmetric stationary solutions
We present in table 7.2 the ﬁrst few stationary states of the axially symmetric solution of
the SN equations called axp1 to axp8 together with energy and angular momentum. For
83
comparison we give the energy of the ﬁrst few spherically symmetric stationary states in
table 7.1.
Axp1 in ﬁgure 7.1 is the ground state which is from table 7.2, has vanishing J
2
. Axp2
in ﬁgure 7.3 is a dipole state. It is odd as a function of z, and minimises the energy among
odd functions. It was previously found in [22] and has nonzero J
2
. Axp3 and axp6 are the
second and third spherical states. The others are higher multipoles, but we are not able
to ﬁnd a simple classiﬁcation based on, say, energy and angular momentum as there would
be for the states of the Hydrogen atom.
We plot the graphs the stationary states that appear in table 7.2, in ﬁgures 7.1, 7.3, 7.5,
7.7, 7.9, 7.11, 7.13, 7.15 and the contour plots of these stationary states in ﬁgures 7.2, 7.4,
7.6, 7.8, 7.10, 7.12, 7.14, 7.16.
Number of state Energy Eigenvalue
0 0.16276929132192
1 0.03079656067054
2 0.01252610801692
3 0.00674732963038
4 0.00420903256689
Table 7.1: The ﬁrst few eigenvalues of the spherically symmetric SN equations
Energy J name
0.1592 zero axp1
0.0599 5.1853 axp2
0.0358 0.002 axp3
0.0292 2.3548 axp4
0.0263 17.155 axp7
0.0208 3.1178 axp8
0.0162 5.2053 axp5
0.0115 1.9E6 axp6
Table 7.2: Table for E and J
2
of the axially symmetric SN equations
7.5 Timedependent solutions
Now we consider the evolution of the axisymmetric timedependent SN equations (7.4).
We use the same method as in section 5.7, but instead of solving the Schr¨ odinger equation
and the Poisson equation in one space dimension we now need to solve them in two space
dimensions.
84
−60
−40
−20
0
20
40
60
0
20
40
60
0
0.01
0.02
0.03
0.04
0.05
0.06
0.07
0.08
0.09
z
R
ψ
Figure 7.1: Ground state, axp1
−50 −40 −30 −20 −10 0 10 20 30 40 50
0
10
20
30
40
50
R
z
Contour plot
Figure 7.2: Contour plot of the ground state, axp1
85
−100
−80
−60
−40
−20
0
20
40
60
80
100
0
20
40
60
80
100
−0.03
−0.02
−0.01
0
0.01
0.02
0.03
z
R
ψ
Figure 7.3: “Dipole” state next state after ground state , axp2
−80 −60 −40 −20 0 20 40 60 80
0
10
20
30
40
50
60
70
80
90
Contour Plot
Figure 7.4: Contour Plot of the dipole, axp2
86
−150
−100
−50
0
50
100
150
0
20
40
60
80
100
120
−0.01
−0.005
0
0.005
0.01
0.015
0.02
0.025
0.03
z
R
ψ
Figure 7.5: Second spherically symmetric state, axp3
−100 −80 −60 −40 −20 0 20 40 60 80 100
0
20
40
60
80
100
Contour plot
z
R
Figure 7.6: Contour Plot of the second spherically symmetric state, axp3
87
−200
−100
0
100
200
0
50
100
150
200
−0.01
0
0.01
0.02
z
R
ψ
Figure 7.7: Not quite the 2nd spherically symmetric state ,axp4
−150 −100 −50 0 50 100 150
0
50
100
150
z
R
contour plot
Figure 7.8: Contour plot of the not quite 2nd spherically symmetric state, axp4
88
−200
−100
0
100
200
0
50
100
150
200
−4
−2
0
2
4
6
8
10
x 10
−3
R
z
ψ
Figure 7.9: Not quite the 3rd spherically symmetric state E = −0.0162, axp5
−150 −100 −50 0 50 100 150
0
50
100
150
z
R
contour plot
Figure 7.10: Contour plot of Not quite the 3rd spherically symmetric state E = −0.0162,
axp5
89
−200
−100
0
100
200
0
50
100
150
200
−4
−2
0
2
4
6
8
10
x 10
−3
z
R
ψ
Figure 7.11: 3rd spherically symmetric sate ,axp6
−150 −100 −50 0 50 100 150
0
50
100
150
z
R
contour plot
Figure 7.12: Contour plot of 3rd spherically symmetric state ,axp6
90
−100
−80
−60
−40
−20
0
20
40
60
80
100
0
20
40
60
80
100
−0.015
−0.01
−0.005
0
0.005
0.01
0.015
z
R
ψ
Figure 7.13: Axially symmetric state with E = −0.0263, axp7
−80 −60 −40 −20 0 20 40 60 80
0
10
20
30
40
50
60
70
80
90
z
R
Contour Plot
Figure 7.14: Contour plot of axp7  double dipole
91
−200
−150
−100
−50
0
50
100
150
200
0
50
100
150
200
−5
0
5
10
15
x 10
−3
R
z
ψ
Figure 7.15: State with E = −0.0208 and J = 3.1178 ,axp8
−150 −100 −50 0 50 100 150
0
20
40
60
80
100
120
140
160
180
z
R
Contour plot
Figure 7.16: Contour plot of the state with E = −0.0208 and J = 3.1178 ,axp8
92
We want to use ADI (alternating direction implicit) method to solve the Schr¨ odinger
equation, so we need to split the Laplacian into two parts dependent on each coordinate.
We take the operator for the r direction to be:
L
1
=
1
r
∂
2
∂r
2
, (7.10)
which is the r derivative part of the equation which acting on u = rψ. This leaves the
remaining operator to be:
L
2
=
1
r
[
∂
2
∂θ
2
+cot(θ)
∂
∂θ
], (7.11)
which is the θ derivative part of the equation, and which acts on u. We note that the
last operator is not just a function of one variable, but the important part is that it is
diﬀerential operators in one variable, which is enough to use a ADI(LOD) method to solve
the Schr¨ odinger equation as used by Guenther [10]. The ADI scheme is:
(1 −iL
1
)S = (1 +iL
2
)U
n
, (7.12a)
(1 −iL
2
)T = (1 +iL
1
)S, (7.12b)
(1 +iφ
n+1
)U
n+1
= (1 −iφ
n
)T, (7.12c)
Now we also need a similar way to solve the Poisson equation for the potential. Let
w = rV then the discretized equation for the potential is:
−L
1
w
i,j
−L
2
w
i,j
= f
i,j
, (7.13)
where f
i,j
=
u
i,j

2
r
2
i
is the density distribution. Rather than solving this equation as it
stands (that is N
2
× N
2
matrix problem where N is the number of grid points), we use a
PeacemanRachford ADI iteration [1] on the equation. Let φ
0
i,j
= 0 be an initial guess for
the solution of the equation and deﬁne the iteration to be:
−L
1
φ
n+1
i,j
+ρφ
n+1
i,j
= ρφ
n
i,j
+L
2
φ
n
i,j
+f
i,j
, (7.14a)
−L
2
φ
n+2
i,j
+ρφ
n+2
i,j
= ρφ
n+1
i,j
+L
1
φ
n+1
i,j
+f
i,j
, (7.14b)
where ρ is small factor chosen such that the iteration converges. We continue with the
iteration until it converges.
We can now evolve the SN equations using the above method starting with the dipole
(ﬁrst axially symmetric state) and evolving with the same sponge factors as that used
in Chapter 6. We ﬁnd that again the state behaves like that of a stationary state that
is evoling which constant phase, before it decays. It decays in the same way as in the
93
−100
0
100
0
50
100
0
0.01
0.02
0.03
z
t = 0
R

ψ

−100
0
100
0
50
100
0
0.02
0.04
z
t = 1400
R

ψ

−100
0
100
0
50
100
0
0.01
0.02
z
t = 2000
R

ψ

−100
0
100
0
50
100
0
0.02
0.04
0.06
z
t = 2600
R

ψ

Figure 7.17: Evolution of the dipole
spherically symmetric states emitting scatter oﬀ the grid and leaving a “multiple” of the
ground state behind. One diﬀerence here is that the angular momentum is lost with the
wave scattering oﬀ the grid, to leave a “multiple” of the ground state. In ﬁgure 7.17 we
plot the result of the evolution.
In this we can also check the method for convergence, using Richardson fraction in
time step. We consider three diﬀerent time steps for evolution of the dipole, where dt =
0.25, 0.5, 1. We plot in ﬁgure 7.18 the average Richardson fraction, which if the convergence
is quadratic should be 0.2 and if it is linear it should be 0.33. Here we note that the
convergence is linear in time. We also plot the Richardson fraction in the case of dt =
0.125, 0.25, 0.5 in ﬁgure 7.19 and note that again this shows that the convergence is linear
in time.
We can also check how the method converges with decreasing the mesh size. In ﬁg
ure 7.20 we plot the evolution of the dipole with diﬀerent mesh size N = 30, N2 = 25
instead of N = 25, N2 = 20. Where N is the number of Chebyshev poins in the r direction
and N2 is the number of point in the θ direction. We note that the state still decays into
a “multiple” of the ground state.
94
0 500 1000 1500 2000 2500 3000 3500 4000
0.25
0.26
0.27
0.28
0.29
0.3
0.31
0.32
0.33
0.34
0.35
time
Richardson fraction
R
i
c
h
a
r
d
s
o
n
f
r
a
c
t
i
o
n
Figure 7.18: Average Richardson fraction with dt = 0.25, 0.5, 1
0 500 1000 1500 2000 2500 3000 3500 4000
0.305
0.31
0.315
0.32
0.325
0.33
0.335
0.34
Richardson fraction
R
i
c
h
a
r
d
s
o
n
f
r
a
c
t
i
o
n
time
Figure 7.19: Average Richardson fraction with dt = 0.125, 0.25, 0.5
95
−100
0
100
0
50
100
0
0.01
0.02
z
t = 0
R

ψ

−100
0
100
0
50
100
0
0.01
0.02
z
t = 1400
R

ψ

−100
0
100
0
50
100
0
0.01
0.02
z
t = 2000
R

ψ

−100
0
100
0
50
100
0
0.01
0.02
z
t = 2600
R

ψ

Figure 7.20: Evolution of the dipole with diﬀerent mesh size
96
Chapter 8
The TwoDimensional SN
equations
8.1 The problem
In this chapter, we consider the SN equations in a plane, that is in Cartesian coordinates x,
y. We shall ﬁnd a dipolelike stationary solution, and some solutions which are like rigidly
rotating dipoles. These rigidly rotating solutions are unstable, however, and will merge,
radiating angular momentum.
8.2 The equations
Consider the 2dimensional case, that is, the case where the Laplacian is just
∇
2
ψ = (
∂
2
∂x
2
+
∂
2
∂y
2
)ψ, (8.1)
with the boundary conditions of zero at the edge of a square, instead of on the sphere. So
the nondimensionalized Schr¨ odingerNewton equations then in 2D are :
iψ
t
= −ψ
xx
−ψ
yy
+φψ (8.2a)
φ
xx
+φ
yy
= ψ
2
(8.2b)
To evolve the full nonlinear case we require to use the same method as section 5.7, that
is, we need a method to solve the Schr¨ odinger equation and Poisson equation. That is, we
require a method to solve the Schr¨ odinger equation in 2D as well as the Poisson equation.
To model the Schr¨ odinger equation on this region, we still, as in the case of the one
dimension need a numerical scheme that will preserve the normalisation as it evolves.
We can use a CrankNicholson method here, but we need to modify this since the
matrices become large and no longer tridiagonal in the case of ﬁnite diﬀerences. Therefore
we use an ADI (alternating direction implicit) method to solve the Schr¨ odinger equation.
97
This works when V = 0. The ADI method approximates CrankNicolson to the second
order.
We can introduce to deal with a nonzero potential an extra term in the scheme. We
the same scheme as used in Guenther [10] to intoduce the potential so we have the scheme:
(1 −iD2
x
)S = (1 +iD2
y
)U
n
(8.3a)
(1 −iD2
y
)T = (1 +iD2
x
)S (8.3b)
(1 +iV
n+1
)U
n+1
= (1 −iV
n
)T (8.3c)
The last equation (8.3c) deals with the potential, and D2 is the denotes a Chebyshev
diﬀerentiation matrix or a ﬁnitediﬀerence method for approximating the derivative in the
direction indicated, by the subscript. We also need a quicker way to solve the Possion
equation in 2D. The discretised version of the Possion equation we want to solve is:
−D2
x
φ
i,j
−D2
y
φ
i,j
= f
i,j
(8.4)
where f
i,j
= ψ
2
.
Rather than solving this equation as it stands (that is, N
2
×N
2
matrix problem where
N is the number of grid points), we use a PeacemanRachford ADI iteration [1] on the
equation as described below. Let φ
0
i,j
= 0 be an initial guess for the solution of the equation
and then we deﬁne the iteration to be,
−D2
x
φ
n+1
i,j
+ρφ
n+1
i,j
= ρφ
n
i,j
+D2
y
φ
n
i,j
+f
i,j
, (8.5a)
−D2
y
φ
n+2
i,j
+ρφ
n+2
i,j
= ρφ
n+1
i,j
+D2
x
φ
n+1
i,j
+f
i,j
, (8.5b)
where ρ is a small number chosen to get convergence. We proceed with the iteration until
it converges, that is, φ
n+2
−φ
n
 is less than some tolerance.
8.3 Sponge factors on a square grid
Here we consider sponge factors on the square grid in two dimensions, we choose sponge
factor to be
e = min[1, e
0.5(
√
(x
2
+y
2
)−20)
], (8.6)
since we do not want any corners in the sponge factors. We plot a graph of this sponge
factor in ﬁgure 8.1.
98
−30
−20
−10
0
10
20
30
−30 −20 −10 0 10 20 30
0
0.2
0.4
0.6
0.8
1
y
x
sponge factor for square grid
Figure 8.1: Sponge factor
Again to test the sponge factors we send an exponential lump oﬀ the grid, where the
initial function is of the form:
ψ = e
−0.01((x−a)
2
+(y−b)
2
))
e
ivy
, (8.7)
a wavefunction centred initially at x = a and y = b and moving in the y direction with
velocity v. We plot in ﬁgures 8.2, 8.3 and 8.4 the lump moving oﬀ the grid as a test of the
sponge factor. We note that most of the lump is absorbed.
8.4 Evolution of dipolelike state
As in chapter 6 and chapter 7 we can consider now the evolution of a higher state, that is,
a state with higher energy than the ground state. We can ﬁnd such stationary states by
using the same method as for axially symmetric state section 7.3, but modiﬁed to the 2D
case by changing the diﬀerential operators. In this case we ﬁnd that the ﬁrst stationary
state after the ground state has two lumps like that of the dipole (see ﬁgure 8.5).
Now we can use the method of section 8.2 to evolve this stationary state with sponge
factors. We see that the dipolelike state evolves for a while like a stationary state, that is
rotating at constant phase, and then it decays and becomes just one lump, approximately
the ground state (see ﬁgure 8.6).
99
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 0
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 12
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 24
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 36
Figure 8.2: Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 48
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 60
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 72
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 84
Figure 8.3: Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2
100
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 96
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 108
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 120
−20
0
20
−20
0
20
0
0.02
0.04
0.06
t = 132
Figure 8.4: Lump moving oﬀ the grid with v = 2, N = 30 and dt = 2
−40
−30
−20
−10
0
10
20
30
40
−40
−20
0
20
40
−0.08
−0.06
−0.04
−0.02
0
0.02
0.04
0.06
0.08
x
y
ψ
Figure 8.5: Stationary state for the 2dim SN equations
101
−40
−20
0
20
40
−40
−20
0
20
40
0
0.05
0.1
x
t = 0
y
ψ
−40
−20
0
20
40
−40
−20
0
20
40
0
0.05
0.1
x
t = 4400
y
ψ
−40
−20
0
20
40
−40
−20
0
20
40
0
0.05
0.1
x
t = 4800
y
ψ
−40
−20
0
20
40
−40
−20
0
20
40
0
0.05
0.1
Figure 8.6: Evolution of a stationary state
8.5 Spinning Solution of the twodimensional equations
To get a spinning solution we could try setting up two ground states or lumps distance apart
and set them moving around each other by choosing the initial velocities of the lumps. We
could instead try to seek a solution, which satisﬁes:
ψ(r, θ, t) = e
−iEt
ψ(r, θ +ωt, 0), (8.8)
where ω is a real scalar and r and θ are the polar coordinates. This is redeﬁning the
stationary state so that rotating solution are considered. In Cartesian coordinates (8.8)
becomes:
ψ(x, y, t) = e
−iEt
ψ(xcos(ωt) +y sin(ωt), −xsin(ωt) +y cos(ωt), 0), (8.9)
Now we let,
X = xcos(ωt) +y sin(ωt), (8.10a)
Y = −xsin(ωt) +y cos(ωt), (8.10b)
102
which is a rotation of ωt. We note that
dX
dt
= Y and
dY
dt
= −X.
Then i
∂ψ
∂t
becomes:
i
∂ψ
∂t
= e
−iEt
[Eψ(X, Y, 0) + iωY ψ
X
(X, Y, 0) −iωXψ
Y
(X, Y, 0)], (8.11)
where ψ
X
denotes partial diﬀerentiation of ψ in the ﬁrst variable and ψ
Y
in the second
variable.
So substituting (8.9) into 2dimensional SN equations we have:
Eψ + iωY ψ
X
−iωXψ
Y
= −∇
2
ψ +φψ, (8.12a)
∇
2
φ = ψ
2
, (8.12b)
we note that these equations are dependent on X and Y only. Note that
¯
ψ is a solution
with −ω instead of ω, which corresponds to a solution rotating in the other direction. If ψ
is a function of r then [Y ψ
x
− Xψ
y
] vanishes and we get the ordinary stationary states in
that case regardless of what value ω is.
We can modify the methods used to calculate the stationary state solution in the axially
symmetric case (section 7.3), to calculate the numerical solution of (8.12) at a ﬁxed ω. We
note that ψ(X, Y, 0) is no longer going to be of just one phase.
Do nontrivial solutions of these equations exist that is a solution such that [Y ψ
x
−Xψ
y
]
does not vanish, hence a solution that will spin? We can ﬁnd such a solution with ω = 0.005
which we plot the real part in ﬁgure 8.7 and the imaginary part in ﬁgure 8.8.
We can use the timedependent evolution method as in section 8.2, with the initial data
being a spinning solution. In ﬁgure 8.9 we plot the evolution of the state before it decays
showing that the state does spin about x = 0 and y = 0. In ﬁgure 8.10 we plot the section
of the evolution showing that the state does decay, emitting scatter oﬀ the grid and leaving
a “multiple” of the ground state.
8.6 Conclusion
For the 2dimensional SN equations we can conclude that:
• They behave as the spherically symmetric and axisymmetric equations that is the
ground state is stable and the higher states are unstable.
• Spinning solutions do exist and they decay like the other solutions to a “multiple” of
the ground state.
103
−20
−10
0
10
20
−20
−10
0
10
20
−0.08
−0.06
−0.04
−0.02
0
0.02
0.04
0.06
0.08
x
spinning solution ω =0.005
y
r
e
a
l
(
ψ
)
Figure 8.7: Real part of spinning solution
−20
−10
0
10
20
−20
−10
0
10
20
−0.03
−0.02
−0.01
0
0.01
0.02
0.03
x
spinning solution ω =0.005
y
i
m
a
g
(
ψ
)
Figure 8.8: Imaginary part of spinning solution
104
−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
x
t = 0
y

ψ

−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
x
t = 200
y

ψ

−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
x
t = 400
y

ψ

−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
x
t = 600
y

ψ

Figure 8.9: Evolution of spinning solution
−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
0.08
x
t = 5200
y

ψ

−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
0.08
x
t = 6400
y

ψ

−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
0.08
x
t = 7600
y

ψ

−40
−20
0
20
40
−40
−20
0
20
40
0
0.02
0.04
0.06
0.08
x
t = 8800
y

ψ

Figure 8.10: Evolution of spinning solution
105
Bibliography
[1] W.F.Ames, Numerical Methods for Partial Diﬀerential Equations 2nd ed (1977)
[2] E.R.Arriola, J.Soler, Asymptotic behaviour for the 3D Schr¨ odingerPoisson system in
the attractive case with positive energy App.Math.Lett.12 (1999) 16
[3] D.H.Bernstein, E.Giladi, K.R.W.Jones, Eigenstate of the Gravitational Schr¨ odinger
Equation. Modern Physics Letters A13 (1998) 23272336
[4] D.H.Bernstein, K.R.W.Jones, Dynamics of the SelfGravitating BoseEinstein Conden
sate. (Draft)
[5] J.Christian, Exactly soluble sector of quantum gravity Phys.Rev. D56 (1997) 4844.
[6] L.Diosi, Gravitation and the quantummechanical localization of macroobjects.
Phys.Lett 105A (1984) 199202
[7] L.Diosi, Permanent State Reduction: Motivations, Results and ByProducts. Quantum
Communications and Measurement. Plenum (1995) 245250
[8] L.C.Evans, “Partial diﬀerential equations” AMS Graduate Studies in Mathematics 19
AMS (1998)
[9] A.Goldbery, H.Schey, J.Schwartz, Computergenerated motion pictures of one
dimensional quantum mechanical transmission and reﬂection phenomena Am.J.Phys
35 3 (1967) 177186
[10] R.Guenther, “A numerical study of the time dependent Schr¨ odinger equation coupled
with Newtonian gravity” Doctor of Philosophy thesis for University of Texas at Austin
(1995).
[11] H.Lange, B.Toomire, P.F.Zweifel, An overview of Schr¨ odingerPoisson Problems. Re
ports on Mathematical Physics 36 (1995) 331345
[12] H.Lange, B.Toomire, P.F.Zweifel, Timedependent dissipation in nonlinear Schr¨ odinger
systems 36 (1995) 12741283 Journal of Mathematical Physics.
106
[13] R.Illner, P.F.Zwiefel, H.Lange, Global existence, uniqueness and asymptotic behaviour
of solutions of the WignerPoisson and Sch¨ odingerPoisson systems. Math.Meth in
App.Sci.17 (1994) 349376
[14] I.M.Moroz, K.P.Tod, An Analytical Approach to the Schr¨ odingerNewton equations.
Nonlinearity 12 (1999) 20116
[15] I.M.Moroz, R.Penrose, K.P.Tod, Sphericallysymmetric solutions of the Schr¨ odinger
Newton equations. Class Quantum Grav. 15 (1998) 27332742
[16] K.W.Morton, D.F.Mayers, Numerical Solution of Partial Diﬀerential Equations (1994)
[17] R.Penrose, On gravity’s role in quantum state reduction. Gen.Rel.Grav. 28 (1996)
581600
[18] R.Penrose, Quantum computation, entanglement and state reduction Phil.Trans.R.Soc.
(Lond) A 356 (1998) 1927
[19] D.L.Powers, Boundary Value Problems, Academic Press 1972.
[20] W.Press, S.Teukolsky, W.Vetterting, B.Flannery Numerical Recipes in Fortran (second
edition)
[21] R.Ruﬃni, S.Bonazzola, Systems of SelfGravitating Particles in General Relativity and
the concept of an Equation of State. Phys.Rev.187 (1969) 1767
[22] B.Schupp, J.J. van der Bij, An axiallysymmetric Newtonian boson star. Physics Let
ters B 366 (1996) 8588
[23] H.Schmidt, Is there a gravitational collapse of the wavepacket ? (preprint)
[24] H.Stephani, “Diﬀerential equations: their solution using symmetries” Cambridge: CUP
(1989)
[25] K.P.Tod, The ground state energy of the Schr¨ odingerNewton equations Phys.Lett.A
280 (2001) 173176
[26] Private correspondence with K.P.Tod.
[27] L.N.Trefethen, Spectral Methods in MATLAB, SIAM 2000.
[28] L.N.Trefethen, Lectures on Spectral Methods.(Oxford Lecture Course)
107
Appendix A
Fortran programs
A.1 Program to calculate the bound states for the SN equa
tions
! integrates the spherically symmetric SchrodingerNewton equations
! Integrates the system of 4 coupled equations
! dX/dt = FN(X,Y)
! dY/dt = FN(X,Y)
!
! using the NAG library routine D02PVF D02PCF
! using the NAG library routines X02AJF XO2AMF
!
! Program searchs for bound states and the values of
! r for which they blow up
!
Program SN
Implicit None
External D02PVF
External D02PCF
External OUTPUT
External X02AJF
External X02AMF
External FCN
External COUNT
Double precision, dimension(4) :: Y,YP,YMAX,THRES,YS
Double precision, dimension(4) :: PY,PY2
Double Precision, dimension(32*4) :: W
Double Precision :: RATIO,INSIZE,Y1P,INT1,INT2
Double Precision :: X,XEND,STEP,TOL,XE,HSTART
Double Precision :: X02AJF,X02AMF
Double Precision :: Y1,STEP2,XBLOW,XBLOWP
INTEGER :: DR,DR2,DOUT,GOUT,ENCO,ENCO2
INTEGER :: N, I,IFAIL,METHOD , LENWRK
CHARACTER*1 :: TASK
Logical :: ERRAS
108
! compute a few things !
N = 4
INSIZE = 0.01
Y1P = 1.2
X = 0.0000001
XEND = 2000
XE = X
TOL = 100000000000*X02AJF()
THRES(1:4) = Sqrt(X02AMF())
METHOD = 2
TASK =’U’
ERRAS = .FALSE.
HSTART = 0.0
LENWRK = 32*4
IFAIL = 0
GOUT = 0
! Ask for user input .
! to find root below given value
!
Y(3) = 1.0
Y(2) = 0.0
Y(4) = 0.0
! Ask for upper value
Print*, " TOL ", TOL
Print*, " Enter upper value "
Y1 = 1.09
Y(1) = Y1
PY(1:4) = Y(1:4)
PY2(1:4) = Y(1:4)
! Step size
STEP = 0.0005
OPEN(Unit=’10’,file=’fort.10’,Status=’new’)
OPEN(Unit=’11’,file=’fort.11’,Status=’new’)
DO WHILE ( GOUT == 0 )
DR = 0
!
! D02PVF is initially called to give initial starting values to
! subroutine ,
! One has too determine the initial direction in which our
! first value blows up
Step2 = INSIZE ! Initial trial step in Y(1)
DR2 = 1 ! direction of advance in stepping
! DR is direction of blow up
CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
DO WHILE (DR == 0)
XE = XE + STEP
PY2(1:4) = PY(1:4)
PY(1:4) = Y(1:4)
109
IF (XE < XEND) THEN
CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
END IF
IF (XE > 3*STEP) THEN
RATIO = (XE2*STEP)*PY2(1)/((XESTEP)*PY(1))
RATIO = RATIO  (XESTEP)*PY(1)/(XE*Y(1))
IF ((RATIO*RATIO) < 1E15) THEN
RATIO = (XESTEP)*PY(1)/(XE*Y(1))
IF (RATIO < 1) THEN
IF (RATIO > 1) THEN
IF (Y(1) > 0) THEN
DR = 1
END IF
IF (Y(1) < 0) THEN
DR =  1
END IF
END IF
END IF
END IF
END IF
IF (Y(1) > 2) THEN
DR = 1
ENDIF
IF (Y(1) < 2) THEN
DR = 1
ENDIF
IF (XE > XEND) THEN
DR = 10
ENDIF
ENDDO
! Do while is ended ...
XBLOWP = 0
XBLOW = 0
DO WHILE (DR /= 10)
Y1 = Y1STEP2*REAL(DR2)
!Print*, "Y(1) ", Y1
Y(3) = 1.0
Y(2) = 0.0
Y(1) = Y1
Y(4) = 0.0
XBLOWP = XBLOW
XBLOW = 0
ENCO = 0
ENCO2 = 0
DOUT = 0
X = 0.0000001
HSTART = 0.0
XE = X
110
CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
DO WHILE (DOUT == 0)
XE = XE + STEP
YS = Y
PY2(1:4) = PY(1:4)
PY(1:4) = Y(1:4)
IF (XE < XEND) THEN
CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
END IF
CALL COUNT(YS,Y,ENCO,ENCO2)
IF (Y(1) < 0.0001) THEN
IF (Y(1) > 0.0001) THEN
XBLOW = XE
ENDIF
ENDIF
IF (XE > 3*STEP) THEN
RATIO = (XE2*STEP)*PY2(1)/((XESTEP)*PY(1))
RATIO = RATIO  (XESTEP)*PY(1)/(XE*Y(1))
IF ((RATIO*RATIO) < 1E15) THEN
RATIO = (XESTEP)*PY(1)/(XE*Y(1))
IF (RATIO < 1) THEN
IF (RATIO > 1) THEN
IF (Y(1) > 0) THEN
DOUT = 1
!Print*, "eup", XBLOW
IF (DR /= 1) THEN
STEP2 = STEP2/10
DR = 1
DR2 = DR2*(1)
END IF
END IF
IF (Y(1) < 0) THEN
DOUT = 1
!Print*, "edown", XBLOW
!Print*, "zeros", ENCO
IF (DR /= 1) THEN
STEP2 = STEP2/10
DR = 1
DR2 = DR2*(1)
END IF
END IF
END IF
END IF
END IF
END IF
IF (Y(1) > 1.5) THEN
DOUT = 1
!Print*, "up", XBLOW
111
IF (DR /= 1) THEN
STEP2 = STEP2/10
DR = 1
DR2 = DR2*(1)
ENDIF
ENDIF
IF (Y(1) < 1.5) THEN
DOUT = 1
!Print*, "down " ,XBLOW
!Print*, "zeros ", ENCO
IF (DR /= 1) THEN
STEP2 = STEP2/10
DR = 1
DR2 = DR2*(1)
ENDIF
ENDIF
IF (XE > XEND) THEN
DR = 10
DOUT = 1
ENDIF
IF (STEP2+Y1 == Y1 ) THEN
DR = 10
DOUT = 1
ENDIF
ENDDO
ENDDO
! obtained value for bound state now write it in file
! And look for another
Print*, " bound state ",Y1
PRINT*, " Number of zeros is ", ENCO
Print*, " Blow up value is ", XBLOW
Write(10,*),Y1 ,XBLOW,ENCO
! Now consider the intergation to get A / B^2 !!
Y(3) = 1.0
Y(2) = 0.0
Y(1) = Y1
Y(4) = 0.0
XBLOWP = XBLOW
XBLOW = 0
ENCO = 0
ENCO2 = 0
DOUT = 0
X = 0.0000001
HSTART = 0.0
XE = X
INT1 = 1
INT2 = 0
112
CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL)
DO WHILE (XE < XBLOWP)
XE = XE + STEP
PY2(1:4) = PY(1:4)
PY(1:4) = Y(1:4)
IF (XE < XEND) THEN
CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL)
END IF
INT1 = INT1  STEP*XE*Y(1)*Y(1)
INT2 = INT2 + STEP*XE*XE*Y(1)*Y(1)
IF (XE > 3*STEP) THEN
RATIO = (XE2*STEP)*PY2(1)/((XESTEP)*PY(1))
RATIO = RATIO  (XESTEP)*PY(1)/(XE*Y(1))
IF ((RATIO*RATIO) < 1E15) THEN
RATIO = (XESTEP)*PY(1)/(XE*Y(1))
IF (RATIO < 1) THEN
IF (RATIO >  1) THEN
Print*, RATIO
IF (Y(1) > 0) THEN
DR = 1
END IF
IF (Y(1) < 0) THEN
DR =  1
END IF
END IF
END IF
END IF
END IF
IF (Y(1) > 2) THEN
DR = 1
ENDIF
IF (Y(1) < 2) THEN
DR = 1
ENDIF
IF (XE > XEND) THEN
DR = 10
ENDIF
ENDDO
Print*, "A is : ", INT1
Print*, "B is : ", INT2
Write(11,*) INT1,INT2
X = 0.0000001
XE = X
INSIZE = (Y1PY1)/10
Y1P = Y1
Y1 = Y1  0.0000001
Y(1) = Y1
Y(2) = 0.0
113
Y(3) = 1.0
Y(4) = 0.0
ENDDO
END PROGRAM SN
!
! FUNCTION EVALUATOR
!
SUBROUTINE FCN(S,Y,YP)
Implicit None
Double Precision :: S,Y(4),YP(4)
YP(1) = Y(2)
YP(2) =2.D0*Y(2)/SY(3)*Y(1)
YP(3) = Y(4)
YP(4) =2.D0*Y(4)/SY(1)*Y(1)
RETURN
END
!
! OUTPUT  subroutine
SUBROUTINE OUTPUT(Xsol,y)
Implicit None
Double Precision :: Xsol
Double Precision, DIMENSION(4) :: Y
PRINT*, Xsol,y(1),y(3)
WRITE(2,*), Xsol,y(1),y(3)
END
!
! Subroutine Count to count the bumps in our curve
SUBROUTINE COUNT(Yold,Ynew,Number,way)
Implicit None
Double Precision, DIMENSION(4) :: Yold, Ynew
Integer :: Number, way
If (Yold(1) > 0) THEN
If (Ynew(1) < 0) THEN
! we have just reached a turing point
way = 1
Number = Number + 1
ENDIF
ENDIF
114
If (Yold(1) < 0) THEN
If (Ynew(1) > 0) THEN
way = 1
Number = Number + 1
ENDIF
ENDIF
END
115
Appendix B
Matlab programs
B.1 Chapter 2 Programs
B.1.1 Program for asymptotically extending the data of the bound state
calculated by RungeKutta integration
% To work out the asymptotic solution %
% r the point to match to !!
% the nb2 is the solution matching to at the moment !!
% matching with exp(r)/r
% 1/r
%
b = 5000;
%step = nb2(2,1)  nb2(1,1);
yr = interp1(nb2(:,1),nb2(:,2),r);
yr2 = interp1(nb2(:,1),nb2(:,2),r+1);
d = real(log(yr*r/(yr2*(r+1))));
s = exp(d*r)/r;
c = yr/s;
vr = interp1(nb2(:,1),nb2(:,3),r);
vr2 = interp1(nb2(:,1),nb2(:,3),r+1);
f = (vr2*(r+1)vr*r)
e = vr*r r*f;
% then work out values of the function
for i = 1:b
tail(i,1) = nb2(1,1) + (i1)*step ;
tail(i,2) = c*exp(d*tail(i,1))/tail(i,1);
tail(i,3) = e/tail(i,1) + f;
end
B.1.2 Programs to calculate stationary state by Jones et al method
function [f,evalue] = cpffsn(L,N,n);
% function [f,evalue] = cpffsn(L,N,n);
%
[D,x] = cheb(N);
116
x2 = (1+x)*(L/2);
phi = 1./(1+x2);
[f,evalue] = cpfsn(L,N,n,x2,phi);
function [f,evalue] = cpfsn(L,N,n,x,phi)
% [f,evalue] = CPFSN(L,N,n,x,phi)
%
% solve the bounds states of the SchrodingerNewton Equation
% L length of interval
% N number of cheb points
% n the nth bound state
% x the grid points wrt function is required
% phi initial guess for potential
%
[D,xp] = cheb(N);
dist = L/2;
cx = (xp+1)*dist;
D2 = D^2/(dist^2);
D2 = D2(2:N,2:N);
newphi = interp1(x,phi,cx);
oldphi = newphi;
res = 1;
while (res > 1E13)
[ft,evalue] = cpesn(L,N,n,newphi,cx);
nom = inprod(ft,ft,cx,N);
ft = ft/sqrt(nom);
g = abs(ft(2:end1)).^2./cx(2:end1);
pu = D2\g;
pu = [0;pu;0];
newphi = pu(1:N)./cx(1:N);
newphi(N+1) = newphi(N);
res = max(abs(newphioldphi))
oldphi = newphi;
end
fx = interp1(cx,ft,x);
f =fx;
%sq = inprod2(fx,fx,x,N);
%f = interp1(sq*x,fx/sq,x);
%sq2 = inprod2(f,f,x,N);
%sq = sq*sq2;
%f = interp1(sq*x,fx/sq,x);
function [f,evalue] = cpesn(L,N,n,phi,x)
% [f,evalue] = CPESN(L,N,n,x)
%
% works out nth eigenvector of schrodinger equation with selfpotential
% initial guess of 1/(r+1) with the length of the interval L ,
% N is number of cheb points
117
% interpolates to x
%
[V,eigs] = cpsn(L,N,phi,x);
cx(1) = 0;
cx(N+1) = L;
dist = (cx(N+1)cx(1))/2;
for i = 1:(N+1)
cx(i) = (cos((i1)*pi/(N)))*dist+dist+cx(1);
end
E = diag(eigs);
E0 = 0*E;
for a = 1:n
[e,i] = max(EE0);
E0(i) = NaN;
end
ev = [0; V(:,i); 0];
f = interp1(cx,ev,x);
evalue = e;
function [V,eigs] = cpsn(L,N,phi,x)
% [V,eigs] = cpsn(L,N,phi,x)
%
% potential of phi wrt x with usual boundary conditions.
% using cheb differential operators
%
% N = number of cheb points
% L = length of interval
%
cx(1) = 0;
cx(N+1) = L;
dist = (cx(N+1)cx(1))/2;
for i = 1:(N+1)
cx(i) = (cos((i1)*pi/(N)))*dist+dist+cx(1);
end
%
% working out the cheb difff matrix
%
D = cheb(N);
D2W = D^2/dist^2;
D2 = D2W(2:N,2:N);
ZE = zeros(N,N);
I = diag(ones(N1,1));
%
%Matrices C1 and C2
%
% Matrices C1 and C2 to solve
% D2 + 1/rR = ER
%
118
cphi = interp1(x,phi,cx(2:N));
C1 = D2 + diag(cphi);
C2 = I;
%Compute eigenvalues
[V,eigs] = eig(C1,C2);
%cheb.m  (N+1)*(N+1) chebyshev spectral differentiaion
% Matrix via explicit formulas
function [D,x] = cheb(N)
if N==0 D=0; x=1; break, end;
ii = (0:N)’;
x = cos(pi*ii/N);
ci = [2;ones(N1,1);2];
D = zeros(N+1,N+1);
for j = 0:N; %compute one column at a time
cj = 1; if j==0  j==N cj = 2; end
denom = cj*(x(ii+1)x(j+1)); denom(j+1) = 1;
D(ii+1,j+1) = ci.*(1).^(ii+j)./denom;
if j > 0 & j< N
D(j+1,j+1) = .5*x(j+1)/(1x(j+1)^2);
end
end
D(1,1) = (2*N^2+1)/6;
D(N+1,N+1) = (2*N^2+1)/6;
B.2 Chapter 4 Programs
B.2.1 Program for solving the Schr¨ odinger equation
% schrodinger equation with potential from the bound state
% To create function of nb1
% with cheb point spaces
% cos(i*pi/2k)
% N = number of points
clear bx1 bv1 br1
dist = (nb1(l,1)nb1(1,1))/2;
for i = 1:(N+1)
bx1(i) = (cos((i1)*pi/(N)))*dist+dist+nb1(1,1);
br1(i) = interp1(nb1(:,1),nb1(:,2),bx1(i));
bv1(i) = interp1(nb1(:,1),nb1(:,3),bx1(i));
br2(i) = interp1(nb2(:,1),nb2(:,2),bx1(i));
br3(i) = interp1(nb3(:,1),nb3(:,2),bx1(i));
end
% working out the cheb difff matrix
%
D = cheb(N);
D2 = D^2/dist^2;
119
D2 = D2(2:N,2:N);
D = D(1:N,1:N);
V0 = diag(bv1(2:N)+Estate);
I = diag(ones(N1,1));
%Matrices C1 and C2
%for the schrodinger equation
%D2 + V0 = e
%
C1 = D2 + V0;
C2 = I;
%
%Compute eigenvalues
[V,eigs] = eig(C1,C2);
E = diag(eigs);
hold off
subplot(1,1,1);
plot(diag(eigs),’r*’);
grid;
pause;
[y,i] = min(E)
subplot(3,2,1);
plot(bx1(2:N),V(1:N1,i)./bx1(2:N)’)
f = V(1:N1,i).^2;
f(2:N) = f(1:N1);
f(1) = 0;
int = dist*(D\f);
int(1)
title(’first eigenvector ’);
grid on;
subplot(3,2,2);
ratio = (V(1,i)./bx1(2))/br1(2);
ratio2 = ratio.^2/int(1);
ratio2 = ratio2^(1/3);
error2 = interp1(bx1(2:N),br1(2:N)’*ratio,ratio2*bx1(2:N));
error = error2’  V(1:N1,i)./bx1(2:N)’;
plot(bx1(2:N),error)
title(’error to nb1’)
E(i) = max(E)+1;
[y2,i2] = min(E)
subplot(3,2,3);
plot(bx1(2:N),V(1:N1,i2)./bx1(2:N)’)
f = V(1:N1,i2).^2;
f(2:N) = f(1:N1);
f(1) = 0;
int = dist*(D\f);
int(1)
title(’second eigenvector ’);
grid on;
120
subplot(3,2,4);
ratio = (V(1,i2)./bx1(2))/br2(2);
ratio2 = ratio.^2/int(1);
ratio2 = ratio2^(1/3);
error2 = interp1(nb2(:,1),nb2(:,2)*ratio,ratio2*bx1(2:N));
error = error2’  V(1:N1,i2)./bx1(2:N)’;
plot(bx1(2:N),error)
title(’error to nb2’)
E(i2) = max(E)+1;
[y2,i3] = min(E)
subplot(3,2,5);
plot(bx1(2:N),V(1:N1,i3)./bx1(2:N)’)
f = V(1:N1,i3).^2;
f(2:N) = f(1:N1);
f(1) = 0;
int = dist*(D\f);
int(1)
title(’third eigenvector is:’);
grid on;
subplot(3,2,6);
ratio = (V(1,i3)./bx1(2))/br3(2);
ratio2= ratio.^2/int(1);
ratio2 = ratio2^(1/3);
error2 = interp1(nb3(:,1),nb3(:,2)*ratio,ratio2*bx1(2:N));
error = error2’  V(1:N1,i3)./bx1(2:N)’;
plot(bx1(2:N),error)
title(’error to nb3’)
B.2.2 Program for solving the O() perturbation problem
% first order perturbation
% To create function of nb1
% with cheb point spaces
% cos(i*pi/2k)
% N = number of points
clear bx1 bv1 br1
dist = (nb1(l,1)nb1(1,1))/2;
for i = 1:(N+1)
bx1(i) = (cos((i1)*pi/(N)))*dist+dist+nb1(1,1);
br1(i) = interp1(nb1(:,1),nb1(:,2),bx1(i));
bv1(i) = interp1(nb1(:,1),nb1(:,3),bx1(i));
end
%
% working out the cheb difff matrix
%
D = cheb(N);
D2W = D^2/dist^2;
D2 = D2W(2:N,2:N);
121
V0 = diag(bv1(2:N));
R0 = diag(br1(2:N));
ZE = zeros(N1,N1);
I = diag(ones(N1,1));
%Matrices C1 and C2
%for the S_N equations
% subject to the equations
% 2R0A + D2W = 0
% D2B + V0B = eA
% D2A  V0A + R0W = eB
% with boundary conditions
% A = 0 , B =0 W = 0 at endpoints
% DA = 0 at infinity
% DB = 0 at infinity
C1 = [2*R0,ZE,D2;ZE,D2+V0,ZE;D2V0,ZE,R0];
C2 = [ZE,ZE,ZE;I,ZE,ZE;ZE,I,ZE];
%Compute eigenvalues
[V,eigs] = eig(C1,C2);
j = sqrt(1);
hold off
E = j*diag(eigs);
Enew = sort(E);
plot(Enew(1:N),’r*’);
grid
title(’Eigenvalues of the perturbed SN equation ’)
B.2.3 Program for the calculation of (4.10)
% abint.m  program to work out integral of A*B
%
D = cheb(N);
D = D(1:N,1:N);
f = abs(V(1:N1,i)).^2;
f(2:N) = f(1:N1);
f(1) = 0;
A = dist*(D\f);
g = abs(V(N:2*N2,i)).^2;
g(2:N) = g(1:N1);
g(1) = 0;
B = dist*(D\g);
f = V(1:N1,i);
g = V(N:2*N2,i);
f = conj(f);
fg = f.*g;
fg(2:N) = fg(1:N1);
fg(1) =0;
int = dist*(D\fg);
122
ratio = sqrt(A(1))*sqrt(B(1))\abs(int(1))
B.2.4 Program for performing the RungeKutta integration on the O()
perturbation problem
% see the solution of the first order perturbation equations
% intial data
%
global nb1;
e = eigenvalue;
global e;
y0 = [0 1 0 1 0 1]’;
y0(2) = initial value for y0(2);
y0(4) = initial value for y0(4);
y0(6) = initial value for y0(6);
X0 = 0.000001;
Xfinal = L  length of the interval;
F = ’yfo’;
Xspan = [X0,Xfinal];
[xp,yp] = ode45(F,Xspan,y0);
subplot(3,1,1)
plot(xp,imag(yp(:,3)./xp))
title(’A/r’)
subplot(3,1,2)
plot(xp,imag(yp(:,1)./xp))
title(’B/r’)
subplot(3,1,3)
plot(xp,imag(yp(:,5)./xp))
title(’W/r’)
function Yprime = yfo(X,Y)
% this is function for the first order pert equations
% where V , R are the vaule of the potential and wave function
% related to the zero th order perturbation.
% e is the eigenvalue ( e = i\lambda )
global e
Yprime(1) = Y(2);
Yprime(2) = V(X)*Y(1)+e*Y(3);
Yprime(3) = Y(4);
Yprime(4) = V(X)*Y(3)+e*Y(1)+R(X)*Y(5);
Yprime(5) = Y(6);
Yprime(6) = 2*R(X)*Y(3);
Yprime = Yprime’;
B.3 Chapter 6 Programs
B.3.1 Evolution program
%snevc2.m
123
%Iteration scheme for the schrodingernewton equations
%using crank nicolson + cheb ,
% Modified 13/3/2001  for perturbation about ground state.
clear i; clear norm; clear Eu;
clear fgp; clear g; clear psi; clear psi1; %clears used data formats
hold off
N = 70;
N2 = N;
R = 350; count = 1;
dt = 0.5; one = 1;
[D,x] = cheb(N);
D2 = D^2;
D2 = D2(2:N,2:N);
x = 0.5*((1x)*R);
u = cpfsn(R,N2,3,x,1./(x+1)); %calculate stationary solution
x = x(2:N); u = u(2:N);
dist = 0.5*R;
D2 = D2/dist^2;
t = 0; nu = dt;
a = 50; v = 0;%0.1;
av = a+v*t;
vs = (v^2*t)/4;
sigma = 15;
rsig = sqrt(sigma);
brack = 1/(sigma^2);
rbrack = sqrt(brack);
wave = rbrack*rsig*exp(0.5*brack*(xav).^2+(0.5*i*v*x)i*vs);
wave2= rbrack*rsig*exp(0.5*brack*(x+av).^2(0.5*i*v*x)i*vs);
pert = wavewave2;
% alternative initial data !
uold = u; %+0.01*pert;
u = uold;
fwe = u;
rho = abs(u(1:end)).^2./x(1:end);
pu = D2\rho;
phi = pu./x;
% solve the potential equation
% sponge factors
e = 2*exp(0.2*(xx(end)));
%e = zeros(N1,1);
o = zeros(N1,1);
Eu(count) = abs(u(5));
fgp(count) = abs(u(1));
fgp2(count) = abs(u(15));
% start of the iteration scheme
for zi2 = 1:100
for zi = 1:1000
count = count +1;
124
phi1 = phi; % initial guess !
Res = 9E9;
while abs(Res) > 3E8
u = uold;
for z = 1:N1
g(z) = (i+e(z))*nu/2;
psi1(z) = one*(i*dt/2)*(phi1(z)+o(z));
psi(z) = one*(i*dt/2)*(phi(z) +o(z));
end
maindaig = ones(N1,1)+psi1’;
d = (ones(N1,1)psi’).*u  g.’.*(D2*u);
b = diag(maindaig) + diag(g)*D2;
u = b\d;
unew = u;
% this was the solving of the P.D.E !
% work out potential for u^n+1 %%
rho = abs(u(1:end)).^2./x(1:end);
pu = D2\rho;
phi1 = pu./x;
%then recalution of the result ! to check
% Now to check the L infinty norm of the residual
u = uold;
for z = 1:N1
g(z) = (i+e(z))*nu/2;
psi1(z) = one*(i*dt/2)*(phi1(z)+o(z));
psi(z) = one*(i*dt/2)*(phi(z) +o(z));
end
maindaig = ones(N1,1)+psi1’;
d = (ones(N1,1)psi’).*u  g.’.*(D2*u);
b = diag(maindaig) + diag(g)*D2;
u = b\d;
Res = max(abs(uunew));
%Res = max(abs(phi1phi))
% end of checking the L inifinity norm .%
end
Eu(count) = abs(u(5));
fgp(count) = abs(u(1));
fgp2(count) = abs(u(30));
norm(zi2) = inprod(u,u,x,N1);
pe(zi2) = pesn(u,x,phi,N);
t = t + dt;
%fwe(:,end+1) = u;
uold = unew;
phi = phi1;
%save fp.mat fgp
%save Ep.mat Eu
%save nn.mat norm
end
125
fwe(:,end+1) = u;
save fp.mat fgp fgp2 %Eu
save nn.mat fwe norm
%tone = t*ones(N1,1);
%plot3(tone,x,abs(u),’b’);
%drawnow
%hold on
%hold on
%plot(x,phi1,’g’);
%plot(nb1(:,1),abs(nb1(:,1).*nb1(:,2)),’r’)
%plot(x,angle(u),’k’);
end
B.3.2 Fourier transformation program
N = max(size(f))/2;
dt = 0.5;
freq = pi/(N*dt);
n = 2*N;
Fn = fft(f);
Fn = [conj(Fn(N+1)) Fn(N+2:end) Fn(1:N+1)]; % rearrange frequency points
Fn = Fn/n; % scale results by fft length
idx = N:N;
Fn(N+1) = 0;
idxs = idx*freq;
plot(log(idxs(N+1:2*N+1)),log(abs(Fn(N+1:2*N+1))))
xlabel(’log(frequency)’);
ylabel(’log(fourier transform)’);
B.4 Chapter 7 Programs
B.4.1 Programs for solving the axisymmetric stationary state problem
function [f,aphi,evalue,lres,count] = cp2fsn(dis,N,N2,n,l)
% function [f,aphi,evalue] = cp2fsn(dis,N,N2,n,l)
%
%
warning off
[D,x] = cheb(N);
[E,y] = cheb(N2);
dis2 = pi/2;
x2= dis*(x+1); % x is the radius points.
y2 = dis2*(y+1);
x2= x2(2:N);
D = D/dis;
E = E/dis2;
D2 = D^2;
E2 = E^2;
126
D = D(2:N,2:N);
D2 = D2(2:N,2:N);
I = eye(N2+1);
fac1 = diag(1./x2.^2);
fac2 = diag((cot(y2eps)+cot(y2+eps))/2);
% initial guess for potential
for i = 1:N2+1
for j = 1:N1
phi((N1)*(i1)+j) = 1./(1000*x2(j)+1);
end
end
aphi = phi;
L = kron(I,D2) + kron(E2,fac1)+ kron(fac2*E,fac1); %Laplacian kinda of !
% actually Le Laplacian for axially symetric case !
newphi = phi;
oldphi = newphi;
res = 1;
mang = 0; mener = 0;
count = 0;
% start of a iteration
while(res > 1E4)
count = count +1;
[ft,evalue,mener,mang] = cp2esnea(dis,N,N2,mener,mang,newphi,n,l);
% get nth evector correponding to potential
% next aim to work out the potential, corresponding to evector !
data = ft;
norms
ft = ft/nom;
clear g;
for i = 1:N2+1
for j = 1:N1
g((N1)*(i1)+j) = abs(ft((N1)*(i1)+j))^2/x2(j);
end
end
g = g.’;
pu = L\g;
for i = 1:N2+1
for j = 1:N1
nwphi((N1)*(i1)+j) = pu((N1)*(i1)+j)/x2(j);
end
end
for i = 1:N2+1
for j = 1:N1
newphi((N1)*(i1)+j) = 0.5*(nwphi((N1)*(i1)+j)+nwphi((N1)*(N2+1i)+j));
end
end
res2 = abs(max(newphinwphi));
127
res = abs(max(newphioldphi))/abs(max(newphi))
oldphi = newphi;
data = newphi;
%axpl
%drawnow;
lres = res;
if count > 50
lres = res;
res = 0;
end
end
f = ft;
aphi = newphi;
function [f,evalue,ener,ang] = cp2esnea(dis,N,N2,pener,pang,phi,n,l);
%
%
[V,eigs] = cp2sn2(dis,N,N2,phi);
En = diag(eigs);
[e,ii] = sort(real(En));
W = 0;
b = 0;
vclose = 10E8;
found = 0;
limit = floor(max(ii)/4);
for as = 1:limit
data = V(:,ii(as));
norms2
if (nom > 0.04)
data = data/nom;
enr;
[n2,l2] = fnl(data,N,N2);
close = (penerener)^2 + 1E4*(pangang)^2+ 0.1*((n2n)^2+ 10*(l2l)^2);
if (close < vclose)
vclose = close;
end
end
end
while(W == 0);
b = b+1;
data = V(:,ii(b));
norms2
if (nom > 0.04)
data = data/nom;
enr;
W = 1;
[n2,l2] = fnl(data,N,N2);
close = (penerener)^2 + 1E4*(pangang)^2+ 0.1*((n2n)^2+ 10*(l2l)^2);
128
end
if (W == 1)
W = 0;
if (close < vclose*1.001)
%axpl2
%drawnow;
W = 1;
end
end
%if (W == 1)
%W = input(’correct state’);
%end
end
evalue = En(ii(b));
f = V(:,ii(b));
% find n and l number of the given grid ..
function [n,l] = fnl(data,N,N2);
% where the u is r\phi
%
for i = 1:N2+1
for j = 1:N1
u(j,i) = data((N1)*(i1)+j);
end
end
clear no noo;
j = 0;
for i = 1:N1
j = j+1;
no(i) = cz2sn(u(i,1:N2+1));
if (no(i) == 1)
j = j 1;
end
if (j > 0)
noo(j) = no(i);
end
end
l = mean(no);
%if (j > 0)
% l = mean(noo);
%end
clear no2 noo2;
i = 0;
for j = 1:N2+1
i = i +1;
no2(j) = cz2sn(u(1:N1,j));
if (no2(j) == 1)
i = i 1;
129
end
if (i > 0)
noo2(i) = no2(j);
end
end
n = mean(no2);
%if (i > 0)
%n = mean(noo2);
%end
%l = ceil(l);
%n = ceil(n);
%l = round(l);
%n = round(n);
% to calculate the norm of a
% axially sysmetric data set!
%
% Jab = r^2sin \alpha
%
clear ener
clear ang
ener = 0;
ang = 0;
[E,y] = cheb(N2);
dis2 = pi/2;
y2 = pi/2*(y+1);
[D,x] = cheb(N);
x2= dis*(x+1);
x2= x2(2:N);
D = D/dis;
E = E/dis2;
D2 = D^2;
E2 = E^2;
D = D(2:N,2:N);
D2 = D2(2:N,2:N);
I = eye(N2+1);
I2 = eye(N1);
fac1 = diag(1./x2.^2);
fac2 = diag((cot(y2+eps)+cot(y2eps))/2);
L = +kron(I,D2)+kron(E2,fac1)+kron(fac2*E,fac1); %Laplacian kinda of !
J = kron(E2,I2) + kron(fac2*E,I2);
%data2 = data;
%for i = 1:N2+1
%for j = 1:N1
%data2((N1)*(i1)+j) = data((N1)*(i1)+j)/x2(j);
%end
130
%end
nab2 = L*data;
aab2 = J*data;
for i = 3:N2+1
for j = 2:N1
darea = (x2(j)x2(j1))*(y2(i)y2(i1));
jab = sin(y2(i));
jab2 = sin(y2(i));
ener = ener  jab*abs(data((N1)*(i1)+j))^2*darea*phi((N1)*(i1)+j);
ener = ener  jab*conj(data((N1)*(i1)+j))*nab2((N1)*(i1)+j)*darea;
ang = ang  jab*conj(data((N1)*(i1)+j))*aab2((N1)*(i1)+j)*darea;
end
end
norms2
% to calculate the norm of a
% axially sysmetric data set!
%
% Jab = r^2sin \alpha
%
clear nom nom2
% Setup the grid
nom = 0; nom2 = 0;
[E,y] = cheb(N2);
y2 = pi/2*(y+1);
[D,x] = cheb(N);
x2= dis*(x+1); % x is the radius points.
x2= x2(2:N);
for i = 2:N2+1
for j = 2:N1
darea = (x2(j)x2(j1))*(y2(i)y2(i1));
jab = sin(y2(i));
nom = nom + jab*abs(data((N1)*(i1)+j))^2*darea;
end
end
nom = nom/2;
nom = sqrt(nom);
% cz2sn.m
% function to work out number of crossing given matrix array...
function number = cz2sn(W);
%
a = max(size(W));
count = 0;
posne = 2;
eps2 = 1E4*eps;
if (W(1) > eps2)
posne = 1;
131
end
if (W(1) < eps2)
posne = 0;
end
for i = 2:a
if (posne == 1)
if W(i) < eps2
posne = 0;
count = count+1;
end
end
if (posne == 0)
if W(i) > eps2
posne = 1;
count = count+1;
end
end
if (posne == 2)
if W(i) > eps2
posne = 1;
end
if W(i) < eps2
posne = 0;
end
end
end
if posne == 2
count = 1;
end
number = count;
function [V,eigs] = cp2sn2(dis,N,N2,phi);
% [V,eigs] = cp2sn2(dis,N,N2,phi);
%
% program to solve the stationary schrodinger in axially symmetric case.
% r,\alpha are the variable in which we solve for at the
% present both has N points !
% boundary condition are different on the \alpha !
% i.e we expect phi to be N1*N1 vector !
%
[E,y] = cheb(N2);
y2 = pi/2*(y+1);
dis2 = pi/2;
[D,x] = cheb(N);
x2= dis*(x+1); % x is the radius points.
x2= x2(2:N);
D = D/dis;
E = E/dis2;
132
D2 = D^2;
E2 = E^2;
D = D(2:N,2:N);
D2 = D2(2:N,2:N);
I = eye(N2+1);
fac1 = diag(1./x2.^2);
fac2 = diag((cot(y2eps)+cot(y2+eps))/2);
L = +kron(I,D2)+kron(E2,fac1) + kron(fac2*E,fac1); %Laplacian kinda of !
L = L + diag(phi);
%
[V,eigs] = eig(L);
B.4.2 Evolution programs for the axisymmetric case
% nsax2.m SchrodingerNewton equation
% on an axially symmetric grid 
% full nonlinear method using chebyshev, ADI(LOD) wave equation
% that the wave equation done in chebyshev differention
% matrix in 2d
clear xx2 yy2
load dpole
[DD,x3] =cheb(N);
x3 = dis*(x3+1);
x3 = x3(2:N);
[EE,y3] = cheb(N2);
y3 = pi/2*(y3+1);
clear DD EE
for cc = 1:N1
for cc2 = 1:N2+1
f2(cc2,cc) = f((cc21)*(N1)+cc);
fred(cc2,cc) = aphi((cc21)*(N1)+cc);
xx2(cc2,cc) = x3(cc)*cos(y3(cc2));
yy2(cc2,cc) = x3(cc)*sin(y3(cc2));
end
end
dis = 50;
grav = 1;
clear xx yy vv
% grid setup %
N = 25;
N2 = 20;
[E,y] = cheb(N2);
dis2 = pi/2;
y2 = pi/2*(y+1);
[D,x] = cheb(N);
x2 = dis*(x+1);
133
x2 = x2(2:N);
D = D/dis;
E = E/dis2;
% create xx,yy
for i = 1:N2+1
for j = 1:N1
xx(i,j) = x2(j)*cos(y2(i));
yy(i,j) = x2(j)*sin(y2(i));
end
end
% initial data %
%vv = interp2(xx2,yy2,f2,xx,yy);
vv = f2;
%vv = 0.1*sqrt(xx.^2+yy.^2).*exp(0.01*((xx).^2+(yy).^2));
%vva = 0.9*sqrt(xx.^2+yy.^2).*exp(0.01*((xx+30).^2+(yy).^2));
%vv = vv + vva;
% time step !
dt = 0.5;
% plotting the function on the screen at intervals !
plotgap = round((2000/10)/dt); dt = (2000/10)/plotgap;
vvold = vv;
vvnew = 0*vv;
phi = 0*vv;
pot = phi;
% sponge factors
%e = 0*vv;
e = 1*exp(0.3*(sqrt(xx.^2+yy.^2)  2*dis)); %i.e no sponge factors
%% calculate of differentiation matrices
D2 = D^2;
D2 = D2(2:N,2:N);
DD = D(2:N,2:N);
E2 = E^2;
I = eye(N2+1);
I2 = eye(N1);
fac1 = diag(x2);
fac2 = diag((cot(y2+eps)+cot(y2eps))/2);
fac3 = diag(1./x2);
fac2(1,1) = 1E10;
fac2(end,end) = 1E10;
L1 = E2 + fac2*E;
L2 = D2;
% calculate the starting potential %
u = grav*vv; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
phi = padi(dis,u,v,N,N2); % calculation of the potential
%phi = 0*pot;
time =0;
pic = vv;
134
% evolution loop begin here ....
for n = 0:30*plotgap
t = n*dt;
if rem(n,plotgap) == 0, %  Plot results at multiples of plotgap
%subplot(2,2,n/plotgap+1),
%[xxx,yyy] = meshgrid(dis:dis*.02:dis,dis:dis*.02:dis);
%vvv = interp2(xx,yy,vv,xxx,yyy,’cubic’);
%mesh(xxx,yyy,abs(vvv)), % colormap([0 0 0])
%mesh(xx,yy,phi)
time(end+1) =t;
pic(:,:,end+1) =vv;
%view(that);
%hold off
%mesh(xx,yy,abs(vv)./sqrt(xx.^2+yy.^2));
%surf(xx,yy,imag(vv))
%hold on
%surf(xx,yy,abs(vv)) %./sqrt(xx.^2+yy.^2));
%shading interp
%axis([1 1 1 1 1 1]),
%surf(xx,yy,abs(uaa))
%axis([2*dis 2*dis 0 2*dis 0.2 0.2])
%title([’t = ’ num2str(t)]), drawnow
% calculate of the norm..
%cn2d
%cnorm
end
Res = 1;
pot = phi;
npot = phi;
while (abs(Res) > 1E4) % set up for the iteration ..
% calculate of the partials
urr = zeros(N2+1,N1);
% for a = 1:N2+1
% urr(a,1:N1) = (L2*vv(a,1:N1).’).’;
% end
uaa = zeros(N2+1,N1);
for a = 1:N1
uaa(1:N2+1,a) = (1/x2(a)^2)*L1*vv(1:N2+1,a);
end
%  ADI(LOD)  %
vv2 = 0*vv;
% (1+dr)Un2 = (1da)Un1
for a = 1:N2+1
sv2 = vv(a,1:N1);
ds = sqrt(1)*uaa(a,1:N1) + e(a,1:N1).*uaa(a,1:N1);
d = (sv2 + 0.5*dt*ds).’;
135
bs2 = sqrt(1)*L2 + diag(e(a,1:N1))*L2;
b = eye(N1)  0.5*dt*bs2;
vv2(a,1:N1) = (b\d).’;
end
% end part1 %
% caculate urr,uaa
% for a = 1:N1
% uaa(1:N2+1,a) = (1/x2(a)^2)*L1*vv2(1:N2+1,a);
% end
for a = 1:N2+1
urr(a,1:N1) = (L2*vv2(a,1:N1).’).’;
end
% (1+da)Un3 = (1dr)Un2
vv3 = 0*vv2;
for a = 1:N1
sv = vv2(1:N2+1,a);
ds = sqrt(1)*urr(1:N2+1,a) + e(1:N2+1,a).*urr(1:N2+1,a);
d = sv + 0.5*dt*ds;
bs2 = (1/x2(a)^2)*sqrt(1)*L1 + (1/x2(a)^2)*diag(e(1:N2+1,a))*L1;
b = eye(N2+1)  0.5*dt*bs2;
vv3(1:N2+1,a) = b\d;
end
% end part2 %
% (1V)Un4 = (1+V)Un3
vvnew = 0*vv3;
for a = 1:N1
d = vv3(1:N2+1,a) + 0.5*sqrt(1)*dt*vv3(1:N2+1,a).*pot(1:N2+1,a);
bpm2 = sqrt(1)*diag(npot(1:N2+1,a));
b = eye(N2+1)  0.5*dt*bpm2;
vvnew(1:N2+1,a) = b\d;
end
% end part3 %
% now the calculate of the potential of the new potential
% due to the new wavefuntion
u = grav*vvnew; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
n2pot = padi(dis,u,v,N,N2); % calculation of the potential
%n2pot = 0*pot;
Res = norm(n2potnpot);
npot = n2pot; % +0.5*pot;
%pot = npot;
end
phi = n2pot;
vv = vvnew;
%inaxprod(abs(vv),abs(vv),dis,N,N2,1)
save axtest pic time N N2 xx yy dis
end
136
function phi = padi(dis,u,v,N,N2)
% padi.m
% Aim: To use PeacemanRachford ADI iteration
% to find the solution to the poisson equation in 2D,with axially
% sysmetric case.
%
% u is the wavefunction !
% v is the potential !
% grid setup !
[E,y] = cheb(N2);
y2 = pi/2*(y+1);
dis2 = pi/2;
[D,x] = cheb(N);
x2 = dis*(x+1); % x is the radius points.
x2 = x2(2:N);
D = D/dis;
E = E/dis2;
% calculation of the differnetiation matrics
D2 = D^2;
E2 = E^2;
D2 = D2(2:N,2:N);
D = D(2:N,2:N);
I = eye(N2+1);
I2 = eye(N1);
rho = 0.017;
rho2 = 0.017;
fac1= diag(1./x2);
fac2= diag((cot(y2+eps)+cot(y2eps))/2);
fac2(1,1) = 1E5;
fac2(end,end) = 1E5;
L1 = E2 + fac2*E;
L2 = D2;
% readjust density function !
uu = abs(u).^2*fac1;
umod = uu;
% uu =u;
% start of the iteration
Res = 1E12;
while (abs(Res) > 1E3)
% calculate wrr,waa
wrr = zeros(N2+1,N1);
%for a = 1:N2+1
% wrr(a,1:N1) = (L2*v(a,1:N1).’).’;
%end
waa = zeros(N2+1,N1);
137
for a= 1:N1
waa(1:N2+1,a) = (1/x2(a)^2)*L1*v(1:N2+1,a);
end
%  ADI  %
% (L2+rho)S = (L1+rhoumod)Wn
A = L2+rho*I2;
vv = 0*v;
for a = 1:N2+1
sv2 = rho*v(a,1:N1) + waa(a,1:N1)  umod(a,1:N1); %abs(uu(a,1:N1)).^2;
vv(a,1:N1) = (A\(sv2.’)).’;
end
% end part1 of ADI
% calculate waa,wrr
%for a = 1:N1
%waa(1:N2+1,a) = L1*vv(1:N2+1,a);
%end
for a = 1:N2+1
wrr(a,1:N1) = (L2*vv(a,1:N1).’).’;
end
newv = 0*v;
% part2 ADI
%A = L1+rho2*I;
% (L1+rho2)Wn+1 = (L2+rho2umod)S
for a = 1:N1
sv = rho2*vv(1:N2+1,a) + wrr(1:N2+1,a)  umod(1:N2+1,a); %abs(uu(1:N2+1,a)).^2;
A = L1*(1/x2(a)^2) + rho2*I;
newv(1:N2+1,a) = A\sv;
end
% end of ADI
test = abs(newvv);
Res = norm(test(1:N2+1,1:N1));
padires = Res;
v = newv;
end
fac = diag(1./x2);
phi = v*fac;
B.5 Chapter 8 Programs
B.5.1 Evolution programs for the two dimensional SN equations
% ns2d.m SchrodingerNewton equation
% on a 2d grid 
% full nonlinear method using chebyshev, ADI(LOD) wave equation
% that the wave equation done in chebyshev differention
% matrix in 2d
dis = 40;
grav = 1;
138
speed =0;
% grid setup %
N = 36;
[D,x] = cheb(N);
x = dis*x;
y = x’;
D = D*1/dis;
[xx,yy] = meshgrid(x,y);
load twin
[DD,x2] = cheb(N2);
x2 = dis2*x2(2:N2);
y2 = x2’;
for cc = 1:N21
for cc2 = 1:N21
f2(cc,cc2) = f((cc1)*(N21)+cc2);
end
end
%f3 = 0.5*(f2+f2(N21:1:1,:));
[xx2,yy2] = meshgrid(x2,y2);
vv = interp2(xx2,yy2,f2,xx,yy);
for cc = 1:N+1
for cc2 = 1:N+1
if isnan(vv(cc,cc2)) == 1
vv(cc,cc2) = 0;
end
end
end
dt = 1;
% plotting the function on the screen at intervals !
plotgap = round(400/(2*dt)); dt = 400/(2*plotgap);
% initial data %
%vv = 0.07*exp(0.01*((xx15).^2+(yy+15).^2))*exp(sqrt(1)*0.5);
%vva = 0.05*exp(0.01*((xx+35).^2+(yy35).^2));
%vv = vv + vva;
% to add initial velocity
%vv = 0.07*exp(0.02*((xx).^2+yy.^2)).*exp(sqrt(1)*speed*(yy+xx)/sqrt(2));
%vva = 0.07*exp(0.01*((xx+10).^2+yy.^2)).*exp(sqrt(1)*speed*yy);
%vv = vv+vva;
%initial velocity perhaps %.*exp(sqrt(1)*0.1*xx);
vvold = vv;
vvnew = 0*vv;
phi = 0*vv;
pot = phi;
% sponge factors
e = 0*vv;
e = min(1,1*exp(0.5*(sqrt(xx.^2+yy.^2)(dis10))));
%% calculate of differentiation matrices
D2 = D^2;
139
D2 = D2(2:N,2:N);
% calculate the starting potential %
u = grav*vv; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
phi = potadi(dis,u,v,N); % calculation of the potential
time = 0;
pic = vv;
% evolution loop begins here
for n = 0:50*plotgap
t = n*dt;
if rem(n,plotgap) == 0, %  Plot results at multiples of plotgap
%subplot(2,2,n/plotgap+1),
time(end+1) = t;
pic(:,:,end+1) = vv;
%[xxx,yyy] = meshgrid(dis:dis*.02:dis,dis:dis*.02:dis);
%vvv = interp2(xx,yy,vv,xxx,yyy,’cubic’);
%mesh(xxx,yyy,abs(vvv)), % colormap([0 0 0])
%mesh(xx,yy,phi)
%view(that);
%mesh(xx,yy,imag(vv));
%axis([1 1 1 1 1 1]),
%axis([dis dis dis dis 0.03 0.03])
%title([’t = ’ num2str(t)]), drawnow
% calculate of the norm..
%cn2d
%cnorm
end
Res = 1;
pot = phi;
npot = phi;
while (abs(Res) > 1E3) % set up for the iteration ..
% calculate uxx
uyy = zeros(N+1,N+1);
for a = 2:N
uyy(a,2:N) = (D2*vv(a,2:N).’).’;
end
uxx = zeros(N+1,N+1);
for a = 2:N
uxx(2:N,a) = D2*vv(2:N,a);
end
%  ADI(LOD)  %
vv2 = 0*vv;
% (1+dy)Un2 = (1dx)Un1
for a = 2:N
sv2 = vv(a,2:N);
ds = sqrt(1)*uxx(a,2:N) + e(a,2:N).*uxx(a,2:N);
d = (sv2 + 0.5*dt*ds).’;
140
bs2 = sqrt(1)*D2 + diag(e(a,2:N))*D2;
b = eye(N1)  0.5*dt*bs2;
vv2(a,2:N) = (b\d).’;
end
% end part1 %
% caculate uxx,uyy
for a = 2:N
uxx(2:N,a) = D2*vv2(2:N,a);
end
for a = 2:N
uyy(a,2:N) = (D2*vv2(a,2:N).’).’;
end
% (1+dx)Un3 = (1dy)Un2
vv3 = 0*vv2;
for a = 2:N
sv = vv2(2:N,a);
ds = sqrt(1)*uyy(2:N,a) + e(2:N,a).*uyy(2:N,a);
d = sv + 0.5*dt*ds;
bs2 = sqrt(1)*D2 + diag(e(2:N,a))*D2;
b = eye(N1) 0.5*dt*bs2;
vv3(2:N,a) = b\d;
end
% end part2 %
% (1V)Un4 = (1+V)Un3
vvnew = 0*vv3;
for a = 2:N
d = vv3(2:N,a)  0.5*sqrt(1)*dt*vv3(2:N,a).*pot(2:N,a);
bpm2 = sqrt(1)*diag(npot(2:N,a));
b = eye(N1) + 0.5*dt*bpm2;
vvnew(2:N,a) = b\d;
end
% end part3 %
% now the calculate of the potential of the new potential
% due to the new wavefuntion
u = grav*vvnew; % wavefunction
v = 0*pot; % initial guess of new pot for potadi
n2pot = potadi(dis,u,v,N); % calculation of the potential
Res = norm(n2potnpot);
npot = n2pot; % +0.5*pot;
%pot = npot;
end
phi = n2pot;
%vv = 0.5*(vvnew  vvnew(:,N+1:1:1));
vv = vvnew;
end
save test3 pic time N xx yy dis
function phi = potadi(dis,u,v,N)
141
% potadi.m
% Aim: To use PeacemanRachford ADI iteration
% to find the solution to the poisson equation in 2D
%
% u is the wavefunction !
% v is the potential !
% grid setup !
[D,x] = cheb(N);
x = dis*x;
y = x’;
[xx,yy] = meshgrid(x,y);
D = D*1/dis;
% calculation of the differnetiation matrics
D2 = D^2;
D2 = D2(2:N,2:N);
I = eye(N1);
rho = +0.01;
% start of the iteration
Res = 1;
while (abs(Res) > 1E4)
% calculate uxx,uyy
uyy = zeros(N+1,N+1);
for a = 2:N
uyy(a,2:N) = (D2*v(a,2:N).’).’;
end
uxx = zeros(N+1,N+1);
for a= 2:N
uxx(2:N,a) = D2*v(2:N,a);
end
%  ADI  %
A = D2+rho*I;
vv = 0*v;
for a = 2:N
sv2 = rho*v(a,2:N) + uxx(a,2:N)  abs(u(a,2:N)).^2;
vv(a,2:N) = (A\(sv2.’)).’;
end
% end part1 of ADI
% calculate uxx,uyy
for a = 2:N
uxx(2:N,a) = D2*vv(2:N,a);
end
for a = 2:N
uyy(a,2:N) = (D2*vv(a,2:N).’).’;
end
newv = 0*v;
142
% part2 ADI
for a = 2:N
sv = rho*vv(2:N,a) +uyy(2:N,a)  abs(u(2:N,a)).^2;
newv(2:N,a) = A\sv;
end
% end of ADI
test = newvv;
Res = norm(test(2:N,2:N));
v = newv;
end
phi = v;
143
Acknowledgements
I would like to thank my supervisors Paul Tod and Irene Moroz, Roger Penrose for the idea behind my thesis, Nick Trefethen for his help in use of spectral methods and his excellent lectures, Ian Sobey for his helpful comments and Lionel Mason for getting me started with this project. I would also like to thank the EPSRC for their ﬁnancial support. Last but not least I would like thank my mother, father and sister for their support.
Abstract
The Schr¨dingerNewton (SN) equations were proposed by Penrose [18] as a model for o gravitational collapse of the wavefunction. The potential in the Schr¨dinger equation is o the gravity due to the density of ψ2 , where ψ is the wavefunction. As with normal Quantum Mechanics the probability, momentum and angular momentum are conserved. found numerically by Moroz et al [15] and Jones et al [3]. The ground state which has the lowest energy has no zeros. The higher states are such that the (n + 1)th state has n zeros. We consider the linear stability problem for the stationary states, which we numerically solve using spectral methods. The ground state is linearly stable since it has only imaginary eigenvalues. The higher states are linearly unstable having imaginary eigenvalues except for n quadruples of complex eigenvalues for the (n + 1)th state, where a quadruple consists ¯ ¯ of {λ, λ, −λ, −λ}. Next we consider the nonlinear evolution, using a method involving an iteration to calculate the potential at the next time step and CrankNicolson to evolve the Schr¨dinger equation. To absorb scatter we use a sponge factor which reduces the reﬂection o back from the outer boundary condition and we show that the numerical evolution converges for diﬀerent mesh sizes and time steps. Evolution of the ground state shows it is stable and added perturbations oscillate at frequencies determined by the linear perturbation theory. The higher states are shown to be unstable, emitting scatter and leaving a rescaled ground state. The rate at which they decay is controlled by the complex eigenvalues of the linear perturbation. Next we consider adding another dimension in two diﬀerent ways: by considering the axisymmetric case and the 2D equations. The stationary solutions are found. We modify the evolution method and ﬁnd that the higher states are unstable. In 2D case we consider rigidly rotationing solutions and show they exist and are unstable. We ﬁrst consider the spherically symmetric case, here the stationary solutions have been
. .3. . . Variational formulation . . . . . . . . . . . . . . . . . . . . Separating the O( ) equations . . . . . . . .5 Linearising the SN equations . . . . . . . .1 2. . . . . . .4 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Computing the sphericallysymmetric stationary states . . 3 Linear stability of the sphericallysymmetric solutions 3. . Lagrangian form . . Alternative method . . . .2. . . . . . . .1 1. . Conserved quantities . . . . . . . . . . . . . . .3 1. . . . . . . . . . . .4 1. . . . . . . .2 2. . . . . . . . Lie point symmetries . . . . . . . . . . . . . . . . . . . . . . . Existence and uniqueness .3. . . . . . . . . . . . . . . . . .4 1. .3 1. . . . . . . . . . . . . . . . . . . . Conclusion . . . .1 1. . . . . . . . . . Rescaling . . . . . . . . . Dispersion . . . .1 2. . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . An inequality on Re(λ) . . . . . . . Negativity of the energyeigenvalue . .6 1. . . . . . . . . . . . . . . . . 1. . . 1 1 4 4 4 4 5 7 7 8 8 8 9 10 10 12 12 13 13 14 15 18 18 20 22 24 25 Analytic results about the timeindependent case . . . . . . . . . . . . . . . . . . . . . Plan of thesis . . . .3 3. . . .3 1. . . . . . . . . . . . . . . . . . . Boundary Conditions . . . . . .3 RungeKutta integration . . . . . . . . . . . . . . .1 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . .5 Existence and uniqueness . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . .2 1. . . . . .2. . . . . i . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . 2. . . .3. . . . . 2 Sphericallysymmetric stationary solutions 2. . . . . . . . . . .2 3.2 The equations motivated by quantum statereduction . . . . . . . . . Some analytical properties of the SN equations . . . . . . Restriction on the possible eigenvalues . . . . . . . . . . . . . . . . . . . . . .2 The equations . . . . . . Approximations to the energy of the bound states . . . . . . . .5 1. . .2. . . . .2 1. . . . . . . . . . . . .2. . . . . . . .Contents 1 Introduction: the Schr¨dingerNewton equations o 1. . . . . .
. . Evolution of the third state . .9 The problem . . . . . . . . . .4 5.3. . . . . .6 5. . . . Solution of the Schr¨dinger equation with zero potential . . . . . . . . . . . . . . . . . . . ii .5 4. . . . . . . . . Evolution of the shell while changing σ . . . .7 The method . . . . . . . . . . . . . Conditions on the time and space steps . . . . . . . . . . .4. . . . . . Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 5. . . . . Evolution of the shell while changing a . . . .7 5. . . . . . . . . . . . . . . . . . . . Perturbation about the second state . Evolution of the ground state . . . . . .5 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 4. . . . . . . . . . . . . . . . .10 Large time behaviour of solutions 6 Results from the numerical evolution 6.1 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Boundary conditions and sponges . . . . . . . . . . . . . . .2 6. . . . . . . . . . . . . . o Schr¨dinger equation with a trial ﬁxed potential . . . . . . . . . . . . . . . . . . . . . .1 6. . . . . . . Evolution of the higher states . . . . . . . .3. . . . . . . . . . . . . . . . . . . . 26 26 28 33 36 39 39 41 46 46 47 48 49 50 51 52 53 54 54 56 56 57 62 62 67 69 72 74 77 79 5 Numerical methods for the evolution 5. . . . .2 6. Perturbation about the higher order states . . . Conclusion . . Evolution of the shell while changing v .5 Evolution of the second state . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . .4 4. . Numerical methods for the Schr¨dinger equation with arbitrary time indeo pendent potential . . . . . o Numerical evolution of the SN equations . . . . . . . . . . . . . . . . .1 6. . . . . . . . . 5. . . . .1 6. . . . . . . . . . . . . . . .2 5. . . . . . . . . . . . . . . . Bound on real part of the eigenvalues . . Checks on the evolution of SN equations . . . . . . . . . . . . . . . . . . The perturbation about the ground state . . . . . . . . . . . . . . . . . . . . . . .4 Numerical solution of the perturbation equations 4. . . . . . . . .3 5. . . . . . . . . . . . . . . . . . . . . . . . .3 6. . . . . . . 6. . . Testing the numerical method by using RungeKutta integration .4 6. . . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . .3 4. . . . . .6 4. .3 Testing the sponges . . . . . . . . . Evolution of an arbitrary spherically symmetric Gaussian shell . . . . . . . . . . . . . . .8 5. . Mesh dependence of the methods . . . . . . . . . . . . . . . . . . . . . . .2 6. . . . . . . . . . .
5 Chapter 8 Programs . . . . Conclusion . . . . . . . . . . . . . . . . . . . . . . . . .1 Programs for solving the axisymmetric stationary state problem . Axisymmetric stationary solutions . . . .3 7. . . . . . . . . . . . . . B. . . . . . . . B. . . . . . . .1 Evolution program . . .4 7.2. . . . . . . .7 The axisymmetric SN equations 7. . . . . . B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B. . . . . . . . . . . . .2.4. . . . . . . . . . . . . . . . . . B. . . . . . . . . . . . . . . .1 Program for asymptotically extending the data of the bound state calculated by RungeKutta integration . . . B. . . . . . . . .2 Evolution programs for the axisymmetric case . . . . B. . . . . 108 116 116 116 116 119 119 121 122 123 123 123 126 126 126 133 138 138 8 The TwoDimensional SN equations 8. . . . . . . . . . The axisymmetric equations .4 8. . . . . . . . . . . . . . . . .3 Program for the calculation of (4. . B. . . . B. . . . . . . A Fortran programs A. . . . . . . . . The equations . . . . .2 7. . .2 Programs to calculate stationary state by Jones et al method . . . . . . . . . . . .1 8. . . . . . . .1 7.3 Chapter 6 Programs . . . . Sponge factors on a square grid . . . . . . . . . . . . . . .2 Chapter 4 Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Program for solving the O( ) perturbation problem . . B. .4 Program for performing the RungeKutta integration on the O( ) perturbation problem . . . . . . . o B. . . Evolution of dipolelike state . . . . . . . . . . . . . . . B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . .3. . . . . .3. .1. . . . . . . . . . . . . . . . . . B. Timedependent solutions . .2 8. . . . . B. . . iii . . . . . . . . . . . . . . . . . .4. . . . Finding axisymmetric stationary solutions . . . . .2. . . . . . . . .2. . .5 The problem . . . . . . . . . . . . . . . . .1 Chapter 2 Programs . . . . . .5 8. . . 81 81 81 82 83 84 97 97 97 98 99 102 103 108 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1. .4 Chapter 7 Programs . . . . B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Fourier transformation program . . . . . . .6 The problem . . .1 Program to calculate the bound states for the SN equations B Matlab programs B. . . . . .10) . . . . . .3 8.1 Program for solving the Schr¨dinger equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Evolution programs for the two dimensional SN equations . Spinning Solution of the twodimensional equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Bibliography 143 iv .
. . 4. . . . .11 The second eigenvector of the perturbation about the second bound state . . . . The lowest eigenvalues of the perturbation about the second bound state . . .16 The third eigenvector of the perturbation about the third bound state .6 4. 4.List of Figures 2. 4. . . Note the scales . All the computed eigenvalues of the perturbation about the ground state . . . .13 The eigenvalues of the perturbation about the third bound state . . . . . . . . . . .4 4. . . . . . . . . . .1 2. . . . . . . . . . . . . Loglog plot of the energy values with log(n) . . . . . . . . . . . The least squares ﬁt of the energy eigenvalues . . compare with ﬁgure 4. . . . Note 4. . . . . . . . 4. . . . . . . . . . . .4 4. . . . . . v . . . . . . . . . . . The third eigenvector of the perturbation about the ground state . . .7 4. The second eigenvector of the perturbation about the ground state . . . . . .3 4.12 The third eigenvector of the perturbation about the second bound state . . . . . . . . . . . . . . . .1 4. 30 31 31 32 32 34 17 17 29 29 15 16 All the computed eigenvalues of the perturbation about the second bound state 34 the scales . . . . 4.10 The ﬁrst eigenvector of the perturbation about the second bound state. .8 4. using RungeKutta integration. . . . The number of nodes with the Moroz et al [15] prediction of the number of nodes .4 . . . 4. . . . . .15 The third eigenvector of the perturbation about the third bound state .3 2. .2 2. . . .3 .17 The eigenvalues of the perturbation about the fourth bound state . . . . 4. . . compare with ﬁgure 4. . .19 The second eigenvector of the perturbation about the ground state. . . The ﬁrst eigenvector of the perturbation about the ground state. . . . . . . . . . . .18 The ﬁrst eigenvector of the perturbation about the ground state. . . The smallest eigenvalues of the perturbation about the ground state . . . . . .5 4. . . . . . . . . . . . . . . . . . . . . .14 The second eigenvector of the perturbation about the third bound state . . . . . . . .2 4. . . . . . . . . . The change in the sample eigenvalue with increasing values of N (L = 150) The change in the sample eigenvalue with increasing values of L (N = 60) . . . . . . . using RungeKutta integration. 35 35 36 37 37 38 38 40 41 42 4. . . . . . . . . 4. . .9 First four eigenfunctions . . . . . . . . . . . . . . . . . . . .
. . . . . . . The oscillation about the ground state . . . 6. . . . . . . . . . 6. . . . . .19 f (t) − A with respect to the third state . . . . 6. . . . . . . . . .15 The short time evolution of the second bound state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . using RungeKutta integration. . . . . . 4. . . . . . . . .24 Richardson fraction done with diﬀerent hi ’s . . . . . . .6 6. . . . . . . . . 4. . . . . . . .15 . . . . . . . . . . . . . .25 Diﬀerence in N value . . . using RungeKutta integration.23 The real part of third eigenvector of the perturbation about the third bound state.24 The imaginary part of third eigenvector of the perturbation about the third bound state. .20 Fourier transform of growing mode about third state . . .7 6. . . . . . . . . . . . The oscillation about the ground state at a given point . . . . . .16 . 6. 6. . . . . . . . . . .18 Evolution of third state .17 Fourier transform of the oscillation about the second state . . . . . Evolution of the ground state .13 Fourier transformation of f (t) − A . . . . . . . . 6. . . . . . . . . .21 Probability remaining in the evolution of the third state . . . .21 The second eigenvector of the perturbation about the second bound state. . . .8 6. . . .5 6. . . . . . .14 Decay of the second state . . .1 6. . . . . . . 6. . . . . . . 6. . . . . .4 6. . . . . .14 . . . . . . . . . . . . . . . . . . . . 6.12 f (t) − A . . . . . . . . . 4. . . . . . . . using RungeKutta integration. . . . . .23 Diﬀerence of the other time steps compared with dt = 1 . . . . . . . . . . . The graph of the phase angle of the ground state as it evolves . . . . 6. . . . . . . . . . . . . 6. . . . . .22 Progress of evolution with diﬀerent velocities and times . .22 The second eigenvector of the perturbation about the third bound state. . .9 Testing the sponge factor with wave moving oﬀ the grid .16 Oscillation about second state at a ﬁxed radius . . . . . . . . . . 6. 4. vi . . . .2 6. . . . . .3 6. . The Fourier transform of the oscillation about the ground state . . . . 6. . . .20 The ﬁrst eigenvector of the perturbation about the second bound state. . . . Testing sponge factor with wave moving oﬀ the the grid . . . . . . . .4. .10 The long time evolution of the second state in Chebyshev methods . compare with ﬁgure 4. . . . . . . 44 56 57 58 59 59 61 61 63 63 65 65 66 66 67 68 68 70 70 71 71 72 73 73 74 44 43 43 42 The eigenvalue associated with the linear perturbation about the ground state 60 6. compare ﬁgure 4. . . . . . . . . . . . .11 . . . . . . . . . . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . compare with ﬁgure 4. The graph of the phase angle of the second state as it evolves . . . . . 6. . . . . . . . . . . . . . . . 6. . . compare with ﬁgure 4. . . . . . . compare with ﬁgure 4. . . . . . . . . . . .10 .11 Evolution of second state tolerance E9 approximately . . . . . . . using RungeKutta integration. . . . using RungeKutta integration.
. . . . .7 7.1178 . . . . . . . . . . . . . 6. . .5. . .axp6 . . . . .14 Contour plot of axp7 . . . .2 8.2 7. . . . . . . . . . . . . N = 30 and dt = 2 . . . . . . .25. . . . . . . . . . . . . .26 Evolution of lump at diﬀerent a . . . Not quite the 3rd spherically symmetric state E = −0. . . Evolution of a stationary state . .13 Axially symmetric state with E = −0. . . . . . . .125. . . . . . . . . . . . . . . . . 6. . . . . 75 75 76 76 77 78 78 79 85 85 86 86 87 87 88 88 89 89 90 90 91 91 92 92 94 95 95 96 99 100 100 101 101 102 7. . . . . . . . . . 6. .0208 and J = 3. . axp1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . axp2 .30 Evolution of lump at diﬀerent σ . . . .17 Evolution of the dipole . . . . . Second spherically symmetric state. . . . . . . . . . .0263. .28 Richardson fraction with diﬀerent hi ’s . .8 7. . . . . .19 Average Richardson fraction with dt = 0. . . . . . . axp1 . . . . . .5 . 6. . . . . . .1 7. . . .33 Diﬀerence in N value . . . .16 Contour plot of the state with E = −0. . . . . .0162. . . . . . . . . . .4 8. . . . . . . . . . . . . . . . . . . . . . . . . . .25. . . . . Stationary state for the 2dim SN equations . . “Dipole” state next state after ground state . . .9 Ground state. . . .27 Comparing diﬀerences with diﬀerent time steps . . . . axp3 . . . . . .3 7. . . . . . . . . 0. . . . . . . . . . . . . . . . 7.double dipole . . . . . . . Contour plot of the not quite 2nd spherically symmetric state. . . . . . . . . . . 6. . . . .15 State with E = −0. 6. . . . . . . . . . . . . .11 3rd spherically symmetric sate . . . . . . . 7. 0. . . axp2 . . Lump moving oﬀ the grid with v = 2. . . . . . . . axp3 . . . . . . . . . . . . . Contour Plot of the dipole. . . .1 8. . . . .10 Contour plot of Not quite the 3rd spherically symmetric state E = −0. . . . N = 30 and dt = 2 . 1 . . . . . . . . . . . . . . . . .29 Diﬀerence in N value . . . . . . . . . . . . . . . . . . . vii . . .4 7. . . .axp6 .18 Average Richardson fraction with dt = 0. . . . . . .0208 and J = 3. . .6 Sponge factor . . . . . . . . . . . 7. . Contour plot of the ground state.axp8 . . . axp5 . . . .0162. . . 0. . Not quite the 2nd spherically symmetric state . . . . . . . . . . . . . . . .5 8. . . . 7. . . 8. . . Contour Plot of the second spherically symmetric state. . . . . . Lump moving oﬀ the grid with v = 2. . . . . . . . . . . . . . . . 7. . . . . axp5 . . . . . . . . . . . . . . . . . . . . axp4 . . . . . . . . . . .3 8.5 7. . . . . .31 Comparing diﬀerences with diﬀerent time steps . .20 Evolution of the dipole with diﬀerent mesh size . . . . .32 Richardson fractions . . . . . . . . . . . . . . 7. . . . . . . . . . 7.axp4 . . . . . . . . . . . . . . .6 7. . . .12 Contour plot of 3rd spherically symmetric state . . . . 7.1178 . . . . . . . . . . . . N = 30 and dt = 2 . . . .axp8 . . . . 7. . . . . . . . .6. . . axp7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lump moving oﬀ the grid with v = 2. . . . . . . . . . . . . . . . 6. 7. 7. . . . . . . . . . . . . . . . . . . .
. . . . . .7 8. .8 8. . . . . . . . . . . . . . . . . . . . . . . . viii . . . . . . . . . . . Evolution of spinning solution . Imaginary part of spinning solution . . . . . . . .9 Real part of spinning solution . . . . . . . . . . . . .10 Evolution of spinning solution . . . . . . . . . . . . . . 104 104 105 105 8. . . . . . . . .8. . . . . . . . . . . . . . . . . .
This argument is motivated by a conﬂict between the basic 1 . One idea for a theory of quantum state reduction was put forward by Penrose [17]. To get the timeindependent form of the SN equations (1. The boundary conditions for the equations are the same as those for the usual Schr¨dinger o equation and the Poisson equation. The idea was that a superposition of two or more quantum states. ˜ 2m ∂t 2 Φ = 4πGmΨ2 . y. (1. (1.Chapter 1 Introduction: the Schr¨dingerNewton equations o 1.e we require the wave function to be smooth for all x ∈ R3 and normalisable. z)e−i Φ(x. 2 (1.2b) − 2 2m ˜ Ψ + mΦΨ = EΨ. z).1 The equations motivated by quantum statereduction The Schr¨dingerNewton equations (abbreviated as SN equations) are : o i ∂Ψ − 2 2 = Ψ + mΦΨ. G is the gravitational constant and t is time. m is the mass of the particle.3a) (1. z.2a) (1. Ψ is the wave function.1b) where is Planck’s constant. y. 2 ˜ E˜ t . y. [18]. ought to be unstable and reduce to one of the states within a ﬁnite time.1a) (1. we also require the potential function to be smooth and zero at large distances.3b) Φ = 4πGmΨ2 . t) = Ψ(x. which have a signiﬁcant amount of mass displacement between the states. t) = Φ(x. z.1) we consider the substitution Ψ(x. Φ is the potential. y. which gives the timeindependent SN equations. i.
Penrose [18] states that “the phenomenon of quantum state reduction is a gravitational phenomenon. This preferred basis would be the stationary quantum states. t. and that essential changes are needed in the framework of quantum mechanics in order that its principles can be adequately married with the principles of Einstein’s general relativity. They have also been considered by Bernstein. where TG = EG and EG is the gravitational selfenergy of the diﬀerence bewteen the mass distributions of the two superposed states. Our object in this thesis is to study the SN equations in the time dependent and time independent cases. φ = βΦ. r) where: ˜ We can consider the nondimensionalized SN equations. That is. ˜ t = γ t. This would be called a preferred basis. where ψ = ψ1 − ψ2 the diﬀerence of the two superposed states 2 R3 mass distributions. Moroz and Tod [14] prove some analytic properties of the SN equations. R) → ψ = αΨ.” According to Penrose. These equations have also been considered by Moroz. This idea requires that there be a special set of quantum states which collapse no further. Ruﬃni and Bonazzola [21] considered the problem of ﬁnding stationary bosonstars in the nonrelativistic spherically symmetric case which corresponds to ﬁnding stationary spherically symmetric solutions of the SN equations. superpositions of these states ought to decay within a certain characteristic average time of TG .4) 2 . Bernstein and Jones [4] have started considering a method for the dynamical evolution.principles of quantum mechanics and those of general relativity. (ψ. Φ. the 1 selfenergy is  ψ2 dV . via a transformation (Ψ. There the SN equations are the nonrelativistic limit of the governing KleinGordon equations. [The SN equations are a ﬁrst approximation we can consider the case where gravity is taken to be Newtonian and spacetime is nonrelativistic. φ. Giladi and Jones [3] who developed a better way than a shooting method for calculating the stationary solutions in the case of spherical symmetry. t. Penrose and Tod [15] where they computed stationary solutions in the case of spherical symmetry. (1. r = δR.] These equations were ﬁrst considered by Ruﬃni and Bonazzola [21] in connection with the theory of selfgravitating bosonstars. Bosonstars consist of a large collection of bosons under the gravitational force of their own combined mass.
7) which becomes. The (1.9) and δ= For γ we have γ= 32π 2 G2 m5 3 8πGm3 2 . δ2β 2m 2 4πGm2 = . β δ2 from this we deduce that becomes. (1. The Schr¨dinger equation o β 4πGmδ iα α Ψt = − 2 ˜ γ δ 2 Ψ + αβΨΦ.5b) then becomes.11) Now from (1.10) . (1. (1. ψ2 d3 X = 1 if α2 = δ −3 .5a) (1. (1. [15].5b) Normalisation is preserved.13a) (1. (1. δ 2m (1.12b) 2G2 m5 ˜ where E = E . which was the form of the equations consider in [14].5) the nondimensionalized time independent SN equations are Eψ = − 2 2 ψ + φψ. 3 . (1. We can eliminate the E term from (1. Ψ2 d3 x = 1 and 2 (1.12a) (1.6) Φ = α2 Ψ2 . 1 α2 δ 2 = 4πGm.14a) (1. E − φ = V. im m Ψ˜ = − 2 γβ t δ β so we then deduce that m = γβ and 2 Ψ + mΨΦ. ∂t 2 φ = ψ2 .12) to 2 2 φ = ψ2 . thus β = . which reduces (1. (1.12) by letting 2 ψ = S. i.such that the SN equation become : i ∂ψ = − 2 ψ + φψ.14b) V = −S2 .e gravitation equation (1.8) 2 m = so.13b) S = −SV.
ˆ φ = λ2 φ(λx. φ. t) global in time with ψ(x. (1. λ2 φ. λ2 t). strong ψ2 = 1. solution ψ(x.2. [ ψ· 1 ¯ ψ− 8π ψ(x)2 ψ(y)2 3 i ¯ ¯ d y + (ψ ψt − ψψt )]d3 xdt. x − y 2 = ψ2 . (1.1 Some analytical properties of the SN equations Existence and uniqueness Existence and uniqueness for solutions is established by the following simpliﬁed version of the theorem of Illner et al [13].19) where it is understood that φ is the solution of the Poisson equation with the appropriate Green’s function and consider the Lagrangian. the system (1.2. 2 2 2φ (1.20) See Christian [5] and Diosi [7] for details.3 Lagrangian form Note that (1. (1. Alternatively. λ−1 x.2. that is: Enew = λE. Given χ(x) ∈ H 2 ((R)3 ) with L2 norm equal to 1.17) (1. 0) = χ(x) and regularity properties for ψ and φ. λ2 t).5) can be obtained from the Lagrangian: [ ψ· i ¯ 1 ¯ ¯ ψ + φψ2 + (ψ ψt − ψψt )]d3 xdt. 4 . λ−2 t) is also a solution where λ is any constant. and consider ˆ ψ = λ2 ψ(λx.18) 1.2 1. one may solve (1. Suppose that ψ2 dV = λ−1 .15) V where λ is a real constant. t) is a solution of the SN equations then (λ2 ψ.1.16a) (1.2 Rescaling Note that if (ψ.5) has a unique.16b) V We also note that the rescaling of the solutions will also cause the rescaling of the energy eigenvalues (as well as the action). x. ˆ ˆ Then ψ and φ satisfy the SN equations with ˆ ˆ ψ2 dV = 1. Illner et al [13] give 1.
the centreofmass follows a straight line.23) that is. Deﬁne ρ = ψ2 .26) (1. ˙ Ji = −Sij.27) ρxi d3 x. Deﬁne the centre of mass < xi > by < xi >= then ˙ < xi > = P i . Sij where ψi = ∂φ ∂ψ and φi = then ∂xi ∂xi ¯ ¯ ¯ ¯ = −ψ ψij − ψψij + ψi ψj + ψi ψj + 2φi φj − δij φk φk .29) 5 .1.22b) (1. (1.j .28) and then.25) Ji d3 x.24) so that. ˙ (1. (1. the total probability is conserved. (1. all of which are to be expected from linear quantum mechanics. (1.4 Conserved quantities The system (1. (1. where the dot denotes diﬀerentiation with respect to time.5) admits several conserved quantities.21a) (1. The total angular momentum is Li = ijk xj Jk d 3 x.i . as expected.21c) ¯ ¯ Ji = −i(ψψi − ψ ψi ). ρ = −Ji.22a) (1.2. it follows that ˙ Li = 0.21b) (1. Therefore P = ρd3 x = constant. (1. Next deﬁne the total momentum Pi by Pi = Then ˙ Pi = 0. by the symmetry of Sij .
(1... We may deﬁne a sequence of moments qi.... φψ2 d3 x. xj Jk) d3 x. It is closely related to the action for the timeindependent equations..jk ..jkm by pi..32) ˙ φρd3 x.jkm = Then it follows from (1.jk and si.j ˙ (n) (n) (n) (n) x(i .j by qi..36a) (1. (n−1) (n−1) (1.36b) x(i .jk = si. n (1...22b) that qi.37b) pi. The total energy E = T + V is consequently not conserved.. 2 (1. n = npi.. .. xj d3 x....35) and other tensors pi. (1.5b) and it follows from this that the averaged ‘selfforce’ is zero. n (1. . . The identity 1 ρφi = (φi φj − δij φk φk )i 2 (1.e that < Fi >= φi ρd3 x = 0. .33) is a consequence of (1. .jkm . ˙ (n) (n) In particular this means that pi..37a) (1....31)  ψ2 d3 x. (1.jk and si.We deﬁne the kinetic energy T and potential energy V in the obvious way by T = V = Note from (1.30b) is conserved... i. .jk = nsi.5b) that ˙ φρd3 x = It follows from this that the quantity 1 E = T + V.jkm are always zero for steady solutions. We shall sometimes call this the conserved energy or the action. 6 .34) which may be thought of as Newton’s Third Law. xj Skm) d3 x.j = (n) (n) (n) (n) ρ xi ..30a) (1.
Thus d2 < x2 > > 8E. The analysis leading to the claim that there are no more Lie pont symmetries is straightforward but has not been published. ˆ (1. 1.5). t) + x.1. we ﬁnd ¨ qii = 2( so that (2) (2) x2 ρd3 x. 2 x → x = x + P (t). These consist of rotations and translations together. as a consequence of the maximum principle (see e.2.42) The dispersion grows at least quadratically with time if E is positive. (1. By a Galilean transformation. dt2 (1.5) was noted by Christian [5]. (1.41) Now recall that.38b) (1. [8]) φ is everywhere negative.6 Dispersion One of the moments is particularly signiﬁcant. namely the dispersion: < x2 >= qii = Following Arriola and Soler [2].g.39) ˙ xi Ji d3 x) = 2 Sii d3 x = (8 ψ2 + 2φψ2 )d3 x. with a generalised Galilean transformation which can be expressed as follows: −i ˙ i ˆ ψ → ψ = ψ(x + P . t) exp[ P . and then by a translation we can place the centreofmass at the origin. 7 . dt2 (1.x + 2 4 1 ¨ ˆ φ → φ = φ(x + P . but we cannot conclude this if E is negative. (1. we can reduce the total momentum to zero.5 Lie point symmetries Following the general method of Stephani [24] we can ﬁnd the Lie point symmetries of (1. These clearly are all Lie point symmetries.38c) ¨ which is a Galilean transformation if and only if P = 0. The Galilean invariance of (1.2.40) d2 < x2 > = 4E + 4T = 8E − 2V.P .38a) ˙ P 2 dt].
Note that.1 Analytic results about the timeindependent case Existence and uniqueness In Moroz and Tod [14] it was shown that the system (1.12) can be obtained from a variational problem. I= 1 1 [  ψ2 − 2 16π ψ(x)2 ψ(y)2 3 1 d y − Eψ2 ]d3 x.3 1. for each integer n there is a unique (up to sign) real sphericallysymmetric solution with n zeroes.43) is δI = 1 ¯ δ ψ(− 2 2 ψ + φψ − Eψ) + c.48) One expects that the direct method in the Calculus of Variations should now prove that the inﬁmum of I is attained.2 Variational formulation The system (1.12b) with the relevant Green’s function and considering. (1. is δ2I = 1 [ δψ2 + φδψ2 −  δψ2 − Eδψ2 ]d3 x. all with negative energyeigenvalue and ψ real.12) is elliptic) and that the minimising function will be the ground state found numerically by Moroz et al [15] and proved to exist by Moroz and Tod [14].1. while the second variation.46) from which we obtain (1.43) φ = ψ2 .) These authors did not show. subject to (1.44) or by solving (1. 2 4 2 (1.12a) as the expected EulerLagrange equations. 1.12) has inﬁnitely many sphericallysymmetric solutions. but it is believed that. the second variation (1. at the ground state.45) If we vary ψ → ψ + δψ. by seeking stationary points of the action 1 I = (E − E) = 2 subject to 2 1 1 1 [  ψ2 + φψ2 − Eψ2 ]d3 x.c]d3 x. φ → φ + δφ then the ﬁrst variation of (1. 8 .47) By exploiting various standard inequalities (Tod [25]) would show that the action is bounded below: E≥− 1 . 54π 4 (1. x − y 2 (1. (1.3.3.12b). and the energyeigenvalues increase monotonically in n to zero.47) cannot be negative. (It is easy to see that ψ may always be assumed real in stationary solutions. 2 (1. that the minimising function is analytic (since the system (1.
Following Tod [25].54a) (1.49) (1.1. − [E + 2 27π 4 3 3π −E0 1 T 2 on the right hand side.52) (1. Tod [25] shows that 1 4 0 ≤ (−V ) ≤ √ T 2. Since 3 (1. 2 Then as a consequence of (1. so that the kinetic and potential energies are separately bounded in terms of the action or conserved energy at all times. 3 as T > 0 by deﬁnition it follows that E < 0.12) it follows that Tij. we deﬁne the tensor 1 ¯ ¯ ¯ Tij = ψi ψj + ψi ψj + φi φj − δij (ψk ψk + φk φk + φψ2 − Eψ2 ).54c) (1.3. 2 (1. (1. 3 4 V = E.56a) (1. with 4 1 E = T + V we may solve for T to ﬁnd the bounds 2 1 1 1 1 √ ]2 . 2 With E = T + V this shows that for a steady solution 1 T = − E. We now prove that for any stationary state the E energyeigenvalue is negative. whence 0= so that 0= and 5 3E = T + V.54b) (1. T2 ≤ + [E + 2 27π 4 3 3π 1 1 1 √ ≥ ]2 .j d3 x = Tii d3 x. 2 3 3π (1.53) (1.51) 5 [− ψ2 − φψ2 + 3Eψ2 ]d3 x.j = 0.56b) These results still hold at each instant for the timedependent case.55) Arriola and Soler [2] have a stronger result.3 Negativity of the energyeigenvalue We write E0 for the energyeigenvalue of the ground state. 9 .50) (xi Tij ). 3 1 E = E.
2) or equivalently the translationally symmetric case. 4. In chapter 4 we solve the eigenvalue problem using spectral methods for the ﬁrst few stationary solutions and check the results using RungeKutta integration as well as conﬁrming the conditions on the eigenvalues. (See . 1.4) • The ground state is stable under the full (nonlinear) evolution. (See section 6. We consider the axisymmetric SN equations in chapter 7 and look at the evolution as well as the timeindependent equations. called a sponge factor to the Schr¨dinger equation to absorb scatter waves.4 Plan of thesis In this thesis we start with a review of the methods which can be used to compute the stationary solutions in the case of spherical symmetry (chapter 2).1.5 Conclusion The results obtained from considering the spherically symmetric case are: • The ground state is linearly stable. In chapter 8 we consider the 2dimensional SN equations (8. has n quadruples of complex eigenvalues as well as pure imaginary pairs.5) • Perturbations about the ground state oscillate with the frequencies obtained by the linear perturbation theory. Also in chapter 5 we consider adding a small heat term. The evolution is considered as well as the concept of ﬁnding two spinning lumps orbiting each other. (See section 6.3.24) to solve.2) • The linear perturbation about the nth excited state. (See section 6. In chapter 5 we consider the problem of ﬁnding a numerical method to evolve the timedependent SN equations and we consider the boundary conditions to put on the numerical problem.(See section 4. In chapter 6 we evolve o numerically the problem with diﬀerent initial conditions and check the convergence of the evolution method with diﬀerent mesh and time step sizes. We consider the evolution of the stationary states to see if they are stable and we also look at the stationary states with added perturbation and compare the frequencies of oscillation with the linear perturbation theory. while emitting some scatter oﬀ the grid. Also in chapter 3 we obtain restrictions on the eigenvalues analytically such that the eigenvalues are purely real or imaginary or an integral condition on the eigenvectors vanishes (see section 3.3) 10 state.5) • The higher states are unstable and will decay into a “multiple” of the ground state. obtaining an eigenvalue problem (3.4). which is to say the (n + 1)th sections 4. Then in chapter 3 we consider the case of the linear perturbation about the spherically symmetric stationary solutions.
(See section 6. (See spherically symmetric stationary solutions are. emitting scatter and leaving a “multiple” of the ground state. (See section 7. (See section 8.4) • There exist rotating solutions.4) pears to decay. (See section 8.3) • The evolution of diﬀerent exponential lumps indicates that any initial condition apsection 6.5) The results obtained from considering the 2dimensional case are: • Evolution of the higher states are unstable.4) The results obtained from considering the axially symmetric case are: • Stationary solutions that are axisymmetric exist and the ﬁrst one is like the dipole of the Hydrogen atom.• The decay time for higher states is controlled by the growing linear mode obtained in the linear perturbation theory.3) • Perturbations about higher states will oscillate for a while (until they decay) according to the linear oscillation obtained by the linear perturbation theory. that is they scatter and leave a “multiple” of the ground state.5) • Lumps of probability density attract each other and come together emitting scatter and leave a “multiple” of the ground state. (See section 6. that is it emits scatter oﬀ the grid 11 . (See section 8. (See section 7.4) • Evolution of the dipolelike solution shows that it is unstable in the same way as the leaving a “multiple” of the ground state and that lumps of probability density attract each other. but these are unstable.
12 . where V0 is the initial value of the potential V .3a) (2. V. r (2.Chapter 2 Sphericallysymmetric stationary solutions 2.3b) x2 S 2 dx. So (1.2b) C −kr e . λ2 V. (i. r) is a solution then so is (λ2 S. At large r bounded solutions to (2.e.2a) (2. = −rS 2 . We also B . (2. V0 = V (0)).1 The equations In the case of spherical symmetry and timeindependence we can assume without loss of generality ψ is real.1b) note that if (S.1a) (2.14) becomes (rS) (rV ) = −rSV. λ−1 r). (2. r xS 2 dx.1) decay like V =A− S= where A = V0 − B= ∞ 0 ∞ 0 We have the boundary conditions S → 0 as r → ∞ and S = 0 = V at r = 0.
I have modiﬁed the integration in such way that the program will integrate over small steps using a 4th order RungeKutta NAG routine. There are various methods for obtaining the correct initial values which correspond to stationary states. then plotting the solution so far. The problem with this routine is that as the function blows up the step size decreases so that the tolerance remains low. The numerical technique used by Moroz et al [15] to obtain the stationary states uses fourthorder RungeKutta integration on (2. which is the exponential blow up of solutions. From information on which side of the axis the solution becomes unbounded we can reﬁne the initial conditions to obtain the eigenfunction. −2Y2 = − Y 2 Y4 . S(0) be set equal to a chosen constant.1 Computing the sphericallysymmetric stationary states RungeKutta integration Now (2.4d) . = r (2. to avoid the problem which occurs with RungeKutta integrations. Using this method we are able to obtain the ﬁrst 50 values for S0 or equivalently the ﬁrst 50 energy levels.4). The initial values are picked so that the boundary conditions at r = 0 are satisﬁed and then other values are guessed. which involves integration until some value for r and looking at the value of S at that point. The method used by Moroz et al [15] was to integrate up to a ﬁxed value in the domain. In the case where V0 = 1 we say that 1.1) can be rewritten as a system of four ﬁrst order ODEs Y1 Y2 Y3 Y4 where Y1 = S and Y3 = V . This increases the computation time needed. To calculate the stationary states. Another is a shooting method. starting at r = 0 and integrating outwards towards inﬁnity. then modifying the initial values such that the S at that point is zero. I have used a modiﬁed shooting method.2. It will also terminate if the solution for S blows up exponentially. waiting for the routine to fail or blow up. 13 = Y2 .2 2.2. and reﬁne the guess based on which way the function blows up. −2Y4 − Y42 .4a) (2. The normalisation invariance allows for either V (0). The states are obtained by reﬁning the initial guess so that a solution which tends to zero when r is large is obtained. Also the routine will fail if the routine takes to long.088.5 is large since the ﬁrst state occurs around 1. After each step it will terminate if the solution is too large in absolute magnitude.4c) (2. r = Y4 .4b) (2. so the value for r cannot be too large.
For the stationary spherically symmetric case it was shown in Moroz et al [15] that the eigenvalue is 2G2 m5 A ˜ , E= 2 B2 (2.5)
where A and B are given by (2.3), and V0 is the initial value of the potential function V . 2A Using the above method we calculated A, B and which is the energy up to a factor B 2 m5 G and which compare with the ﬁrst 16 eigenvalues of Jones et al [3]. The ﬁrst of 2 20 eigenvalues calculated with the above routine are given in Table 2.1. We also plot in ﬁgure 2.1 the ﬁrst four spherically symmetric states normalised such that Note that the nth state has (n − 1) zeroes or “nodes”. Number of zeros 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Energy Eigenvalue 0.16276929132192 0.03079656067054 0.01252610801692 0.00674732963038 0.00420903256689 0.00287386420271 0.00208619042678 0.00158297244845 0.00124207860434 0.00100051995162 0.00082314193054 0.00068906850493 0.00058527053127 0.00050327487416 0.00043737620824 0.00038362194847 0.00033920111442 0.00030207158301 0.00027072080257 0.00024400868816 0.00022106369652 ψ2 d3 x = 4π.
Jones et al [3] Eigenvalues 0.163 0.0308 0.0125 0.00675 0.00421 0.00288 0.00209 0.00158 0.00124 0.00100 0.000823 0.000689 0.000585 0.000503 0.000437 0.000384
Table 2.1: The ﬁrst 20 eigenvalues
2.2.2
Alternative method
An alternative method, which we use later on in chapters 4 and 6 to compute the eigenfunction at the Chebyshev points is given below. Jones et al [3] used an iterative numerical scheme for computing the nnode stationary states, instead of using a shooting method. An outline of their method is as follows:
14
0.1 0.08 0.06
first state
20 15 10
x 10
−3
second state
ψ
0.04 0.02 0
ψ
5 0 −5 0
−3
10
20 r third state
30
40
0
20
−3
40
60 r fourth state
80
100
120
8 6 4
x 10
6 4 2 0 −2
x 10
ψ
2 0 −2
0
50
100
r
150
200
ψ
0
100
r
200
300
Figure 2.1: First four eigenfunctions 1. Set an outer radius R. 2. Supply an initial guess for un = rψn . 3. Solve for Φ in ∂ 2 Φ 2 ∂Φ + = r−2 u2 . n ∂r2 r ∂r
(2.6)
4. Solve for the nnode eigenvalue and eigenfunction of ∂ 2 un = 2un (Φ − ). ∂r2 (2.7)
5. Iterate the previous two steps until the eigenvalue converges suﬃciently, a typical criterion being that the change in
n
from one iteration to the next is less than 10−9 .
n
6. Iterate the previous ﬁve steps, increasing R until in R.
stops changing with the change
2.3
Approximations to the energy of the bound states
2A closely follow a leastsquares ﬁt of the B2 (2.8)
Jones et al [3] claim, that the energy eigenvalues formula: En = −
α , (n + β)γ
15
0 −0.5 −1 −1.5 −2 Energy −2.5 −3 −3.5 −4 −4.5 −5
x 10
−3
2A/B2
least squares fit
0
5
10
15
20
25 30 number of nodes
35
40
45
50
Figure 2.2: The least squares ﬁt of the energy eigenvalues where α = 0.096, β = 0.76 and γ = 2.00. This appears to be a very good ﬁt of the ﬁrst 50 eigenvalues. The eigenvalues and the ﬁt are plotted in ﬁgure 2.2. Moroz et al [15] claim V2 V2 that the the number of zeros is of the order of exp( 0 ). Figure 2.3 shows that exp( 0 ) is 2 2 S0 S0 an over estimate for the number of nodes, which appears to converge as n gets large. The loglog plot ﬁgure 2.4 shows that we have a gradient of 2 as n gets large. This is −α what we expect since En = is such a good ﬁt when γ = 2. (n + β)γ
16
5 3 3.60 50 40 exp(V0/S0) 2 2 30 20 10 0 0 10 20 30 number of nodes 40 50 60 Figure 2.4: Loglog plot of the energy values with log(n) 17 .5 4 4.5 Figure 2.3: The number of nodes with the Moroz et al [15] prediction of the number of nodes −1 −2 −3 −4 −5 log(−E ) n −6 −7 −8 −9 −10 −11 0 0.5 2 log n 2.5 1 1.
(3.1a) (3.Chapter 3 Linear stability of the sphericallysymmetric solutions 3. Finally equating the powers of 0: we obtain iψ0t + 2 ψ0 = ψ0 φ0 . t) + .5) gives: iψ0t + 2 ψ0 + [i ψ1t + 2 ψ1 ] + 2 2 [i ψ2t + 2 ψ2 ] = (3. t) + . . t) + where 2 2 ψ2 (r.2b) ¯ ¯ ¯ (ψ0 ψ2 + ψ1 ψ1 + ψ0 ψ2 ). .1b) φ2 (r.2a) ψ0 φ0 + (ψ0 φ1 + ψ1 φ0 ) + 2 2 ¯ ¯ ψ0 + (ψ0 ψ1 + ψ0 ψ1 ) + (ψ0 φ2 + ψ1 φ1 + ψ2 φ0 ). . φ0 = ψ0 2 . . (3. t) + φ1 (r. . Substitution of (3. 1.1 Linearising the SN equations In this chapter 3 we shall set up the linear stability problem.1) into the SN equations (1. t) + ψ1 (r.5) of the form : ψ = ψ0 (r.3a) (3. We look for a solution to (1.4b) 2 . 18 (3. t) + φ = φ0 (r. .3b) 2 1: iψ1t + 2 ψ1 = ψ0 φ1 + ψ1 φ0 . ready for numerical solution in chapter 4. 2 φ0 + 2 φ1 + 2 2 φ2 = (3. ¯ ¯ φ1 = ψ0 ψ1 + ψ0 ψ1 .4a) (3.
7a) (3.6a) (3. (3. we seek solutions of the form ψ1 = R1 (r.9a) (3.12a) (3. 2 (rV0 )rr = −rR0 . t)e−iEt .10a) (3. φ1 = φ1 . (3. ¯ ¯ ¯ φ2 = ψ0 ψ2 + ψ1 ψ1 + ψ0 ψ2 . ¯ (rφ0 )rr = rψ0 ψ0 .9) simpliﬁes to give i(rR1 )t + (rR1 )rr = R0 (rφ1 ) − V0 (rR1 ). For convenience we introduce P = rφ1 .7b) (3. iRt + Rrr = R0 P − V0 R. (3.5b) 2 We consider the case of spherical symmetry so that 1 1 2 f = 2 (r2 fr )r = (rf )rr .8b) Substituting into (3.8a) (3. The O( ) problem then becomes.11a) (3.4). φ0 = E − V0 (r). then (3.2: iψ2t + 2 ψ2 = ψ0 φ2 + ψ1 φ1 + ψ2 φ0 . (3.10b) (3. R = rR1 .3) becomes r r i(rψ0 )t + (rψ0 )rr = rψ0 φ0 . Note that P and R must vanish at the origin.6b) (3. where R1 is complex and φ1 is real so that (3. where R0 is real.9b) ¯ (rφ1 )rr = rR0 (r)e−iEt ψ1 + rR0 (r)eiEt ψ1 . To eliminate e−iEt . 19 .5a) (3. so that (rR0 )rr = −rR0 V0 . the O( ) problem we have i(rψ1 )t + (rψ1 )rr = r(E − V0 )ψ1 + rφ1 R0 (r)e−iEt .12b) ¯ ¯ ¶rr = R0 R + R0 R. Since we are interested in the stability of the stationary problem we take ψ0 = R0 (r)e−iEt . ¯ ¯ (rφ1 )rr = R0 (rR1 ) + R0 (rR1 ).11b) (3.
while the coeﬃcient of eλt gives ¯ ¯ ¯ ¯ ¯ ¯ ¯ ¯ iλ(A − B) + (Arr − Brr ) = R0 W − V0 (A − B). (3.12b) we obtain ¯ ¯ ¯ Wrr eλt + Wrr eλt = R0 ((A + B)eλt + (A − B)eλt ) + ¯ ¯ ¯ ¯ R0 ((A + B)eλt + (A − B)eλt ). Now since P is real we note that W1 = W2 . the coeﬃcient of eλt gives iλ(A + B) + (Arr + Brr ) = R0 W − V0 (A + B). P = W1 eλt + W2 eλt . ¯ Since R0 = R0 we have Wrr = 2R0 A. Substituting into (3.18) 20 . Substituting into (3. W1 and W2 are functions which are time independent.19) (3. As we are considering the spherical symmetric case they depend upon r ¯ ¯ only. B. eλt and noting that we can do this only if λ = λ so that λ ¯ ¯ is not real.3.15) (3. eλt gives. ¯ (3.17) ¯ ¯ ¯ ¯ (3. (3. and ¯ ¯ ¯ ¯ ¯ ¯ Wrr = R0 (A + B) + R0 (A − B).2 Separating the O( ) equations We look for a solution of the O( ) problem in the form ¯ ¯ ¯ R = (A + B)eλt + (A − B)eλt . ¯ Wrr = R0 (A − B) + R0 (A + B). ¯ ¯ ¯ ¯ ¯ i(λ(A + B)eλt + λ(A − B)eλt ) + (Arr + Brr )eλt + (Arr − Brr )eλt = ¯ ¯ ¯ ¯ ¯ R0 (W eλt + W eλt ) − V0 ((A + B)eλt + (A − B)eλt ).14) ¯ ¯ Equating the coeﬃcients of eλt . ¯ ¯ Wrr = 2R0 A. so we can let W = W1 = W2 .16) Again equating coeﬃcients of eλt . (3.12a) gives.20a) (3.13b) where we assume for now that λ is not real and A.13a) (3.20b) (3.
Therefore Brr + V0 B = −iλA.27) we obtain iλ(a + ib) + (ar r + ibrr ) = R0 W − V0 (a + ib).25a) (3.13b) reduces to.21b) To summarise. so that equating real and imaginary parts gives Wrr = 2R0 a. so we have. R0 W = iλ(A + B) + (Arr + Brr ) + V0 (A + B).29c) (3. arr + V0 a = R0 W − λb.28b) . (3.26b) If we let A = a + ib where a and b are real functions of r. If λ is real.24b) (3.22) (3.27b) (3. ¯ (R0 W ) = R0 W = −iλ(A − B) + (Arr − Brr ) + V0 (A − B). and substitute into (3.which is just one equation.12) gives. the O( ) problem leads to three coupled linear O. ¯ Wrr eλt = R0 Aeλt + R0 Aeλt . Wrr = 2R0 a. ¯ Wrr = R0 (A + A).23) (3. 21 (3.D. Brr + V0 B = −iλA.29a) (3. P = W eλt .24a) (3.24c) Substituting into the O( ) equations (3.E’s for the perturbation: Wrr = 2R0 A. Equating coeﬃcients of eλt we obtain iλA + Arr = R0 W − V0 A.27a) (3. brr + V0 b = −λa.29b) (3.21a) (3.25b) (3. Arr + V0 A = R0 W − iλB. we note that (3. (3.28a) (3. R = Aeλt . (3. and R0 W = iλB + Arr + V0 A. iλAeλt + Arr eλt = R0 W eλt − V0 Aeλt .26a) (3.
b are real functions of r. W ) then the Wrr = 2R0 A. We note that (3.28).32c) (3.3 Boundary Conditions 1 We now consider the boundary conditions for the O( ) problem (3. W ) becomes. Arr + V0 A = R0 W − iλB.We consider the real eigenvalues of Wrr = 2R0 A. Brr + V0 B = iλA.28). B. So if λ is an eigenvalue then so are λ.33b) (3. That is in the case of λ complex.33a) (3.24) covers both. eigenvalues exist in groups of four. W ) → (A. −B. ¯ ¯ and if λ is an eigenvalue then −λ is an eigenvalue.31b) (3.32a) (3. b = iR0 and W = 0.30) become Wrr = 2R0 a.24) transformed by (A. (3. Wrr = 2R0 A.31a) (3.32b) (3. Since φ1 = P and r 1 R1 = R and we require φ0 + φ1 to be a physically representable potential function and r 22 .−λ ¯ and −λ. We note that λ = 0 will be an eigenvalue of (3. which are the same as (3. while W is real. Arr + V0 A = R0 W + iλB. Then (3. (3. otherwise 3. ¯ Arr + V0 A = R0 W − iλB.12). (3.33c) they exist in pairs or the singleton λ = 0.30c) Suppose that the eigenvalues are such that A is real.30a) (3. Brr + V0 B = −iλA. arr + V0 a = R0 W − λb. We can therefore set A = a and B = ib where a. so (3. brr + V0 b = −λa.30b) (3. Also if of (A. B.31c) equations become ¯ ¯ ¯ so if λ is eigenvalue then −λ is a eigenvalue. B. ¯ Brr + V0 B = −iλA. with a = 0. but this corresponds to just a rotation in the phase factor. W ) → (A. This implies that B is imaginary.
40a) (3.40b) ¯ (RR)dr. (3.40b) implies that R2 → 0 as r → ∞. and the remaining two integrals become. (3. (3.e the following integral exists: ∞ 0 (R0 + R1 )(R0 + R1 )r2 dr. to the potential function. (3. Hence R → 0 as r → ∞ is a necessary but not a suﬃcient condition for the wavefunction to be normalisable. i. R1 → ﬁnite values as r → 0. ∞ 0 (3. so that the boundary condition that R → 0 as r → ∞ should be suﬃcient. We expect that the solution for which this is not the case will blow up exponentially.35) This becomes ∞ 0 ¯ ¯ ¯ (R0 R0 + (R1 R0 + R0 R1 ) + 2 ¯ R1 R1 )r2 dr. in terms of R.37) coeﬃcient of 1 ∞ 0 ¯ ¯ (R1 R0 + R0 R1 )r2 dr.39) The O(1) condition (3. This implies φ1 .37) requires the wavefunction of the stationary spherically symmetric problem to be normalisable.36) and upon equating coeﬃcients of coeﬃcient of 0 we require the following integrals to exist: ∞ 0 ¯ (R0 R0 )r2 dr. We can therefore choose φ0 → 0 and φ1 → 0 as r → ∞. so that P (0) = 0. corresponding to the freedom we have of where to set the zero on 23 .34) The condition that R0 + R1 be a physically representable wavefunction is that it be normalisable.R0 + R1 to be a physically representable wavefunction.38) coeﬃcient of 2 ¯ (R1 R1 )r2 dr. We also have an arbitrary scaling the energy scale. ¯ The integral (3. For the potential φ0 + φ1 we require function to be wellbehaved at r = 0. The integral (3. as R0 behaves like eEr as r → ∞.40a) exists provided R+R does not grow exponentially with r. (3. ∞ 0 (3. ∞ 0 ¯ (R + R)R0 rdr. R(0) = 0. φ1 and R1 must be wellbehaved functions of r at r = 0.
V0 B2 − Br 2 dr.e that λ2 is real .42b) ¯ ABdr = 0 ∞ ∞ ¯ ¯ Brr B + V0 B Bdr.and φ imply that. Combining the two results we have that: ∞ ¯ −iλ 0 ABdr . ¯ ¯ ¯ ¯ ¯ When R = (A + B)eλt + (A − B)eλt and P = W eλt + W eλt . (3. (3. the boundary conditions on R R(0) = 0 ⇒ P (0) = 0 ⇒ R(∞) = 0 ⇒ (rP )(∞) = 0 ⇒ A(0) = 0. (3.44) = 0 We note that the R. W (0) = 0. So to summarise the boundary condition on A. W (0) = 0. We also consider: −iλ ∞ 0 ¯ ABdr = 0 ∞ ∞ ¯ ¯ ¯ Arr AV0 AA − R0 W Adr.45) ¯ is real if iλ is real or 0 λ ¯ ABdr = 0 which implies that ¯ is real i.44) is real since V0 is real. 24 . A(∞) = 0.42a) (3.H. ∞ ¯ ¯ iλ 0 ABdr ∞ (3.41) 3.43) is real since V0 is real. A(∞) = 0. = −rR0 V0 . V0 ) satisfy (rR0 ) (rV0 ) We now consider −iλ ∞ 0 ¯ ABdr = 0.43) = 0 We note that the R.H.24) where (R0 . B(0) = 0. B(∞) = 0. The following 2 = −rR0 . B and W are A(0) = 0. B(0) = 0. B(∞) = 0.4 Restriction on the possible eigenvalues ∞ 0 We claim that λ2 is real for the perturbation equations unless proof is due to Tod [26]. 2 (3. 1 V0 A2 − Ar 2 − Wr 2 dr.S of (3. W (∞) = 0.S of (3. W (∞) = 0. Consider (3. Hence either λ2 λ 0 ∞ ¯ ABdr = 0.
48) and set λ = λR + iλI to ﬁnd from (3. as in Tod [25]. (3.47) .50) so ﬁnally λR  ≤ 4 √ −E.46) and then by the Holder and Sobolev inequalities. 25 .3.51) Note that if the normalisation is diﬀerent to one we need to rescale the E value. (3.49) E 3 )4 .2 to ﬁnd 4 R0 3 ≤ C1 (− 3 R0 3 ) 3 . we ﬁnd  with C1 = 23 3π 3 4 4 ¯ R0 B W  ≤ C1 ( A2 ) 2 ( 1 B2 ) 2 ( 1 R0 3 ) 3 . 3 (3. We choose a normalisation of the perturbation so that A2 = cos2 θ. 2 (3. B2 = sin2 θ. 2 (3. 9π 2 (3. First we obtain i ¯ ¯ ¯ [λAA + λB B]d3 x = − ¯ R0 B W d3 x.24) it is possible to prove an inequality for the real part of λ .46) λR + iλI cos 2θ ≤ 2C1 sin θ cos θ( Now use Sobolev and Section 1.5 An inequality on Re(λ) From the linearised perturbation system (3.
1 . .24) are linear whereas the SN equations are not. N + 1. = DN . denoted DN . . We use Chebyshev polynomial interpolation to get a diﬀerentiation matrix. 1 .1) (4. See [27] for more details about spectral methods. . . N. . N + 1 and N + 1 is the number of Chebyshev points. is deﬁned to be such that v1 w1 v2 w2 . where the vi ’s are values of a function at the Chebyshev points. vN wN vN + 1 wN + 1 26 . . Deﬁne wi ’s by wi = p (xi ) for i = 0. We N note that these points correspond to zeros of the Chebyshev polynomials. (4. We can therefore solve these equations using spectral methods.Chapter 4 Numerical solution of the perturbation equations 4.1 The method The O(1) perturbation equations (3. The diﬀerentiation matrix for polynomials of degree N. N. . N. We let p(x) be the unique polynomial of degree N or less such that p(xi ) = vi for i = 0. N + 1 .3) . dx dx (4. with θ = cos−1 (x). . where the Chebyshev polynomials are such that pn (x) = cos(nθ).2) When using Chebyshev polynomials we sample our data at Chebyshev points that is at iπ xi = cos( ) where i = 0. They also satisfy the diﬀerential equation 1 dpn (x) d ((1 − x2 ) 2 ) = n2 pn (x). by approximating the problem to that of solving a matrix eigenvalue problem. . 1 . .
.. . 0 .. ...N Di. 2 The second derivative matrix is just DN since the p (x) is the unique polynomial of degree N or less through wi ’s. . 1] we L rescale points by Xi = (1 + xi ). . 27 .B and W be zero at the boundary is applied by deleting the ﬁrst and last rows and columns of the diﬀerentiation matrix in question.. 6 2N 2 + 1 . . .. . . 0 . =− 6 xi =− 2(1 − x2 ) i ci (−1)i+j cj (xi − xj ) 1 ≤ i ≤ N − 1. all diﬀerentiation matrix below are on the interval [0. . We note that since the interval we are interested in is [0. L].7) .. W (Xn−1 ) R0 (X1 ) 0 0 R0 (X2 ) .L] = D[−1. (4.j = where ci = 2 for i = 0 or i = N and ci = 1 otherwise. And R0 = A(Xn−1 ) B(X1 ) A . . .. . (i = j). 2 −V ˜ 0 I 0 W W 0 R0 −DN 0 A(X1 ) .. N where For the perturbation equations we therefore obtain the matrix eigenvalue equation ˜2 A −2R0 0 DN 0 0 0 A ˜2 (4. .1] . (4. . . This yields a (N − 1) × (N − 1) ˜ matrix denoted by D2 . . . We also need to rescale the diﬀerentiation matrix so 2 1 D[0. 0 . . 0 0 R(Xn−1 ) . W B(Xn−1 ) W (X1 ) . . . In this case it is the DN matrix. B(0) = 0 = B(L) 2 and W (0) = 0 = W (L).4a) Di. .0 = DN..6) .. 0 .i 2N 2 + 1 .. 0 . since A(0) = 0 = A(L).5) 0 DN + V0 0 B = iλ −I 0 0 B .which is : D0. B = . L] instead of [−1.. (4. . The requirement L that A. .
8) that is we are inverting or solving the equation for W in terms of A.00000011612065 ±0 − 0. . . 0 0 V (Xn−1 ) . 0 . We therefore rewrite (4.1. .. 0 . To compute these results. The diﬀerence between the calculated values for the eigenvalues turns out to be small so we can use (4. B..1 we plot eigenvalues obtained by solving (4.5) is singular. .06030198252911i ±0 − 0..We then solve this generalised eigenvalue problem to obtain the eigenvalues and eigenvectors. (4.. .. Recall the eigenvalue for the ground state in the nondimensionalized unit is 0.2 (note that the scale is diﬀerent) to see that up to these limits there are no eigenvalues other than imaginary ones. .1: Eigenvalues of the perturbation about the ground state 28 . (4.5).08665500294956i Table 4. we suspect that the eigenvalues might be inaccurate due to this. ...06882503249714i ±0 − 0. .. 0 .5) to obtain (A.07310346395426i ±0 − 0. . . .07654635649084i ±0 − 0. . . We note that since this generalised matrix eigenvalue problem (4..9) V0 = V0 (X1 ) 0 0 V0 (X2 ) . .2 The perturbation about the ground state We consider now the results obtained by the solving (4. In ﬁgure 4. which has no zeros...8) in ﬁgure 4. . We also plot all the eigenvalues obtained from solving (4.. .1).082. . excluding the near zero eigenvalue which does not correspond to a perturbation. . ±0.5) as: ˜2 DN ˜2 0 DN + (V0 + V ) 2 )−1 R ˜ + (V0 + V ) − 2R0 (DN 0 0 = iλ I 0 0 I A B A B ..5) about the ground state of the spherically symmetric SN equations (1. W ) straight away.. 4. The eigenvalues obtained are presented in table 4.03412557804571i ±0 − 0.08100785899036i ±0 − 0. 0 . 0 . . we used N = 60 and length of the interval L = 150.
0.04 0.6 −0.2 0 0.2: All the computed eigenvalues of the perturbation about the ground state 29 .06 0.02 0 −0.6 −0.2 0.02 −0.4 0.08 Eigenvalues of the perturbation equation 0.2 0 0.8 1 Figure 4.06 −0.4 −0.2 0.6 0.8 −0.04 −0.4 0.6 0.08 −1 −0.4 −0.1: The smallest eigenvalues of the perturbation about the ground state 150 Eigenvalues of the perturbation equation 100 50 0 −50 −100 −150 −1 −0.8 1 Figure 4.8 −0.
5 0 0 −5 −10 −15 −20 0 50 0 50 A/r 0 7 x 10 50 B/r 100 150 W/r 100 150 100 150 Figure 4. We note that the eigenvalues are symmetric about the real axis. 30 . Note the scales We note that the nearzero eigenvalue obtained corresponds to the trivial zeromode.6.3: The ﬁrst eigenvector of the perturbation about the ground state. To test convergence of the eigenvalues we plot graphs against increase in N and increase in L.W are very small compared to B. that is the perturbations are only oscillatory.2). We can also plot the graph of the eigenvalue 0. We note that all the eigenvalues are imaginary and that this agrees with the result obtained in section 3. 4.0765463562i as the value of N increases.r and W r in ﬁgures 4. and again this shows the eigenvalue converging with increasing L see ﬁgure 4. We note that this shows the eigenvalue is converging with N see ﬁgure 4.3. As an example we plot the graph of the eigenvalue 0. We conclude that the ground state is linearly stable.5 1 0. as the eigenfunction shows. up to numerical error. We plot the ﬁrst three eigenvectors for the ground state as 4. A B r.3 since A. we can deduce that this solution corresponds to a phase rotation of the background solution.10 5 0 −5 2 1. which was expected (section 3. In ﬁgure 4.4.0765463562i as a sample eigenvalue with increasing L instead of N .7.4.5.
5: The third eigenvector of the perturbation about the ground state 31 .4: The second eigenvector of the perturbation about the ground state 10 5 0 −5 4 2 0 −2 5 0 −5 −10 −15 0 50 A/r 0 50 B/r 100 150 0 50 W/r 100 150 100 150 Figure 4.10 5 0 −5 8 6 4 2 0 5 0 −5 −10 −15 0 50 0 50 A/r 0 50 B/r 100 150 W/r 100 150 100 150 Figure 4.
6: The change in the sample eigenvalue with increasing values of N (L = 150) 0 −0.007 −0.009 −0.001 −0.7: The change in the sample eigenvalue with increasing values of L (N = 60) 32 .006 −0.008 −0.01 100 110 120 130 140 L 150 160 170 180 190 Figure 4.18 16 14 12 x 10 −7 difference in eigenvalue 10 8 6 4 2 0 −2 30 40 50 60 N 70 80 90 Figure 4.004 −0.003 difference in eigenvalue −0.005 −0.002 −0.
01004023179300i 0 − 0. In ﬁgure 4.00139326930981 ±0.00139326930981 + 0.3 the calculated values of Q with the eigenvalues.23476646058070 0.2.00859859681740i −0. In the case where L = 145 and N = 60 we present in table 4. λ 0 − 0. when the unperturbed wavefunction has one zero and the energy eigenvalue is 0. The lowest eigenvalues about the second state are presented in table 4.36801505735057 3.2. (4.00300149174300i 0 − 0.00859859681740i −0.02761845529494i Table 4.3: Q for diﬀerent eigenvalues of the perturbation about the second state 33 .01004023179300i −0. We also plot all the eigenvalues obtained in ﬁgure 4.01533675005044i −0.01004023179300i 0.00139326930981 − 0.02105690867367i −0.533821320897923e − 14 3.9 on a diﬀerent scale.0308. This time we obtain some eigenvalues with nonzero real parts.00300149174300i −0.2: Eigenvalues of the perturbation about the second state We have some complex eigenvalues so if our results are to satisfy the result in section 3.10) 0. to see that up to these limits there are no other complex ones. To see whether this is the case up to numerical error we compute Q= If this much less than one then L ¯ 0 ABdr 1 1 L L ( 0 A2 dr) 2 ( 0 B2 dr) 2 ∞ ¯ we know that 0 ABdr =  . We note that the eigenvalues have the symmetries expected from section 4.01533675005044i Q 0.00139326930981 ±0 ±0 ±0 −0.00000003648902i 0 − 0.3 Perturbation about the second state We now consider perturbation about the second state. which we expected from the equations.89372677914625 Table 4.8 we plot the lower eigenvalues for the case where N = 60 and L = 150.01004023179300i +0.00000003648902i −0. Using the same method we compute the numerical solutions of the perturbation about the second bound state.4 ∞ ¯ we require that 0 ABdr = 0.919537745928816e − 14 0. ±0 ±0 ±0 ±0.22194825684122 0.4.
5 −3 Figure 4.03 −0.9: All the computed eigenvalues of the perturbation about the second bound state 34 .02 −0.5 −1 −0.03 0.04 −1.04 Eigenvalues of the perturbation equation 0.5 0 0.01 0 −0.02 0.8: The lowest eigenvalues of the perturbation about the second bound state 150 Eigenvalues of the perturbation equation 100 50 0 −50 −100 −150 −1.0.5 1 x 10 1.5 1 x 10 1.5 −1 −0.5 0 0.01 −0.5 −3 Figure 4.
10 5 0 −5 10 A/r 0 6 x 10 50 B/r 100 150 imaginary part 5 0 −5 20 15 10 5 0 0 50 100 150 0 50 W/r 100 150 Figure 4.11: The second eigenvector of the perturbation about the second bound state 35 . Note the scales 10 5 0 −5 15 10 5 0 −5 20 15 10 5 0 0 50 0 50 A/r 0 50 B/r 100 150 W/r 100 150 100 150 Figure 4.10: The ﬁrst eigenvector of the perturbation about the second bound state.
In ﬁgure 4. Note again that the ﬁrst near zero eigenvalue just corresponds to a phase rotation.10 5 0 −5 20 0 −20 −40 −60 20 10 0 −10 0 50 A/r 0 50 B/r 100 150 W/r 100 150 0 50 100 150 Figure 4.12 we plot the next two eigenfunctions corresponding to the next two eigenvalues.10 we plot the eigenvector corresponding to the near zero eigenvalue.15. 4.16 we plot the eigenfunction of a complex eigenvalue.4. In ﬁgure 4.5. so the results of section 3.11. In ﬁgure 4. Now for the fourth bound state with L = 700 and N = 100 we have eigenvalues which are presented in table 4.4 are conﬁrmed. In ﬁgure 4. the state with just two zeros in the unperturbed wavefunction.4 Perturbation about the higher order states We now consider the numerical method about the third state.4 can be conﬁrmed. The eigenvalues are presented in table 4. 4.13 we have the eigenvalues of the perturbation about the third state.15 the real part is plotted and in ﬁgure 4. 4. We note that the result of section 3. with N = 60 and L = 450.16 the imaginary part. Note that this is approximately B = R0 hence this eigenvector corresponds to a phase rotation. 36 . In ﬁgure 4.14 we plot the eigenfunction corresponding to the next eigenvalue after the near zero one.e a trivial zeromode. In ﬁgures 4.12: The third eigenvector of the perturbation about the second bound state We note that for the eigenvalues with nonzero real part Q is almost zero. i.
13: The eigenvalues of the perturbation about the third bound state 4 2 0 −2 5 0 −5 −10 6 4 2 0 −2 0 50 100 150 200 A/r 0 50 100 150 200 B/r 250 300 350 400 450 0 50 100 150 200 W/r 250 300 350 400 450 250 300 350 400 450 Figure 4.14: The second eigenvector of the perturbation about the third bound state 37 .8 x 10 −3 Eigenvalues of the perturbation equation 6 4 2 0 −2 −4 −6 −8 −6 −4 −2 0 2 4 x 10 6 −4 Figure 4.
16: The third eigenvector of the perturbation about the third bound state 38 .15: The third eigenvector of the perturbation about the third bound state 4 2 0 −2 5 A/r 0 50 100 150 200 B/r 250 300 350 400 450 imaginary part 0 −5 −10 6 4 2 0 −2 0 50 100 150 200 250 300 350 400 450 0 50 100 150 200 W/r 250 300 350 400 450 Figure 4.4 2 0 −2 4 2 0 −2 6 4 2 0 −2 0 50 100 150 200 A/r 0 50 100 150 200 B/r 250 300 350 400 450 0 50 100 150 200 W/r 250 300 350 400 450 250 300 350 400 450 Figure 4.
that is we convert (3.00368240791656i −0. 4.00297504982576i −0.6 Testing the numerical method by using RungeKutta integration The result obtained via the Chebyshev numerical methods can be veriﬁed by using RungeKutta integration as in section 2.00022482647750 ±0.24) into a set of ﬁrst order 39 .00168871696256i +0.00039265148039 ±0.00504446459212i +0.00308656938411i +0.00015985257765 ±0.00225911281180i +0.5 Bound on real part of the eigenvalues From section 3. In table 4. 4.00225911281180i −0.6 we compare this bound with the values obtained.00557035273905i Table 4.00431823282039i −0.5: Eigenvalues of perturbation about the fourth state There we notice that there are three quadruples.00220985235805i −0.1.00168871696256i −0.00031089278959i −0.00088225206795i +0.4: Eigenvalues of the perturbation about the third state ±0.00022482647750 ±0 ±0.00051994798783 ±0.±0.00000001236655 ±0 ±0.00308656938411i −0.00504446459212i −0.2.00185003885932i −0.00015985257765 ±0 −0.00134992819716i −0.00260936445117i −0.00000000975044 ±0 ±0.5 we have a bound on the real part of the eigenvalues of the perturbation equations. and see that it is satisfactorily conﬁrmed.00039265148039 ±0 −0.00340215999936i Table 4.00017460656919 ±0.00017460656919 ±0 ±0 ±0 ±0.00088225206795i −0.00078451579204i −0.00051994798783 ±0 ±0 ±0 ±0.
For the initial conditions on Y6 . = −V0 Y1 − iλY1 + R0 Y5 .0002248264775 0. = −V0 Y3 − iλY3 . Y5 = W and λ is an eigenvalue.11e) (4.11b) (4.0000652878367 Bound 0.D.0013932693098 0.0005199479878 0. Y3 (0) = 0 and Y5 (0) = 0.E’s.5 1 1.11d) (4. Y4 and 40 . Y1 Y2 Y3 Y4 Y5 Y6 = Y2 . Y3 = B.5 −4 Figure 4.11c) (4.5 −1 −0.0001143754114 0.5 2 x 10 2.11f) where Y1 = A. = 2R0 Y3 . = Y4 . (4.5 4 3 2 1 0 −1 −2 −3 −4 x 10 −3 Eigenvalues of the perturbation equation −5 −2.5 −2 −1.00048153805083 Table 4. The boundary conditions at the origin are Y1 (0) = 0.00073784267376 0.11a) (4.6: Bound of the real part of the eigenvalues O. = Y6 .00362396540325 0.00058275894209 0.17: The eigenvalues of the perturbation about the fourth bound state State 1 2 3 4 5 6 Numerical maximum real part 0 0.5 0 0.00100532361160 0.00157632209179 0.
21.11) for the ﬁrst two eigenvalues of (3.19 we plot the result doing a RungeKutta integration on (4.18. we plot the results obtained in ﬁgures 4.24) with R0 and V0 corresponding to the ﬁrst bound state. 4. the solutions obtain by RungeKutta integration correspond to the eigenvectors obtain by solving (4. we proceed in the same way. To test the perturbation about the third bound state.3 Y2 we can use the values obtained from the eigenvectors obtain when we solved the matrix eigenvalue problem.20. using RungeKutta integration.18: The ﬁrst eigenvector of the perturbation about the ground state.22 show the ﬁrst eigenvector of the perturbation about the third bound state. Proceeding to do RungeKutta integration on the results obtained with the perturbation about the second bound state in section 4. compare with ﬁgure 4.2. and have checked the results with a RungeKutta method. We note that except for the blowing up near the ends which we expect since the RungeKutta method is sensitive to inaccuracies in the initial data. we have analysed the linear stability of the stationary sphericallysymmetric states using a spectral method.23.5).3.24 show the second eigenvector with real and imaginary parts. In ﬁgures 4.7 Conclusion In this section. 4. 4. 4. The ﬁrst two eigenvalues are those worked out in section 4. 41 . Figures 4.10 5 0 −5 2 1 0 −1 0 −5 −10 −15 −20 0 5 10 15 A/r 0 7 x 10 5 10 15 20 B/r 25 30 35 40 0 5 10 15 20 W/r 25 30 35 40 20 25 30 35 40 Figure 4. The result plotted ﬁgure 4.
4 20 0 −20 −40 6 4 2 0 −2 0 −20 −40 −60 0 5 10 15 20 A/r 0 8 x 10 5 10 15 20 25 B/r 30 35 40 45 50 25 W/r 30 35 40 45 50 0 5 10 15 20 25 30 35 40 45 50 Figure 4.20: The ﬁrst eigenvector of the perturbation about the second bound state. compare with ﬁgure 4. using RungeKutta integration. compare with ﬁgure 4.10 42 .15 10 5 0 −5 15 10 5 0 0 −5 −10 −15 0 5 10 15 20 A/r 25 B/r 30 35 40 45 50 0 5 10 15 20 25 W/r 30 35 40 45 50 0 5 10 15 20 25 30 35 40 45 50 Figure 4. using RungeKutta integration.19: The second eigenvector of the perturbation about the ground state.
22: The second eigenvector of the perturbation about the third bound state. compare with ﬁgure 4. using RungeKutta integration.21: The second eigenvector of the perturbation about the second bound state. using RungeKutta integration. compare with ﬁgure 4.14 43 .11 5 0 −5 −10 5 0 −5 −10 8 6 4 2 0 0 20 40 60 80 A/r 0 20 40 60 80 100 B/r 120 140 160 180 200 0 20 40 60 80 100 W/r 120 140 160 180 200 100 120 140 160 180 200 Figure 4.10 5 0 −5 15 10 5 0 −5 20 15 10 5 0 0 50 0 50 A/r 0 50 B/r 100 150 W/r 100 150 100 150 Figure 4.
23: The real part of third eigenvector of the perturbation about the third bound state. using RungeKutta integration.15 40 A/r imaginary part 20 0 −20 10 0 20 40 60 80 100 B/r 120 140 160 180 200 imaginary part 0 −10 −20 −30 5 0 20 40 60 80 100 W/r 120 140 160 180 200 imaginary part 0 −5 −10 0 20 40 60 80 100 120 140 160 180 200 Figure 4.5 0 −5 −10 −15 20 10 0 −10 6 4 2 0 0 20 40 60 80 A/r 100 B/r 120 140 160 180 200 0 20 40 60 80 100 W/r 120 140 160 180 200 0 20 40 60 80 100 120 140 160 180 200 Figure 4.16 44 .24: The imaginary part of third eigenvector of the perturbation about the third bound state. using RungeKutta integration. compare with ﬁgure 4. compare ﬁgure 4.
and in particular no zeromodes occur. has n quadruples of complex eigenvalues as well as pureimaginary pairs. Thus only the ground state. • the nth excited state. In section 3. which requires a numerical evolution of the nonlinear equations.We have checked convergence of the eigenvalues obtained by the spectral method under increase of the number N of Chebyshev points and the radius L of the spherical grid.3 showed that complex eigenvalues necessarily occur in quadruples ¯ ¯ (λ. is linearly stable. • no purely real eigenvalues appear. which is to say the (n+1)th state. which we saw in section 1. 45 .λ and −λ) while real or complex ones occur in pairs. The calculation of section 4. we found that complex eigenvalues could only arise if a certain integral of the perturbed wavefunction was zero.2 is the absolute minimiser of the conserved energy. and this is satisfactorily veriﬁed by numerical solutions found here. All other sphericallysymmetric stationary states are linearly unstable.−λ. we ﬁnd that: • the ground state is linearly stable. in that the eigenvalues are purely imaginary.4. We must now turn to the nonlinear stability of the ground state. In the numerical calculation.
2.2 to section 5. which is to say evolving the potential as well as the wavefunction. the aim is to ﬁnd and test a numerical technique for solving (1. Thus we have a preliminary picture of the evolution: any initial condition will disperse to large distances (i. On the basis of the linearised calculations of chapter 3 and chapter 4. in section 5.Chapter 5 Numerical methods for the evolution 5. o In particular. we consider how to test the method against an explicit solution of the zeropotential Schr¨dinger equation.1 The problem In the next two chapters.10 we introduce the notion of residual probability which we may describe as follows. In this chapter. we want to see how far the linearised analysis of chapter 3 and chapter 4 is an accurate picture.e. In section 5. we expect the ground state to be the only stable state. Chapter 6 contains the numerical results. and what o we can check with a nonzero but ﬁxed potential. We describe an iteration for this and list some checks which we can make on the reliability and convergence of the method. Since the problem is nonlinear. If the 46 .2 and with total probability less than one. We need a numerical method for o evolution. Finally. we consider what the problems are in this programme. we face the problem of evolving the full SN equations. and a technique for dealing with the boundary that will allow the wave function to escape from the grid. keeping reﬂection from the boundary to a minimum. and methods for dealing with them. we ﬁrst consider the simpler problem of solving the timedependent Schr¨dinger equation in a ﬁxed potential.5. oﬀ the grid) leaving a remnant at the origin consisting of a ground state rescaled as in subsection 1. In section 5.5) with the restriction of spherical symmetry.4 below. We want to evolve an initial ψ long enough to see the dispersive eﬀect of the Schr¨dinger equation and the concentrating eﬀect of the gravity.7. This is the content of section 5.
2) we get λ = 1 − iδt(k 2 + φ). (5. when D2 (eikx ) = −k 2 eikx . At each time step the modes get ampliﬁed by 47 . Letting ψ n+1 = λeikx and substituting into (5.2) where ψ n is the wavefunction at the nth time step and D2 is the approximation for the diﬀerentiation operator. but this would have several draw backs. Now we consider what happens to the mode eikx . We could consider renormalising the wavefunction as the numerical method progresses.5) We get an amplifying factor of λ= 1 . Examples of what D2 can be are ﬁnite diﬀerence or a Chebyshev diﬀerentation matrix. To see why this is consider the case of an explicit method given by: i ψ n+1 − ψ n = D2 (ψ n ) + φψ n . i ∂ 2ψ ∂ψ = − 2 + ψφ.3) this method leads to a growth in the normalisation of the mode. 5.initial condition is a state with negative energy and this preliminary picture is sound then we can estimate the residual probability remaining in this rescaled ground state. δt (5. 1 + iδt(k 2 + φ) (5. δt So here we have an ampliﬁcation factor of λ which provided k 2 + φ = 0 has λ > 1 and (5. that is solving.2 Numerical methods for the Schr¨dinger equation with o arbitrary time independent potential We consider numerical methods that solve the 1dimensional Schr¨dinger equation with o given initial data.4) (5. that is. ∂t ∂x (5.6) So in this case we note that λ < 1 provided k 2 + φ = 0. that is: i ψ n+1 − ψ n = D2 (ψ n+1 ) + φψ n+1 .1) We note that Numerical Recipes on Fortran [20] and Goldberg et al [9] use a numerical method to solve the Schr¨dinger equation which they say needs to preserve the Hamiltonian o structure of the equation or equivalently the numerical method is time reversible. this method leads to decaying modes. If we used an implicit method instead.
o.7) which has an amplifying factor of λ= 1 1 − 2 iδt(k 2 + φ) . Now the wave evolves with a change in phase factor (λ) of: λ= 1 1 − 2 iδt(k 2 ) .diﬀerent amounts so the ratio in diﬀerent modes will change.t 2 6 k 6 δt3 12 (5.9) which is just that of the factor obtained in (5. Since the boundary conditions are straightforward a spectral method. 1 + 1 iδt(k 2 ) 2 2 (5. 2 2 while the actual phase factor should be : e−ik 2 δt (5. Suppose we consider the case of the linear Schr¨dinger equation in one o dimension with φ = 0 which has solutions of the form ψk (x.11) So to make the other terms small we can require factor is correct but only to this order. To get a numerical method that will preserve the constants of the evolution of the equation (namely the normalisation) we need to use a CrankNicholson method. Expanding λ in powers of δt we have: i 1 i i λ = (1 − δtk 2 − δt2 k 4 + δt3 k 6 + . .. 5. With this method the discretization of the Schr¨dinger equation is: o i 1 φ ψ n+1 − ψ n = − (D2 (ψ n+1 ) + D2 (ψ n )) + (ψ n+1 + ψ n ). Also for the SN equation renormalisation would require rescaling the space axis which would need a interpolation at each step. 2 2 4 2 1 2 4 iδt3 k 6 = 1 − iδk 2 t − δt k − + . that is Chebyshev diﬀerentation matrix.. but that does not mean that the phase is correct. to be small.10) = 1 − iδtk 2 − δt2 k 4 ik 6 δt3 + + h.)(1 − δtk 2 ). is seen to give better results for less computation time compared to ﬁnite diﬀerence method.3 Conditions on the time and space steps We note that the CrankNicholson method might be unitary. . 1 + 1 iδt(k 2 + φ) 2 (5. as in [9]. δt 2 2 (5.8) which is unitary (λ = 1) since it is in the form of a number over its complex conjugate.8). t) = e−ik t eikx . The phase 48 ..
The discretized method in the Schr¨dinger equation is : o i ψ n+1 + ψ n 1 = [−D2 (ψ n+1 ) − D2 (ψ n ) + φ(ψ n+1 + ψ n )].4 Boundary conditions and sponges The boundary conditions which we want to impose on the Schr¨dinger equation are that o the wavefunction must remain ﬁnite as r tends to zero and the wavefunction must tend to zero as r tends to inﬁnity. −D2 (ψ N ) + φψ N = Eψ N .15) Expanding out with  δtE  < 1 we have. (5. i(ψ n+1 − ψ n ) = and so we obtain. 2 ψ n+1 = [1 − δtE + δt3 E 3 12 (−iδtE)2 (−iδtE)3 + + O(δt4 )]ψ n . Substitution into the method gives. The problem with this is that only in a certain few cases will we know the actual 49 . 2 2 (5.13) δtE n+1 δtE n ψ + ψ .16) In order that the phase is calculated correctly we require that the error due to the term be small.14) iδtE n iδtE −1 ) (1 − )ψ . δt 2 (5. To try to impose these boundary conditions numerically we can solve the Schr¨dinger equation o in rψ = χ in which case it becomes: iχt = −χrr + φχ. ψ r where ψ is the actual analytic solution. n+1 ψi = (1 + (5. for any N . We expect the solution to evolve like ψ n = e(−iEt) ψ 0 where E is the energy of the stationary solution. 2! 4 (5. since otherwise the wavefunction would not be normalisable.Now suppose that ψ is a stationary state solution of the Schr¨dinger equation with o potential. The other boundary condition can be approximated in one of the following ways: • We can at a given r = R say set that χ = solution.17) The boundary condition at r = 0 is just a matter of setting χ = 0 for the ﬁrst point in the domain.12) Now since ψ n and ψ n+1 are stationary states we know that. 2 2 (5. 5.
This is a wave bump “moving” at a velocity (5. we consider instead of the (5. (5.1(r−R)) .21) but in this case we will have the limiting solution: 50 a σ We note that C will not exist when exp(− σ2 − v 4 ) = 1 which is when v = 0 and a = 0. 5. condition is approximated. There will still be some reﬂection oﬀ the sponge but this is a smaller eﬀect. In the spherically symmetric case with zero potential a solution to the Schr¨dinger o equation is: rψ = χ = √ C σ (σ 2 + 2) 1 2 [exp(− (r − vt − a)2 ivr iv 2 t + − ) 2(σ 2 + 2it) 2 4 (5.20) − exp(− (r + vt + a)2 ivr iv 2 t − − )]. o (i + e(r))χt = −χrr + φχ. 2(σ 2 + 2it) 2 4 we shall call this a moving Gaussian shell. The problem with this boundary condition is that it will • The other thing we can do to reduce the eﬀect of probability bouncing back is to Schr¨dinger equation the equation. .19) so as give a smooth function which only has an eﬀect at the boundary since for small r the sponge factor eﬀectively vanishes.• We set the condition that χ = 0 at r = R for R large so that the other boundary reﬂect all the outward going probability and send it back towards the origin.5 Solution of the Schr¨dinger equation with zero potential o To check that our numerical method is functioning correctly we aim to check it in the case of spherical symmetry with the same boundary condition where a solution is known both analytically and numerically. That is. This also gives a check on the boundary conditions.18) where the function e(r) is a strictly negative (one sign) so that the equation acts like a heat equation to reduce the probability which is assumed to be heading oﬀ the grid. σ 4 2 2 2 that χ = 0 at r = 0 and χ = 0 at r = ∞. introduce sponge factors at r = R were R is large. There will also be a reduction in the normalisation integral. when t = 0. C is chosen such that the wavefunction is normalised. Here we have the boundary condition v starting at a distance a from the origin. For example we take e(r) = e(0. That is: √ −a2 v 2 σ 2 1 = C 2 π[1 − exp( 2 − )].
rψ = χ = where A is such that Ar (σ 2 + 2it) 2 3 exp(− r2 ). which should be the case since the numerical method preserves this. that is. We note that this will 5. where T =  ψ2 d3 x. We note that we cannot numerically model the boundary condition at r = ∞ so we that point. where the heat is absorbed at the boundary. 2(σ 2 + 2it) (5. • We can set the boundary value at r = R to be the value of the analytical solution at is known. • We can use sponge factors.6 Schr¨dinger equation with a trial ﬁxed potential o We can consider the Schr¨dinger equation with a potential which is constant with respect o to time. π (5. that is. 51 (5. This method will cause the wave to reﬂect back oﬀ the boundary.23) Note that the solution (5. change the equation so that is becomes a type of reduce the wave which is reﬂected from the boundary. • We can set the boundary condition to be χ = 0 at r = R where R is chosen so that it is large compared to the initial data.20) will in the long term tend to zero everywhere.22) 2σ 3 A2 = √ . it can only be done for testing. can do the following things to check the numerical calculation of the solution. but that normalisation will not be preserved. when there are no sponge factors or when there is no probability moving oﬀ the grid. and also to approximate the eﬀect of the boundary condition at inﬁnity: • Check that normalisation is preserved.25) (5. We note that this will only work for the case where the analytical solution heat equation.24) . In this case we know that the energy E is preserved where E is given by E = T + V. since there is no gravity or other forces to stop the natural dispersion of the wave equation.
(5. ∂r2 u2 . • Normalisation.and V = with the choice of potential φ = solve the eigenvalue problem 2 1 (r+1) φψ2 d3 x. Then the following procedure is used to calculate the wavefunction and potential at the next time step: 1. the ﬁrst guess being that of the current potential. Choose φn+1 = φn to be an initial guess for φ at the (n + 1)th time step.7 Numerical evolution of the SN equations The method that Bernstein et al [4] use for numerical evolution of the wavefunction of the SN equations consists of a CrankNicholson method for the timedependent Schr¨dinger o equation and an iterative procedure to obtain the potential at the next time step. They transform the SN equations in the spherically symmetric case into a simpler form which is: ∂u ∂t d2 (rφ) dr2 i where u = rψ. We know by general theory that ψn ψdr = cn eiEn t .30) . In summary we can check in this case. (5.27) This gives us eigenvalues En say with eigenfunctions ψn where the ψn are independent of time.29) (5. r (5. 52 = − = ∂2u + φu. • Energy of the wavefunction.26) we can work out the bound states.28) • The inner products with respect to the bound state to see that they are of constant modulus and the phase change correctly 5. where ψ is a general wavefunction and cn is a constant. that is we can ψ + φψ = Eψ (5.
These could be added to stop outgoing waves being reﬂected from the imposed boundary condition at r = R. r (5.34) Thus it is useful to call 3I the conserved energy EI . 5. This outline of a general method is independent of the Poisson solver and numerical method used to evolve the Schr¨dinger equation. R] and R is large. That is Φ that solves: d2 (rΦ) dr2 = un+1 2 . Calculate the Φ which is the solution of the potential equation for the un+1 . that is: 2i U − un δt = −D2 (U ) − D2 (un ) +[ΦU + φn un ]. we could use either a ﬁnite o diﬀerence method or spectral methods.8 Checks on the evolution of SN equations We recall that as we saw in section 1. (5. 5.31) no sponge factors. which is o 2i un+1 − un δt = −D2 (un+1 ) − D2 (un ) +[φn+1 un+1 + φn un ]. 3 (5. but the action I = T + 1 V is.33) where the boundary conditions are the same as in step 2.2 for the SN equations the energy is no longer conserved. We also note that if ψ corresponds to a bound 2 state with energy eigenvalue En then I= 1 En . In the SN equation in general we can check: 53 . Calculate un+1 from the discretized version of the Schr¨dinger equation. Consider U − un+1  if less than a certain tolerance stop else take φn+1 = Φ and continue with step 2. That is.32) where this equation is solved on the region r ∈ [0. (5. 3. Calculate U by the discretized version of the Schr¨dinger equation with Φ being the o potential at the (n + 1) step. where there are 4.2. We note that CrankNicholson is second order accurate with respect to time and is such that it preserves the normalisation. This is the reason why we need to work out the potential at the n + 1 time step instead of just using an implicit method in the potential.
the general solution will consist of a o linear combination of bound states and a scattering state. we need to show that as the mesh and time steps decrease that the method converges to a solution. We expect this since the truncation error in the case of just solving the linear Schr¨dinger equation is second order.h2 and h3 are three diﬀerent values of the step size. To get just a function of p we can calculate: Oh1 − O h3 . For the SN equations. I actual value and p the order of the error.35) where Oh is the calculated value with space or time step size h. 5.at least in In the case of evolving about a stationary state we can check • That the phase factor should evolve at a constant rate at the corresponding energy of the bound state at least for a while. 2 • That the change in the potential function satisﬁes φt = i the case of spherical symmetry. that is there is no requirement on the ratio . Oh2 − O h3 (5. For large times. In a case where we expect the error to be like Oh ∼ I + Ahp . In the case of the ﬁnite diﬀerence method we expect errors to be second order in time and space steps. at least when not aﬀected by the sponge. so we can reﬁne the ∆x2 mesh in both direction independently and expect convergence.36) where h1 . To test for this convergence we can use “Richardson Extrapolation”. 5.9 Mesh dependence of the methods To be sure that the evolution method produces a valid solution up to a degree of numerical error. We also know that the method is o ∆t unconditionally stable.• Preservation of the normalisation. but we do still expect it to be second order in time. We note in the case of the spectral methods we do not expect the method to be second order in space. (5. we have seen in chapter 4 54 . r ¯ 0 (ψψr ¯ − ψ ψr )dr . the scattering state will disperse leaving the bound states. at least when it is not aﬀected by the boundary condition due to the sponge factors.10 Large time behaviour of solutions For the Schr¨dinger equation with a ﬁxed potential. • Preservation of the action I = T + 1 V .
39) (5. E0  EI .38) for some α.40) since the scattered energy is positive. So then we have that α3 = (ES + EI ) . Assuming the truth of this picture.2 to have total probability less than one. which leads to the inequality α3 > (5.2). and we shall see the same thing for nonlinear instability in chapter 6. rescaled as in subsection 1. So if we start with an initial value EI of the action which is negative then we claim EI = ES + α3 E0 . Consequently.2. we might conjecture that the solution at large values of time for the SN equations with arbitrary initial data would be a combination of a multiple of the ground state. It follows from the transformation rule for the SN equations (subsection 1.37) We also note that E0α = α3 E0 . if the conserved energy is negative initially we can obtain a bound for the probability remaining in the multiple of the ground state.that the bound states apart from the ground state are all linearly unstable. E0 (5. We shall see support for this conjecture in chapter 6.2. (5. What we mean by a multiple of the ground state is the following: if ψ0 (r) is the ground r state then consider ψ0α (r) = α2 ψ0 ( α ). where ES is the scattered action and α3 E0 is the energy due to the remaining probability near the origin. (5. together with a scattering state containing the rest of the probability but which disperses.41) 55 . that this is a stationary state but with: ∞ 0 ψ0α 2 = α.
We can use the exponential function of (5. Lump moving outwards 0.05 0 200 150 100 50 r 0 0 50 time 100 150 200 Figure 6.15 0.20) at t = 0.Chapter 6 Results from the numerical evolution 6. we use the initial condition of an outward going wave from (5.2 rψ 0.5).1: Testing the sponge factor with wave moving oﬀ the grid 56 .35 0.3 0.1 Testing the sponges To test the ability of the sponge factor to absorb an outward going wave.25 0.1 0.
2 Evolution of the ground state The ground state is a stationary state and is linearly stable by chapter 4 and we expect it to be nonlinearly stable for small perturbations.1 since it reﬂects less back from the boundary. C is the normalisation factor and a and σ determine the position and shape of the distribution.3 0.1 we plot a Gaussian shell moving oﬀ the grid.2 we plot an Gaussian shell moving oﬀ the grid.Lump moving outwards 0. We note that due to inaccuracy in the calculation of the ground state and the numerical error that will be 57 .1) − exp(− (r + vt + a)2 ivr iv 2 t − − )] 2(σ 2 + 2it) 2 4 Here v is chosen so that this corresponds to an outward going wave and is suﬃciently large so that the wavefunction will scatter. This sponge factor gives a gradual increase toward the boundary.25 0. here the sponge factor is but we notice that there is some reﬂection oﬀ the boundary. −10 exp(0.2 rψ 0.1 0. In ﬁgure 6.5(r − Rend )).15 0. In ﬁgure 6. since it has the lowest energy.35 0.05 0 200 150 100 50 r 0 0 50 time 150 100 200 Figure 6. We note that this sponge factor work betters than one used for 6. −2 exp(0. here the sponge factor is ﬁgure 6.2 ∗ (r − Rend )).2: Testing sponge factor with wave moving oﬀ the the grid rψ = √ C σ (σ 2 + 2it) 2 1 [exp(− (r − vt − a)2 ivr iv 2 t + − ) 2(σ 2 + 2it) 2 4 (6.
So therefore we know that this oscillation about the ground state is introduced by numerical error. We can also check the conservation of energy and the conservation of normalisation. starting with the ground state 30 4 2 25 20 15 10 5 r 0 20 40 60 80 100 time 120 140 0 phase 0 −2 −4 Figure 6.4.evolution of the angle . about 1E5 in the time scale which is about 5E5 time steps. We also deduce that the errors occur due to the long tail of the ground state at which the wavefunction is approximately zero and small error here will have a great eﬀect on the normalisation and the energy so causing more error close to 58 .3: The graph of the phase angle of the ground state as it evolves made in the evolution. Now we evolve the ground state for a long time. see ﬁgures 6. By adjusting the time step. The next thing to try is the long time evolution of the ground state. the numerical evolution will actually be the evolution of the ground state with a small perturbation.5. 6. we can reduce the amplitude of the limiting oscillation. Here we do get a constant phase (see ﬁgure 6.3) and we can check that it is of the correct value by calculating the energy of the wavefunction. but this shouldn’t make any diﬀerence to the overall end result here. We ﬁnd a growing oscillation which reaches a bound and stabilises at a ﬁxed oscillation. We also can adjust the sponge factor to absorb more at the boundary to reduce the amplitude of the boundary. Since the ground state is a solution to the timeindependent SN equations we expect that there should be a constant change in phase corresponding to the energy of the state and the absolute value of the wavefunction should be constant.
042 0.052 graph to show the boundness of the error when evolving around ground state 0.044 0.5 2 0 50 45 40 35 2.5 1 1.05 0.4: The oscillation about the ground state evolution about ground state 0 0.0.2 0 Figure 6.04 0.046 0.4 1.5: Evolution of the ground state 59 .5 time 1 x 10 4 rψ 0.5 x 10 3 4 Figure 6.5 30 25 3 20 15 r 10 5 3.5 0.5 time 2 2.048 abs wave−function near origin 0.038 0 0.
0. which are plotted in ﬁgure 6. add on to it an exponential distribution and evolve. We can compare these observed frequencies to those which were obtained in chapter 4 doing the ﬁrst order linear perturbation.6E − 5.8 1 Figure 6. We expect that these would be at the frequencies determined by the ﬁrst order linear perturbation about the ground state and the corresponding higher order linear perturbations.4 −0. Now we want to consider an evolution of the ground state with deliberately added perturbations. 60 . In ﬁgure 6.6 −0. In table 6.2 0.08 Eigenvalues of the perturbation equation 0.1 we present the frequencies observed (see ﬁgure 6.8 we plot the Fourier transform of the oscillation about the ground state at a given point.6. We use in our evolution method.08 −1 −0. We take a ground state.6: The eigenvalue associated with the linear perturbation about the ground state the origin. With this amount of sampling points and time π step we should be able to obtain the frequencies to an accuracy of = 3.2 0 0.4 0. Then we can Fourier transform the absolute value of the wavefunction near the origin to obtain the diﬀerent frequencies that make up the oscillation.06 −0. 000 sampling points. N = 50 Chebyshev points and dt = 1 as the time step to obtain n = 170. which are all imaginary.6 0.04 0.04 −0. The aim here being to see if we can induce ﬁnite perturbation around the ground state.8) with those obtained in chapter 4.02 −0.8 −0.7 we plot a section of the oscillation about the ground state at a given point. In ndt ﬁgure 6.06 0.02 0 −0.
256 0 500 1000 1500 2000 time 2500 3000 3500 4000 Figure 6.4 −3.7: The oscillation about the ground state at a given point −7 Fourier transform of oscillation about the ground state −8 −9 −10 log(fourier transform) −11 −12 −13 −14 −15 −16 −3.2 −3 −2.2 −2 Figure 6.4 −2.257 0.258 0.6 0.0.6 −2.6 −3.261 graph rψ near the origin.26 rψ at r = 3. evolving about the ground state 0.259 0.8: The Fourier transform of the oscillation about the ground state 61 .8 log(frequency) −2.
431 −14.1 Evolution of the higher states Evolution of the second state We note that whether or not the higher stationary states are nonlinearly stable they should evolve for some time like stationary states before numerical errors grow suﬃciently to make them decay. Do they decay when perturbed into some scatter and some multiple of the ground state? Numerical calculation of a stationary state is only correct up to numerical errors. 2.0730 0.322 Table 6.0341 0.10 based on the assumption that the energy that is scattered is positive? As we can see in ﬁgure 6. How long does the state take to decay? 3.300 −7.115 −16. −8.0687 0.linearised frequencies ±0.1012 ±0.0943 0.632 −8. so if the state is nonlinearly unstable it is likely that it will decay from just evolving.9).0602 0.0603 ±0.0764 0.0866 0.265 −9.1204 log of size of peak . Is this above the estimate of section 5.1196 frequencies 0. There are a few issues to look at with the evolution of the higher stationary states when the state are evolved for long enough.0867 ±0.0934 ±0.377 −15.0810 0.0731 ±0.102 −14.603 −13.038 −11.0341 ±0.1099 ±0. What is the amount of probability left in the ground state? 4. emitting scatter and leaving something that looks like a multiple of the ground state. For the evolution of the second state we ﬁnd that we have again the correct phase factor (see ﬁgure 6.3 6.0810 ±0.1071 0.3.0765 ±0. 62 .0688 ±0.1028 0.10 the second state is decaying. 000 6. 1. n = 170.466 −13.1: Linearised eigenvalues of the ﬁrst state and observed frequencies about it.
2 0.10: The long time evolution of the second state in Chebyshev methods 63 .4 0.9: The graph of the phase angle of the second state as it evolves 0 1000 2000 0.3 0.1 0 150 100 50 0 3000 4000 5000 6000 7000 8000 9000 10000 Figure 6.evolution of the angle . starting with the second state 2 1 0 phase −1 −2 −3 100 80 60 40 20 0 time 50 40 r 30 20 0 10 Figure 6.
We can see in ﬁgure 6. Here we used n = 6000 sampling points and dt = 1 was the time step. 0). To remove the growing exponential we consider: f (t) = rψ(r1 .14 the normalisation and the action bound seem to be converging toward the same value. The second section of the time series we shall ignore. a time series. 64 . origin. 0) + rψ(r1 . but here we have used in a lower tolerance in the computation and the result is that the second state takes longer to decay which would correspond to the error growing at a smaller rate. In ﬁgure 6.12. We can plot the resulting probability in the region in ﬁgure 6.2) As in the case of the ground state we can consider rψ at a particular point to obtain where λ is the eigenvalue with nonzero real part obtained in solving (3. t) − rψ(r1 . since the perturbation has to be signiﬁcantly large before the nonlinear eﬀects will cause the decay of the wavefunction. The ﬁrst section has an exponentially growing oscillation which we expect to be the unstable growing mode obtained in the linear perturbation theory (chapter 4).13) and we obtain approximately one frequency which is at 0. We deﬁne A to be such that A = rψ(r1 . The next section is the part where it decays into the ground state emitting scatter oﬀ the grid. that is the wavefunction is just a perturbation about the second state. The growing mode here provides information about the time taken for the state to decay. We expect the third section to be an oscillation about a “multiple” of the ground state and so we should get frequencies of oscillation but the linear perturbation frequencies should be transformed to make up for the diﬀerence in normalisation. so we are unable to obtain any consistent frequencies.14. To conﬁrm the expectation that this is the growing mode we consider modifying the function so that we would just have the oscillation with no growing exponential. The ﬁnal section corresponds to a “multiple” of the ground state with a perturbation about it. since it is neither perturbation about the second state nor perturbation about the ground state. The resulting time series divides up into three sections. We note that f (t) − A is almost just We then Fourier transform f (t) − A (see ﬁgure 6.24) about the second state and r1 is the point at which the time series has been obtained. with bound on the action from section 5. which is near the an oscillation and we plot this in ﬁgure 6. The ﬁrst section is the evolution of the second stationary state. What we ﬁnd is the normalisation is still decreasing.0095. 0) exp(real(λ)t) (6. This is decaying into something that looks like a multiple of the ground state.10.11 the evolution of the second state. which is approximately that of the growing mode in the linear perturbation theory.
11: Evolution of second state tolerance E9 approximately 3 x 10 −9 f(t)−A 2 1 0 −1 −2 −3 0 2000 4000 6000 time 8000 10000 12000 Figure 6.2 0.05 0 10 8 6 x 10 4 4 2 150 time 0 200 r 100 0 50 Figure 6.Evolution of the second state 0.1 0.15 rψ 0.12: f (t) − A 65 .25 0.
−21
Fourier transform of f(t)−A
−22
−23
−24
log(fourier transform)
−25
−26
−27
−28
−29
−30 −7
−6
−5
−4
−3 −2 log(frequency)
−1
0
1
2
Figure 6.13: Fourier transformation of f (t) − A
1.1
Decay of second state
1
0.9
norm
probability
0.8
0.7
0.6 action bound 0.5
0.4
0
1
2
3
4
5 time
6
7
8
9
10 x 10
4
Figure 6.14: Decay of the second state
66
Evolution of second bound state
2000
0.2 0.15
1500
rψ
0.1 0.05
1000
500 0 150 time 100 r 50 0 0
Figure 6.15: The short time evolution of the second bound state Although we have found the second state to be unstable as expected and the second state grows initially with an exponential mode, we can obtain more of the linear perturbation modes. We consider the evolution of the second state for a short time with an added perturbation to see if it does give any frequencies, and then check whether these agree with the linear perturbation theory. We note that since we can only evolve for a short time before the state decay the accuracy will be low. In ﬁgure 6.15 we plot the evolution of the second state with an added perturbation, and in ﬁgure 6.16 we plot the oscillation about the second state at a ﬁxed radius. In table 6.2 we present the observed frequencies about the second state compared with the eigenvalues of the linear perturbation. We note that the accuracy of the observed frequencies here is 0.002. In ﬁgure 6.17 we plot the Fourier transform of the second state with added perturbation, note that the graph is not smooth due to the low accuracy.
6.3.2
Evolution of the third state
Now we consider the evolution of the third stationary state. In ﬁgure 6.18 we plot the evolution of the third state, as expected we see that it is unstable and decays, emitting scatter and leaving a “multiple” of the ground state.
67
2.2
x 10
−3
section of the section bound state frequency oscillation
2.1
2
1.9
rψ at fixed point
1.8
1.7
1.6
1.5
1.4
1.3
0
50
100
150
200
time
250
300
350
400
450
Figure 6.16: Oscillation about second state at a ﬁxed radius
−10
fourier transform of freqency oscillation about second state
−10.5
−11
log(fourier transform)
−11.5
−12
−12.5
−13
−13.5
−14 −6
−5.5
−5
−4.5 log(frequency)
−4
−3.5
−3
Figure 6.17: Fourier transform of the oscillation about the second state
68
20) varying the three associated parameters.0100i 0.2) but with λ now being the eigenvalue of the linear perturbation about the third state with the maximum real part.0050 which correspond to the imaginary parts of the two complex eigenvalues of the linear perturbation.linearised eigenvalues 0. We can again consider the perturbation about the third state initially as with the second state before it decays.1247 0.0643 0.2: Linearised eigenvalue of the second state and frequencies about it.0853 0.0737i 0 − 0.0100i 0 − 0.1123 0. we ﬁnd that it is exponentially growing. Since varying these factors will produce varied initial conditions.20) and obtain two frequencies at 0.0276i 0 − 0.1118i 0 − 0. 6. with bound on the action from section 5. now we Fourier transform f (t) − A (see ﬁgure 6.1417i 0 − 0.0211i 0 − 0.1419 0.0190 0.4 Evolution of an arbitrary spherically symmetric Gaussian shell In this section we consider the evolution of Gaussian shells or lumps (5. The mean velocity v the rate the shell is expanding outwards.0266 0.0342 0. The width σ of the shell.0726 0. the radial position of the peak of the shell. we note the centre of mass is not moving.0086i −0.0982i 0 − 0.1577 Table 6. The mean position a. In ﬁgure 6.0022 and 0.21 the normalisation and the action bound seem to be converging toward the same value.0530 0.21.0987 0.0855i 0 − 0.0000 0 − 0.0351i 0 − 0.0627i 0 − 0. We plot the resulting probability in the region in ﬁgure 6.1579i frequencies 0.0114 0.0030i 0 − 0. we can perform a mesh analysis and a time step analysis on the evolution 69 .19 we plot f (t) − A.0416 0.0014 − 0.10. In ﬁgure 6.0526i 0 − 0.0153i 0 − 0. We now transform as in (6.0014 + 0.1263i 0 − 0.0434i 0 − 0.
2 1.5 5 0 50 100 r 150 200 250 300 350 Figure 6.5 3 3.6 1.2 0.5 1 1.5 4 time 4.8 1 time 1.18: Evolution of third state 3 x 10 −9 2 1 f(t)−A 0 −1 −2 −3 0 0.6 0.4 0.2 0.5 x 10 4 2 2.1 0 0 0.19: f (t) − A with respect to the third state 70 .3 rψ 0.4 1.8 x 10 2 4 Figure 6.Evolution of third state 0.4 0.
5 0.−20 −22 −24 log(fourier transform) −26 −28 −30 −32 −34 −10 −8 −6 −4 log(frequency) −2 0 2 Figure 6.3 action bound 0 0.5 2 2.2 Figure 6.5 time 3 3.5 x 10 5 4 0.5 1 1.4 0.7 0.5 4 4.8 probability 0.20: Fourier transform of growing mode about third state 1.9 0.1 Decay of third state 1 norm 0.21: Probability remaining in the evolution of the third state 71 .6 0.
5 0. 6.1 Evolution of the shell while changing v In ﬁgure 6. We expect that the more energy in the system to begin with the less energy and therefore probability eventually will remain near the origin.24.6 time = 2000 time = 2250 remaining probability 0. We notice that the negative values of v correspond to an outward going wavefunction moving oﬀ the grid and a positive v is an inward going wavefunction.5 Figure 6.7 time = 1250 0.9. N =40 time = 750 0.1 0 −0.22 we plot the remaining probability at diﬀerent times and at diﬀerent initial mean velocities v.3 the mean calculated value against the values expected for quadratic convergence in time. We also look at how much of the wavefunction is left at the origin which we expect to form a “multiple” of the ground state in the sense of section 5.2 0.a = 50. We can also plot the variation with in the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson Extrapolation). σ = 6.2 −0.23 and note that the diﬀerence is small.5 −0.1 0.4 time = 2500 0.3 −0. We also notice that the graph is not symmetric about v = 0 even though the wavefunction energy is symmetric about v = 0.1 0 v 0.22: Progress of evolution with diﬀerent velocities and times as outlined in section 5. Since the values are close to the expected values we are justiﬁed in supposing that the convergence is quadratic in the time step. We calculate the “Richardson fraction” for diﬀerent values of h i ’s we plot the diﬀerent Richardson fractions in ﬁgure 6.4 −0. 72 .4 0.3 0.10.0. We compare the diﬀerence in diﬀerent time steps to the result in ﬁgure 6.8 Evolution of lump with speed v. while a and σ are constant initially. We also show in table 6.3 0.4.2 0.
5 −0.a = 50.5 −0.3 −0.5 Figure 6.4 0.5 1 difference in probability to dt = 1 0.h = 1 1 2 3 5 4.5 0 dt = 5/4 −0. N =40 1. σ = 6.2 0.1 0 v 0.6.5 −0.h2 = 1.h2 = 1.h3 = 1 3 2.2 −0.24: Richardson fraction done with diﬀerent hi ’s 73 .2 x 10 −3 Evolution of lump with speed v.5.4 −0.2 0.5 −2.3 −0.23: Diﬀerence of the other time steps compared with dt = 1 5.3 0.h = 2.25.5 h1 = 2.5 Figure 6.5.5 three different Richardson fractions h = 5.3 0.1 0.h3 = 1 2 −0.4 0.1 0.4 −0.6.2 −0.1 0 v 0.5 dt = 5/3 −1 −1.5 Richardson fraction 4 3.5 h1 = 1.5 −2 dt = 2.
1605 median value calculated 4.2 −0. 74 .4. time = 2500 2 Difference in probability to N =60 1 0 N=50 −1 −2 N=40 −3 −0. We also plot in ﬁgure 6. while v = 0 and σ is constant initially. We can also plot the variation with in the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson Extrapolation).27 the values for the remaining probability with diﬀerent time steps. In table 6.9 in the time steps. 6.2 Evolution of the shell while changing a We plot in ﬁgure 6. in ﬁgure 6.26 the remaining probability at diﬀerent times with diﬀerent mean positions a initially.9761 3.5 Figure 6. σ=6.1652 Table 6.28 we plot the “Richardson fraction” at diﬀerent values of a.3: Richardson fractions 3 x 10 −4 Evolution with a=50.7094 2.5714 2.667 1.9531 3.h1 5 2.1 0.2 0. We can again calculate the “Richardson fraction” as in section 5.3 0.4 we show the mean calculated value against the expected values and again we see that we have quadratic convergence in the time step.25 h3 1 1 1 expected value 4.29.25: Diﬀerence in N value The plot is in ﬁgure 6. The plot is in ﬁgure 6.4 0.1 0 v 0.5 −0.5 1.667 h2 2.4 −0.3 −0.25.5 1.
25 0 dt = 0.6 0.01 0 20 40 60 80 a 100 120 140 160 180 Figure 6.02 Evolution of lump with v=0.1 0.2 0.5 0.625.3 0.01 dt = 1.27: Comparing diﬀerences with diﬀerent time steps 75 . dt = 0. N= 40 time = 500 0. σ = 6.26: Evolution of lump at diﬀerent a 0.5 difference in remaining probability with dt = 0.4 0. N= 40 dt = 2.9 0.625 0.7 probability remaining 0.8 Evolution of lump with v=0.833 −0. σ = 6.1 0 time = 750 time = 1250 time = 1500 time = 2000 time = 2500 0 20 40 60 80 a 100 120 140 160 180 Figure 6.
25. h = 1.833.625 1 2 3 5 Richardson fraction 4.5 4 h1 = 1.6 Richardson fraction at different values 5. h =0.5 h = 2.28: Richardson fraction with diﬀerent hi ’s 5 4 3 2 1 0 −1 −2 −3 −4 −5 x 10 −3 Evolution with v=0. h2 = 0.5 3 0 50 a 100 150 Figure 6.625 3.5.σ =6 time =2500 Difference in probability to N =80 N=40 N=60 N=20 0 20 40 60 80 a 100 120 140 160 Figure 6.29: Diﬀerence in N value 76 . h3 =0.25.
5 1. N =40 0.4: Richardson fractions 0.3 Evolution of the shell while changing σ We plot in ﬁgure 6.455 1.8588 Table 6.5 0.30 the remaining probability at diﬀerent times with diﬀerent widths σ initially.4 5 6 7 8 9 10 σ 11 12 13 14 15 Figure 6.55 time = 2250 time = 2500 0.55 0.25 h2 1. h1 0.31 the values for the remaining probability at diﬀerent time steps.6 time = 2000 0. In table 6.625 0.625 expected value 5 3.625 h3 0.65 time = 1250 remaining probability 0.914 Table 6.833 h3 0.625 0.7 0.5: Richardson fractions 77 .4.a = 50. We also plot in ﬁgure 6. In ﬁgure 6. while v = 0 and a = 50. using data obtain at diﬀerent time steps.8571 median value calculated 5.908 median value calculated 2.75 Evolution of lump with σ.83 h2 0.72 0.5 we show the mean calculated value against the expected values again we have quadratic convergence.30: Evolution of lump at diﬀerent σ 6.25 0.55 expected value 2.h1 2. We can again calculate the “Richardson fraction”.45 0.v = 0.32 we plot the “Richardson fraction” at diﬀerent values of σ.458 1.0055 3.
1.5
x 10
−4
Evolution of lump with σ,v = 0,a = 50, N =40 dt = 0.83
1 dt = 0.72
difference in probability to dt =0.55
0.5
0
dt = 0.625
−0.5
−1
−1.5
−2
5
6
7
8
9
10 σ
11
12
13
14
15
Figure 6.31: Comparing diﬀerences with diﬀerent time steps
5.5
Richardson fractions
5
4.5
Richardson fraction
4
3.5
3 h1 = 0.72,h2 = 0.625,h3= 0.55 2.5 h1 = 0.83,h2 = 0.72,h3= 0.55
2
1.5
5
6
7
8
9
10 σ
11
12
13
14
15
Figure 6.32: Richardson fractions
78
8
x 10
−4
Evolution with v=0,a=50, time = 2500
6
Difference in probability to N =60
4
N=40
2 N=50 0
−2
−4
−6
5
6
7
8
9
10 σ
11
12
13
14
15
Figure 6.33: Diﬀerence in N value We can also plot the variation with in the number of Chebyshev points to show convergence with increasing N (this time it does not make sense to do Richardson Extrapolation). The plot is in ﬁgure 6.33.
6.5
Conclusion
In this chapter, we have evolved the SN equations using diﬀerent initial condition, using the methods of Chapter 5. We have checked the ability of the sponge factors to absorb outward going wavefunctions moving oﬀ the grid. We have checked that the evolution method converges to the solution by evolving the SN equations with diﬀerent initial conditions and changing the time steps and the number of Chebyshev points in the grid. To check convergence with respect to the time step we have used Richardson extrapolation and found it to be converging like the second order in the time steps. The numerical calculations show that: • The ground state is stable under full nonlinear evolution. • Perturbations about the ground state oscillate with the frequencies obtained by the linear perturbation theory.
79
• Higher states are unstable and will decay into a “multiple” of the ground state, while emitting some scatter oﬀ the grid. • The decay time for the higher states is controlled by the growing linear mode obtained in linear perturbation theory. • Perturbations about higher states will oscillate for a little while (until they decay) according to the linear oscillation obtained by the linear perturbation theory. • The evolution of diﬀerent Gaussian shells indicates that any initial condition appears to decay as predicted in section 5.10, that is they scatter to inﬁnity and leave a “multiple” of the ground state.
80
Now consider spherical polar coordinates: x = r cos α sin θ.Chapter 7 The axisymmetric SN equations 7. In this chapter.2 The axisymmetric equations We consider the SN equations which in the general nondimensionalised case are: iψt = − 2 2 ψ + φψ.1b) φ = ψ2 . In particular we evolve the dipolelike state.the two regions of probability density attract each other and fall together leaving as in chapter 6 a multiple of the ground state. (7. In chapter 8. We then consider the timedependent problem. 7. we shall ﬁrst ﬁnd stationary solutions.2c) 81 .2a) (7. we consider the SN equations in Cartesian coordinates with a wavefunction independent of z. Then the wavefunction is a function of the polar coordinates r and θ but independent of the polar angle. z = r cos θ.2b) (7. y = r sin α sin θ. (7. For the axisymmetric problem.1a) (7.1 The problem There are two ways of adding another spatial dimension to the problems we are solving. This will give some stationary solutions with nonzero total angular momentum including a dipolelike solution that appears to minimise the energy among wave functions that are odd in z. which turns out to be nonlinearly unstable . we consider solving the SN equations for an axisymmetric system in 3 dimensions.
and to satisfy the normalisation condition we require that ψ → 0 solutions in the spherically symmetric case are still solutions. r ∂r2 r sinθ ∂θ 1 ∂ 2 (rφ) 1 ∂(sinθ ∂V ) ∂θ + 2 .5a) (7. π] and that u(r. We know that ψθ = 0 at θ = 0 and θ = π since otherwise we would have a singularity in the Laplacian. ∞) and α ∈ [0.3) so in the case where ψ is only dependent on θ and r the SN equations become: iψt = − ψ2 = ∂ψ 1 ∂ 2 (rψ) 1 ∂(sinθ ∂θ ) + 2 + φψ. 82 . r sinθ (7. θ ∈ [0. r2 sinθ 1 = φrr + 2 (sinθφθ )θ . So we have that u(0. 1. (1 + r) 2. We note that these boundary conditions are spherically symmetric so that the ψ + φψ. (7. r ∂r2 r sinθ ∂θ (7.5b) For the boundary conditions we still want ψ to be ﬁnite at the origin so that the waveas r → ∞. 2π) . (7.3 Finding axisymmetric stationary solutions To ﬁnd axisymmetric solutions of the stationary SN equations we proceed to adapt Jones et al [3] as follows. π].7b) (sinθφθ )θ . The Laplacian becomes: 2 ψ= ∂ 2ψ 1 ∂(sinθ ∂θ ) 1 1 ∂ 2 (rψ) + 2 + 2 2 . The stationary SN equations are in the general nondimensionalised case: En ψ = − 2 2 as r → ∞.6a) (7. r ∂r2 r sinθ ∂θ r sin θ ∂α2 ∂ψ (7. θ) = 0 for all θ ∈ [0. so the axially symmetric stationary SN equation are: En u = −urr + u2 r = φrr + 1 r2 sinθ 1 r2 sinθ (sinθuθ )θ + φu. Using the potential φ we now solve the timeindependent Schr¨dinger equation that o is the eigenvalue problem 2ψ − φψ = −En ψ with ψ = 0 on the boundary.4a) (7.6b) φ = ψ2 .7a) (7. 7. θ) → 0 for all θ function is wellbehaved. Take as an initial guess for the potential φ = −1 .4b) Now setting u = rψ the equations become: iut = −urr + u2 r 1 (sinθuθ )θ + φu.where r ∈ [0.
that is a solution 1 of Schr¨dinger equation with a potential. Let the new potential be the one obtained from step 4 above but symmetrise potential π since we require that φ should be symmetric around the α = so that numerical errors 2 do not cause the wavefunction to move along the axis (the solution we want should have its centre of gravity at the centre of our grid. Here E is : E= and J 2 is : 2 ¯ 1 ∂(sinθ ∂θ ) + 1 ∂ ψ ]d3 x. We also note in the case of the Schr¨dinger equation with a potential that each o r stationary state in the axially symmetric case has diﬀerent values of constants E and J 2 .8) (7. ψ[ J = sinθ ∂θ sin2 θ ∂α2 R3 2 ∂ψ R3 ¯ −ψ 2 ψ + φψ2 d3 x. We can use this in our selection procedure in step 3 to get started. see below.2 the ﬁrst few stationary states of the axially symmetric solution of the SN equations called axp1 to axp8 together with energy and angular momentum. Now provided that the new potential does not diﬀer from the previous potential by a ﬁxed tolerance in the norm of the diﬀerence. In this case the number of zeros in the radial o r direction and the axial directions follow a characteristic pattern. (This needs care. 6.3. because the solutions are separable.9) We use a combination of the above properties to select solutions at step 3.4 Axisymmetric stationary solutions We present in table 7. Select an eigenfunction in such a way that the procedure will converge to a stationary state. In step 2 the method we use to solve the eigenvalue problem involves Chebyshev diﬀerentiation in two directions that is in r and θ. For 83 . stop else continue from step 2. These values do not change much under iteration of the stationary state problem and we can use this to obtain the next solution with values of E.) 4. Once the eigenfunction has been selected calculate the potential due to that eigenfunction. (7.) Small errors in symmetry will cause the solution to move along the zaxis and oﬀ the grid. In the case of the Hydrogen atom we know explicitly the wavefunction. 5. 7. but as the iteration process continues the pattern of zeros wont be ﬁxed and we may need another 1 method. J 2 which diﬀer only by a small amount.
9E6 name axp1 axp2 axp3 axp4 axp7 axp8 axp5 axp6 Table 7.1592 0.1. 7.2.11. has vanishing J 2 . 7.8.0115 J zero 5.155 3.01252610801692 0. 7.0162 0. but we are not able to ﬁnd a simple classiﬁcation based on. 7.1178 5.0292 0.2: Table for E and J 2 of the axially symmetric SN equations 7.0358 0.1.6. 7.00674732963038 0. Number of state 0 1 2 3 4 Energy Eigenvalue 0. It was previously found in [22] and has nonzero J 2 .3 is a dipole state. 7.14. We use the same method as in section 5. 7.16276929132192 0. say. Axp3 and axp6 are the second and third spherical states.3548 17. and minimises the energy among odd functions. in ﬁgures 7.1 is the ground state which is from table 7.0263 0.16.9.2.00420903256689 Table 7. 7. 7.12.13. 7.1853 0. 7. The others are higher multipoles. Axp2 in ﬁgure 7.002 2.2. Axp1 in ﬁgure 7.3.4.5. 84 . We plot the graphs the stationary states that appear in table 7.10.comparison we give the energy of the ﬁrst few spherically symmetric stationary states in table 7.4). It is odd as a function of z. 7. energy and angular momentum as there would be for the states of the Hydrogen atom.5 Timedependent solutions Now we consider the evolution of the axisymmetric timedependent SN equations (7. 7.2053 1.03079656067054 0.7.7. 7. but instead of solving the Schr¨dinger equation o and the Poisson equation in one space dimension we now need to solve them in two space dimensions.1: The ﬁrst few eigenvalues of the spherically symmetric SN equations Energy 0.0208 0.15 and the contour plots of these stationary states in ﬁgures 7.0599 0.
01 0 60 40 20 0 R −60 −40 −20 z 0 20 40 60 Figure 7.2: Contour plot of the ground state.09 0.02 0.05 ψ 0. axp1 85 .07 0.03 0. axp1 Contour plot 50 40 R 30 20 10 0 −50 −40 −30 −20 −10 0 z 10 20 30 40 50 Figure 7.06 0.1: Ground state.08 0.04 0.0.
axp2 Contour Plot 90 80 70 60 50 40 30 20 10 0 −80 −60 −40 −20 0 20 40 60 80 Figure 7.02 −0.02 0.03 100 80 60 40 20 R 0 −80 −60 −40 z −20 0 20 40 60 100 80 −100 Figure 7.0.03 0.4: Contour Plot of the dipole.01 0 ψ −0.3: “Dipole” state next state after ground state .01 −0. axp2 86 .
axp3 87 .6: Contour Plot of the second spherically symmetric state.01 0. axp3 Contour plot 100 80 60 R 40 20 0 −100 −80 −60 −40 −20 0 z 20 40 60 80 100 Figure 7.0.025 0.015 ψ 0.03 0.01 120 100 80 60 40 20 R 0 −150 −100 −50 z 0 50 100 150 Figure 7.005 0 −0.5: Second spherically symmetric state.005 −0.02 0.
01 ψ 0 −0.7: Not quite the 2nd spherically symmetric state .8: Contour plot of the not quite 2nd spherically symmetric state.01 200 150 100 50 R 0 −200 −100 z 100 0 200 Figure 7.02 0. axp4 88 .axp4 contour plot 150 100 R 50 0 −150 −100 −50 0 z 50 100 150 Figure 7.0.
0162. axp5 contour plot 150 100 R 50 0 −150 −100 −50 0 z 50 100 150 Figure 7. axp5 89 .0162.9: Not quite the 3rd spherically symmetric state E = −0.10: Contour plot of Not quite the 3rd spherically symmetric state E = −0.x 10 10 8 6 4 −3 ψ 2 0 −2 −4 200 100 0 −100 −200 z 200 150 R 100 0 50 Figure 7.
axp6 contour plot 150 100 R 50 0 −150 −100 −50 0 z 50 100 150 Figure 7.axp6 90 .x 10 10 8 6 4 −3 ψ 2 0 −2 −4 200 150 100 50 R 0 −200 −100 z 100 0 200 Figure 7.11: 3rd spherically symmetric sate .12: Contour plot of 3rd spherically symmetric state .
005 −0.0.13: Axially symmetric state with E = −0.01 −0.015 0. axp7 Contour Plot 90 80 70 60 50 40 30 20 10 0 R −80 −60 −40 −20 0 z 20 40 60 80 Figure 7.005 0 ψ −0.015 100 80 60 40 20 R 0 −100 −80 −60 −40 z −20 0 20 40 60 80 100 Figure 7.14: Contour plot of axp7 .double dipole 91 .01 0.0263.
1178 .axp8 Contour plot 180 160 140 120 100 80 60 40 20 0 R −150 −100 −50 0 z 50 100 150 Figure 7.axp8 92 .x 10 15 −3 10 5 ψ 0 0 −5 200 50 150 100 100 50 0 −50 150 −100 z −150 −200 200 R Figure 7.16: Contour plot of the state with E = −0.0208 and J = 3.15: State with E = −0.0208 and J = 3.1178 .
j −L2 φn+2 + ρφn+2 = ρφn+1 + L1 φn+1 + fi. We note that the last operator is not just a function of one variable. and which acts on u. (7. We can now evolve the SN equations using the above method starting with the dipole (ﬁrst axially symmetric state) and evolving with the same sponge factors as that used in Chapter 6. we use a the solution of the equation and deﬁne the iteration to be: −L1 φn+1 + ρφn+1 = ρφn + L2 φn + fi. Let φ0 = 0 be an initial guess for i.j (7.j i.12b) (7. where fi.j i.12c) Now we also need a similar way to solve the Poisson equation for the potential.j i.12a) (7.j iteration until it converges.j 2 is the density distribution. We ﬁnd that again the state behaves like that of a stationary state that is evoling which constant phase. Let ui.j = fi.j . w = rV then the discretized equation for the potential is: −L1 wi.10) which is the r derivative part of the equation which acting on u = rψ. The ADI scheme is: o (1 − iL1 )S = (1 + iL2 )U n .11) which is the θ derivative part of the equation. This leaves the remaining operator to be: L2 = ∂ 1 ∂2 [ 2 + cot(θ) ].j − L2 wi. It decays in the same way as in the 93 PeacemanRachford ADI iteration [1] on the equation. r ∂r2 (7. Rather than solving this equation as it 2 ri stands (that is N 2 × N 2 matrix problem where N is the number of grid points). (1 + iφn+1 )U n+1 = (1 − iφn )T.j . i.We want to use ADI (alternating direction implicit) method to solve the Schr¨dinger o equation.j i. so we need to split the Laplacian into two parts dependent on each coordinate.j = (7. which is enough to use a ADI(LOD) method to solve the Schr¨dinger equation as used by Guenther [10].13) (1 − iL2 )T = (1 + iL1 )S.14b) where ρ is small factor chosen such that the iteration converges.j i.j . but the important part is that it is diﬀerential operators in one variable. before it decays.14a) (7. i.j i. We continue with the . r ∂θ ∂θ (7. We take the operator for the r direction to be: L1 = 1 ∂2 .
17: Evolution of the dipole spherically symmetric states emitting scatter oﬀ the grid and leaving a “multiple” of the ground state behind.t=0 t = 1400 0. N 2 = 25 instead of N = 25. In ﬁgure 7.04 ψ ψ 100 0 0 −100 z 0.125.06 0. We consider three diﬀerent time steps for evolution of the dipole.02 0. 1. using Richardson fraction in time step.5 in ﬁgure 7. 0.02 0 100 50 R 0 −100 z 0 100 t = 2000 t = 2600 0. 94 . 0. One diﬀerence here is that the angular momentum is lost with the wave scattering oﬀ the grid.25.2 and if it is linear it should be 0.01 0. where dt = 0.5.02 0 100 50 R 0 −100 z 0 0 100 50 R 100 Figure 7. Here we note that the convergence is linear in time. We can also check how the method converges with decreasing the mesh size. We note that the state still decays into a “multiple” of the ground state. In this we can also check the method for convergence. In ﬁgure 7.18 the average Richardson fraction.01 0 100 50 R 0 −100 z 0 ψ 100 0. We plot in ﬁgure 7. 0. Where N is the number of Chebyshev poins in the r direction and N 2 is the number of point in the θ direction. We also plot the Richardson fraction in the case of dt = 0. to leave a “multiple” of the ground state. N 2 = 20.17 we plot the result of the evolution.04 ψ 0.25.19 and note that again this shows that the convergence is linear in time.20 we plot the evolution of the dipole with diﬀerent mesh size N = 30.02 0.33. which if the convergence is quadratic should be 0.03 0.
33 0.33 Richardson fraction 0.335 0.305 0 500 1000 1500 2000 time 2500 3000 3500 4000 Figure 7. 0.27 0.32 0.34 0.18: Average Richardson fraction with dt = 0.25 0 500 1000 1500 2000 time 2500 3000 3500 4000 Figure 7.31 0.3 0.25.35 0.29 0.31 0.5 95 .315 0.325 0.25. 1 0.32 Richardson fraction Richardson fraction 0.0.125.28 0.19: Average Richardson fraction with dt = 0.26 0.34 Richardson fraction 0. 0.5. 0.
02 0.02 0.01 0 100 50 R 0 −100 z 0 100 Figure 7.02 ψ 0.02 ψ 0.20: Evolution of the dipole with diﬀerent mesh size 96 .01 0 100 50 R 0 −100 z 0 ψ 100 0.01 0 100 50 R 0 −100 z 0 100 t = 2000 t = 2600 0.01 0 100 50 R 0 −100 z 0 ψ 100 0.t=0 t = 1400 0.
That is.2a) (8. o To model the Schr¨dinger equation on this region.1) with the boundary conditions of zero at the edge of a square. We can use a CrankNicholson method here.Chapter 8 The TwoDimensional SN equations 8. So the nondimensionalized Schr¨dingerNewton equations then in 2D are : o iψt = −ψxx − ψyy + φψ (8.1 The problem In this chapter.2 The equations Consider the 2dimensional case. as in the case of the one o dimension need a numerical scheme that will preserve the normalisation as it evolves.7. These rigidly rotating solutions are unstable. and some solutions which are like rigidly rotating dipoles.2b) φxx + φyy = ψ2 To evolve the full nonlinear case we require to use the same method as section 5. however. instead of on the sphere. and will merge. ∂x2 ∂y (8. we consider the SN equations in a plane. we o require a method to solve the Schr¨dinger equation in 2D as well as the Poisson equation. the case where the Laplacian is just 2 ψ=( ∂2 ∂2 + 2 )ψ. that is. that is in Cartesian coordinates x. we still. we need a method to solve the Schr¨dinger equation and Poisson equation. 8. We shall ﬁnd a dipolelike stationary solution. o 97 . y. radiating angular momentum. Therefore we use an ADI (alternating direction implicit) method to solve the Schr¨dinger equation. that is. but we need to modify this since the matrices become large and no longer tridiagonal in the case of ﬁnite diﬀerences.
we choose sponge factor to be e = min[1. (8.This works when V = 0. We proceed with the iteration until 8.5b) where ρ is a small number chosen to get convergence.j i. (8.1. we use a PeacemanRachford ADI iteration [1] on the and then we deﬁne the iteration to be.3b) (8. We the same scheme as used in Guenther [10] to intoduce the potential so we have the scheme: (1 − iD2x )S = (1 + iD2y )U n (1 + iV n+1 )U n+1 = (1 − iV n )T (1 − iD2y )T = (1 + iD2x )S (8. 98 .3a) (8.j i. i.3c) The last equation (8.j i. and D2 is the denotes a Chebyshev diﬀerentiation matrix or a ﬁnitediﬀerence method for approximating the derivative in the direction indicated.3 Sponge factors on a square grid Here we consider sponge factors on the square grid in two dimensions.j −D2y φn+2 + ρφn+2 = ρφn+1 + D2x φn+1 + fi.j where fi.j = ψ2 .6) since we do not want any corners in the sponge factors. φn+2 − φn  is less than some tolerance. −D2x φn+1 + ρφn+1 = ρφn + D2y φn + fi.j i.5a) (8. Rather than solving this equation as it stands (that is. The ADI method approximates CrankNicolson to the second order. We plot a graph of this sponge factor in ﬁgure 8.j it converges.j i.5( √ (x2 +y 2 )−20) ]. by the subscript. The discretised version of the Possion equation we want to solve is: −D2x φi. Let φ0 = 0 be an initial guess for the solution of the equation i. that is.j − D2y φi. i.j (8.3c) deals with the potential.j i. N 2 × N 2 matrix problem where equation as described below.4) N is the number of grid points).j = fi.j .j . We also need a quicker way to solve the Possion equation in 2D. e0. We can introduce to deal with a nonzero potential an extra term in the scheme.
approximately the ground state (see ﬁgure 8.01((x−a) 2 +(y−b)2 )) eivy .3.2.6).2 0 30 20 10 0 −10 x −20 −30 30 20 10 0 −10 −20 −30 y Figure 8.5).3 and 8.1: Sponge factor Again to test the sponge factors we send an exponential lump oﬀ the grid. We see that the dipolelike state evolves for a while like a stationary state. (8. We plot in ﬁgures 8. 8.4 0. that is rotating at constant phase.sponge factor for square grid 1 0.2 to evolve this stationary state with sponge factors. that is.7) a wavefunction centred initially at x = a and y = b and moving in the y direction with velocity v. 8. a state with higher energy than the ground state. where the initial function is of the form: ψ = e−0. We note that most of the lump is absorbed.4 Evolution of dipolelike state As in chapter 6 and chapter 7 we can consider now the evolution of a higher state.4 the lump moving oﬀ the grid as a test of the sponge factor.8 0. and then it decays and becomes just one lump. In this case we ﬁnd that the ﬁrst stationary state after the ground state has two lumps like that of the dipole (see ﬁgure 8. 99 .6 0. We can ﬁnd such stationary states by using the same method as for axially symmetric state section 7. but modiﬁed to the 2D case by changing the diﬀerential operators. Now we can use the method of section 8.
02 0 20 0 −20 −20 0 20 0.04 0. N = 30 and dt = 2 100 .3: Lump moving oﬀ the grid with v = 2.06 0.04 0.04 0.02 0 20 0 −20 −20 0 20 Figure 8.02 0 20 0 −20 −20 0 20 0.06 0.06 0.06 0.02 0 20 0 −20 −20 0 20 t = 24 t = 36 0.02 0 20 0 −20 −20 0 20 t = 72 t = 84 0. N = 30 and dt = 2 t = 48 t = 60 0.2: Lump moving oﬀ the grid with v = 2.06 0.02 0 20 0 −20 −20 0 20 Figure 8.04 0.t=0 t = 12 0.04 0.02 0 20 0 −20 −20 0 20 0.06 0.04 0.06 0.04 0.02 0 20 0 −20 −20 0 20 0.06 0.04 0.
02 0 −0.06 0.04 0.02 0 20 0 −20 −20 0 20 t = 120 t = 132 0.06 0.04 0.04 0.02 −0.02 0 20 0 −20 −20 0 20 0.04 0.4: Lump moving oﬀ the grid with v = 2.06 0.02 0 20 0 −20 −20 0 20 0.06 0. N = 30 and dt = 2 0.08 0.5: Stationary state for the 2dim SN equations 101 .04 0.06 0.t = 96 t = 108 0.08 40 20 0 −20 y −40 −40 −30 −20 −10 x 0 10 20 30 40 ψ Figure 8.06 −0.02 0 20 0 −20 −20 0 20 Figure 8.04 −0.
05 ψ 0 40 ψ 20 0 −20 −40 y −40 −20 x 0 20 40 0 40 20 0 −20 −40 y −40 −20 x 0 20 40 t = 4800 0. θ + ωt.1 0. In Cartesian coordinates (8. This is redeﬁning the stationary state so that rotating solution are considered. y.8) where ω is a real scalar and r and θ are the polar coordinates.1 0.10b) 102 .8) becomes: ψ(x. 0). 0). t) = e−iEt ψ(r.6: Evolution of a stationary state 8.05 0. (8.05 ψ 0 40 20 0 −20 −40 y −40 −20 x 0 20 40 0 40 20 0 −20 −40 −40 −20 0 20 40 Figure 8. θ. Now we let.9) (8. which satisﬁes: ψ(r. Y = −x sin(ωt) + y cos(ωt).1 0. (8. We could instead try to seek a solution. −x sin(ωt) + y cos(ωt). t) = e−iEt ψ(x cos(ωt) + y sin(ωt).5 Spinning Solution of the twodimensional equations To get a spinning solution we could try setting up two ground states or lumps distance apart and set them moving around each other by choosing the initial velocities of the lumps.1 0.05 0. X = x cos(ωt) + y sin(ωt).10a) (8.t=0 t = 4400 0.
103 . 0) is no longer going to be of just one phase.7 and the imaginary part in ﬁgure 8.12a) (8. Y. We can modify the methods used to calculate the stationary state solution in the axially symmetric case (section 7.11) where ψX denotes partial diﬀerentiation of ψ in the ﬁrst variable and ψY in the second variable. to calculate the numerical solution of (8. We note that = Y and = −X. Note that ψ is a solution is a function of r then [Y ψx − Xψy ] vanishes and we get the ordinary stationary states in that case regardless of what value ω is. (8. does not vanish.6 Conclusion For the 2dimensional SN equations we can conclude that: • They behave as the spherically symmetric and axisymmetric equations that is the ground state is stable and the higher states are unstable.12b) ¯ we note that these equations are dependent on X and Y only.2. Y.dX dY which is a rotation of ωt. ∂t (8. In ﬁgure 8. 0) + iωY ψX (X. So substituting (8. • Spinning solutions do exist and they decay like the other solutions to a “multiple” of the ground state.10 we plot the section of the evolution showing that the state does decay. with the initial data being a spinning solution. 0)].9 we plot the evolution of the state before it decays showing that the state does spin about x = 0 and y = 0.12) at a ﬁxed ω. which corresponds to a solution rotating in the other direction. 0) − iωXψY (X.3). If ψ 8. = − 2 ψ + φψ.005 which we plot the real part in ﬁgure 8. We note that ψ(X. Y. emitting scatter oﬀ the grid and leaving a “multiple” of the ground state. dt dt ∂ψ becomes: Then i ∂t i ∂ψ = e−iEt [Eψ(X.9) into 2dimensional SN equations we have: Eψ + iωY ψX − iωXψY 2 φ = ψ2 .8. We can use the timedependent evolution method as in section 8. Y. hence a solution that will spin? We can ﬁnd such a solution with ω = 0. Do nontrivial solutions of these equations exist that is a solution such that [Y ψx −Xψy ] with −ω instead of ω. In ﬁgure 8.
04 −0.005 0.08 0.01 imag(ψ) 0 −0.02 −0.02 0.spinning solution ω =0.04 0.02 real(ψ) 0 −0.01 −0.03 20 10 0 −10 y −20 −20 −10 x 10 0 20 Figure 8.7: Real part of spinning solution spinning solution ω =0.005 0.8: Imaginary part of spinning solution 104 .03 0.02 −0.06 −0.06 0.08 20 10 0 −10 y −20 −20 −10 x 10 0 20 Figure 8.
06 0.06 ψ 0.06 0.04 0.06 ψ 0.02 0 40 20 40 ψ 0.04 0.t=0 t = 200 0.10: Evolution of spinning solution 105 .04 0.02 0 40 20 40 0 −20 −40 y −40 −20 x 0 20 0 −20 −40 y −40 −20 x 0 20 Figure 8.04 0.02 0 40 ψ 20 0.02 0 40 ψ 20 0.02 0 40 0 −20 −40 y −40 t = 400 −20 x 0 20 40 20 0 −20 −40 y −40 t = 600 −20 x 0 20 40 0.02 0 40 20 40 0 −20 −40 y −40 −20 x 0 20 0 −20 −40 y −40 −20 x 0 20 t = 7600 t = 8800 0.06 0.08 0.9: Evolution of spinning solution t = 5200 t = 6400 0.04 0.04 0.08 0.04 0.06 0.08 0.08 0.02 0 40 20 40 ψ 0.06 ψ 0.04 0.06 ψ 0.02 0 40 0 −20 −40 y −40 −20 x 0 20 40 20 0 −20 −40 y −40 −20 x 0 20 40 Figure 8.
Lange.Jones. (Draft) [5] J. “A numerical study of the time dependent Schr¨dinger equation coupled o with Newtonian gravity” Doctor of Philosophy thesis for University of Texas at Austin (1995).H. D56 (1997) 4844. Phys.F. P.H.Toomire.Schwartz. B.Zweifel. Exactly soluble sector of quantum gravity Phys.R.Soler.Lett 105A (1984) 199202 [7] L.Giladi. Gravitation and the quantummechanical localization of macroobjects.Rev.F.Goldbery. Timedependent dissipation in nonlinear Schr¨dinger o systems 36 (1995) 12741283 Journal of Mathematical Physics.Diosi. Dynamics of the SelfGravitating BoseEinstein Condensate.Christian. Numerical Methods for Partial Diﬀerential Equations 2nd ed (1977) [2] E. Quantum Communications and Measurement. Eigenstate of the Gravitational Schr¨dinger o Equation.Schey.F. Modern Physics Letters A13 (1998) 23272336 [4] D.R.C. An overview of Schr¨dingerPoisson Problems. Results and ByProducts.Jones.Bibliography [1] W.Phys 35 3 (1967) 177186 [10] R.W. Plenum (1995) 245250 [8] L. Computergenerated motion pictures of onedimensional quantum mechanical transmission and reﬂection phenomena Am. B.W. 106 . [11] H.Math. K.Bernstein. [6] L.Diosi. Reo ports on Mathematical Physics 36 (1995) 331345 [12] H.Ames. H. Permanent State Reduction: Motivations.Guenther.12 (1999) 16 [3] D.J. J.Arriola. P.Evans.Zweifel. “Partial diﬀerential equations” AMS Graduate Studies in Mathematics 19 AMS (1998) [9] A. J.R.Lett. K.Toomire.Lange.Bernstein. Asymptotic behaviour for the 3D Schr¨dingerPoisson system in o the attractive case with positive energy App. E.
Math. Is there a gravitational collapse of the wavepacket ? (preprint) [24] H. Numerical Solution of Partial Diﬀerential Equations (1994) [17] R.Sci. Systems of SelfGravitating Particles in General Relativity and the concept of an Equation of State.(Oxford Lecture Course) 107 . B. Academic Press 1972.P.Zwiefel. Phys. 15 (1998) 27332742 [16] K.Ruﬃni. D.Flannery Numerical Recipes in Fortran (second edition) [21] R. [20] W.Soc.Penrose. H. 28 (1996) 581600 [18] R.Trefethen. entanglement and state reduction Phil. Lectures on Spectral Methods.Tod.Powers.F.P. [27] L. [28] L.P.Penrose.Penrose. o Nonlinearity 12 (1999) 20116 [15] I. K.Stephani.Trefethen. Boundary Value Problems. Class Quantum Grav.[13] R.Illner.17 (1994) 349376 [14] I. “Diﬀerential equations: their solution using symmetries” Cambridge: CUP (1989) [25] K.P. R.Lett.Moroz.F.R.M.Moroz. uniqueness and asymptotic behaviour of solutions of the WignerPoisson and Sch¨dingerPoisson systems. P. Physics Letters B 366 (1996) 8588 [23] H. Sphericallysymmetric solutions of the Schr¨dingero Newton equations.Vetterting.Tod. Global existence. On gravity’s role in quantum state reduction. S.W.187 (1969) 1767 [22] B.M.Lange.Rev. Gen.J.N. An axiallysymmetric Newtonian boson star.Morton. Spectral Methods in MATLAB. van der Bij. SIAM 2000.Schmidt.Tod.A o 280 (2001) 173176 [26] Private correspondence with K.Press.Grav. The ground state energy of the Schr¨dingerNewton equations Phys.Bonazzola. An Analytical Approach to the Schr¨dingerNewton equations.Tod. (Lond) A 356 (1998) 1927 [19] D.Mayers. W.Meth in o App. K.Trans.Schupp. Quantum computation.Rel.L. S.N. J.Teukolsky.
METHOD .XE.Y) dY/dt = FN(X. dimension(32*4) :: W Double Precision :: RATIO. I. dimension(4) :: Y.YP.HSTART Double Precision :: X02AJF.INT2 Double Precision :: X.Y1P.STEP2.XEND.STEP.DR2. LENWRK CHARACTER*1 :: TASK Logical :: ERRAS 108 .YS Double precision. dimension(4) :: PY.Appendix A Fortran programs A.XBLOWP INTEGER :: DR.X02AMF Double Precision :: Y1.1 ! ! ! ! ! ! ! ! ! ! ! Program to calculate the bound states for the SN equations integrates the spherically symmetric SchrodingerNewton equations Integrates the system of 4 coupled equations dX/dt = FN(X.INT1.YMAX.INSIZE.GOUT.IFAIL.Y) using the NAG library routine D02PVF D02PCF using the NAG library routines X02AJF XO2AMF Program searchs for bound states and the values of r for which they blow up Program SN Implicit None External D02PVF External D02PCF External OUTPUT External X02AJF External X02AMF External FCN External COUNT Double precision.ENCO2 INTEGER :: N.ENCO.PY2 Double Precision.XBLOW.DOUT.THRES.TOL.
! compute a few things ! N = 4 INSIZE = 0.01 Y1P = 1.2 X = 0.0000001 XEND = 2000 XE = X TOL = 100000000000*X02AJF() THRES(1:4) = Sqrt(X02AMF()) METHOD = 2 TASK =’U’ ERRAS = .FALSE. HSTART = 0.0 LENWRK = 32*4 IFAIL = 0 GOUT = 0 ! Ask for user input . ! to find root below given value ! Y(3) = 1.0 Y(2) = 0.0 Y(4) = 0.0 ! Ask for upper value Print*, " TOL ", TOL Print*, " Enter upper value " Y1 = 1.09 Y(1) = Y1 PY(1:4) = Y(1:4) PY2(1:4) = Y(1:4) ! Step size STEP = 0.0005 OPEN(Unit=’10’,file=’fort.10’,Status=’new’) OPEN(Unit=’11’,file=’fort.11’,Status=’new’) DO WHILE ( GOUT == 0 ) DR = 0 ! ! D02PVF is initially called to give initial starting values to ! subroutine , ! One has too determine the initial direction in which our ! first value blows up Step2 = INSIZE ! Initial trial step in Y(1) DR2 = 1 ! direction of advance in stepping ! DR is direction of blow up CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL) DO WHILE (DR == 0) XE = XE + STEP PY2(1:4) = PY(1:4) PY(1:4) = Y(1:4) 109
IF (XE < XEND) THEN CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL) END IF IF (XE > 3*STEP) THEN RATIO = (XE2*STEP)*PY2(1)/((XESTEP)*PY(1)) RATIO = RATIO  (XESTEP)*PY(1)/(XE*Y(1)) IF ((RATIO*RATIO) < 1E15) THEN RATIO = (XESTEP)*PY(1)/(XE*Y(1)) IF (RATIO < 1) THEN IF (RATIO > 1) THEN IF (Y(1) > 0) THEN DR = 1 END IF IF (Y(1) < 0) THEN DR =  1 END IF END IF END IF END IF END IF IF (Y(1) > 2) THEN DR = 1 ENDIF IF (Y(1) < 2) THEN DR = 1 ENDIF IF (XE > XEND) THEN DR = 10 ENDIF ENDDO ! Do while is ended ... XBLOWP = 0 XBLOW = 0 DO WHILE (DR /= 10) Y1 = Y1STEP2*REAL(DR2) !Print*, "Y(1) ", Y1 Y(3) = 1.0 Y(2) = 0.0 Y(1) = Y1 Y(4) = 0.0 XBLOWP = XBLOW XBLOW = 0 ENCO = 0 ENCO2 = 0 DOUT = 0 X = 0.0000001 HSTART = 0.0 XE = X 110
CALL D02PVF(N,X,Y,XEND,TOL,THRES,METHOD,TASK,ERRAS,HSTART,W,LENWRK,IFAIL) DO WHILE (DOUT == 0) XE = XE + STEP YS = Y PY2(1:4) = PY(1:4) PY(1:4) = Y(1:4) IF (XE < XEND) THEN CALL D02PCF(FCN,XE,X,Y,YP,YMAX,W,IFAIL) END IF CALL COUNT(YS,Y,ENCO,ENCO2) IF (Y(1) < 0.0001) THEN IF (Y(1) > 0.0001) THEN XBLOW = XE ENDIF ENDIF IF (XE > 3*STEP) THEN RATIO = (XE2*STEP)*PY2(1)/((XESTEP)*PY(1)) RATIO = RATIO  (XESTEP)*PY(1)/(XE*Y(1)) IF ((RATIO*RATIO) < 1E15) THEN RATIO = (XESTEP)*PY(1)/(XE*Y(1)) IF (RATIO < 1) THEN IF (RATIO > 1) THEN IF (Y(1) > 0) THEN DOUT = 1 !Print*, "eup", XBLOW IF (DR /= 1) THEN STEP2 = STEP2/10 DR = 1 DR2 = DR2*(1) END IF END IF IF (Y(1) < 0) THEN DOUT = 1 !Print*, "edown", XBLOW !Print*, "zeros", ENCO IF (DR /= 1) THEN STEP2 = STEP2/10 DR = 1 DR2 = DR2*(1) END IF END IF END IF END IF END IF END IF IF (Y(1) > 1.5) THEN DOUT = 1 !Print*, "up", XBLOW 111
5) THEN DOUT = 1 !Print*.*).XBLOW !Print*.0000001 HSTART = 0.XBLOW. "down " . ENCO Print*.0 Y(1) = Y1 Y(4) = 0.Y1 .ENCO ! Now consider the intergation to get A / B^2 !! Y(3) = 1. " bound state ".Y1 PRINT*. XBLOW Write(10.0 Y(2) = 0.0 XE = X INT1 = 1 INT2 = 0 112 . ENCO IF (DR /= 1) THEN STEP2 = STEP2/10 DR = 1 DR2 = DR2*(1) ENDIF ENDIF IF (XE > XEND) THEN DR = 10 DOUT = 1 ENDIF IF (STEP2+Y1 == Y1 ) THEN DR = 10 DOUT = 1 ENDIF ENDDO ENDDO ! obtained value for bound state now write it in file ! And look for another Print*. "zeros ".0 XBLOWP = XBLOW XBLOW = 0 ENCO = 0 ENCO2 = 0 DOUT = 0 X = 0. " Blow up value is ". " Number of zeros is ".IF (DR /= 1) THEN STEP2 = STEP2/10 DR = 1 DR2 = DR2*(1) ENDIF ENDIF IF (Y(1) < 1.
1) THEN Print*.ERRAS.W.1 END IF END IF END IF END IF END IF IF (Y(1) > 2) THEN DR = 1 ENDIF IF (Y(1) < 2) THEN DR = 1 ENDIF IF (XE > XEND) THEN DR = 10 ENDIF ENDDO Print*.IFAIL) DO WHILE (XE < XBLOWP) XE = XE + STEP PY2(1:4) = PY(1:4) PY(1:4) = Y(1:4) IF (XE < XEND) THEN CALL D02PCF(FCN.HSTART.METHOD.IFAIL) END IF INT1 = INT1 .STEP*XE*Y(1)*Y(1) INT2 = INT2 + STEP*XE*XE*Y(1)*Y(1) IF (XE > 3*STEP) THEN RATIO = (XE2*STEP)*PY2(1)/((XESTEP)*PY(1)) RATIO = RATIO .XEND.(XESTEP)*PY(1)/(XE*Y(1)) IF ((RATIO*RATIO) < 1E15) THEN RATIO = (XESTEP)*PY(1)/(XE*Y(1)) IF (RATIO < 1) THEN IF (RATIO > .X.0000001 XE = X INSIZE = (Y1PY1)/10 Y1P = Y1 Y1 = Y1 .INT2 X = 0. INT2 Write(11.0. "A is : ".TOL. RATIO IF (Y(1) > 0) THEN DR = 1 END IF IF (Y(1) < 0) THEN DR = .*) INT1.0000001 Y(1) = Y1 Y(2) = 0.YP.Y.YMAX.0 113 .LENWRK.THRES. "B is : ".XE.CALL D02PVF(N.Y.W. INT1 Print*.X.TASK.
Ynew Integer :: Number.subroutine SUBROUTINE OUTPUT(Xsol. way If (Yold(1) > 0) THEN If (Ynew(1) < 0) THEN ! we have just reached a turing point way = 1 Number = Number + 1 ENDIF ENDIF 114 .0 Y(4) = 0. Xsol.D0*Y(2)/SY(3)*Y(1) YP(3) = Y(4) YP(4) =2.YP(4) YP(1) = Y(2) YP(2) =2.D0*Y(4)/SY(1)*Y(1) RETURN END ! ! OUTPUT .*).Number.YP) Implicit None Double Precision :: S.way) Implicit None Double Precision.y(1). DIMENSION(4) :: Yold.Y(3) = 1.Y(4).y) Implicit None Double Precision :: Xsol Double Precision. Xsol.Y.Ynew.0 ENDDO END PROGRAM SN ! ! FUNCTION EVALUATOR ! SUBROUTINE FCN(S.y(3) WRITE(2.y(3) END ! ! Subroutine Count to count the bumps in our curve SUBROUTINE COUNT(Yold.y(1). DIMENSION(4) :: Y PRINT*.
If (Yold(1) < 0) THEN If (Ynew(1) > 0) THEN way = 1 Number = Number + 1 ENDIF ENDIF END 115 .
r). d = real(log(yr*r/(yr2*(r+1)))).1).2) = c*exp(d*tail(i. % function [f.1).n).1).nb2(:. tail(i. % [D.r+1).nb2(:.x] = cheb(N). tail(i.1). vr = interp1(nb2(:.N.1) .1.2 Programs to calculate stationary state by Jones et al method function [f.1).r).1).1 B.Appendix B Matlab programs B. yr2 = interp1(nb2(:. f = (vr2*(r+1)vr*r) e = vr*r. % then work out values of the function for i = 1:b tail(i.1))/tail(i. s = exp(d*r)/r.nb2(1.n).r+1). %step = nb2(2.1) = nb2(1.1.r*f.2).2).1) + (i1)*step .3).1) + f.3).nb2(:. yr = interp1(nb2(:.nb2(:.1 Chapter 2 Programs Program for asymptotically extending the data of the bound state calculated by RungeKutta integration % To work out the asymptotic solution % % r the point to match to !! % the nb2 is the solution matching to at the moment !! % matching with exp(r)/r % 1/r % b = 5000.N.evalue] = cpffsn(L. c = yr/s.3) = e/tail(i. vr2 = interp1(nb2(:. 116 . end B.evalue] = cpffsn(L.
N.N.N).cx).N.N.x2 = (1+x)*(L/2).n. D2 = D^2/(dist^2).x). g = abs(ft(2:end1)).x).0]. dist = L/2.N).evalue] = cpfsn(L.2:N).N). nom = inprod(ft.x. pu = [0.evalue] = cpesn(L.newphi.x). f =fx./cx(2:end1).evalue] = CPESN(L.fx/sq./cx(1:N). %f = interp1(sq*x. % N is number of cheb points 117 .phi) % [f. while (res > 1E13) [ft.phi. %f = interp1(sq*x.fx/sq. function [f.evalue] = CPFSN(L. pu = D2\g. ft = ft/sqrt(nom).x.n. phi = 1.x) % % works out nth eigenvector of schrodinger equation with selfpotential % initial guess of 1/(r+1) with the length of the interval L . cx = (xp+1)*dist.ft.n.evalue] = cpfsn(L.f.phi).^2.cx. end fx = interp1(cx. res = max(abs(newphioldphi)) oldphi = newphi.pu. %sq2 = inprod2(f.N.x. res = 1. newphi = pu(1:N).cx). %sq = inprod2(fx.n.x.phi.fx. function [f.evalue] = cpesn(L.n./(1+x2). newphi = interp1(x.n. D2 = D2(2:N.phi) % % solve the bounds states of the SchrodingerNewton Equation % L length of interval % N number of cheb points % n the nth bound state % x the grid points wrt function is required % phi initial guess for potential % [D.x2.N.x) % [f. %sq = sq*sq2.xp] = cheb(N). newphi(N+1) = newphi(N). oldphi = newphi. [f.ft.
phi.eigs] = cpsn(L.x).eigs] = cpsn(L. for a = 1:n [e.N.N.N. E0 = 0*E. % %Matrices C1 and C2 % % Matrices C1 and C2 to solve % D2 + 1/rR = ER % 118 . ZE = zeros(N.x) % % potential of phi wrt x with usual boundary conditions. dist = (cx(N+1)cx(1))/2. cx(1) = 0.2:N).eigs] = cpsn(L.phi. evalue = e.x) % [V.% interpolates to x % [V. 0]. V(:. dist = (cx(N+1)cx(1))/2.ev. end % % working out the cheb difff matrix % D = cheb(N). function [V. D2 = D2W(2:N.phi. for i = 1:(N+1) cx(i) = (cos((i1)*pi/(N)))*dist+dist+cx(1). D2W = D^2/dist^2.i] = max(EE0). cx(N+1) = L. end E = diag(eigs). I = diag(ones(N1. E0(i) = NaN. end ev = [0.x).N).i). f = interp1(cx. for i = 1:(N+1) cx(i) = (cos((i1)*pi/(N)))*dist+dist+cx(1). cx(N+1) = L. % using cheb differential operators % % N = number of cheb points % L = length of interval % cx(1) = 0.1)).
/denom. D(ii+1.1).m .nb1(:.nb1(:. denom(j+1) = 1.cphi = interp1(x. %cheb. x=1. br1(i) = interp1(nb1(:. if j > 0 & j< N D(j+1.2. bv1(i) = interp1(nb1(:.phi. 119 . B.2). %Compute eigenvalues [V. x = cos(pi*ii/N).nb3(:.^(ii+j).2].bx1(i)). end denom = cj*(x(ii+1)x(j+1)). br3(i) = interp1(nb3(:.5*x(j+1)/(1x(j+1)^2).1). D = zeros(N+1.nb2(:. end.eigs] = eig(C1.j+1) = . D2 = D^2/dist^2.(N+1)*(N+1) chebyshev spectral differentiaion % Matrix via explicit formulas function [D.ones(N1.2). if j==0  j==N cj = 2.cx(2:N)).N+1). ii = (0:N)’.2). for i = 1:(N+1) bx1(i) = (cos((i1)*pi/(N)))*dist+dist+nb1(1.3).1))/2.x] = cheb(N) if N==0 D=0.N+1) = (2*N^2+1)/6.bx1(i)). end % working out the cheb difff matrix % D = cheb(N).1).1).bx1(i)).1).1 Chapter 4 Programs Program for solving the Schr¨dinger equation o % schrodinger equation with potential from the bound state % To create function of nb1 % with cheb point spaces % cos(i*pi/2k) % N = number of points clear bx1 bv1 br1 dist = (nb1(l.1). end end D(1.2 B. for j = 0:N.1) = (2*N^2+1)/6. %compute one column at a time cj = 1. D(N+1. C1 = D2 + diag(cphi).1)nb1(1. ci = [2.C2).*(1).bx1(i)).j+1) = ci. C2 = I. br2(i) = interp1(nb2(:. break.
V(1:N1.’r*’)./bx1(2:N)’) f = V(1:N1. f(2:N) = f(1:N1). int(1) title(’second eigenvector ’). grid on.V(1:N1.i2). f(1) = 0. grid.1.i2] = min(E) subplot(3. %Matrices C1 and C2 %for the schrodinger equation %D2 + V0 = e % C1 = D2 + V0. ratio = (V(1.3). f(1) = 0.1).2:N). 120 . D = D(1:N.2.1:N). [y.i] = min(E) subplot(3. I = diag(ones(N1./bx1(2:N)’) f = V(1:N1. ratio2 = ratio2^(1/3).2)./bx1(2:N)’.^2/int(1). plot(diag(eigs). int = dist*(D\f).br1(2:N)’*ratio. int = dist*(D\f).1)). % %Compute eigenvalues [V. C2 = I.1).D2 = D2(2:N.C2). f(2:N) = f(1:N1).^2. error2 = interp1(bx1(2:N). error = error2’ .ratio2*bx1(2:N)).i). [y2.i). E = diag(eigs). plot(bx1(2:N).i2). subplot(3. pause. int(1) title(’first eigenvector ’). hold off subplot(1.i). ratio2 = ratio.eigs] = eig(C1. V0 = diag(bv1(2:N)+Estate). plot(bx1(2:N).error) title(’error to nb1’) E(i) = max(E)+1.i). plot(bx1(2:N). grid on.2.2.V(1:N1.^2./bx1(2))/br1(2).
ratio2 = ratio. bv1(i) = interp1(nb1(:. ratio2= ratio.bx1(i)).i2).1)nb1(1.1)./bx1(2:N)’.V(1:N1.2.V(1:N1.2.nb2(:.6). 121 .error) title(’error to nb3’) B.2)*ratio.^2/int(1). plot(bx1(2:N). error = error2’ .ratio2*bx1(2:N)). int(1) title(’third eigenvector is:’).3).1).bx1(i)). error2 = interp1(nb3(:.i3).1).nb3(:. D2W = D^2/dist^2. ratio = (V(1. ratio2 = ratio2^(1/3)./bx1(2))/br3(2).2)*ratio.2 Program for solving the O( ) perturbation problem % first order perturbation % To create function of nb1 % with cheb point spaces % cos(i*pi/2k) % N = number of points clear bx1 bv1 br1 dist = (nb1(l.nb1(:. plot(bx1(2:N).1).V(1:N1. br1(i) = interp1(nb1(:.1).4). plot(bx1(2:N). subplot(3.i3). int = dist*(D\f).2).^2/int(1). grid on. D2 = D2W(2:N.2:N)./bx1(2:N)’) f = V(1:N1.2. f(1) = 0. ratio2 = ratio2^(1/3). [y2.1))/2.error) title(’error to nb2’) E(i2) = max(E)+1. error2 = interp1(nb2(:. error = error2’ . f(2:N) = f(1:N1).^2.ratio2*bx1(2:N)). for i = 1:(N+1) bx1(i) = (cos((i1)*pi/(N)))*dist+dist+nb1(1.i3)./bx1(2:N)’.i3).nb1(:.subplot(3. end % % working out the cheb difff matrix % D = cheb(N).i2).2.i3] = min(E) subplot(3.5)./bx1(2))/br2(2). ratio = (V(1.
ZE. 122 . g = V(N:2*N2. fg = f.C2).ZE. f = abs(V(1:N1.2.program to work out integral of % D = cheb(N).ZE. R0 = diag(br1(2:N)). plot(Enew(1:N). %Matrices C1 and C2 %for the S_N equations % subject to the equations % 2R0A + D2W = 0 % D2B + V0B = eA % D2A .ZE. f(2:N) = f(1:N1).ZE. f(1) = 0.3 Program for the calculation of (4.ZE.i)). B =0 W = 0 at endpoints % DA = 0 at infinity % DB = 0 at infinity C1 = [2*R0. ZE = zeros(N1.1)). j = sqrt(1). fg(1) =0.m . Enew = sort(E).^2.V0A + R0W = eB % with boundary conditions % A = 0 . B = dist*(D\g). A = dist*(D\f). grid title(’Eigenvalues of the perturbed SN equation ’) B.^2. D = D(1:N.*g. g(1) = 0.ZE]. %Compute eigenvalues [V.1:N). f = V(1:N1. int = dist*(D\fg).’r*’).D2.I. g(2:N) = g(1:N1). f = conj(f).i).N1).ZE.V0 = diag(bv1(2:N)). g = abs(V(N:2*N2.i).eigs] = eig(C1.10) A*B % abint.ZE.R0]. fg(2:N) = fg(1:N1).D2V0.I. hold off E = j*diag(eigs).i)). C2 = [ZE.D2+V0.ZE. I = diag(ones(N1.
5).yp] = ode45(F. y0 = [0 1 0 1 0 1]’. y0(6) = initial value for y0(6).4 Program for performing the RungeKutta integration on the O( ) perturbation problem % see the solution of the first order perturbation equations % intial data % global nb1.imag(yp(:. subplot(3.Xspan.2) plot(xp. % e is the eigenvalue ( e = i\lambda ) global e Yprime(1) = Y(2).3. X0 = 0.1) plot(xp.3 B. B.3) plot(xp.Y) % this is function for the first order pert equations % where V ./xp)) title(’A/r’) subplot(3.y0).1).Xfinal]. F = ’yfo’. y0(2) = initial value for y0(2).imag(yp(:. Yprime(6) = 2*R(X)*Y(3).1./xp)) title(’W/r’) function Yprime = yfo(X. e = eigenvalue.1 Chapter 6 Programs Evolution program %snevc2. global e.length of the interval.m 123 .1./xp)) title(’B/r’) subplot(3.3).1.000001. Yprime(4) = V(X)*Y(3)+e*Y(1)+R(X)*Y(5). Yprime(2) = V(X)*Y(1)+e*Y(3).2.ratio = sqrt(A(1))*sqrt(B(1))\abs(int(1)) B. Xspan = [X0. y0(4) = initial value for y0(4).imag(yp(:. Yprime(3) = Y(4). [xp. Yprime = Yprime’. Xfinal = L . R are the vaule of the potential and wave function % related to the zero th order perturbation. Yprime(5) = Y(6).
u = cpfsn(R.x] = cheb(N). clear Eu. v = 0. pu = D2\rho. fwe = u.for perturbation about ground state./x.^2(0. nu = dt.5*((1x)*R). % start of the iteration scheme for zi2 = 1:100 for zi = 1:1000 count = count +1.1). %+0. [D. u = uold. dt = 0. clear i. wave = rbrack*rsig*exp(0. fgp(count) = abs(u(1)). x = 0. pert = wavewave2. rsig = sqrt(sigma).5*R. D2 = D^2. clear norm.%0. R = 350. o = zeros(N1. phi = pu. dist = 0. D2 = D2(2:N.3. brack = 1/(sigma^2). av = a+v*t. clear fgp. % solve the potential equation % sponge factors e = 2*exp(0.%Iteration scheme for the schrodingernewton equations %using crank nicolson + cheb . count = 1.5*brack*(x+av). t = 0.01*pert. wave2= rbrack*rsig*exp(0. clear psi1.2*(xx(end))). rho = abs(u(1:end)). one = 1. sigma = 15.^2. %clears used data formats hold off N = 70.1). Eu(count) = abs(u(5)). %calculate stationary solution x = x(2:N).5./x(1:end). % alternative initial data ! uold = u.1.^2+(0. a = 50. % Modified 13/3/2001 . N2 = N. 124 . clear g.5*i*v*x)i*vs).5*brack*(xav). u = u(2:N). clear psi.x./(x+1)). %e = zeros(N1.N2. vs = (v^2*t)/4. D2 = D2/dist^2.1. fgp2(count) = abs(u(15)).2:N).5*i*v*x)i*vs). rbrack = sqrt(brack).
d = (ones(N1. for z = 1:N1 g(z) = (i+e(z))*nu/2. u = b\d.x. pe(zi2) = pesn(u. psi(z) = one*(i*dt/2)*(phi(z) +o(z)).*(D2*u). psi(z) = one*(i*dt/2)*(phi(z) +o(z)).x. b = diag(maindaig) + diag(g)*D2.*u . psi1(z) = one*(i*dt/2)*(phi1(z)+o(z)). psi1(z) = one*(i*dt/2)*(phi1(z)+o(z)).D.1)+psi1’.1)+psi1’.1)psi’). pu = D2\rho. unew = u. fgp2(count) = abs(u(30)).u./x(1:end). end maindaig = ones(N1. %save fp.’. phi1 = pu.end+1) = u. u = b\d. % initial guess ! Res = 9E9.^2.phi.*u . fgp(count) = abs(u(1))./x.N1). end maindaig = ones(N1. norm(zi2) = inprod(u. %fwe(:. phi = phi1.*(D2*u). b = diag(maindaig) + diag(g)*D2.g.mat Eu %save nn. Res = max(abs(uunew)).’. while abs(Res) > 3E8 u = uold.mat fgp %save Ep. uold = unew.N). %then recalution of the result ! to check % Now to check the L infinty norm of the residual u = uold. for z = 1:N1 g(z) = (i+e(z))*nu/2.phi1 = phi.g. d = (ones(N1. % this was the solving of the P.mat norm end 125 .% end Eu(count) = abs(u(5)).E ! % work out potential for u^n+1 %% rho = abs(u(1:end)). t = t + dt. %Res = max(abs(phi1phi)) % end of checking the L inifinity norm .1)psi’).
N2.mat fgp fgp2 %Eu save nn.*nb1(:. %plot(nb1(:.4 B. Fn = [conj(Fn(N+1)) Fn(N+2:end) Fn(1:N+1)].phi1. y2 = dis2*(y+1).’r’) %plot(x.lres.l) % % warning off [D.’b’). D2 = D^2.count] = cp2fsn(dis.1).N.fwe(:.aphi. % scale results by fft length idx = N:N.y] = cheb(N2). %plot3(tone. %drawnow %hold on %hold on %plot(x. % rearrange frequency points Fn = Fn/n.aphi.angle(u).1).evalue] = cp2fsn(dis.log(abs(Fn(N+1:2*N+1)))) xlabel(’log(frequency)’).3.N2. dis2 = pi/2.abs(nb1(:. n = 2*N.x] = cheb(N).5. freq = pi/(N*dt). E = E/dis2. E2 = E^2.x. save fp. 126 . % x is the radius points. end B.1 Chapter 7 Programs Programs for solving the axisymmetric stationary state problem function [f. x2= dis*(x+1).’g’). Fn(N+1) = 0. [E. x2= x2(2:N). dt = 0.n.end+1) = u.N.abs(u). plot(log(idxs(N+1:2*N+1)).mat fwe norm %tone = t*ones(N1.4.evalue. idxs = idx*freq.’k’).l) % function [f. B. ylabel(’log(fourier transform)’).1).2 Fourier transformation program N = max(size(f))/2.2)). Fn = fft(f). D = D/dis.n.
/x2. norms ft = ft/nom.D = D(2:N. [ft.N.l). end end res2 = abs(max(newphinwphi)). 127 . for i = 1:N2+1 for j = 1:N1 nwphi((N1)*(i1)+j) = pu((N1)*(i1)+j)/x2(j). % initial guess for potential for i = 1:N2+1 for j = 1:N1 phi((N1)*(i1)+j) = 1. mang = 0.5*(nwphi((N1)*(i1)+j)+nwphi((N1)*(N2+1i)+j)). clear g.2:N).evalue. end end aphi = phi. fac2 = diag((cot(y2eps)+cot(y2+eps))/2). D2 = D2(2:N.N2. % start of a iteration while(res > 1E4) count = count +1. %Laplacian kinda of ! % actually Le Laplacian for axially symetric case ! newphi = phi. mener = 0. oldphi = newphi. res = 1.^2).newphi. count = 0.mang] = cp2esnea(dis.fac1).D2) + kron(E2.mang. L = kron(I. end end g = g.n. pu = L\g. end end for i = 1:N2+1 for j = 1:N1 newphi((N1)*(i1)+j) = 0.2:N).’. I = eye(N2+1). % get nth evector correponding to potential % next aim to work out the potential.mener. corresponding to evector ! data = ft.mener.fac1)+ kron(fac2*E. fac1 = diag(1. for i = 1:N2+1 for j = 1:N1 g((N1)*(i1)+j) = abs(ft((N1)*(i1)+j))^2/x2(j)./(1000*x2(j)+1).
res = 0. 128 . b = b+1.l2] = fnl(data. if (close < vclose) vclose = close.04) data = data/nom.ang] = cp2esnea(dis. if count > 50 lres = res. norms2 if (nom > 0. W = 1. [e. found = 0.N. close = (penerener)^2 + 1E4*(pangang)^2+ 0.phi. En = diag(eigs).N2. for as = 1:limit data = V(:.ener.N.ii(b)). limit = floor(max(ii)/4). b = 0. % % [V.n. [n2.ii] = sort(real(En)). [n2.l2] = fnl(data.N2. function [f.l).N.04) data = data/nom. norms2 if (nom > 0.eigs] = cp2sn2(dis. end end end while(W == 0). lres = res. close = (penerener)^2 + 1E4*(pangang)^2+ 0.1*((n2n)^2+ 10*(l2l)^2). enr.N.pang.pener.phi).N2).res = abs(max(newphioldphi))/abs(max(newphi)) oldphi = newphi.evalue. data = newphi. enr. aphi = newphi. end end f = ft.ii(as)).N2). W = 0. vclose = 10E8.1*((n2n)^2+ 10*(l2l)^2). data = V(:. %axpl %drawnow.
j = 0. %end clear no2 noo2. W = 1.N2). end if (j > 0) noo(j) = no(i). no(i) = cz2sn(u(i. %end end evalue = En(ii(b)). if (close < vclose*1. % where the u is r\phi % for i = 1:N2+1 for j = 1:N1 u(j. end end %if (W == 1) %W = input(’correct state’). % find n and l number of the given grid .N. i = 0. for j = 1:N2+1 i = i +1. no2(j) = cz2sn(u(1:N1. 129 . function [n.. if (no(i) == 1) j = j 1.j)).ii(b)).l] = fnl(data. end end l = mean(no).001) %axpl2 %drawnow. f = V(:.end if (W == 1) W = 0. if (no2(j) == 1) i = i 1. end end clear no noo.i) = data((N1)*(i1)+j).1:N2+1)). for i = 1:N1 j = j+1. %if (j > 0) % l = mean(noo).
%end 130 . dis2 = pi/2.end if (i > 0) noo2(i) = no2(j). D = D(2:N. D = D/dis. E2 = E^2. end end n = mean(no2). x2= dis*(x+1). E = E/dis2.D2)+kron(E2. L = +kron(I. %n = round(n). %n = ceil(n).2:N).I2). fac2 = diag((cot(y2+eps)+cot(y2eps))/2). %data2 = data. [D.fac1)./x2.x] = cheb(N). %Laplacian kinda of ! J = kron(E2.I2) + kron(fac2*E. I = eye(N2+1).2:N). %for i = 1:N2+1 %for j = 1:N1 %data2((N1)*(i1)+j) = data((N1)*(i1)+j)/x2(j).^2). %end %l = ceil(l).fac1)+kron(fac2*E. ang = 0. [E. y2 = pi/2*(y+1). %l = round(l). x2= x2(2:N). I2 = eye(N1). D2 = D2(2:N. %if (i > 0) %n = mean(noo2). % to calculate the norm of a % axially sysmetric data set! % % Jab = r^2sin \alpha % clear ener clear ang ener = 0. D2 = D^2.y] = cheb(N2). fac1 = diag(1.
jab*abs(data((N1)*(i1)+j))^2*darea*phi((N1)*(i1)+j). if (W(1) > eps2) posne = 1. end end norms2 % to calculate the norm of a % axially sysmetric data set! % % Jab = r^2sin \alpha % clear nom nom2 % Setup the grid nom = 0.. function number = cz2sn(W). for i = 3:N2+1 for j = 2:N1 darea = (x2(j)x2(j1))*(y2(i)y2(i1)). for i = 2:N2+1 for j = 2:N1 darea = (x2(j)x2(j1))*(y2(i)y2(i1)). jab = sin(y2(i)). x2= x2(2:N). 131 .. nom2 = 0. jab2 = sin(y2(i)). ang = ang .jab*conj(data((N1)*(i1)+j))*nab2((N1)*(i1)+j)*darea. aab2 = J*data.y] = cheb(N2). % cz2sn. nom = sqrt(nom). [E. % a = max(size(W)). % x is the radius points. y2 = pi/2*(y+1). nom = nom + jab*abs(data((N1)*(i1)+j))^2*darea. end end nom = nom/2. eps2 = 1E4*eps.x] = cheb(N). ener = ener . [D. ener = ener . posne = 2. jab = sin(y2(i)). count = 0. x2= dis*(x+1).jab*conj(data((N1)*(i1)+j))*aab2((N1)*(i1)+j)*darea.%end nab2 = L*data.m % function to work out number of crossing given matrix array.
end for i = 2:a if (posne == 1) if W(i) < eps2 posne = 0. count = count+1. x2= x2(2:N). x2= dis*(x+1).y] = cheb(N2).N. E = E/dis2.eigs] = cp2sn2(dis. % [V.phi). % % program to solve the stationary schrodinger in axially symmetric case. end if W(i) < eps2 posne = 0. count = count+1. D = D/dis.end if (W(1) < eps2) posne = 0.N2. end end if (posne == 0) if W(i) > eps2 posne = 1.phi). end end if (posne == 2) if W(i) > eps2 posne = 1.x] = cheb(N). end end end if posne == 2 count = 1. dis2 = pi/2. function [V. end number = count. y2 = pi/2*(y+1).N.\alpha are the variable in which we solve for at the % present both has N points ! % boundary condition are different on the \alpha ! % i. [D.e we expect phi to be N1*N1 vector ! % [E. 132 . % r.N2. % x is the radius points.eigs] = cp2sn2(dis.
ADI(LOD) wave equation % that the wave equation done in chebyshev differention % matrix in 2d clear xx2 yy2 load dpole [DD.y] = cheb(N2). N2 = 20.D2)+kron(E2.cc) = x3(cc)*sin(y3(cc2)).4. %Laplacian kinda of ! L = L + diag(phi).eigs] = eig(L). D2 = D2(2:N. 133 . E2 = E^2.D2 = D^2.cc) = x3(cc)*cos(y3(cc2)). dis2 = pi/2. yy2(cc2. [D.2:N). [EE. y3 = pi/2*(y3+1). % [V. I = eye(N2+1). clear DD EE for cc = 1:N1 for cc2 = 1:N2+1 f2(cc2. x3 = x3(2:N). fac1 = diag(1. L = +kron(I.cc) = f((cc21)*(N1)+cc). clear xx yy vv % grid setup % N = 25. y2 = pi/2*(y+1). D = D(2:N. x2 = dis*(x+1). fac2 = diag((cot(y2eps)+cot(y2+eps))/2). fred(cc2. grav = 1.fac1) + kron(fac2*E.y3] = cheb(N2).2 Evolution programs for the axisymmetric case % nsax2. end end dis = 50.cc) = aphi((cc21)*(N1)+cc). x3 = dis*(x3+1).x] = cheb(N).^2)./x2. [E.x3] =cheb(N).2:N). B.m SchrodingerNewton equation % on an axially symmetric grid % full nonlinear method using chebyshev.fac1). xx2(cc2.
N2). phi = 0*vv. E = E/dis2.v. D = D/dis. end end % initial data % %vv = interp2(xx2.j) = x2(j)*cos(y2(i)).*exp(0.1) = 1E10. pic = vv.*exp(0.3*(sqrt(xx. time =0. DD = D(2:N.yy for i = 1:N2+1 for j = 1:N1 xx(i.yy2. L1 = E2 + fac2*E./x2).^2).^2+(yy). dt = (2000/10)/plotgap.x2 = x2(2:N). % time step ! dt = 0.2:N).u. vvnew = 0*vv. vv = f2. yy(i.2:N).yy). 134 .^2)).e no sponge factors %% calculate of differentiation matrices D2 = D^2.2*dis)). pot = phi.1*sqrt(xx. %vv = 0. L2 = D2. % calculation of the potential %phi = 0*pot.^2+yy.f2.^2+(yy). fac2(1. e = 1*exp(0. fac3 = diag(1. fac1 = diag(x2). I2 = eye(N1).5.9*sqrt(xx.xx. I = eye(N2+1).^2). % create xx. vvold = vv.^2) .end) = 1E10. D2 = D2(2:N. % sponge factors %e = 0*vv.^2+yy. %vv = vv + vva. E2 = E^2. % wavefunction v = 0*pot.01*((xx). fac2 = diag((cot(y2+eps)+cot(y2eps))/2).^2+yy. % initial guess of new pot for potadi phi = padi(dis. %i.01*((xx+30).j) = x2(j)*sin(y2(i)).N.^2)). % calculate the starting potential % u = grav*vv. % plotting the function on the screen at intervals ! plotgap = round((2000/10)/dt). %vva = 0. fac2(end.
1:N1) = (L2*vv(a.1:N1).yy.02:dis. drawnow % calculate of the norm. pot = phi. pic(:. for a = 1:N1 uaa(1:N2+1.yy.yyy.02:dis).end+1) =vv. d = (sv2 + 0. % (1+dr)Un2 = (1da)Un1 for a = 1:N2+1 sv2 = vv(a.phi) time(end+1) =t.yyy] = meshgrid(dis:dis*.abs(vv)..’cubic’). % for a = 1:N2+1 % urr(a.^2+yy.^2)).a). 135 .2. %view(that)..’.5*dt*ds).xxx.imag(vv)) %hold on %surf(xx.ADI(LOD) .*uaa(a.a) = (1/x2(a)^2)*L1*vv(1:N2+1.^2+yy.abs(vvv)). % ..vv.1:N1).2 0.abs(vv)) %. npot = phi.Plot results at multiples of plotgap %subplot(2. %surf(xx. while (abs(Res) > 1E4) % set up for the iteration . % colormap([0 0 0]) %mesh(xx. % calculate of the partials urr = zeros(N2+1.1:N1).’..dis:dis*. ds = sqrt(1)*uaa(a. %[xxx. %surf(xx. %vvv = interp2(xx.abs(uaa)) %axis([2*dis 2*dis 0 2*dis 0.yy. %mesh(xxx. if rem(n.yy./sqrt(xx. end % .^2)).:.yyy..N1). %cn2d %cnorm end Res = 1.plotgap) == 0.2]) %title([’t = ’ num2str(t)]).% evolution loop begin here .1:N1).yy.% vv2 = 0*vv.yy.1:N1) + e(a./sqrt(xx. for n = 0:30*plotgap t = n*dt. %shading interp %axis([1 1 1 1 1 1]).n/plotgap+1).N1). % end uaa = zeros(N2+1. %hold off %mesh(xx.’).
’.’.N. for a = 1:N1 d = vv3(1:N2+1.5*dt*bs2.0.a) = b\d.a) = b\d.5*dt*bs2.5*dt*bpm2. %pot = npot. ds = sqrt(1)*urr(1:N2+1.’). % end for a = 1:N2+1 urr(a. end % end part1 % % caculate urr. % initial guess of new pot for potadi n2pot = padi(dis.a))*L1.*urr(1:N2+1. end % end part3 % % now the calculate of the potential of the new potential % due to the new wavefuntion u = grav*vvnew. bpm2 = sqrt(1)*diag(npot(1:N2+1.a) = (1/x2(a)^2)*L1*vv2(1:N2+1. vvnew(1:N2+1.0.0. npot = n2pot. b = eye(N2+1) .uaa % for a = 1:N1 % uaa(1:N2+1.dis.N2). b = eye(N2+1) .N2.a). for a = 1:N1 sv = vv2(1:N2+1.v.1:N1))*L2. vv3(1:N2+1.N.5*sqrt(1)*dt*vv3(1:N2+1.a). b = eye(N1) . Res = norm(n2potnpot).a).a)).5*dt*ds. end phi = n2pot.a) + e(1:N2+1.*pot(1:N2+1.5*pot. % wavefunction v = 0*pot.1:N1) = (L2*vv2(a.a). bs2 = (1/x2(a)^2)*sqrt(1)*L1 + (1/x2(a)^2)*diag(e(1:N2+1.abs(vv).1:N1). vv2(a. end % (1+da)Un3 = (1dr)Un2 vv3 = 0*vv2.bs2 = sqrt(1)*L2 + diag(e(a. % calculation of the potential %n2pot = 0*pot.a).1) save axtest pic time N N2 xx yy dis end 136 .1:N1) = (b\d). %inaxprod(abs(vv).a).a) + 0. d = sv + 0. vv = vvnew. % +0.u. end % end part2 % % (1V)Un4 = (1+V)Un3 vvnew = 0*vv3.
rho = 0.2:N)./x2). D = D/dis.1) = 1E5.017. dis2 = pi/2. % % u is the wavefunction ! % v is the potential ! % grid setup ! [E. rho2 = 0.1:N1).u.x] = cheb(N). I2 = eye(N1). E2 = E^2.^2*fac1.v. while (abs(Res) > 1E3) % calculate wrr. % x is the radius points. %for a = 1:N2+1 % wrr(a. I = eye(N2+1).’).N1). fac2(end. y2 = pi/2*(y+1). E = E/dis2. 137 . L1 = E2 + fac2*E.’.017. L2 = D2. %end waa = zeros(N2+1.2:N).end) = 1E5.function phi = padi(dis. umod = uu.with axially % sysmetric case. % readjust density function ! uu = abs(u). fac2= diag((cot(y2+eps)+cot(y2eps))/2). D = D(2:N. % calculation of the differnetiation matrics D2 = D^2. % start of the iteration Res = 1E12. x2 = dis*(x+1). [D.m % Aim: To use PeacemanRachford ADI iteration % to find the solution to the poisson equation in 2D.waa wrr = zeros(N2+1. % uu =u. fac1= diag(1. D2 = D2(2:N.N2) % padi.1:N1) = (L2*v(a.N1).N. fac2(1. x2 = x2(2:N).y] = cheb(N2).
a)).1:N1) = (L2*vv(a. v = newv. vv = 0*v.1 Chapter 8 Programs Evolution programs for the two dimensional SN equations % ns2d.a). vv(a. end % . B. ADI(LOD) wave equation % that the wave equation done in chebyshev differention % matrix in 2d dis = 40.’). %end for a = 1:N2+1 wrr(a.1:N1).’.umod(1:N2+1. padires = Res.ADI .^2.a) + wrr(1:N2+1.’. newv(1:N2+1. 138 . end % end of ADI test = abs(newvv). end fac = diag(1.5./x2).a) .1:N1)).for a= 1:N1 waa(1:N2+1.5 B. end newv = 0*v.1:N1) .umod(a. phi = v*fac.wrr %for a = 1:N1 %waa(1:N2+1. % part2 ADI %A = L1+rho2*I.’)). % (L1+rho2)Wn+1 = (L2+rho2umod)S for a = 1:N1 sv = rho2*vv(1:N2+1. for a = 1:N2+1 sv2 = rho*v(a. %abs(uu(a.a).a) = L1*vv(1:N2+1.% % (L2+rho)S = (L1+rhoumod)Wn A = L2+rho*I2.1:N1) + waa(a. A = L1*(1/x2(a)^2) + rho2*I.1:N1). %abs(uu(1:N2+1.a) = A\sv.1:N1)).a).m SchrodingerNewton equation % on a 2d grid % full nonlinear method using chebyshev. grav = 1.1:N1) = (A\(sv2.a) = (1/x2(a)^2)*L1*v(1:N2+1. Res = norm(test(1:N2+1.^2. end % end part1 of ADI % calculate waa.
07*exp(0. for cc = 1:N+1 for cc2 = 1:N+1 if isnan(vv(cc.yy2] = meshgrid(x2.^2+(yy35). end end %f3 = 0.^2+yy.speed =0. % initial data % %vv = 0.y2). %initial velocity perhaps %. for cc = 1:N21 for cc2 = 1:N21 f2(cc.07*exp(0.^2)(dis10)))).x] = cheb(N). %vv = vv + vva.^2+yy. %vva = 0.x2] = cheb(N2).1*xx).y). %vv = vv+vva.02*((xx). end end end dt = 1. y = x’. [D.01*((xx+35).^2+(yy+15). x2 = dis2*x2(2:N2).cc2) = f((cc1)*(N21)+cc2). vvold = vv. %% calculate of differentiation matrices D2 = D^2. vv = interp2(xx2. 139 .1*exp(0.5*(f2+f2(N21:1:1.*exp(sqrt(1)*0. D = D*1/dis.5*(sqrt(xx. % sponge factors e = 0*vv. y2 = x2’.cc2)) == 1 vv(cc.^2)). [xx. vvnew = 0*vv.^2)).cc2) = 0. % to add initial velocity %vv = 0. e = min(1. [xx2. phi = 0*vv.*exp(sqrt(1)*speed*yy). pot = phi.yy2.05*exp(0.^2)).01*((xx15).:)).xx. dt = 400/(2*plotgap). x = dis*x.^2))*exp(sqrt(1)*0.^2+yy.yy).01*((xx+10).*exp(sqrt(1)*speed*(yy+xx)/sqrt(2)).07*exp(0.f2. % plotting the function on the screen at intervals ! plotgap = round(400/(2*dt)). % grid setup % N = 36. %vva = 0.yy] = meshgrid(x.5). load twin [DD.
phi) %view(that).yy. % calculate uxx uyy = zeros(N+1.D2 = D2(2:N.5*dt*ds). end uxx = zeros(N+1.2:N) = (D2*vv(a.v. % wavefunction v = 0*pot.imag(vv)). time(end+1) = t.yy.2:N). 140 . % evolution loop begins here for n = 0:50*plotgap t = n*dt.N+1). pic = vv.’cubic’). if rem(n. % . % calculation of the potential time = 0. % calculate the starting potential % u = grav*vv.yyy.2:N).vv. % initial guess of new pot for potadi phi = potadi(dis.N).2:N) + e(a. %axis([dis dis dis dis 0.03]) %title([’t = ’ num2str(t)]). %axis([1 1 1 1 1 1]). for a = 2:N uyy(a.% vv2 = 0*vv.2. %vvv = interp2(xx.2:N).Plot results at multiples of plotgap %subplot(2.’.yyy] = meshgrid(dis:dis*. while (abs(Res) > 1E3) % set up for the iteration . % (1+dy)Un2 = (1dx)Un1 for a = 2:N sv2 = vv(a. npot = phi.2:N).02:dis.*uxx(a. d = (sv2 + 0.plotgap) == 0.n/plotgap+1).. for a = 2:N uxx(2:N.yy. drawnow % calculate of the norm.u.2:N). % colormap([0 0 0]) %mesh(xx. pot = phi.N+1).’..a). %cn2d %cnorm end Res = 1. end % .02:dis).yyy. ds = sqrt(1)*uxx(a. pic(:.xxx.end+1) = vv.’). %mesh(xxx.03 0.dis:dis*.ADI(LOD) .a) = D2*vv(2:N. %[xxx.abs(vvv)). %mesh(xx.:.
d = sv + 0.2:N) = (D2*vv2(a. %pot = npot. % calculation of the potential Res = norm(n2potnpot).uyy for a = 2:N uxx(2:N. % wavefunction v = 0*pot.*pot(2:N. end % end part1 % % caculate uxx.2:N) = (b\d).a).N) 141 .a) = b\d.a).5*dt*bs2.a) = b\d. end % end part3 % % now the calculate of the potential of the new potential % due to the new wavefuntion u = grav*vvnew. end for a = 2:N uyy(a.5*dt*ds. end % (1+dx)Un3 = (1dy)Un2 vv3 = 0*vv2. end % end part2 % % (1V)Un4 = (1+V)Un3 vvnew = 0*vv3.5*pot.N). vvnew(2:N. bpm2 = sqrt(1)*diag(npot(2:N.’.N+1:1:1)). vv3(2:N.2:N))*D2.vvnew(:.a) + e(2:N.0.’).a).5*dt*bpm2.0. npot = n2pot.a). end phi = n2pot.*uyy(2:N. %vv = 0.a))*D2. b = eye(N1) + 0.bs2 = sqrt(1)*D2 + diag(e(a.a).’. ds = sqrt(1)*uyy(2:N. % +0.a) . end save test3 pic time N xx yy dis function phi = potadi(dis.v.2:N).a)). bs2 = sqrt(1)*D2 + diag(e(2:N. for a = 2:N d = vv3(2:N. b = eye(N1) .a) = D2*vv2(2:N. for a = 2:N sv = vv2(2:N.a).5*dt*bs2.5*(vvnew .u.v. vv2(a. % initial guess of new pot for potadi n2pot = potadi(dis.5*sqrt(1)*dt*vv3(2:N. b = eye(N1) 0. vv = vvnew.u.
for a = 2:N uyy(a. end for a = 2:N uyy(a. x = dis*x. end % .abs(u(a.uyy uyy = zeros(N+1. vv = 0*v. end uxx = zeros(N+1.2:N).’). 142 .2:N).ADI .2:N) + uxx(a.2:N) = (A\(sv2.2:N) .a).01.a) = D2*vv(2:N.2:N) = (D2*v(a. I = eye(N1). D = D*1/dis. % start of the iteration Res = 1.’).a).’)).’.N+1).yy] = meshgrid(x.2:N) = (D2*vv(a. end newv = 0*v.2:N)).uyy for a = 2:N uxx(2:N. while (abs(Res) > 1E4) % calculate uxx. D2 = D2(2:N. end % end part1 of ADI % calculate uxx. [xx.^2. % calculation of the differnetiation matrics D2 = D^2.a) = D2*v(2:N. rho = +0.N+1).x] = cheb(N).’.2:N).m Aim: To use PeacemanRachford ADI iteration to find the solution to the poisson equation in 2D u is the wavefunction ! v is the potential ! % grid setup ! [D.% A = D2+rho*I.y).% % % % % % potadi.’. y = x’. for a = 2:N sv2 = rho*v(a. for a= 2:N uxx(2:N. vv(a.
a)).% part2 ADI for a = 2:N sv = rho*vv(2:N.2:N)).^2.abs(u(2:N.a) = A\sv. 143 . newv(2:N. Res = norm(test(2:N. end phi = v. v = newv. end % end of ADI test = newvv.a) +uyy(2:N.a) .
This action might not be possible to undo. Are you sure you want to continue?
Use one of your book credits to continue reading from where you left off, or restart the preview.