A thesis presented by Adam Lupu-Sax to the Department of Physics in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Physics. Harvard University, Cambridge, Massachusetts. September 1998.
© 1998 Adam Lupu-Sax. All rights reserved.
Abstract
Scattering theory provides a convenient framework for the solution of a variety of problems. In this thesis we focus on the combination of boundary conditions and scattering potentials, and the combination of non-overlapping scattering potentials, within the context of scattering theory. Using a scattering t-matrix approach, we derive a useful relationship between the t-matrix of the scattering potential, the Green function of the boundary, and the t-matrix of the combined system, effectively renormalizing the scattering t-matrix to account for the boundaries. In the case of the combination of scattering potentials, the combination of t-matrix operators is achieved via multiple scattering theory. We also derive methods, primarily for numerical use, for finding the Green function of arbitrarily shaped boundaries of various sorts. These methods can be applied to both open and closed systems. In this thesis, we consider single and multiple scatterers in two dimensional strips (regions which are infinite in one direction and bounded in the other) as well as two dimensional rectangles. In 2D strips, both the renormalization of the single scatterer strength and the conductance of disordered many-scatterer systems are studied. For the case of the single scatterer we see nontrivial renormalization effects in the narrow wire limit. In the many scatterer case, we numerically observe suppression of the conductance beyond that which is explained by weak localization. In closed systems, we focus primarily on the eigenstates of disordered many-scatterer systems. There has been substantial investigation and calculation of properties of the eigenstate intensities of these systems. We have, for the first time, been able to investigate these questions numerically. Since there is little experimental work in this regime, these numerics provide the first test of various theoretical models.

Our observations indicate that the probability of large fluctuations of the intensity of the wavefunction is explained qualitatively by various field-theoretic models. However, quantitatively, no existing theory accurately predicts the probability of these fluctuations.
Acknowledgments
Doing the work which appears in this thesis has been a largely delightful way to spend the last five years. The financial support for my graduate studies was provided by a National Science Foundation Fellowship, Harvard University and the Harvard/Smithsonian Institute for Theoretical Atomic and Molecular Physics (ITAMP). Together, all of these sources provided me with the wonderful opportunity to study without being concerned about my finances. My advisor, Rick Heller, is a wonderful source of ideas and insights. I began working with Rick four years ago and I have learned an immense amount from him in that time. From the very first time we spoke I have felt not only challenged but respected. One particularly nice aspect of having Rick as an advisor is his ready availability. More than one tricky part of this thesis has been sorted out in a marathon conversation in Rick's office. I cannot thank him enough for all of his time and energy. In the last five years I have had the great pleasure of working not only with Rick himself but also with his postdocs and other students. Maurizio Carioli was a postdoc when I began working with Rick. There is much I cannot imagine having learned so quickly or so well without him, particularly about numerical methods. Lev Kaplan, a student and then postdoc in the group, is an invaluable source of clear thinking and uncanny insight. He has also demonstrated a nearly infinite patience in discussing our work. My classmate Neepa Maitra and I began working with Rick at nearly the same time and have been partners in this journey. Neepa's emotional support and perceptive comments and questions about my work have made my last five years substantially easier. Alex Barnett, Bill Bies, Greg Fiete, Jesse Hersch, Bill Hosten and Areez Mody, all graduate students in Rick's group, have given me wonderful feedback on this and other work.

The substantial postdoc contingent in the group, Michael Haggerty, Martin Naraschewski and Doron Cohen, have been equally helpful and provided very useful guidance along the way. At the time I began graduate school I was pleasantly surprised by the cooperative spirit among my classmates. Many of us spent countless hours discussing physics and sorting out problem sets. Among this crowd I must particularly thank Martin Bazant, Brian Busch, Sheila Kannappan, Carla Levy, Carol Livermore, Neepa Maitra, Ron Rubin and Glenn Wong for making many late nights bearable and, oftentimes, fun. I must particularly thank Martin, Carla and Neepa for remaining great friends and colleagues in the years that followed. I have had the great fortune to make good friends at various stages in my life and I am honored to count these three among them.

It is hard to imagine how I would have done all of this without my fiancée, Kiersten Conner. Our upcoming marriage has been a singular source of joy during the process of writing this thesis. Her unflagging support and boundless sense of humor have kept me centered throughout graduate school. My parents, Chip Lupu and Jana Sax, have both been a great source of support and encouragement throughout my life and the last five years have been no exception. The rest of my family has also been very supportive, particularly my grandmothers, Sara Lupu and Pauline Sax, and my stepmother Nancy Altman. It saddens me that neither of my grandfathers, Dave Lupu or N. Irving Sax, is alive to see this moment in my life, but I thank them both for teaching me things that have helped bring me this far.
Citations to Previously Published Work
Portions of chapter 4 and Appendix B have appeared in "Quantum scattering from arbitrary boundaries," M.G.E. da Luz, A.S. Lupu-Sax and E.J. Heller, Physical Review B 56, no. 3, pp. 2496–2507 (1997).
Contents
Title Page
Abstract
Acknowledgments
Citations to Previously Published Work
Table of Contents
List of Figures
List of Tables

1 Introduction and Outline of the Thesis
  1.1 Introduction
  1.2 Outline of the Thesis

2 Quantum Scattering Theory in d-Dimensions
  2.1 Cross-Sections
  2.2 Unitarity and the Optical Theorem
  2.3 Green Functions
  2.4 Zero Range Interactions
  2.5 Scattering in two dimensions

3 Scattering in the Presence of Other Potentials
  3.1 Multiple Scattering
  3.2 Renormalized t-matrices

4 Scattering From Arbitrarily Shaped Boundaries
  4.1 Introduction
  4.2 Boundary Wall Method I
  4.3 Boundary Wall Method II
  4.4 Periodic Boundary Conditions
  4.5 Green Function Interfaces
  4.6 Numerical Considerations and Analysis
  4.7 From Wavefunctions to Green Functions
  4.8 Eigenstates

5 Scattering in Wires I: One Scatterer
  5.1 One Scatterer in a Wide Wire
  5.2 The Green function of an empty periodic wire
  5.3 Renormalization of the ZRI Scattering Strength
  5.4 From the Green function to Conductance
  5.5 Computing the channel-to-channel Green function
  5.6 One Scatterer in a Narrow Wire

6 Scattering in Rectangles I: One Scatterer
  6.1 Dirichlet boundaries
  6.2 Periodic boundaries

7 Disordered Systems
  7.1 Disorder Averages
  7.2 Mean Free Path
  7.3 Properties of Randomly Placed ZRI's as a Disordered Potential
  7.4 Eigenstate Intensities and the Porter-Thomas Distribution
  7.5 Weak Localization
  7.6 Strong Localization
  7.7 Anomalous Wavefunctions in Two Dimensions
  7.8 Conclusions

8 Quenched Disorder in 2D Wires
  8.1 Transport in Disordered Systems

9 Quenched Disorder in 2D Rectangles
  9.1 Extracting eigenstates from t-matrices
  9.2 Intensity Statistics in Small Disordered Dirichlet Bounded Rectangles
  9.3 Intensity Statistics in Disordered Periodic Rectangles
  9.4 Algorithms

10 Conclusions

Bibliography

A Green Functions
  A.1 Definitions
  A.2 Scaling L
  A.3 Integration of Energy Green functions
  A.4 Green functions of separable systems
  A.5 Examples
  A.6 The Gor'kov (bulk superconductor) Green Function

B Generalization of the Boundary Wall Method

C Linear Algebra and Null-Space Hunting
  C.1 Standard Linear Solvers
  C.2 Orthogonalization and the QR Decomposition
  C.3 The Singular Value Decomposition

D Some important infinite sums
  D.1 Identities from Σ_n x^n
  D.2 Convergence of Green Function Sums

E Mathematical Miscellany for Two Dimensions
  E.1 Polar Coordinates
  E.2 Bessel Expansions
  E.3 Asymptotics as kr → ∞
  E.4 Limiting Form for Small Arguments (kr → 0)
List of Figures
4.1 Transmission (at normal incidence) through a flat wall via the Boundary Wall method.
5.1 A periodic wire with one scatterer and an incident particle.
5.2 "Experimental" setup for a conductance measurement. The wire is connected to ideal contacts and the voltage drop at fixed current is measured.
5.3 Reflection coefficient of a single scatterer in a wide periodic wire.
5.4 Number of scattering channels blocked by one scatterer in a periodic wire of varying width.
5.5 Transmission coefficient of a single scatterer in a narrow periodic wire.
5.6 Cross-section of a single scatterer in a narrow periodic wire.
6.1 Comparison of Dressed-t Theory with Numerical Simulation.
7.1 Porter-Thomas and exponential localization distributions compared.
7.2 Porter-Thomas, exponential localization and log-normal distributions compared.
7.3 Comparison of log-normal coefficients for the DOFM and SSSM.
8.1 The wire used in open system scattering calculations.
8.2 Numerically observed mean free path and the classical expectation.
8.3 Numerically observed mean free path after first-order coherent backscattering correction and the classical expectation.
8.4 Transmission versus disordered region length for (a) diffusive and (b) localized wires.
9.1 Typical low energy wavefunctions (|ψ|² is plotted) for 72 scatterers in a 1×1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots. For the top left wavefunction ℓ = .12, λ = .57, whereas ℓ = .23, λ = .11 for the bottom wavefunction. ℓ increases from left to right and top to bottom whereas λ decreases in the same order.
9.2 Typical medium energy wavefunctions (|ψ|² is plotted) for 72 scatterers in a 1×1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots. For the top left wavefunction ℓ = .25, λ = .09, whereas ℓ = .48, λ = .051 for the bottom wavefunction. ℓ increases from left to right and top to bottom whereas λ decreases in the same order.
9.3 Intensity statistics gathered in various parts of a Dirichlet bounded square. Clearly, larger fluctuations are more likely at the sides and corners than in the center. The (statistical) error bars are different sizes because four times as much data was gathered in the sides and corners than in the center.
9.4 Intensity statistics gathered in various parts of a periodic square (torus). Larger fluctuations are more likely for larger λ/ℓ. The erratic nature of the smallest wavelength data is due to poor statistics.
9.5 Illustrations of the fitting procedure. We look at the reduced χ² as a function of the starting value of t in the fit (top; note the log scale on the y-axis), then choose the C₂ with the smallest confidence interval (bottom) and stable reduced χ². In this case we would choose the C₂ from the fit starting at t = 10.
9.6 Numerically observed log-normal coefficients (fitted from numerical data) and fitted theoretical expectations plotted (top) as a function of wavenumber k at fixed ℓ = .081 and (bottom) as a function of ℓ at fixed k = 200.
9.7 Typical wavefunctions (|ψ|² is plotted) for 500 scatterers in a 1×1 periodic square (torus) with ℓ = .081, λ = .061. The density of |ψ|² is shown.
9.8 Anomalous wavefunctions (|ψ|² is plotted) for 500 scatterers in a 1×1 periodic square (torus) with ℓ = .081, λ = .061. The density of |ψ|² is shown. We note that the scale here is different from the typical states plotted previously.
9.9 The average radial intensity centered on two typical peaks (top) and two anomalous peaks (bottom).
9.10 Wavefunction deviation under small perturbation for 500 scatterers in a 1×1 periodic square (torus). ℓ = .081, λ = .061.
List of Tables
9.1 Comparison of log-normal tails of P(t) for different maximum allowed singular value.
9.2 Comparison of log-normal tails of P(t) for strong and weak scatterers at fixed λ and ℓ.
C.1 Matrix decompositions and computation time.
Chapter 1
Introduction and Outline of the Thesis
1.1 Introduction
"Scattering" evokes a simple image. We begin with separate objects which are far apart and moving towards each other. After some time they collide and then travel away from each other and, eventually, are far apart again. We don't necessarily care about the details of the collision except insofar as we can predict from it where and how the objects will end up. This picture of scattering is the first one we physicists learn and it is a beautiful example of the power of conservation laws [25]:

In many cases the laws of conservation of momentum and energy alone can be used to obtain important results concerning the properties of various mechanical processes. It should be noted that these properties are independent of the particular type of interaction between the particles involved.
L.D. Landau, Mechanics (1976)
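Landau's point can be made concrete with the textbook one-dimensional elastic collision, whose outcome is fixed entirely by conservation of momentum and kinetic energy. A minimal sketch (the function name and the numbers are illustrative, not drawn from the text):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a 1D elastic collision.

    The result follows from conservation of momentum and kinetic energy
    alone; no detail of the interaction between the bodies enters.
    """
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Equal masses simply exchange velocities, whatever the force law.
print(elastic_collision_1d(1.0, 1.0, 1.0, 0.0))  # prints (0.0, 1.0)
```

Whatever potential acts during the collision, the final state is the same; this is exactly the sense in which the details of the interaction drop out.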
Quantum scattering is a more subtle affair. Even elastic scattering, which does not change the internal state of the colliding particles, is more complicated than its classical counterpart [26]:

In classical mechanics, collisions of two particles are entirely determined by their velocities and impact parameter (the distance at which they would pass if they did not interact). In quantum mechanics, the very wording of the problem must be changed, since in motion with definite velocities the concept of the path is meaningless, and therefore so is the impact parameter. The purpose of the theory is here only to calculate the probability that, as a result of the collision, the particles will deviate (or, as we say, be scattered) through any given angle.
L.D. Landau, Quantum Mechanics (1977)
This so-called "differential cross-section," the probability that a particle is scattered through a given angle, is the very beginning of any treatment of scattering, whether classical or quantum mechanical. However, the cross-section is not the part of scattering theory upon which we intend to build. It is instead the separation between free propagation (motion without interaction) and collision. That this idea should lead to so much useful physics is at first surprising. However, the Schrödinger equation, like any other wave equation, does not make this split particularly obvious. It is indeed some work to recover the benefits of this division from the complications of wave mechanics. In fact, the idea of considering separately the free or unperturbed motion of particles and their interaction is usually considered in the context of perturbation theory. Unsurprisingly then, the very first quantum mechanical scattering theory was Born's perturbative treatment of scattering [6], which he developed not to solve scattering problems but to address the completeness of the new quantum theory:

Heisenberg's quantum mechanics has so far been applied exclusively to the calculation of stationary states and vibration amplitudes associated with transitions... Bohr has already directed attention to the fact that all difficulties of principle associated with the quantum approach...occur in the interactions of atoms at short distances and consequently in collision processes... I therefore attack the problem of investigating more closely the interaction of the free particle (α-ray or electron) and an arbitrary atom and of determining whether a description of a collision is not possible within the framework of existing theory.
M. Born, On The Quantum Mechanics of Collisions (1926)
Later in the same note, the connection to perturbation theory is made clear: "One can then show with the help of a simple perturbation calculation that there is a uniquely determined solution of the Schrödinger equation with a potential V..."

Scattering theory has developed substantially since Born's note appeared. Still, we will take great advantage of one common feature of perturbations and scattering. The division between perturbation and unperturbed motion is one of definition, not of physics. Much of the art in using perturbation theory comes from recognizing just what division of the problem will give a solvable unperturbed motion and a convergent perturbation series. In scattering, the division between free motion and collision seems much more natural and less flexible. However, many of the methods developed in this thesis take advantage of what little flexibility there is in order to solve some problems not traditionally in the purview of scattering theory, as well as attack some which are practically intractable by other means.
1.2 Outline of the Thesis
In chapter 2 we begin with a nearly traditional development of scattering theory. The development deviates from the traditional only in that it generalizes the usual definitions and calculations to arbitrary spatial dimension. This is done mostly because the applications in the thesis require two dimensional scattering theory, but most readers will be familiar with the three dimensional version. A generalized derivation allows the reader to assume d = 3 and check that the results are what they expect, and then use the d = 2 version when necessary. I have as much as possible followed standard textbook treatments of each piece of scattering theory. I am confident that the d-dimensional generalizations presented here exist elsewhere in the literature. For instance, work using so-called "hyperspherical coordinates" to solve few-body problems certainly contains much of the same information, though perhaps not in the same form. The final two sections of chapter 2 are a bit more specific. The first, section 2.4, deals with zero range interactions, a tool which will be used almost constantly throughout the remainder of the thesis. It is our hope that the treatment of the zero range interaction in this section is considerably simpler than the various formalisms which are typically used. After this section follows a short section explicitly covering some details of scattering in two dimensions.

After this introductory material, we move on to the central theoretical work in scattering theory. Chapter 3 covers two extensions of ordinary scattering theory. The first is multiple scattering theory. A system with two or more distinct scatterers can be handled by solving the problem one scatterer at a time and then combining those results. This is a nice example of the usefulness of the split between propagation and collision made above. Multiple scattering theory takes this split and some very clever bookkeeping and solves a very complex problem. Our treatment differs somewhat from Faddeev's in order to emphasize similarities with the techniques introduced in section 3.2.

A separation between free propagation and collision and its attendant bookkeeping have more applications than multiple scattering. In section 3.2 we develop the central new theoretical tool of this work, the renormalized t-matrix. In multiple scattering theory, we used the separation between propagation and collision to piece together the scattering from multiple targets, in essence complicating the collision phase. With appropriate renormalization, we can also change what we mean by propagation. We derive the relevant equations and spend some time exploring the consequences of the transformation of propagation. The sort of change we have in mind will become clearer as we discuss the applications.

Both of the methods explained in chapter 3 involve combining solved problems and thus solving a more complicated problem. The techniques discussed in chapter 4 are used to solve some problems from scratch. In their simplest form they have been applied to mesoscopic devices, and it is hoped that the more complex versions might be applied to look at dirty and clean superconductor-normal metal junctions.

We begin working on applications in chapter 5, where we explore our first nontrivial example of scatterer renormalization: the change in strength of a scatterer placed in a wire. We begin with a fixed two-dimensional zero range interaction of known scattering amplitude. We place this scatterer in an infinite straight wire (a channel of finite width). Both the scatterer in free space and the wire without a scatterer are solved problems. Their combination is more subtle and brings to bear the techniques developed in section 3.2.
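The "one scatterer at a time" strategy can be sketched numerically. In the toy model below, each point scatterer i responds with amplitude a_i to the incident wave plus the waves scattered by all the others, a_i = t_i [φ(r_i) + Σ_{j≠i} G₀(r_i, r_j) a_j], and the coupled equations are solved as one linear system. The propagator used here is a schematic outgoing-wave stand-in, not the exact two-dimensional Green function of the thesis; all strengths and positions are illustrative assumptions.

```python
import numpy as np

def solve_multiple_scattering(k, positions, strengths, incident):
    """Solve a_i = t_i * (incident(r_i) + sum_{j != i} G0(r_i, r_j) a_j).

    `positions` are scatterer locations, `strengths` the bare scattering
    strengths t_i, `incident` the incident wavefunction.  The propagator
    g0 below is a schematic stand-in for the free Green function.
    """
    def g0(ri, rj):
        d = np.linalg.norm(np.asarray(ri) - np.asarray(rj))
        return np.exp(1j * k * d) / np.sqrt(d)  # outgoing wave, 2D-like falloff

    n = len(positions)
    m = np.eye(n, dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                m[i, j] = -strengths[i] * g0(positions[i], positions[j])
    rhs = np.array([strengths[i] * incident(positions[i]) for i in range(n)],
                   dtype=complex)
    return np.linalg.solve(m, rhs)

# Two scatterers hit by a unit-amplitude plane wave traveling in +x.
k = 2.0
amps = solve_multiple_scattering(
    k, [(0.0, 0.0), (1.0, 0.0)], [0.5, 0.5],
    lambda r: np.exp(1j * k * r[0]))
```

With a single scatterer the system is trivial and the amplitude reduces to the bare response t φ(r); with several scatterers the matrix solve resums all multiple-scattering paths at once, which is exactly the bookkeeping the text describes.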
Much of the chapter is spent on necessary applied mathematics, but it concludes with the interesting case of a wire which is narrower than the cross-section of the scatterer (which has zero range and so can fit in any finite width wire). This calculation could be applied to a variety of systems, hydrogen confined on the surface of liquid helium for one.

Next, in chapter 6, we treat the case of the same scatterer placed in a completely closed box. While a wire is still open and thus a scattering problem, it is at first hard to imagine how a closed system could be. After all, the differential cross-section makes no sense in a closed system. Wonderfully, the equations developed for scattering in open systems are still valid in a closed one and give, in some cases, very useful methods for examining properties of the closed system. As with the previous chapter, much of the work in this chapter is preliminary but necessary applied mathematics. Here, we first confront the oddity of using the equations of scattering theory to find the energies of discrete stationary states. With only one scatterer and renormalization, this turns out to be mathematically straightforward. Still, this idea is important enough to the sequel that we do numerical computations on the case of ground state energies of a single scatterer in a rectangle with perfectly reflective walls. Using the methods presented here, this is simply a question of solving one nonlinear equation. We compare the energies so calculated to numerically calculated ground state energies of hard disks in rectangles computed with a standard numerical technique. This is intended both as confirmation that we can extract discrete energies from these methods and as an illustration of the similarity between isolated zero range interactions and hard disks.

Having spent a substantial amount of time on examples of renormalization, we return multiple scattering to the picture as well. We will consider in particular disordered sets of fixed scatterers, motivated, for example, by quenched impurities in a metal. Before we apply these techniques to disordered systems, we consider disordered systems themselves in chapter 7. Here we define and explain some important concepts which are relevant to disordered systems, as well as discuss some theoretical predictions about various properties of disordered systems.

We return to scattering in a wire in chapter 8. Instead of the single scatterer of chapter 5, we now place many scatterers in the same wire and consider the conductance of the disordered region of the wire. We use this to examine weak localization, a quantum effect present only in the presence of time-reversal symmetry. In the final chapter we use the calculations of this chapter as evidence that our disorder potential has the properties we would predict from a hard disk model, as we explored for the one scatterer case in chapters 5 and 6.

Our final application is presented in chapter 9.
Here we examine some very specific properties of disordered scatterers in a rectangle. These calculations were in some sense the original inspiration for this work and are its most unique achievement. Calculations are performed here which are, apparently, out of reach of other numerical methods. These calculations both confirm some theoretical expectations and confound others, leaving a rich set of new questions. At the same time, it is also the most specialized application we consider, and not one with the broad applicability of the previous applications.

In chapter 10 we present some conclusions and ideas for future extensions of the ideas in this work. This is followed (after the bibliography) by a variety of technical appendices which are referred to throughout the body of the thesis.
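The "one nonlinear equation" mentioned for chapter 6 is, numerically, just a root-finding problem: the discrete energies occur where a known spectral function of E crosses zero. The sketch below shows that generic step; the function F is a toy stand-in (the real condition involves the renormalized scattering strength and the rectangle's Green function), and both F and its bracket are illustrative assumptions.

```python
def bisect_root(f, lo, hi, tol=1e-12, max_iter=200):
    """Locate a root of f in [lo, hi] by bisection; requires f(lo)*f(hi) < 0."""
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) < tol:
            return mid
        if flo * fmid < 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Toy stand-in for the energy condition F(E) = 0, with one crossing in [2, 3].
F = lambda E: E ** 3 - 2.0 * E - 5.0
E0 = bisect_root(F, 2.0, 3.0)
```

Any bracketing root finder works here; once the spectral function is in hand, extracting a discrete energy really is this routine a step.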
Chapter 2
Quantum Scattering Theory in dDimensions
The methods of progress in theoretical physics have undergone a vast change during the present century. The classical tradition has been to consider the world to be an association of observable objects (particles, fluids, fields, etc.) moving about according to definite laws of force, so that one could form a mental picture in space and time of the whole scheme. This led to a physics whose aim was to make assumptions about the mechanism and forces connecting these observable objects, to account for their behavior in the simplest possible way. It has become increasingly evident in recent times, however, that nature works on a different plan. Her fundamental laws do not govern the world as it appears in our mental picture in any very direct way, but instead they control a substratum of which we cannot form a mental picture without introducing irrelevancies.
P.A.M. Dirac, Quantum Mechanics (1930)
Nearly all physics experiments measure the outcome of scattering events. This ubiquity has made scattering theory a crucial part of any standard quantum text. Not surprisingly, all the attention given to scattering processes has led to the invention of very powerful theoretical tools, many of which can be applied to problems which are not traditional scattering problems. After this chapter, our use of scattering theory will involve mostly non-traditional uses of the tools of scattering theory. However, those tools are so important to what follows that we must provide at least a summary of the basic theory. There are nearly as many approaches to scattering as authors of quantum mechanics textbooks. As is typical, we begin by defining the problem and the idea of the scattering cross-section. We then make the somewhat lengthy calculation which relates the differential cross-section to the potential of the scatterer. We perform this calculation for arbitrary spatial dimension. At first, this may seem like more work than necessary to review scattering theory. However, in what follows we will frequently use two dimensional scattering theory. While we could have derived everything in two dimensions, we would then have lost the reassuring feeling of seeing familiar three dimensional results. The arbitrary dimension derivation gives us both. We proceed to consider the consequences of particle conservation, or unitarity, and derive the d-dimensional optical theorem. It is interesting to note that for both this calculation and the previous one, the dimensional dependence enters only through the asymptotic expansion of the plane wave. Once we have this machinery in hand, we proceed to discuss point scatterers or "zero range interactions," as they will play a large role in various applications which follow. In the final section we focus briefly on two dimensions, since two dimensional scattering theory is the stage on which all the applications play out.
2.1 Cross-Sections
At first, we will generalize to arbitrary spatial dimension a calculation from [28] (pp. 803-5) relating the scattering cross-section to matrix elements of the potential, $\hat V$. We consider a domain in which the stationary solutions of the Schrodinger equation are known, and we label these by $\psi_k$. For example, in free space,
\[
\psi_k(\mathbf r) = e^{i\mathbf k\cdot\mathbf r}. \tag{2.1}
\]
In the presence of a potential there will be new stationary solutions, labeled $\psi^{(\pm)}_k$ in terms of $\psi_k$, where the superscript plus or minus labels the asymptotic behavior of $d$-dimensional spherical waves. In particular,
\[
\hat H\,\psi^{(\pm)}_k = E\,\psi^{(\pm)}_k \tag{2.2}
\]
and
\[
\psi^{(\pm)}_k(\mathbf r)\ \xrightarrow{\;r\to\infty\;}\ \psi_k(\mathbf r) + f^{\pm}_k(\Omega)\,\frac{e^{\pm ikr}}{r^{(d-1)/2}}. \tag{2.3}
\]
We assume the plane wave $\psi_k(\mathbf r)$ is wide but finite, so we may always go far enough away that scattering at any angle but $\theta = 0$ involves only the scattered part of the wave. Since the flux of the scattered wave is $\mathbf j = \frac{\hbar}{m}\,\mathrm{Im}\{\psi^{*}_{\rm scatt}\nabla\psi_{\rm scatt}\} = \hat{\mathbf r}\,v\,|f_k(\Omega)|^2/r^{d-1}$, the probability per unit time of a scattered particle to cross a surface element $da$ is
\[
v\,\frac{|f_k(\Omega)|^2}{r^{d-1}}\,da = v\,|f_k(\Omega)|^2\,d\Omega. \tag{2.4}
\]
But the current density (flux per unit area) in the incident wave is $v$, so
\[
\frac{d\sigma_{a\to b}}{d\Omega} = \left|f^{+}_{k_a}(\Omega_b)\right|^2. \tag{2.5}
\]
If unambiguous, we will replace the subscripts $k_a$ and $k_b$ by $a$ and $b$ respectively.
We proceed to develop a relationship between the scattering function $f$ and matrix elements of the potential, $\hat V$. This will lead us to the definition of the so-called scattering t-matrix. Consider two potentials, $U(\mathbf r)$ and $\tilde U(\mathbf r)$ (both of which fall off faster than $1/r$). We will show
\begin{align}
\left\langle \tilde\psi^{-}_b \right| \hat U - \hat{\tilde U} \left| \psi^{+}_a \right\rangle
&= \int \tilde\psi^{-*}_b(\mathbf r)\left[U(\mathbf r) - \tilde U(\mathbf r)\right]\psi^{+}_a(\mathbf r)\,d\mathbf r \tag{2.6}\\
&= \frac{\hbar^2}{m}\, i^{\frac{d+1}{2}}\,(2\pi)^{\frac{d-1}{2}}\, k^{\frac{3-d}{2}} \left[ f^{+}_a(\Omega_b) - \tilde f^{-*}_b(-\Omega_a) \right]. \tag{2.7}
\end{align}
We begin with the Schrodinger equation for the $\psi$'s:
\begin{align}
\left(-\frac{\hbar^2}{2m}\nabla^2 + \hat{\tilde U}\right)\tilde\psi^{-}_b &= E\,\tilde\psi^{-}_b \tag{2.8}\\
\left(-\frac{\hbar^2}{2m}\nabla^2 + \hat U\right)\psi^{+}_a &= E\,\psi^{+}_a. \tag{2.9}
\end{align}
We multiply (2.9) by $\tilde\psi^{-*}_b$ and the complex conjugate of (2.8) by $\psi^{+}_a$ and then subtract the latter equation from the former. Since $U(\mathbf r)$ and $\tilde U(\mathbf r)$ are real, we have (dropping the $a$'s and $b$'s when unambiguous)
\[
-\frac{\hbar^2}{2m}\left\{\tilde\psi^{-*}\nabla^2\psi^{+} - \psi^{+}\nabla^2\tilde\psi^{-*}\right\} + \tilde\psi^{-*}\left[\hat U - \hat{\tilde U}\right]\psi^{+} = 0. \tag{2.10}
\]
We integrate over a sphere of radius $R$ centered at the origin to get
\[
\left\langle \tilde\psi^{-}_b \right| \hat U - \hat{\tilde U} \left| \psi^{+}_a \right\rangle
= \frac{\hbar^2}{2m}\lim_{R\to\infty}\int_{r<R}\left\{\tilde\psi^{-*}\nabla^2\psi^{+} - \psi^{+}\nabla^2\tilde\psi^{-*}\right\}d\mathbf r. \tag{2.11}
\]
For two functions of $\mathbf r$, $\phi_1$ and $\phi_2$, we define
\[
W[\phi_1,\phi_2] \equiv \left(\phi_1\frac{\partial\phi_2}{\partial r} - \phi_2\frac{\partial\phi_1}{\partial r}\right)\bigg|_{r=R}\,R^{\,d-1} \tag{2.12}
\]
and its integral over the $d$-dimensional sphere
\[
\{\phi_1,\phi_2\}_R = \int W[\phi_1,\phi_2]\,d\Omega.
\]
Green's theorem implies
\[
\{\phi_1,\phi_2\}_R = \int_{r<R}\left[\phi_1\nabla^2\phi_2 - \phi_2\nabla^2\phi_1\right]d\mathbf r \tag{2.13}
\]
and thus equation (2.11) may be written
\[
\left\langle \tilde\psi^{-}_b \right| \hat U - \hat{\tilde U} \left| \psi^{+}_a \right\rangle
= \frac{\hbar^2}{2m}\lim_{R\to\infty}\left\{\tilde\psi^{-*},\,\psi^{+}\right\}_R. \tag{2.14}
\]
To evaluate the surface integral, we substitute the asymptotic form of the 's:
n lim ~ R ( ; lim f R 
; !1 !1
o +
;
r {z 2
3
R R ) eikr eika r
!1
= lim e
z
eika r R + Rlim e
z o{
;
(
d;1
R }
+ Rlim
(

!1
;f
2 r d;1 R ikr ikr ) e f+ e : (2.15) d 2 2 r{z;1 r d;1 R }
!1
;
ikb r f +
eikr
) {
+
4
Since we are performing these integrals at large r, we require only the asymptotic form of the plane wave and only in a form suitable for integration. We nd this form by doing a stationary phase integral 41] of an arbitrary function of solid angle against a plane wave at large r. That is, Z I = rlim eik rf ( r ) d r (2.16)
!1
We nd the points where the exponential varies most slowly as a function of the integration variables, in this case the angles in r . Since k r = kr cos ( kr ) the stationary phase points will occur at kr = 0 . We expand the exponential around each of these points to yield Z 1 ( ( 2 f( ) d + I exp ikr d=11 1 ; 2 ki) ; ri) r r i Z ( ( 2 f( ) d : exp ikr d=11 ;1 + 1 ki) ; ri) r r i 2
; ;
We perform all the integrals using complex Gaussian integration to yield an asymptotic form for the plane wave (to be used only in an integral):
\[
e^{i\mathbf k\cdot\mathbf r} \asymp \left(\frac{2\pi}{ikr}\right)^{\frac{d-1}{2}}\left[\delta(\Omega_r - \Omega_k)\,e^{ikr} + i^{\,d-1}\,\delta(\Omega_r + \Omega_k)\,e^{-ikr}\right] \tag{2.17}
\]
where $\Omega_r + \Omega_k = 0$ (enforced by the second $\delta$ function) means that $\hat{\mathbf r} = -\hat{\mathbf k}$.
We'll attack the integrals in equation (2.15) one at a time, beginning with
\[
\left\{e^{-i\mathbf k_b\cdot\mathbf r},\,e^{i\mathbf k_a\cdot\mathbf r}\right\}_R
= iR^{\,d-1}\int (\mathbf k_a + \mathbf k_b)\cdot\hat{\mathbf r}\;e^{iR(\mathbf k_a - \mathbf k_b)\cdot\hat{\mathbf r}}\,d\Omega. \tag{2.18}
\]
Since $\mathbf k_a$ and $\mathbf k_b$ have the same length, $\mathbf k_a + \mathbf k_b$ is orthogonal to $\mathbf k_a - \mathbf k_b$. Thus, we can always choose our angular integrals such that our innermost integral is exactly zero:
\[
\int_0^{2\pi}\cos\theta\;e^{ia\sin\theta}\,d\theta = \frac{1}{ia}\int_0^{2\pi}\frac{\partial}{\partial\theta}\,e^{ia\sin\theta}\,d\theta = \frac{1}{ia}\left[e^{ia\sin\theta}\right]_0^{2\pi} = 0. \tag{2.19}
\]
Thus $\lim_{R\to\infty}\{e^{-i\mathbf k_b\cdot\mathbf r}, e^{i\mathbf k_a\cdot\mathbf r}\}_R = 0$.
We can do the second integral using the asymptotic form of the plane wave. The only contribution comes from the incoming part of the plane wave,
\[
\lim_{R\to\infty}\left\{e^{-i\mathbf k_b\cdot\mathbf r},\,f^{+}\frac{e^{ikr}}{r^{(d-1)/2}}\right\}_R
= 2\,i^{\frac{d+1}{2}}\,(2\pi)^{\frac{d-1}{2}}\,k^{\frac{3-d}{2}}\,f^{+}(\Omega_b). \tag{2.20}
\]
We can do the third integral exactly the same way. Again, only the incoming part of the plane wave contributes,
\[
\lim_{R\to\infty}\left\{\tilde f^{-*}\frac{e^{ikr}}{r^{(d-1)/2}},\,e^{i\mathbf k_a\cdot\mathbf r}\right\}_R
= -2\,i^{\frac{d+1}{2}}\,(2\pi)^{\frac{d-1}{2}}\,k^{\frac{3-d}{2}}\,\tilde f^{-*}(-\Omega_a). \tag{2.21}
\]
The fourth integral is zero since both waves are purely outgoing. Thus
\[
\lim_{R\to\infty}\left\{\tilde\psi^{-*},\,\psi^{+}\right\}_R
= 2\,i^{\frac{d+1}{2}}\,(2\pi)^{\frac{d-1}{2}}\,k^{\frac{3-d}{2}}\left[f^{+}(\Omega_b) - \tilde f^{-*}(-\Omega_a)\right] \tag{2.22}
\]
which, when substituted into equation (2.14), gives the desired result.
Let's apply the result (2.7) to the case $\hat U = \hat V$, $\hat{\tilde U} = 0$. We have
\[
\left\langle \psi_b \right| \hat V \left| \psi^{+}_a \right\rangle
= \frac{\hbar^2}{m}\,(2\pi)^{\frac{d-1}{2}}\,k^{\frac{3-d}{2}}\,i^{\frac{d+1}{2}}\,f^{+}_a(\Omega_b). \tag{2.23}
\]
We also apply it to the case $\hat U = 0$, $\hat{\tilde U} = \hat V$, yielding
\[
\left\langle \psi^{-}_b \right| \hat V \left| \psi_a \right\rangle
= \frac{\hbar^2}{m}\,(2\pi)^{\frac{d-1}{2}}\,k^{\frac{3-d}{2}}\,i^{\frac{d+1}{2}}\,f^{-*}_b(-\Omega_a). \tag{2.24}
\]
Finally, we apply (2.7) to the case $\hat U = \hat V$, $\hat{\tilde U} = \hat V$, yielding
\[
f^{+}_a(\Omega_b) = f^{-*}_b(-\Omega_a). \tag{2.25}
\]
We now have
\[
\frac{d\sigma_{a\to b}}{d\Omega} = \left|f^{+}_a(\Omega_b)\right|^2
= \frac{m^2}{\hbar^4}\,(2\pi)^{1-d}\,k^{\,d-3}\left|\left\langle\psi_b\right|\hat V\left|\psi^{+}_a\right\rangle\right|^2. \tag{2.26}
\]
Since the $d$-dimensional density of states per unit energy is
\[
\varrho_d(E) = \frac{1}{(2\pi\hbar)^d}\,p^{\,d-1}\frac{dp}{dE}
= \frac{1}{(2\pi\hbar)^d}\,(\hbar k)^{d-1}\frac{m}{\hbar k}
= \frac{m}{(2\pi\hbar)^d}\,(\hbar k)^{d-2} \tag{2.27}
\]
and the initial velocity is $v = \hbar k/m$, we can write our final result for the cross-section in a more useful form,
\[
\frac{d\sigma_{a\to b}}{d\Omega} = \frac{2\pi}{\hbar v}\left|\left\langle\psi_b\right|\hat V\left|\psi^{+}_a\right\rangle\right|^2 \varrho_d(E) \tag{2.28}
\]
where all of the dimensional dependence is in the density of states and the matrix element.
For purposes which will become clear later, it is useful to define the so-called "t-matrix" operator, $\hat t(E)$, such that
\[
\hat t(E)\left|\psi_a\right\rangle = \hat V\left|\psi^{+}_a\right\rangle \tag{2.29}
\]
and our result may be rewritten as
\[
\frac{d\sigma_{a\to b}}{d\Omega} = \frac{2\pi}{\hbar v}\left|\left\langle\psi_b\right|\hat t(E)\left|\psi_a\right\rangle\right|^2 \varrho_d(E). \tag{2.30}
\]
2.2 Unitarity and the Optical Theorem
The fact that particles are neither created nor destroyed in the scattering process forces a specific relationship between the total cross-section, $\sigma = \int (d\sigma/d\Omega)\,d\Omega$, and $f(\Omega = 0)$. It is to this relationship that we now turn. This section closely follows a calculation from [26] (pp. 508-510) but, as in the previous section, generalizes it to arbitrary spatial dimension.
Suppose the incoming wave is a linear combination of plane waves, as in
\[
\psi_{\rm incoming} = \int F(\Omega')\,e^{i\mathbf k_{\Omega'}\cdot\mathbf r}\,d\Omega'. \tag{2.31}
\]
So the asymptotic outgoing form of $\psi$ is (where $f(\Omega_a\to\Omega_b) = f^{(+)}_a(\Omega_b)$ is simply a more symmetric notation than used in the previous section)
\[
\psi \approx \int F(\Omega')\,e^{i\mathbf k_{\Omega'}\cdot\mathbf r}\,d\Omega' + \frac{e^{ikr}}{r^{(d-1)/2}}\int F(\Omega')\,f(\Omega'\to\Omega)\,d\Omega'. \tag{2.32}
\]
For large $r$ we can use the asymptotic form of the plane wave (2.17) to perform the first integral. We then get
\[
\psi \approx \left(\frac{2\pi}{ik}\right)^{\frac{d-1}{2}}\frac{1}{r^{(d-1)/2}}\left[F(\Omega)\,e^{ikr} + i^{\,d-1}F(-\Omega)\,e^{-ikr}\right] + \frac{e^{ikr}}{r^{(d-1)/2}}\int F(\Omega')\,f(\Omega'\to\Omega)\,d\Omega'. \tag{2.33}
\]
We can write this more simply (without the common factor $(2\pi/ik)^{(d-1)/2}\,i^{\,d-1}$)
\[
\psi \approx \frac{e^{-ikr}}{r^{(d-1)/2}}\,F(-\Omega) + i^{\,1-d}\,\frac{e^{ikr}}{r^{(d-1)/2}}\,\hat S F(\Omega) \tag{2.34}
\]
where
\[
\hat S = 1 + \left(\frac{ik}{2\pi}\right)^{\frac{d-1}{2}}\hat f \tag{2.35}
\]
and $\hat f$ is an integral operator defined by
\[
\hat f F(\Omega) = \int F(\Omega')\,f(\Omega'\to\Omega)\,d\Omega'. \tag{2.36}
\]
$\hat S$ is called the "scattering operator" or "S-matrix." Since the scattering process is elastic, we must have as many particles going into the center as there are going out of the center, and the normalization of these two waves must be the same. So $\hat S$ is unitary:
\[
\hat S^{\dagger}\hat S = 1. \tag{2.37}
\]
Substituting (2.35) we get
\[
i^{\frac{1-d}{2}}\,\hat f^{\dagger} + i^{\frac{d-1}{2}}\,\hat f = -\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\hat f^{\dagger}\hat f \tag{2.38}
\]
then divide through by $i$:
\[
i^{\frac{d-3}{2}}\,\hat f - i^{\frac{3-d}{2}}\,\hat f^{\dagger} = i\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\hat f^{\dagger}\hat f. \tag{2.39}
\]
We apply the definition (2.36) and have
\[
i^{\frac{d-3}{2}}\,f(\Omega'\to\Omega) - i^{\frac{3-d}{2}}\,f^{*}(\Omega\to\Omega')
= i\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\int f^{*}(\Omega\to\Omega'')\,f(\Omega'\to\Omega'')\,d\Omega'', \tag{2.40}
\]
the unitarity condition for scattering. For $\Omega = \Omega'$ we have
\[
\mathrm{Im}\left\{ i^{\frac{d-3}{2}}\,f(\Omega\to\Omega)\right\}
= \frac{1}{2}\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\int \left|f(\Omega\to\Omega'')\right|^2 d\Omega''
= \frac{1}{2}\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\sigma \tag{2.41}
\]
which is the optical theorem.
Invariance under time reversal (interchanging initial and final states and the direction of motion of each wave) implies
\begin{align}
S(\Omega\to\Omega') &= S(-\Omega'\to-\Omega)\\
f(\Omega\to\Omega') &= f(-\Omega'\to-\Omega)
\end{align}
which is called the "reciprocity theorem."
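As a quick sanity check of (2.41), consider $d = 3$, where $i^{(d-3)/2} = 1$ and the theorem reduces to the familiar $\mathrm{Im}\,f(0) = (k/4\pi)\,\sigma$. The snippet below is an illustrative aside (the isotropic s-wave amplitude $f = e^{i\delta}\sin\delta/k$ is a standard textbook input, not something derived in this thesis); the theorem then holds identically in the phase shift $\delta$.

```python
import cmath
import math

def f_swave_3d(k, delta):
    # isotropic 3D s-wave amplitude, f = e^{i delta} sin(delta) / k
    return cmath.exp(1j * delta) * math.sin(delta) / k

k = 1.7
for delta in (0.2, 1.0, 2.5):
    f = f_swave_3d(k, delta)
    sigma = 4.0 * math.pi * abs(f) ** 2      # sigma = integral of |f|^2 dOmega
    # d = 3 case of (2.41): Im f(Omega -> Omega) = (k / 4 pi) * sigma
    assert abs(f.imag - k * sigma / (4.0 * math.pi)) < 1e-14
```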
2.3 Green Functions
The cross-section is frequently the end result of a scattering calculation. However, for most of the applications considered here, we are concerned with more general properties of scattering solutions. For these applications, the machinery of Green functions is invaluable and is introduced in this section. Much of the material in this section is covered in greater detail in [10]. A more formal development, some examples and some useful Green function identities are given in appendix A.
The idea of a Green function operator is both simple and beautiful. Suppose a quantum system is initially (at $t = 0$) in a given state. What is the amplitude that the system will be found in some other state a time $\tau$ later? This information is contained in the time-domain Green function. We can take the Fourier transform of this function with respect to $\tau$ and get the energy-domain Green function. It is the energy-domain Green function which we explore in some detail below.
We define an energy-domain Green function operator for the Hamiltonian $\hat H$ via
\[
\left(z - \hat H \pm i\epsilon\right)\hat G^{(\pm)}(z) = \hat 1 \tag{2.42}
\]
where $\hat 1$ is the identity operator. The $i\epsilon$ is used to avoid difficulties when $z$ is equal to an eigenvalue of $\hat H$. $\epsilon$ is always taken to zero at the end of a calculation. We frequently use the Green function operator, $\hat G_o(z)$, corresponding to $\hat H = \hat H_o = -\frac{\hbar^2}{2m}\nabla^2$.
Consider the Hamiltonian $\hat H = \hat H_o + \hat V$. As in the previous section, we denote the eigenstates of $\hat H_o$ by $|\psi_a\rangle$ and the eigenstates of $\hat H$ by $|\psi^{\pm}_a\rangle$. These satisfy
\begin{align}
\hat H_o\,|\psi_a\rangle &= E_a\,|\psi_a\rangle \tag{2.43}\\
\hat H\,|\psi^{\pm}_a\rangle &= E_a\,|\psi^{\pm}_a\rangle. \tag{2.44}
\end{align}
We claim
\[
|\psi^{\pm}_a\rangle = |\psi_a\rangle + \hat G^{(\pm)}_o(E_a)\hat V\,|\psi^{\pm}_a\rangle. \tag{2.45}
\]
The claim is easily proved by applying the operator $(E_a - \hat H_o \pm i\epsilon)$ to the left of both sides of the equation, since $(E_a - \hat H_o \pm i\epsilon)|\psi^{\pm}_a\rangle = \hat V|\psi^{\pm}_a\rangle$, $(E_a - \hat H_o \pm i\epsilon)|\psi_a\rangle = 0$ and $(E_a - \hat H_o \pm i\epsilon)\hat G^{(\pm)}_o(E_a) = \hat 1$. Using the t-matrix, we can rewrite this as
\[
|\psi^{\pm}_a\rangle = |\psi_a\rangle + \hat G^{(\pm)}_o(E_a)\,\hat t^{(\pm)}(E_a)\,|\psi_a\rangle \tag{2.46}
\]
but we can also rewrite (2.45) by iterating it (inserting the right hand side into itself as $|\psi^{\pm}_a\rangle$) to give
\[
|\psi^{\pm}_a\rangle = |\psi_a\rangle + \hat G^{(\pm)}_o(E_a)\hat V\left[|\psi_a\rangle + \hat G^{(\pm)}_o\hat V\,|\psi_a\rangle + \cdots\right]. \tag{2.47}
\]
From (2.46, 2.47) we get a useful expression for the t-matrix:
\[
\hat t^{(\pm)}(z) = \hat V + \hat V\hat G^{(\pm)}_o(z)\hat V + \hat V\hat G^{(\pm)}_o(z)\hat V\hat G^{(\pm)}_o(z)\hat V + \cdots. \tag{2.48}
\]
We factor out the first $\hat V$ in each term and sum the geometric series to yield
\begin{align}
\hat t^{(\pm)}(z) &= \hat V\left[1 - \hat G^{(\pm)}_o(z)\hat V\right]^{-1}
= \hat V\left[\hat G^{(\pm)}_o(z)\left\{\hat G^{(\pm)}_o(z)^{-1} - \hat V\right\}\right]^{-1} \nonumber\\
&= \hat V\left[z - \hat H_o - \hat V \pm i\epsilon\right]^{-1}\left\{z - \hat H_o \pm i\epsilon\right\}
= \hat V\,\hat G^{(\pm)}(z)\left\{z - \hat H_o \pm i\epsilon\right\} \tag{2.49}
\end{align}
where (2.49) is frequently used as the definition of $\hat t^{(\pm)}(z)$.
We now proceed to develop an equation for the Green function itself:
\begin{align}
\hat G^{(\pm)}(z) &= \left[z - \hat H_o - \hat V \pm i\epsilon\right]^{-1} \tag{2.50}\\
&= \left[\left(z - \hat H_o \pm i\epsilon\right)\left(1 - \left\{z - \hat H_o \pm i\epsilon\right\}^{-1}\hat V\right)\right]^{-1} \tag{2.51}\\
&= \left[1 - \hat G^{(\pm)}_o(z)\hat V\right]^{-1}\left(z - \hat H_o \pm i\epsilon\right)^{-1} \tag{2.52}\\
&= \left[1 - \hat G^{(\pm)}_o(z)\hat V\right]^{-1}\hat G^{(\pm)}_o(z). \tag{2.53}
\end{align}
We expand $\left[1 - \hat G^{(\pm)}_o(z)\hat V\right]^{-1}$ in a power series to get
\[
\hat G^{(\pm)}(z) = \hat G^{(\pm)}_o(z) + \hat G^{(\pm)}_o(z)\hat V\hat G^{(\pm)}_o(z) + \hat G^{(\pm)}_o(z)\hat V\hat G^{(\pm)}_o(z)\hat V\hat G^{(\pm)}_o(z) + \cdots \tag{2.54}
\]
which we can rewrite as
\begin{align}
\hat G^{(\pm)}(z) &= \hat G^{(\pm)}_o(z) + \hat G^{(\pm)}_o(z)\hat V\left[\hat G^{(\pm)}_o(z) + \hat G^{(\pm)}_o(z)\hat V\hat G^{(\pm)}_o(z) + \cdots\right] \tag{2.55}\\
&= \hat G^{(\pm)}_o(z) + \hat G^{(\pm)}_o(z)\hat V\hat G^{(\pm)}(z) \tag{2.56}
\end{align}
and, using (2.49) and the definition of $\hat G^{(\pm)}(z)$, we get a t-matrix version of this equation, namely
\[
\hat G^{(\pm)}(z) = \hat G^{(\pm)}_o(z) + \hat G^{(\pm)}_o(z)\,\hat t^{(\pm)}(z)\,\hat G^{(\pm)}_o(z). \tag{2.57}
\]
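The operator identities (2.48), (2.49) and (2.57) hold in any Hilbert space, so they can be checked directly with finite matrices. The sketch below is a numerical aside, assuming NumPy; the matrices are random Hermitian stand-ins, not a physical Hamiltonian, and $\mathrm{Im}\,z > 0$ plays the role of $+i\epsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# random Hermitian H0 and V
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H0 = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V = (B + B.conj().T) / 2
z = 0.7 + 0.05j
I = np.eye(n)

G0 = np.linalg.inv(z * I - H0)          # free Green function, eq. (2.42)
G = np.linalg.inv(z * I - H0 - V)       # full Green function
t = V @ np.linalg.inv(I - G0 @ V)       # summed Born series, eqs. (2.48)-(2.49)

assert np.allclose(t, V @ G @ (z * I - H0))   # eq. (2.49)
assert np.allclose(G, G0 + G0 @ t @ G0)       # eq. (2.57)
```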
2.3.1 Green functions in the position representation
So far we have looked at Green functions only as operators rather than functions in a particular basis. Quite a few of the specific calculations which follow are performed in position representation, and it is useful to identify some general properties of $d$-dimensional Green functions. We begin from the defining equation (2.42), rewritten in position space:
\[
\left[z + \frac{\hbar^2}{2m}\nabla^2_{\mathbf r} - V(\mathbf r) \pm i\epsilon\right]G^{(\pm)}(\mathbf r,\mathbf r',z) = \delta(\mathbf r - \mathbf r'). \tag{2.58}
\]
Z
"
We now consider the ! 0 limit of this equation. We assume that the potential is nite and continuous at r = ro so V (ro ; r) can be replaced by V (ro ) in the integrand. We can safely assume that Z lim G (ro ; r 0 z) dr = 0 (2.60) 0 since, if it weren't, the integral of the r2 G term would be in nite. We are left with Z (2.61) lim r2G (ro ; r 0 z) dr = 2m : 2 0
! !
B( )
Z h2 z + 2m r2 ; V (ro ; r) i G (ro ; r 0 z) dr = (r) dr: r B( )
#
(2.59)
B(ro )
@ G (r ; r 0 z ) d 1 d = 2m : (2.62) o h2 @B(0 ) @r So we have a rst order di erential equation for G (r r z ) for small = jr ; ro j: @ G ( z) = 2m 1 d (2.63) @ h2 Sd
lim 0
! ; 0 ;
We can apply Gauss's theorem to the integral and get
B (0 )
h
Z
where $S_d$ is the surface area of the unit sphere in $d$ dimensions,
\[
S_d = \frac{2\pi^{d/2}}{\Gamma(d/2)} \tag{2.64}
\]
(this is easily derived by taking the product of $d$ Gaussian integrals and then performing the integral in radial coordinates, see e.g., [32], pp. 501-2). In particular, in two dimensions the Green function has a logarithmic singularity "on the diagonal," where $\mathbf r\to\mathbf r'$. In $d > 2$ dimensions, the diagonal singularity goes as $r^{2-d}$. It is worth noting that in one dimension there is no diagonal singularity in $G$.
As a consequence of our derivation of the form of the singularity in $G$, we have proved that, as long as $V(\mathbf r)$ is finite and continuous in the neighborhood of $\mathbf r'$, then
\[
\lim_{\mathbf r\to\mathbf r'}\left|G(\mathbf r,\mathbf r',z) - G_o(\mathbf r,\mathbf r',z)\right| < \infty. \tag{2.65}
\]
This will prove useful in what follows.
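The surface-area formula (2.64) is easy to sanity check against the familiar low-dimensional values (a small illustrative snippet, not part of the thesis):

```python
import math

def S_d(d):
    # surface area of the unit sphere in d dimensions, eq. (2.64)
    return 2.0 * math.pi ** (d / 2) / math.gamma(d / 2)

assert abs(S_d(1) - 2.0) < 1e-12             # two endpoints of an interval
assert abs(S_d(2) - 2.0 * math.pi) < 1e-12   # circumference of the unit circle
assert abs(S_d(3) - 4.0 * math.pi) < 1e-12   # surface area of the unit sphere
```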
2.4 Zero Range Interactions
For the sake of generality, up to now we have not mentioned a specific potential. However, in what follows we will frequently be concerned with potentials which interact with the particle only at one point. Such interactions are frequently called "zero range interactions" or "zero range potentials." There is a wealth of literature about zero range interactions in two and three dimensions, including [7, 13, 2], their application to chaotic systems, for example [3, 38], and their applications in statistical mechanics, including [31, 22].
In one dimension, the Dirac delta function is just such a point interaction. However, in two or more dimensions, the Dirac delta function does not scatter incoming particles at all. This is shown for two dimensions in [7]. In more than one dimension, the wavefunction can be set to 0 at a single point without perturbing the wavefunction, since an infinitesimal part of the singular solution to the Schrodinger equation can force $\psi$ to be zero at a single point. Since there are no singular solutions to the Schrodinger equation in one dimension, the one dimensional Dirac $\delta$ function does scatter, as is well known.
Trying to construct a potential corresponding to a zero-range interaction can be quite challenging. The formal construction of these interactions leads one to consider the Hamiltonian in a reduced space where some condition must be satisfied at the point of
interaction. Choosing the wave function to be zero at the interaction point leads to the mathematical formalism of "self-adjoint extension theory," so named because the restriction of the Hamiltonian operator to the space of functions which are zero at a point leaves a non-self-adjoint Hamiltonian. The family of possible extensions which would make the Hamiltonian self-adjoint correspond to various scattering strengths [2]. Much of this complication arises because of an attempt to write the Hamiltonian explicitly or to make sure that every possible zero range interaction is included in the formalism. To avoid these details, we consider a very limited class of zero-range interactions, namely zero-range s-wave scatterers.
Consider a scatterer placed at the origin in two dimensions. We assume the physical scatterer being modeled is small compared to the wavelength, $\lambda = 2\pi/\sqrt{E}$, and thus scatters only s-waves. So we can write the t-matrix (for a general discussion of t-matrices see, e.g., [35]),
\[
\hat t(z) = |0\rangle\, s(z)\,\langle 0|. \tag{2.66}
\]
If, at energy $E$, $|\phi\rangle$ is incident on the scatterer, we write the full wave (incident plus scattered) as
\[
|\psi\rangle = |\phi\rangle + \hat G_o(E)\,\hat t(E)\,|\phi\rangle \tag{2.67}
\]
which may be written more clearly in position space
\[
\psi(\mathbf r) = \phi(\mathbf r) + G_o(\mathbf r, 0, E)\,s(E)\,\phi(0). \tag{2.68}
\]
At this point the scatterer strength, $s(z)$, is simply a complex constant. We can consider $s(E)$ as it relates to the cross-section. From equation (2.26) with $\hat V|\psi^{+}_a\rangle$ replaced by $\hat t\,|\psi_a\rangle$ we have (since $\langle\psi_b|\hat t(z)|\psi_a\rangle = s(z)$)
\[
\sigma(E) = S_d\,\frac{m^2}{\hbar^4}\,(2\pi)^{1-d}\,k^{\,d-3}\,|s(E)|^2 \tag{2.69}
\]
where $S_d$, the surface area of the unit sphere in $d$ dimensions, is given by (2.64).
We also consider another length scale, akin to the three-dimensional scattering length. Instead of looking at the asymptotic form of the wave function, we look at the s-wave component of the wave function by using $R_o(r)$, the regular part of the s-wave solution to the Schrodinger equation, as an incident wave. We then have
\[
\psi^{+}(\mathbf r) = R_o(r) + G^{+}_o(\mathbf r, 0, E)\,s^{+}(E)\,R_o(0). \tag{2.70}
\]
We define an effective radius, $a_e$, as the smallest positive real number solution of
\[
R_o(x) + s^{+}(E)\,G^{+}_o(x, E) = 0. \tag{2.71}
\]
We can reverse this line of argument and find $s^{+}(E)$ for a particular $a_e$:
\[
s^{+}(E) = -\frac{R_o(a_e)}{G^{+}_o(a_e, E)}. \tag{2.72}
\]
The point interaction accounts for the s-wave part of the scattering from a hard disk of radius $a_e$. From equation (2.69) the cross-section of a point interaction with effective radius $a_e$ is
\[
\sigma(E) = S_d\,\frac{m^2}{\hbar^4}\,(2\pi)^{1-d}\,k^{\,d-3}\left|\frac{R_o(a_e)}{G^{+}_o(a_e, E)}\right|^2 \tag{2.73}
\]
but this is exactly the s-wave part of the cross-section of a hard disk in $d$ dimensions. Though zero range interactions have the cross-section of hard disks, depending on the dimension and the value of $s^{+}(E)$, the point interaction can be attractive or repulsive.
In three dimensions, the $E\to 0$ limit of $a_e$ exists and is the scattering length as defined in the modern sense [36]. It is interesting to note that other authors, e.g., Landau and Lifshitz in their classic quantum mechanics text [26], define the scattering length as we have defined the effective radius, namely as the first node in the s-wave part of the wave function. These definitions are equivalent in three dimensions but quite different in two, where the modern scattering length is not well defined but, for any finite energy, the effective radius is.
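In two dimensions the defining condition (2.71) can be solved numerically. The sketch below is illustrative only: it assumes SciPy for the Bessel functions, uses units $\hbar = 2m = 1$ so that $k = \sqrt E$ (as in the wavelength formula above), and takes $R_o(x) = J_0(kx)$, $G^{+}_o(x,E) = -(i/4)H^{(1)}_0(kx)$. It computes $s^{+}(E)$ from a chosen $a_e$ via (2.72) and then recovers $a_e$ by root-finding the real part of the node condition, whose first positive zero coincides with $a_e$ here.

```python
import numpy as np
from scipy.special import j0, y0
from scipy.optimize import brentq

def s_plus(E, a_e):
    # eq. (2.72): s+(E) = -4i J0(k a_e) / H0^(1)(k a_e),  k = sqrt(E)
    ka = np.sqrt(E) * a_e
    return -4j * j0(ka) / (j0(ka) + 1j * y0(ka))

def effective_radius(E, s):
    # smallest positive root of eq. (2.71): R0(x) + s G0+(x, E) = 0, with
    # R0(x) = J0(k x) and G0+(x, E) = -(i/4) H0^(1)(k x); units hbar = 2m = 1
    k = np.sqrt(E)

    def node(x):
        g0 = -0.25j * (j0(k * x) + 1j * y0(k * x))
        return (j0(k * x) + s * g0).real

    x = 1e-6                      # walk outward to bracket the first sign change
    while node(x) * node(x + 1e-3) > 0:
        x += 1e-3
    return brentq(node, x, x + 1e-3)

E, a_e = 2.0, 0.37
assert abs(effective_radius(E, s_plus(E, a_e)) - a_e) < 1e-6
```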
2.5 Scattering in two dimensions
While many of the techniques discussed in the following chapters are quite general, just as we will frequently use point interactions in applications due to their simplicity, we will usually work in two dimensions, either because of intrinsic interest (as in chapter 9) or because numerical work is easier in two dimensions than three. Since most people are familiar with scattering in three dimensions, some of the features of two dimensional scattering theory are, at first, surprising. For example,
\[
\frac{d\sigma_{a\to b}}{d\Omega} = \left|f^{+}_a(\Omega_b)\right|^2 = \frac{m^2}{\hbar^4}\,\frac{1}{2\pi k}\left|\left\langle\psi_b\right|\hat V\left|\psi^{+}_a\right\rangle\right|^2 \tag{2.74}
\]
implying that as $E\to 0$, $\sigma(E)\to\infty$, which is very different from three dimensions. Also, the optical theorem in two dimensions is different than its three-dimensional counterpart:
\[
\mathrm{Im}\left\{e^{-i\pi/4}\,f(\Omega\to\Omega)\right\} = \sqrt{\frac{k}{8\pi}}\;\sigma. \tag{2.75}
\]
We have already mentioned the difference in the diagonal behavior of the two and three dimensional Green functions. The logarithmic singularity in $G$ prevents a simple idea of scattering length from making sense in two dimensions. This singularity comes from the form of the free-scattering solutions in two dimensions (the equivalent of the three-dimensional spherical harmonics). In two dimensions the free scattering solutions are Bessel and Neumann functions of integer order. The small argument and asymptotic properties of these functions are summarized in appendix E.
In two dimensions, we can write the specific form of the causal t-matrix for a zero range interaction with effective radius $a_e$ located at $\mathbf r_s$: $\hat t^{+}(E) = |\mathbf r_s\rangle\, s^{+}(E)\,\langle\mathbf r_s|$ with
\[
s^{+}(E) = -4i\,\frac{J_o(\sqrt{E}\,a_e)}{H^{(1)}_o(\sqrt{E}\,a_e)} \tag{2.76}
\]
where $J_o(x)$ is the Bessel function of zeroth order and $H^{(1)}_o(x)$ is the Hankel function of zeroth order.
Chapter 3
Scattering in the Presence of Other Potentials
In chapter 2 we presented scattering theory in its traditional form. We computed cross-sections and scattering wave functions. In this chapter, we focus more on the tools of scattering theory and broaden their applicability. Here we begin to see the great usefulness of the bookkeeping associated with t-matrices. We will also begin to use scattering theory for closed systems, an idea which is confusing at the outset, but quite natural after some practice.
3.1 Multiple Scattering
Multiple scattering theory has been applied to many problems, from neutron scattering by atoms in a lattice to the scattering of electrons on surfaces [21]. In most applications, the individual scatterers are treated in the s-wave limit, i.e., they can be replaced by zero range interactions of appropriate strength. We begin our discussion of multiple scattering theory with this special case before moving on to the general case in the following section. This is done for pedagogical reasons. The general case involves some machinery which gets in the way of understanding important physical concepts.
3.1.1 Multiple Scattering of Zero Range Interactions
Consider a domain, with given boundary conditions and potential, in which the Green function operator, $\hat G_B(z)$, for the Schrodinger equation is known. Into this domain we place $N$ zero range interactions located at the positions $\{\mathbf r_i\}$ and with t-matrices $\{\hat t^{+}_i(z)\}$ given by $\hat t^{+}_i(z) = s^{+}_i(z)\,|\mathbf r_i\rangle\langle\mathbf r_i|$. At energy $E$, $\phi(\mathbf r)$ is incident on the set of scatterers and we want to find the outgoing solutions of the Schrodinger equation, $\psi^{+}(\mathbf r)$, in the presence of the scatterers. We define the functions $\psi^{+}_i(\mathbf r)$ via
\[
\psi^{+}_i(\mathbf r) = \phi(\mathbf r) + \sum_{j\neq i} G^{+}_B(\mathbf r, \mathbf r_j, E)\,s^{+}_j(E)\,\psi^{+}_j(\mathbf r_j). \tag{3.1}
\]
The number $\psi^{+}_i(\mathbf r_i)$ represents the amplitude of the wave that hits scatterer $i$ last. That is, $\psi^{+}_i(\mathbf r)$ is determined by all the other $\psi^{+}_j(\mathbf r)$ ($j\neq i$). The full solution can be written in terms of the $\psi^{+}_i(\mathbf r_i)$:
\[
\psi^{+}(\mathbf r) = \phi(\mathbf r) + \sum_{i} G^{+}_B(\mathbf r, \mathbf r_i, E)\,s^{+}_i(E)\,\psi^{+}_i(\mathbf r_i). \tag{3.2}
\]
The expression (3.1) gives a set of linear equations for the $\psi^{+}_i(\mathbf r_i)$. This can be seen more simply from the following substitution and rearrangement:
\[
\psi^{+}_i(\mathbf r_i) - \sum_{j\neq i} G^{+}_B(\mathbf r_i, \mathbf r_j, E)\,s^{+}_j(E)\,\psi^{+}_j(\mathbf r_j) = \phi(\mathbf r_i). \tag{3.3}
\]
We define the $N$-vectors $\mathbf a$ and $\mathbf b$ via $a_i = \psi^{+}_i(\mathbf r_i)$ and $b_i = \phi(\mathbf r_i)$ and rewrite (3.3) as a matrix equation
\[
\left[1 - \mathsf G^{+}_B(E)\,\mathsf t^{+}(E)\right]\mathbf a = \mathbf b \tag{3.4}
\]
where $1$ is the $N\times N$ identity matrix, $\mathsf t^{+}(E)$ is a diagonal $N\times N$ matrix defined by $(\mathsf t^{+})_{ii} = s^{+}_i(E)$ and $\mathsf G^{+}_B(E)$ is an off-diagonal propagation matrix given by
\[
\left(\mathsf G^{+}_B(E)\right)_{ij} = \begin{cases} G^{+}_B(\mathbf r_i, \mathbf r_j, E) & \text{for } i\neq j\\[2pt] 0 & \text{for } i = j.\end{cases} \tag{3.5}
\]
More explicitly, $1 - \mathsf G^{+}_B(E)\,\mathsf t^{+}(E)$ is given by (suppressing the "$E$" and "$+$")
\[
1 - \mathsf G_B\,\mathsf t = \begin{pmatrix}
1 & -G_B(\mathbf r_1,\mathbf r_2)\,s_2 & \cdots & -G_B(\mathbf r_1,\mathbf r_N)\,s_N\\
-G_B(\mathbf r_2,\mathbf r_1)\,s_1 & 1 & \cdots & -G_B(\mathbf r_2,\mathbf r_N)\,s_N\\
\vdots & \vdots & \ddots & \vdots\\
-G_B(\mathbf r_N,\mathbf r_1)\,s_1 & -G_B(\mathbf r_N,\mathbf r_2)\,s_2 & \cdots & 1
\end{pmatrix}. \tag{3.6}
\]
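The linear algebra of (3.3)-(3.6) is easy to exercise numerically. In the sketch below (illustrative, assuming NumPy), the strengths and propagator entries are random complex stand-ins rather than a real background Green function, which would come from, e.g., (2.76) and the methods of chapter 5. We solve (3.4), confirm that the solution satisfies the self-consistency condition (3.3), and check that it agrees with the explicitly summed multiple-scattering series.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
# diagonal matrix of strengths s_i(E) -- stand-in complex values here
t = np.diag(0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))
# off-diagonal background propagator GB(ri, rj, E), eq. (3.5): symmetric
# (reciprocity), zero on the diagonal; random stand-in values here
M = 0.1 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
GB = (M + M.T) / 2
np.fill_diagonal(GB, 0.0)

b = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # b_i = phi(r_i)
a = np.linalg.solve(np.eye(N) - GB @ t, b)                # eqs. (3.4)/(3.10)

# a_i satisfies the self-consistency condition (3.3)
for i in range(N):
    coupling = sum(GB[i, j] * t[j, j] * a[j] for j in range(N) if j != i)
    assert abs(a[i] - coupling - b[i]) < 1e-10

# the explicitly summed multiple-scattering series gives the same answer
series, term = b.copy(), b.copy()
for _ in range(200):
    term = GB @ (t @ term)
    series += term
assert np.linalg.norm(series - a) < 1e-10
```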
The off-diagonal propagator is required since the individual t-matrices account for the diagonal propagation. That is, the scattering events where the incident wave hits scatterer $i$, propagates freely and then hits scatterer $i$ again are already counted in $\hat t_i$.
We can look at this diagrammatically. We use a solid line to indicate causal propagation and a dashed line ending with an "$i$" to indicate scattering from the $i$th scatterer. With this "dictionary," the infinite series form of $\hat t_i$ is the sum of a single scattering from scatterer $i$, a scattering from $i$ followed by free propagation and a second scattering from $i$, and so on (3.7), while $\hat G_o + \hat G_o\hat t_i\hat G_o$ collects direct propagation plus propagation with one or more intermediate visits to scatterer $i$ (3.8). [Diagrams (3.7)-(3.8) not reproduced.]
Now we consider multiple scattering from two scatterers. The Green function has the direct term, terms from just scatterer 1, terms from just scatterer 2 and terms involving both, i.e., diagrams in which amplitude passes between scatterers 1 and 2 in every possible sequence (3.9).
The off-diagonal propagator appearing in multiple scattering theory allows us to add only the terms involving more than one scatterer, since the one-scatterer terms are already accounted for in each $\hat t^{+}_i$.
If, at energy $E$, $1 - \mathsf G^{+}_B\mathsf t^{+}$ is invertible, we can solve the matrix equation (3.4) for $\mathbf a$:
\[
\mathbf a = \left[1 - \mathsf G_B\,\mathsf t\right]^{-1}\mathbf b \tag{3.10}
\]
where the inverse here is just ordinary matrix inversion. We substitute (3.10) into (3.2) to get
\[
\psi^{+}(\mathbf r) = \phi(\mathbf r) + \sum_{ij} G^{+}_B(\mathbf r, \mathbf r_i, E)\,s^{+}_i(E)\left(\left[1 - \mathsf G^{+}_B(E)\,\mathsf t^{+}(E)\right]^{-1}\right)_{ij}\phi(\mathbf r_j). \tag{3.11}
\]
We can define a multiple scattering t-matrix
\[
\hat t^{+}(E) = \sum_{ij}|\mathbf r_i\rangle\;s^{+}_i(E)\left(\left[1 - \mathsf G^{+}_B(E)\,\mathsf t^{+}(E)\right]^{-1}\right)_{ij}\langle\mathbf r_j| \tag{3.12}
\]
and write the full solution in a familiar form
\[
|\psi^{+}\rangle = |\phi\rangle + \hat G^{+}_B(E)\,\hat t^{+}(E)\,|\phi\rangle. \tag{3.13}
\]
An analogous solution can be constructed for $|\psi^{-}\rangle$ by replacing all the outgoing solutions in the above with incoming solutions (superscript "+" goes to superscript "$-$"). We have shown that scattering from $N$ zero range interactions is solved by the inversion of an $N\times N$ matrix. As we will see below, generalized multiple scattering theory is not so simple. It does, however, rely on the inversion of an operator on a smaller space than that in which the problem is posed.
3.1.2 Generalized Multiple Scattering Theory
We now consider placing $N$ scatterers (not necessarily zero range), with t-matrices $\hat t_i(z)$, in a background domain with Green function operator $\hat G_B(z)$. In what follows, the argument $z$ is suppressed. We assume that each t-matrix is identically zero outside some domain $C_i$ and we further assume that the $C_i$ do not overlap, that is $C_i\cap C_j = \emptyset$ for all $i\neq j$. We define the scattering space, $S = \bigcup_i C_i$. In the case of $N$ zero range scatterers, the scattering space is just $N$ discrete points. The definition of the scattering space allows a separation between propagation and scattering events.
As in the point scatterer case, we consider the function $\psi_i(\mathbf r) = \langle\mathbf r|\psi_i\rangle$, representing the amplitude which hits the $i$th scatterer last. We can write a set of linear equations for the $\psi_i$:
\[
|\psi_i\rangle = |\phi\rangle + \hat G_B\sum_{j\neq i}\hat t_j\,|\psi_j\rangle \tag{3.14}
\]
where $\phi(\mathbf r)$ is the incident wave. As in the simpler case above, the full solution can be written in terms of the $\psi_i$ via
\[
|\psi\rangle = |\phi\rangle + \hat G_B\sum_{i}\hat t_i\,|\psi_i\rangle. \tag{3.15}
\]
The derivation begins to get complicated here. Since the scattering space is not necessarily discrete, we cannot map our problem onto a finite matrix. We now begin to create a framework in which the results of the previous section can be generalized.
We define the projection operators, $\hat P_i$, which are projectors onto the $i$th scatterer, that is
\[
\left\langle\mathbf r\middle|\hat P_i f\right\rangle = \begin{cases} f(\mathbf r) & \text{if } \mathbf r\in C_i\\[2pt] 0 & \text{if } \mathbf r\notin C_i.\end{cases} \tag{3.16}
\]
Also we define a projection operator for the whole scattering space, $\hat P = \sum_{i=1}^{N}\hat P_i$.
We can project our equations for the $\psi_i(\mathbf r)$ onto each scatterer in order to get equations analogous to the matrix equation we had for $\psi_i(\mathbf r_i)$ in the previous section:
\[
\hat P_i\,|\psi_i\rangle = \hat P_i\,|\phi\rangle + \hat P_i\hat G_B\sum_{j\neq i}\hat t_j\,|\psi_j\rangle \tag{3.17}
\]
and, for purely formal reasons, we define a quantity analogous to the vector $\mathbf a$ in the zero range scatterer case:
\[
|\eta\rangle = \sum_i \hat P_i\,|\psi_i\rangle. \tag{3.18}
\]
We note that $\eta(\mathbf r)$ is nonzero on the scattering space only. With these definitions, we can develop a linear equation for $|\eta\rangle$. We begin by summing (3.17) over the $N$ scatterers:
\[
|\eta\rangle = \hat P\,|\phi\rangle + \sum_i\hat P_i\hat G_B\sum_{j\neq i}\hat t_j\,|\psi_j\rangle. \tag{3.19}
\]
Since $\hat t_i$ is unaffected by multiplication by $\hat P_i$, we have $\hat t_i = \hat P_i\hat t_i$ and $\hat t_i\,|\psi_i\rangle = \hat t_i\hat P_i\,|\psi_i\rangle = \hat t_i\,|\eta\rangle$. Thus we can rewrite (3.19) as
\[
|\eta\rangle = \hat P\,|\phi\rangle + \sum_i\sum_{j\neq i}\hat P_i\hat G_B\hat P_j\hat t_j\,|\psi_j\rangle \tag{3.20}
\]
or
\[
|\eta\rangle = \hat P\,|\phi\rangle + \sum_i\sum_{j\neq i}\hat P_i\hat G_B\hat P_j\hat t_j\,|\eta\rangle. \tag{3.21}
\]
We can simplify this equation if, as in the zero range scatterer case, we define an off-diagonal background Green function operator,
\[
\mathcal G_B = \sum_i\sum_{j\neq i}\hat P_i\hat G_B\hat P_j = \hat P\hat G_B\hat P - \sum_{i=1}^{N}\hat P_i\hat G_B\hat P_i \tag{3.22}
\]
and a diagonal t-matrix operator,
\[
\hat{\mathbf t} = \sum_m \hat t_m \tag{3.23}
\]
and note that
\[
\mathcal G_B\,\hat{\mathbf t} = \sum_i\sum_{j\neq i}\hat P_i\hat G_B\hat P_j\hat t_j. \tag{3.24}
\]
We can now rewrite (3.19) as
\[
|\eta\rangle = \hat P\,|\phi\rangle + \mathcal G_B\,\hat{\mathbf t}\,|\eta\rangle \tag{3.25}
\]
which we may formally solve for $|\eta\rangle$:
\[
|\eta\rangle = \left[\hat P - \mathcal G_B\,\hat{\mathbf t}\right]^{\mathbf{-1}}\hat P\,|\phi\rangle. \tag{3.26}
\]
The operator $\hat P - \mathcal G_B\,\hat{\mathbf t}$ is an operator on functions on the scattering space, $S$, and the boldface $\mathbf{-1}$ superscript indicates inversion with respect to the scattering space only. In the case of zero range interactions the scattering space is a discrete set and the inverse is just ordinary matrix inversion. In general, finding this inverse involves solving a set of coupled linear integral equations.
We note that the projector, $\hat P$, is just the identity operator on the scattering space, so
\[
\left[\hat P - \mathcal G_B\,\hat{\mathbf t}\right]^{\mathbf{-1}} = \left[1 - \mathcal G_B\,\hat{\mathbf t}\right]^{\mathbf{-1}}. \tag{3.27}
\]
We can rewrite (3.15), yielding
\[
|\psi\rangle = |\phi\rangle + \hat G_B\,\hat{\mathbf t}\,|\eta\rangle. \tag{3.28}
\]
Substituting (3.26) into (3.28) gives
\[
|\psi\rangle = |\phi\rangle + \hat G_B\,\hat{\mathbf t}\left[1 - \mathcal G_B\,\hat{\mathbf t}\right]^{\mathbf{-1}}\hat P\,|\phi\rangle. \tag{3.29}
\]
The identity
\[
\hat A\left(1 - \hat B\hat A\right)^{-1} = \left(1 - \hat A\hat B\right)^{-1}\hat A \tag{3.30}
\]
implies
\[
|\psi\rangle = |\phi\rangle + \hat G_B\left[1 - \hat{\mathbf t}\,\mathcal G_B\right]^{\mathbf{-1}}\hat{\mathbf t}\,|\phi\rangle. \tag{3.31}
\]
We now define a multiple scattering t-matrix
\[
\hat{\mathbf T} = \left[1 - \hat{\mathbf t}\,\mathcal G_B\right]^{\mathbf{-1}}\hat{\mathbf t} \tag{3.32}
\]
which is zero outside the scattering space. Our wavefunction can now be written
\[
|\psi\rangle = |\phi\rangle + \hat G_B\,\hat{\mathbf T}\,|\phi\rangle. \tag{3.33}
\]
This derivation seems much more complicated than the special case presented first. While this is true, the underlying concepts are exactly the same. The complications arise from the more complicated nature of the individual scatterers. Each scatterer now leads to a linear integral equation, rather than a linear algebraic equation: what was simply a
set of linear equations easily solved by matrix techniques becomes a set of coupled linear integral equations which are difficult to solve except in special cases. The techniques in this section are also useful formally. We will use them later in this chapter to effect an alternate proof of the scatterer renormalization discussed in the next section.
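The operator identity (3.30), which carried the derivation from (3.29) to (3.31), is purely algebraic and can be checked with finite matrices (an illustrative check, assuming NumPy; the matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
I = np.eye(n)

# eq. (3.30): A (1 - B A)^{-1} = (1 - A B)^{-1} A
lhs = A @ np.linalg.inv(I - B @ A)
rhs = np.linalg.inv(I - A @ B) @ A
assert np.allclose(lhs, rhs)
```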
3.2 Renormalized t-matrices
In the previous section we used the tmatrix formalism and some complicated bookkeeping to combine many scatterers into one tmatrix. In this section we'd like to work with the propagation step. We'll start with a potential with known tmatrix in free space. We then imagine changing free space by adding boundaries for example. We then nd the correct tmatrix, for the same physical potential, for use with this new propagator. ^ To begin we consider the scatterer in free space. It has known tmatrix t (z ) which satis es ^ ^ ^ ^ Gs (z ) = Go (z) + Go (z)t (z )Go (z): (3.34) where the subscript \s" is used to denote that this Green function is for the scatterer in free space. Now, rather than free space, we suppose we have a more complicated background but ^ one with a known Green function operator, GB (z ). We note that there exists a tmatrix, ^ tB (z ), such that: ^ ^ ^ ^ ^ GB (z) = Go (z) + Go (z)tB (z )Go (z): (3.35) ^ We will use the tB (z ) operator in the algebra which follows, but only as a formal tool. Frequently, the division between scatterer and background is arbitrary we can often treat the background as a scatterer or a scatterer as part of the background. As an example, we begin with a zero range scatterer, with e ective radius ae , in ^ two dimensions. In section 2.5, we computed t+ (z ) for this scatterer in free space. We place this scatterer into an in nite wire with periodic transverse boundary conditions. The ^B causal Green function operator, G+ (r r z ), can be written as an in nite sum and can be calculated quite accurately using numerical techniques (see chapter 5). Computing a tmatrix for the scatterer and boundary together is quite di cult. Also, such a tmatrix would be nonzero not only on the scatterer but on the entire in nite length boundary. This lacks the simplicity of a zerorange scatterer tmatrix. Instead, we'd
like to find a t-matrix, $\hat T^+(z)$, for the scatterer such that the full Green function, $\hat G^+(z)$, may be written
$$\hat G^+(z) = \hat G_B^+(z) + \hat G_B^+(z)\,\hat T^+(z)\,\hat G_B^+(z). \qquad (3.36)$$
We'll call $\hat T^+(z)$ the "renormalized" t-matrix. This name will become clearer below.

Let's start with a guess. What can happen in our rectangle that couldn't happen in free space? The answer is simple: amplitude may scatter off of the scatterer, hit the walls and return to the scatterer again. That is, there are multiple scattering events between the background and the scatterer. Diagrammatically, this is just like the two-scatterer multiple scattering theory considered in the previous section, where scatterer 1, instead of being another scatterer, is the background (see equation 3.9). Naively, we would expect to add up all the scattering events (dropping the $z$'s):
$$\hat T^+ = \hat t^+ + \hat t^+\hat G_B^+\hat t^+ + \hat t^+\hat G_B^+\hat t^+\hat G_B^+\hat t^+ + \cdots = \left[\sum_{n=0}^{\infty}\left(\hat t^+\hat G_B^+\right)^n\right]\hat t^+. \qquad (3.37)$$
This is perhaps clearer if we consider the position representation:
$$T^+(r,r') = t^+(r,r') + \int dr''\,dr'''\,t^+(r,r'')\,G_B^+(r'',r''')\,t^+(r''',r') + \cdots \qquad (3.38)$$
and, since our scatterer is zero-range,
$$t^+(r,r') = s^+\,\delta(r - r_s)\,\delta(r' - r_s) \qquad (3.39)$$
which simplifies (3.38):
$$T^+(r,r') = s^+\,\delta(r - r_s)\,\delta(r' - r_s) + s^+\,G_B^+(r_s,r_s)\,s^+\,\delta(r - r_s)\,\delta(r' - r_s) + \cdots \qquad (3.40)$$
Summing the geometric series yields:
$$T^+(r,r') = \frac{s^+}{1 - s^+\,G_B^+(r_s,r_s)}\,\delta(r - r_s)\,\delta(r' - r_s) \qquad (3.41)$$
or, as an operator equation:
$$\hat T^+ = \frac{1}{1 - \hat t^+\hat G_B^+}\,\hat t^+. \qquad (3.42)$$
This is not quite right. With multiple scattering we had to define an off-diagonal Green function, since the diagonal part was already accounted for by the individual t-matrices. Something similar is needed here, or we will be double counting terms which scatter, propagate without hitting the boundary, and then scatter again.
We can correct this by subtracting $\hat G_o^+$ from $\hat G_B^+$ in the previous derivation:
$$\hat T^+ = \hat t^+ + \hat t^+(\hat G_B^+ - \hat G_o^+)\hat t^+ + \hat t^+(\hat G_B^+ - \hat G_o^+)\hat t^+(\hat G_B^+ - \hat G_o^+)\hat t^+ + \cdots = \frac{1}{1 - \hat t^+(\hat G_B^+ - \hat G_o^+)}\,\hat t^+. \qquad (3.43)$$
For future reference, we'll state the final result:
$$\hat T^+(z) = \frac{1}{1 - \hat t^+(z)\left[\hat G_B^+(z) - \hat G_o^+(z)\right]}\,\hat t^+(z). \qquad (3.44)$$
We haven't proven this, but we can derive the same result more rigorously in at least two ways, both of which are shown below. The first proof follows from the expression (2.49) for the t-matrix derived in section 2.3. This is a purely formal derivation, but it has the advantage of being relatively compact. Our second derivation uses the generalized multiple scattering theory of section 3.1.2. While this derivation is algebraically quite tedious, it emphasizes the arbitrariness of the split between scatterer and background by treating them on a completely equal footing.

We note that the free-space Green function operator, $\hat G_o^+(z)$, could be replaced by any Green function for which the t-matrix of the scatterer is known. That should be clear from the derivations below.
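Since these are operator identities, they can be sanity-checked by representing the operators as finite matrices. A minimal sketch (the numerical values below are arbitrary stand-ins, not physical parameters):

```python
import numpy as np

def renormalized_t(t, G_B, G_o):
    """Renormalized t-matrix T = [1 - t (G_B - G_o)]^{-1} t, eq. (3.44),
    for operators represented as (possibly 1x1) complex matrices."""
    t, G_B, G_o = np.atleast_2d(t), np.atleast_2d(G_B), np.atleast_2d(G_o)
    n = t.shape[0]
    return np.linalg.solve(np.eye(n) - t @ (G_B - G_o), t)

# Free-space background: G_B = G_o must give T = t, cf. eq. (3.73) below
t = np.array([[0.3 - 0.1j]])
Go = np.array([[-0.25j]])
assert np.allclose(renormalized_t(t, Go, Go), t)

# The multiple-scattering series t + t(G_B - G_o)t + ... sums to T, cf. eq. (3.43)
GB = Go + np.array([[0.1 + 0.05j]])
series = sum(t @ np.linalg.matrix_power((GB - Go) @ t, m) for m in range(60))
assert np.allclose(series, renormalized_t(t, GB, Go))
```

The geometric series converges here because the "excursion" operator $(\hat G_B - \hat G_o)\hat t$ has norm well below one; the closed form (3.44) does not require that.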
3.2.1 Derivation
Formal Derivation
Suppose we have $\hat H = \hat H_o + \hat H_B + \hat H_s$, where $\hat H_B$ is the Hamiltonian of the "background" and $\hat H_s$ is the "scatterer" Hamiltonian, which may be any reasonable potential. There is a t-matrix for the scatterer without the background:
$$\hat t^\pm(z) = \hat H_s\,\hat G_s^\pm(z)\,(z - \hat H_o \pm i\epsilon) \qquad (3.45)$$
and for the scatterer in the presence of the background, where the background is treated as part of the propagator:
$$\hat T^\pm(z) = \hat H_s\,\hat G^\pm(z)\,(z - \hat H_o - \hat H_B \pm i\epsilon). \qquad (3.46)$$
This yields an expression for the full Green function operator, $\hat G^\pm(z)$:
$$\hat G^\pm(z) = \hat G_B^\pm(z) + \hat G_B^\pm(z)\,\hat T^\pm(z)\,\hat G_B^\pm(z) \qquad (3.47)$$
where $\hat G_B^\pm(z)$ solves
$$(z - \hat H_o - \hat H_B \pm i\epsilon)\,\hat G_B^\pm(z) = \hat 1. \qquad (3.48)$$
We wish to find $\hat T^\pm(z)$ in terms of $\hat t^\pm(z)$, $\hat G_o^\pm(z)$, and $\hat G_B^\pm(z)$. Formally, we can use (3.45) to solve for $\hat H_s$:
$$\hat H_s = \hat t^\pm(z)\,(z - \hat H_o \pm i\epsilon)^{-1}\left[\hat G_s^\pm(z)\right]^{-1} = \hat t^\pm(z)\,\hat G_o^\pm(z)\,(z - \hat H_o - \hat H_s \pm i\epsilon). \qquad (3.49)$$
We substitute this into (3.46):
$$\hat T^\pm(z) = \hat t^\pm(z)\,\hat G_o^\pm(z)\left[\hat G_s^\pm(z)\right]^{-1}\hat G^\pm(z)\,(z - \hat H_o - \hat H_B \pm i\epsilon) \qquad (3.50)$$
which we can rewrite as
$$\hat T^\pm(z) = \hat t^\pm(z)\,\hat G_o^\pm(z)\left[\hat G_o^\pm(z) + \hat G_o^\pm(z)\hat t^\pm(z)\hat G_o^\pm(z)\right]^{-1}\left[\hat G_B^\pm(z) + \hat G_B^\pm(z)\hat T^\pm(z)\hat G_B^\pm(z)\right]\left[\hat G_B^\pm(z)\right]^{-1} \qquad (3.51)$$
and, canceling a few inverses, we have
$$\hat T^\pm(z) = \hat t^\pm(z)\left[1 + \hat G_o^\pm(z)\hat t^\pm(z)\right]^{-1}\left[1 + \hat G_B^\pm(z)\hat T^\pm(z)\right]. \qquad (3.52)$$
We rewrite this as
$$\left\{1 - \hat t^\pm(z)\left[1 + \hat G_o^\pm(z)\hat t^\pm(z)\right]^{-1}\hat G_B^\pm(z)\right\}\hat T^\pm(z) = \hat t^\pm(z)\left[1 + \hat G_o^\pm(z)\hat t^\pm(z)\right]^{-1} \qquad (3.53)$$
which we solve for $\hat T^\pm(z)$:
$$\hat T^\pm(z) = \left\{1 - \hat t^\pm(z)\left[1 + \hat G_o^\pm(z)\hat t^\pm(z)\right]^{-1}\hat G_B^\pm(z)\right\}^{-1}\hat t^\pm(z)\left[1 + \hat G_o^\pm(z)\hat t^\pm(z)\right]^{-1}. \qquad (3.54)$$
To proceed we'll need the operator identity
$$(1 - \hat A\hat B)^{-1}\hat A = \hat A\,(1 - \hat B\hat A)^{-1} \qquad (3.55)$$
which we apply to yield
$$\begin{aligned}
\hat T^\pm(z) &= \left\{1 - \left[1 + \hat t^\pm(z)\hat G_o^\pm(z)\right]^{-1}\hat t^\pm(z)\hat G_B^\pm(z)\right\}^{-1}\left[1 + \hat t^\pm(z)\hat G_o^\pm(z)\right]^{-1}\hat t^\pm(z)\\
&= \left\{\left[1 + \hat t^\pm(z)\hat G_o^\pm(z)\right]\left(1 - \left[1 + \hat t^\pm(z)\hat G_o^\pm(z)\right]^{-1}\hat t^\pm(z)\hat G_B^\pm(z)\right)\right\}^{-1}\hat t^\pm(z)\\
&= \left[1 + \hat t^\pm(z)\hat G_o^\pm(z) - \hat t^\pm(z)\hat G_B^\pm(z)\right]^{-1}\hat t^\pm(z)\\
&= \frac{1}{1 - \hat t^\pm(z)\left[\hat G_B^\pm(z) - \hat G_o^\pm(z)\right]}\,\hat t^\pm(z). \qquad (3.56)
\end{aligned}$$
Thus we have verified our guess, albeit in an exceedingly formal way.
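Both the operator identity (3.55) and the chain of manipulations leading to (3.56) can be checked numerically on random finite-dimensional operators (the matrices below are stand-ins, scaled small so every inverse exists):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = 0.2 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = 0.2 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
I = np.eye(n)

# Operator identity (3.55): (1 - AB)^{-1} A = A (1 - BA)^{-1}
lhs_id = np.linalg.solve(I - A @ B, A)
rhs_id = A @ np.linalg.inv(I - B @ A)
assert np.allclose(lhs_id, rhs_id)

# Chain (3.56): the solved form (3.54), after applying the identity,
# collapses to [1 - t(GB - Go)]^{-1} t
t, Go, GB = A, B, B + 0.1 * A
inner = np.linalg.solve(I + t @ Go, t @ GB)     # (1 + t Go)^{-1} t GB
rhs = np.linalg.solve(I + t @ Go, t)            # (1 + t Go)^{-1} t
T1 = np.linalg.solve(I - inner, rhs)
T2 = np.linalg.solve(I - t @ (GB - Go), t)
assert np.allclose(T1, T2)
```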
Derivation Using Generalized Multiple Scattering Theory
Consider a situation with two scatterers in free space: one, the background, with scattering t-matrix $\hat t_1$, which is identically zero outside the domain $C_1$, and the other with scattering t-matrix $\hat t_2$, which is zero outside the domain $C_2$. They may each be point scatterers or extended scatterers. We assume that the scatterers do not overlap, i.e., $C_1 \cap C_2 = \emptyset$. The scattering space, $S$, is simply the union of $C_1$ and $C_2$, $S = C_1 \cup C_2$. From this point on in the derivation, we drop the superscript "$\pm$" since we carried it through the previous derivation, and it should be clear here that there is a superscript "$\pm$" on every t-matrix and every Green function. We also drop the argument "$z$".

Now we apply the generalized multiple scattering theory of section 3.1.2, where the background Green function operator is just $\hat G_o$. We have an explicit form for $\hat{\mathbf t}$:
$$\left[\hat{\mathbf t}\right]_{ij} = \delta_{ij}\,\hat t_i \qquad (3.57)$$
and for $\mathbf G_o$:
$$\left[\mathbf G_o\right]_{ij} = \begin{cases}\hat 0 & i = j\\ \hat G_o & i \neq j.\end{cases} \qquad (3.58)$$
According to our derivation of section 3.1.2, the t-matrix may be written
$$\hat t = \left[1 - \hat{\mathbf t}\,\mathbf G_o\right]^{-1}\hat{\mathbf t}. \qquad (3.59)$$
Painful as it is, let's write out all of the terms in the above expression for $\hat t$. We drop the hats on all the operators, since everything in sight is an operator. First we have to invert $1 - \mathbf t\mathbf G_o$, and we have to do it carefully, because none of these operators necessarily commute:
$$1 - \mathbf t\mathbf G_o = \begin{pmatrix}1 & -t_1 G_o\\ -t_2 G_o & 1\end{pmatrix} \qquad (3.60)$$
so
$$\left[1 - \mathbf t\mathbf G_o\right]^{-1} = \begin{pmatrix}(1 - t_1 G_o t_2 G_o)^{-1} & t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}\\ t_2 G_o(1 - t_1 G_o t_2 G_o)^{-1} & (1 - t_2 G_o t_1 G_o)^{-1}\end{pmatrix} \qquad (3.61)$$
and thus
$$\left[1 - \mathbf t\mathbf G_o\right]^{-1}\mathbf t = \begin{pmatrix}(1 - t_1 G_o t_2 G_o)^{-1}t_1 & t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2\\ t_2 G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 & (1 - t_2 G_o t_1 G_o)^{-1}t_2\end{pmatrix}. \qquad (3.62)$$
So, in detail,
$$G = G_o + G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 G_o + G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_2 G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 G_o. \qquad (3.63)$$
We want to rewrite this in the form
$$G = G_1 + G_1 T_2 G_1 \qquad (3.64)$$
where $T_2$ is the renormalized t-matrix operator for scatterer 2 in the presence of scatterer 1. First we add and subtract $G_o t_1 G_o$:
$$G = G_o + G_o t_1 G_o - G_o t_1 G_o + G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 G_o + G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_2 G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 G_o \qquad (3.65)$$
or
$$G = G_o + G_o t_1 G_o + G_o\left[(1 - t_1 G_o t_2 G_o)^{-1} - 1\right]t_1 G_o + G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_2 G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 G_o \qquad (3.66)$$
but
$$(1 - t_1 G_o t_2 G_o)^{-1} - 1 = (1 - t_1 G_o t_2 G_o)^{-1}\,t_1 G_o t_2 G_o. \qquad (3.67)$$
So
$$G = G_o + G_o t_1 G_o + G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 G_o t_2 G_o t_1 G_o + G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_2 G_o(1 - t_1 G_o t_2 G_o)^{-1}t_1 G_o. \qquad (3.68)$$
We use the operator identity (3.55) to rewrite $G$:
$$G = G_o + G_o t_1 G_o + G_o t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o t_1 G_o + G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o t_1 G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o + G_o(1 - t_2 G_o t_1 G_o)^{-1}t_2 G_o t_1 G_o. \qquad (3.69)$$
Several terms now have the common factor $(1 - t_2 G_o t_1 G_o)^{-1}$. This allows us to collapse several terms:
$$G = (G_o + G_o t_1 G_o) + (G_o + G_o t_1 G_o)\,\frac{1}{1 - t_2 G_o t_1 G_o}\,t_2\,(G_o + G_o t_1 G_o). \qquad (3.70)$$
But $G_o + G_o t_1 G_o = G_1$ is the Green function for scatterer 1 alone. Also, $t_2\,G_o t_1 G_o = t_2(G_1 - G_o)$, so we have
$$G = G_1 + G_1\,\frac{1}{1 - t_2(G_1 - G_o)}\,t_2\,G_1. \qquad (3.71)$$
Now we see that equation 3.71 has exactly the same form as equation 2.49. So we have derived (restoring the "$\pm$" and the "$z$")
$$\hat T_2^\pm(z) = \frac{1}{1 - \hat t_2^\pm(z)\left[\hat G_1^\pm(z) - \hat G_o^\pm(z)\right]}\,\hat t_2^\pm(z) \qquad (3.72)$$
which is identical to equation 3.44, as was to be shown. In the notation of the previous derivation, $T_2(z) = T(z)$, $t_2(z) = t(z)$ and $G_1(z) = G_B(z)$.
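The equality of the brute-force resummation (3.63) and the renormalized form (3.71) can likewise be verified with random finite matrices standing in for $t_1$, $t_2$ and $G_o$ (the scale 0.15 below is an arbitrary choice that keeps all inverses well defined):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
def small():
    return 0.15 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

Go, t1, t2 = small(), small(), small()
I = np.eye(n)

# Full two-scatterer Green function, eq. (3.63): resummed multiple-scattering series
D1 = np.linalg.inv(I - t1 @ Go @ t2 @ Go)   # chains beginning with t1
D2 = np.linalg.inv(I - t2 @ Go @ t1 @ Go)   # chains beginning with t2
G = Go + Go @ (D1 @ t1 + D2 @ t2 + t1 @ Go @ D2 @ t2 + t2 @ Go @ D1 @ t1) @ Go

# Scatterer 1 alone, then scatterer 2 renormalized by it, eq. (3.71)
G1 = Go + Go @ t1 @ Go
T2 = np.linalg.solve(I - t2 @ (G1 - Go), t2)
assert np.allclose(G, G1 + G1 @ T2 @ G1)
```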
3.2.2 Consequences
Free Space Background
What happens if $\hat G_B^\pm(z) = \hat G_o^\pm(z)$? Our formula should reduce to $\hat T^\pm(z) = \hat t^\pm(z)$. And it does:
$$\hat T^\pm(z) = \frac{1}{1 - \hat t^\pm(z)\left[\hat G_o^\pm(z) - \hat G_o^\pm(z)\right]}\,\hat t^\pm(z) = \hat t^\pm(z). \qquad (3.73)$$
Closed Systems
Suppose $G_B$ comes from a finite domain, e.g., a rectangular domain in two dimensions. Then we have
$$\hat T = \frac{1}{1 - \hat t\,(\hat G_B - \hat G_o)}\,\hat t = \frac{1}{1 - \hat t\,(\hat G_o\hat t_B\hat G_o)}\,\hat t. \qquad (3.74)$$
It is not obvious from this that $\hat T$ doesn't depend on the choice of incoming or outgoing solutions in the above equation. However, it is clear from physical considerations that a closed system has only one class of solutions. In fact, the above equation is independent of the choice of incoming or outgoing solutions for the free space quantities. We can show this in a non-rigorous way by observing that
$$\hat T = \frac{1}{\hat t^{-1} - \left[\hat G_B - \hat G_o\right]} \qquad (3.75)$$
and that
$$\hat t^{-1} = \hat H_s^{-1} - \hat G_o \qquad (3.76)$$
so
$$\hat T^{-1} = \hat H_s^{-1} - \hat G_B. \qquad (3.77)$$
To show this rigorously would require a careful definition of what these various inverses mean, since many of the operators can be singular. We would need to properly define the space on which these operators act. This would be similar to the definition of the scattering space used in section 3.1.2.

The Green function operator of a closed system has poles at the eigenenergies. That is, $\hat t_B(z)$ and $\hat G_B(z)$ have poles at $z = E_n^o$ for $n \in \{1,\ldots,\infty\}$. For $z$ near $E_n^o$,
$$\hat t_B(z) \approx \frac{\hat R_n}{z - E_n^o} \qquad (3.78)$$
and
$$\hat G_B(z) \approx \hat G_o(z)\,\frac{\hat R_n}{z - E_n^o}\,\hat G_o(z). \qquad (3.79)$$
$\hat G_o(z)$ has no poles (though it may have other sorts of singularities), so we define
$$\hat R_n' = \hat G_o(E_n^o)\,\hat R_n\,\hat G_o(E_n^o). \qquad (3.80)$$
Since we have added a scatterer, in general none of the poles of $\hat G_B(z)$ should be poles of $\hat G(z)$. This is something we can check explicitly. Suppose $z = E_n^o + \epsilon$ where $|\epsilon| \ll 1$. Then
$$\hat G(z) = \hat G(E_n^o + \epsilon) = \frac{\hat R_n'}{\epsilon} + \frac{\hat R_n'}{\epsilon}\left[1 - \hat t(z)\frac{\hat R_n'}{\epsilon}\right]^{-1}\hat t(z)\,\frac{\hat R_n'}{\epsilon} \qquad (3.81)$$
which is easily simplified:
$$\hat G(z) = \frac{\hat R_n'}{\epsilon}\left[1 - \hat t(z)\frac{\hat R_n'}{\epsilon}\right]^{-1}. \qquad (3.82)$$
We assume that $\hat t(E_n^o) \neq 0$, and we know that $\hat R_n' \neq 0$ because $E_n^o$ is a simple pole of $\hat G_B(z)$. So, for small enough $\epsilon$,
$$\left\|\epsilon\left[\hat t(z)\hat R_n'\right]^{-1}\right\| \ll 1. \qquad (3.83)$$
So we have
$$\left[1 - \hat t(z)\frac{\hat R_n'}{\epsilon}\right]^{-1} \approx -\epsilon\left[\hat t(z)\hat R_n'\right]^{-1}\left\{1 + \epsilon\left[\hat t(z)\hat R_n'\right]^{-1}\right\} \qquad (3.84)$$
and thus
$$\hat G(z) = -\hat t^{-1}(z) + O(\epsilon). \qquad (3.85)$$
Therefore, the poles of $\hat G_B(z)$ are not poles of $\hat G(z)$ unless $\hat t(E_n^o) = 0$.
So Where are the Poles in $\hat G(z)$?

As we expect, the analytic structures of $\hat G(z)$ and $\hat T(z)$ coincide (at least for the most part, see [10], p. 52). More simply, since the poles of $\hat G_B(z)$ are not poles of $\hat G(z)$, only poles of $\hat T(z)$ will contribute poles to $\hat G(z)$. Recall
$$\hat T(z) = \frac{1}{1 - \hat t(z)\left[\hat G_B(z) - \hat G_o(z)\right]}\,\hat t(z) \qquad (3.86)$$
so poles of $\hat T(z)$ occur at $E_n$ ($\neq E_n^o$) satisfying
$$\hat t(E_n)\left[\hat G_B(E_n) - \hat G_o(E_n)\right] = 1 \qquad (3.87)$$
or
$$\hat t(E_n) = \left[\hat G_B(E_n) - \hat G_o(E_n)\right]^{-1} = \left[\hat G_o(E_n)\,\hat t_B(E_n)\,\hat G_o(E_n)\right]^{-1}. \qquad (3.88)$$
This is a simple equation to use when $\hat G_B$ comes from scatterers added to free space, so $\hat t_B$ is known. When $\hat G_B$ is a Green function given a priori, e.g., the Green function of an infinite wire in 2 dimensions, the above equation becomes somewhat more difficult to evaluate. We'll address this issue in a later chapter (5) about scattering in 2-dimensional wires.
Perturbatively small scatterers
For small $\hat t(z)$, the only way to satisfy equation 3.88 is for $\hat G_B(z)$ to be very large. But $\hat G_B(z)$ is large only near a pole, $E_n^o$, of $\hat G_B(z)$. This is a nice result. It implies that for small $\hat t(z)$, i.e., a small scatterer, the poles, $E_n$, of $\hat G(z)$ are close to the poles, $E_n^o$, of $\hat G_B(z)$, as we might expect. This idea has been used (see [11]) to explore the possibility that an atomic force microscope, used as a small scatterer, can probe the structure of the wavefunction of a quantum dot.
Renormalization of a zero range t-matrix
A zero range t-matrix provides a simple but subtle challenge for the application of this renormalization. Since
$$\hat t^\pm(z) = s^\pm(z)\,|r_s\rangle\langle r_s| \qquad (3.89)$$
we have
$$\hat T^\pm(z) = \frac{1}{\dfrac{1}{s^\pm(z)} - \left[G_B^\pm(r_s,r_s,z) - G_o^\pm(r_s,r_s,z)\right]}\,|r_s\rangle\langle r_s|. \qquad (3.90)$$
Chapter 3: Scattering in the Presence of Other Potentials
48
But, as we have already discussed in section 2.3, in more than one dimension the Green function $G^\pm(r,r',z)$ for the Schrödinger equation is singular in the $r \to r'$ limit. So we have to define $T^\pm(z)$ a bit more formally:
$$\hat T^\pm(z) = \lim_{r' \to r_s}\;\frac{1}{\dfrac{1}{s^\pm(z)} - \left[G_B^\pm(r_s,r',z) - G_o^\pm(r_s,r',z)\right]}\,|r_s\rangle\langle r_s|. \qquad (3.91)$$
This limit can be quite difficult to evaluate. Four particular cases are dealt with in chapters 5 and 6.
Chapter 4
Scattering From Arbitrarily Shaped Boundaries
4.1 Introduction
In the previous chapter we developed some powerful tools for solving complicated scattering problems. All of them were built upon one or more Green functions. In this chapter we consider a variety of techniques for computing Green functions in various geometries. The techniques discussed in this chapter are useful when we have a problem which involves scattering on a surface of codimension one (one dimension less than the dimension of the system), for example scattering from a set of one dimensional curves in two dimensions.

We begin by computing the Green function of an arbitrary number of arbitrarily shaped smooth Dirichlet ($\psi = 0$) boundaries placed in free space. The method is constructed by finding a potential which forces $\psi$ to satisfy the Dirichlet boundary condition. The technique is somewhat more general: it can enforce an arbitrary linear combination of Dirichlet and Neumann boundary conditions. The more general case is dealt with in appendix B. We then re-derive the fundamental results by considering certain expansions of $\psi$ rather than a potential. This lends itself nicely to the generalizations which follow in the next two sections. We can use expansions of $\psi$ to simply match boundary conditions. The first generalization is a small but useful step from Dirichlet boundary conditions to periodic boundary conditions.
We next consider scattering from a boundary between two regions with different known Green functions. This cannot be handled as a boundary condition but, nonetheless, all the scattering takes place at the interface. This method could be used to scatter from a potential barrier of fixed height; that was the original motivation for its development. It could also be used to scatter from a superconductor embedded in a normal metal, or vice versa, since each has its own known Green function (see A.6). This idea is being actively pursued.
4.2 Boundary Wall Method I
Consider
$$V(r) = \int_C ds\,\gamma(s)\,\delta(r - r(s)) \qquad (4.1)$$
where the integral runs over the surface $C$. (Here we will assume a pragmatic point of view by supposing that our mathematical problem is well posed, i.e., there does exist a solution of the Schrödinger equation satisfying the boundary conditions considered. Obviously, the method has no meaning when this is not so.) The boundary condition
$$\psi(r(s)) = 0 \qquad (4.2)$$
emerges as the limit of the potential's parameters ($\gamma \to \infty$). For finite $\gamma$, the potential has the effect of a penetrable or "leaky" wall. A similar idea has been used to incorporate Dirichlet boundary conditions into certain classes of solvable potentials in the context of the path integral formalism [19]. Here we use the delta wall more generally, resulting in a widely applicable and accurate procedure to solve boundary condition problems for arbitrary shapes.

Consider the Schrödinger equation for a $d$-dimensional system, $H(r)\psi(r) = E\psi(r)$, with $H = H_0 + V$. As is well known, the solution for $\psi(r)$ is given by
$$\psi(r) = \phi(r) + \int dr'\,G_0^E(r,r')\,V(r')\,\psi(r') \qquad (4.3)$$
where $\phi(r)$ solves $H_0(r)\phi(r) = E\phi(r)$ and $G_0^E(r,r')$ is the Green function for $H_0$. Hereafter, for notational simplicity, we will suppress the superscript $E$ in $G_0^E$.

Now, we introduce a $\delta$-type potential
$$V(r) = \gamma\int_C ds\,\delta(r - r(s)) \qquad (4.4)$$
where the integral is over $C$, a connected or disconnected surface; $r(s)$ is the vector position of the point $s$ on $C$ (we will call the set of all such vectors $S$), and $\gamma$ is the potential's strength. Clearly, $V(r) = 0$ for $r \notin S$. In the limit $\gamma \to \infty$, the wavefunction will satisfy (4.2) (with $\gamma(s) = 1$), as shown below. For finite $\gamma$, a wave function subject to the potential (4.4) will satisfy a "leaky" form of the boundary condition.

Inserting the potential (4.4) into (4.3), the volume integral is trivially performed with the delta function, yielding
$$\psi(r) = \phi(r) + \gamma\int_C ds'\,G_0(r,r(s'))\,\psi(r(s')) = \phi(r) + \int_C ds'\,G_0(r,r(s'))\,T\phi(r(s')). \qquad (4.5)$$
Thus, if $T\phi(r(s)) = \gamma\psi(r(s))$ is known for all $s$, the wave function everywhere is obtained from (4.5) by a single definite integral. For $r = r(s'')$, some point of $S$,
$$\psi(r(s'')) = \phi(r(s'')) + \gamma\int_C ds'\,G_0(r(s''),r(s'))\,\psi(r(s')) \qquad (4.6)$$
which may be abbreviated unambiguously as
$$\psi(s'') = \phi(s'') + \gamma\int ds'\,G_0(s'',s')\,\psi(s'). \qquad (4.7)$$
We can formally solve this equation, getting
$$\tilde\psi = \left[\tilde I - \gamma\tilde G_0\right]^{-1}\tilde\phi \qquad (4.8)$$
where $\tilde\psi$, $\tilde\phi$ stand for the vectors of $\psi(s)$'s and $\phi(s)$'s on the boundary, and $\tilde I$ for the identity operator. The tildes remind us that the free Green function operator and the wavevectors are evaluated only on the boundary. We define
$$\tilde T = \gamma\left[\tilde I - \gamma\tilde G_0\right]^{-1} \qquad (4.9)$$
and then it is easy to see that $T\phi$ in (4.5) is given from (4.8)-(4.9) by
$$T\phi(r(s')) = \int ds\,T(s',s)\,\phi(s). \qquad (4.10)$$
In order to make contact with the standard t-matrix formalism in scattering theory [35], we note that a $T$ operator for the whole space may be written as
$$t(r_f,r_i) = \int ds''\,ds'\,\delta(r_f - r(s''))\,T(s'',s')\,\delta(r_i - r(s')). \qquad (4.11)$$
Finally, we observe that (4.6) can be written as
$$\tilde\psi = \left[\tilde I + \tilde G_0\tilde T\right]\tilde\phi. \qquad (4.12)$$
For $\gamma \to \infty$, the operator $\tilde T$ converges to $-\tilde G_0^{-1}$. Inserting this into (4.12), we have
$$\tilde\psi = \left[\tilde I - \tilde G_0\tilde G_0^{-1}\right]\tilde\phi = 0. \qquad (4.13)$$
So, $\psi$ satisfies a Dirichlet boundary condition on the surface $C$ for $\gamma = \infty$.
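The Dirichlet limit is easy to see in a discretized setting: with $\tilde G_0$ represented by any invertible matrix, $\tilde T = \gamma[\tilde I - \gamma\tilde G_0]^{-1} \to -\tilde G_0^{-1}$ and the boundary values of (4.12) vanish like $1/\gamma$. A sketch with a random matrix standing in for $\tilde G_0$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # stand-in for G0 on C
phi = rng.standard_normal(n)
I = np.eye(n)

norms = []
for gamma in [1e2, 1e6, 1e10]:
    T = gamma * np.linalg.inv(I - gamma * G)   # eq. (4.9)
    psi = phi + G @ (T @ phi)                  # boundary values, eq. (4.12)
    norms.append(np.linalg.norm(psi))
print(norms)  # decreasing roughly like 1/gamma
```

Algebraically $\tilde I + \tilde G_0\tilde T = [\tilde I - \gamma\tilde G_0]^{-1}$, which makes the $1/\gamma$ decay explicit.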
4.3 Boundary Wall Method II
We can simplify the previous derivation considerably if we assume that there exists an operator
$$\hat t(E) = \int ds\,ds'\,|r(s)\rangle\,T(s,s',E)\,\langle r(s')| \qquad (4.14)$$
such that the solution of our scattering problem, $|\psi\rangle$, may be written
$$|\psi\rangle = |\phi\rangle + \hat G_o(E)\,\hat t(E)\,|\phi\rangle \qquad (4.15)$$
or, in position space,
$$\psi(r) = \phi(r) + \int dr'\,dr''\,G_o(r,r',E)\,t(r',r'',E)\,\phi(r'') \qquad (4.16)$$
which we can simplify to
$$\psi(r) = \phi(r) + \int ds'\,ds''\,G_o(r,r(s'),E)\,T(s',s'',E)\,\phi(r(s'')). \qquad (4.17)$$
The solution is a bit simpler if we make a notational switch:
$$t[\phi](s) = \int ds'\,T(s,s',E)\,\phi(r(s')). \qquad (4.18)$$
Now we enforce the Dirichlet boundary condition $\psi(r(s)) = 0$. That gives us a Fredholm integral equation of the first kind:
$$\int ds'\,G_o(r(s),r(s'),E)\,t[\phi](s') = -\phi(r(s)) \qquad (4.19)$$
which we may solve for $t[\phi](s)$ (e.g., using standard numerical methods). Formally, we can solve this with
$$t[\phi](s) = -\int ds'\,G_o^{-1}(s,s',E)\,\phi(s') \qquad (4.20)$$
where the new notation reminds us that the inverse is calculated only on the boundary. That is, $G_o^{-1}(s,s')$ satisfies
$$\int ds''\,G_o(r(s),r(s''),E)\,G_o^{-1}(s'',s',E) = \delta(s - s'). \qquad (4.21)$$
4.4 Periodic Boundary Conditions
We can generalize this for other sorts of boundary conditions. In this section we'll deal with periodic boundary conditions. As with Dirichlet/Neumann boundary conditions, we may write the solution in terms of an expansion in free-space solutions to the wave equation. However, first we should properly characterize our periodic boundaries.

We'll consider a kind of generalized periodic boundaries. Consider two surfaces, $C_1$ and $C_2$, both parameterized by functions of a generalized parameter $s$. We insist that the surfaces are parameterized such that both parameter functions have the same domain. We define "periodic boundary conditions" as any set of conditions of the form
$$\psi(r_1(s)) = \psi(r_2(s)) \qquad (4.22)$$
$$\partial_{n(r_1(s))}\psi(r_1(s)) = \partial_{n(r_2(s))}\psi(r_2(s)) \qquad (4.23)$$
where $\partial_{n(r(s))}$ denotes the partial derivative in the direction normal to the curve $r(s)$. We note that different parameterizations of the surfaces may yield different solutions. For example, consider the unit square in two dimensions. Standard periodic boundary conditions specify that
$$\begin{aligned}
\psi(0,y) &= \psi(1,y) &&\forall y \in [0,1]\\
\frac{\partial\psi}{\partial x}(0,y) &= \frac{\partial\psi}{\partial x}(1,y) &&\forall y \in [0,1]\\
\psi(x,0) &= \psi(x,1) &&\forall x \in [0,1]\\
\frac{\partial\psi}{\partial y}(x,0) &= \frac{\partial\psi}{\partial y}(x,1) &&\forall x \in [0,1].
\end{aligned} \qquad (4.24)$$
We choose the surface $C_1$ as the left side and top of the square and $C_2$ as the right side and bottom of the box. We may choose a real parameter, $s \in [0,2]$, where $s \in [0,1]$ parameterizes the sides of the box from top to bottom and $s \in (1,2]$ parameterizes the top and bottom from right to left. However, twisted periodic boundaries may also be considered:
$$\begin{aligned}
\psi(0,y) &= \psi(1,1-y) &&\forall y \in [0,1]\\
\frac{\partial\psi}{\partial x}(0,y) &= \frac{\partial\psi}{\partial x}(1,1-y) &&\forall y \in [0,1]\\
\psi(x,0) &= \psi(1-x,1) &&\forall x \in [0,1]\\
\frac{\partial\psi}{\partial y}(x,0) &= \frac{\partial\psi}{\partial y}(1-x,1) &&\forall x \in [0,1].
\end{aligned} \qquad (4.25)$$
Chapter 4: Scattering From Arbitrarily Shaped Boundaries
54
In our language, this simply means choosing a different parameterization of the two pieces of the box. To solve this problem, we now expand $\psi$ in terms of the free-space Green function on the boundary:
$$\psi(r) = \phi(r) + \int\left[G(r,r_1(s),E)\,f_1(s) + G(r,r_2(s),E)\,f_2(s)\right]ds. \qquad (4.26)$$
We insert the expansion (4.26) into equations (4.22)-(4.23):
$$\phi(r_1(s)) + \int\left[G(r_1(s),r_1(s'),E)\,f_1(s') + G(r_1(s),r_2(s'),E)\,f_2(s')\right]ds' = \phi(r_2(s)) + \int\left[G(r_2(s),r_1(s'),E)\,f_1(s') + G(r_2(s),r_2(s'),E)\,f_2(s')\right]ds' \qquad (4.27)$$
$$\partial_{n(r_1(s))}\phi(r_1(s)) + \int\left[\partial_{n(r_1(s))}G(r_1(s),r_1(s'),E)\,f_1(s') + \partial_{n(r_1(s))}G(r_1(s),r_2(s'),E)\,f_2(s')\right]ds' = \partial_{n(r_2(s))}\phi(r_2(s)) + \int\left[\partial_{n(r_2(s))}G(r_2(s),r_1(s'),E)\,f_1(s') + \partial_{n(r_2(s))}G(r_2(s),r_2(s'),E)\,f_2(s')\right]ds'. \qquad (4.28)$$
This is a set of coupled Fredholm equations of the first kind. To make this clearer we define:
$$\begin{aligned}
a(s) &= \phi(r_2(s)) - \phi(r_1(s))\\
a'(s) &= \partial_{n(r_2(s))}\phi(r_2(s)) - \partial_{n(r_1(s))}\phi(r_1(s))\\
G_1(s,s',E) &= G_o(r_1(s),r_1(s'),E) - G_o(r_2(s),r_1(s'),E)\\
G_2(s,s',E) &= G_o(r_1(s),r_2(s'),E) - G_o(r_2(s),r_2(s'),E)\\
G_1'(s,s',E) &= \partial_{n(r_1(s))}G_o(r_1(s),r_1(s'),E) - \partial_{n(r_2(s))}G_o(r_2(s),r_1(s'),E)\\
G_2'(s,s',E) &= \partial_{n(r_1(s))}G_o(r_1(s),r_2(s'),E) - \partial_{n(r_2(s))}G_o(r_2(s),r_2(s'),E)
\end{aligned} \qquad (4.30\text{-}4.31)$$
and then rewrite the equations at the boundary:
$$\int\left[G_1(s,s')\,f_1(s') + G_2(s,s')\,f_2(s')\right]ds' = a(s)$$
$$\int\left[G_1'(s,s')\,f_1(s') + G_2'(s,s')\,f_2(s')\right]ds' = a'(s) \qquad (4.32)$$
which we can write, at least schematically, as a matrix equation
$$\begin{pmatrix}G_1 & G_2\\ G_1' & G_2'\end{pmatrix}\begin{pmatrix}f_1\\ f_2\end{pmatrix} = \begin{pmatrix}a\\ a'\end{pmatrix} \qquad (4.33)$$
where the $G$'s are linear integral operators in the space of functions on the boundary, and the $f$'s and $a$'s are vectors (functions) in that space. Formally, we can solve this equation:
$$\begin{pmatrix}f_1\\ f_2\end{pmatrix} = \begin{pmatrix}G_1 & G_2\\ G_1' & G_2'\end{pmatrix}^{-1}\begin{pmatrix}a\\ a'\end{pmatrix}. \qquad (4.34)$$
While we cannot usually invert this operator analytically, we can sample our boundary at a discrete set of points. We then construct and invert this operator in this finite dimensional space and, using this finite basis, construct an approximate solution to our scattering problem.
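The discretized version of (4.33)-(4.34) is a dense block linear system. A minimal sketch (the kernel blocks below are random stand-ins; in a real calculation they are boundary samples of the kernels defined in (4.30)-(4.31)):

```python
import numpy as np

# Discretize the boundary parameter s at N points: each kernel block becomes
# an N x N matrix and eq. (4.33) becomes a 2N x 2N dense linear solve.
N = 40
rng = np.random.default_rng(2)
G1  = rng.standard_normal((N, N)) + N * np.eye(N)   # stand-in kernel blocks
G2  = rng.standard_normal((N, N))
G1p = rng.standard_normal((N, N))
G2p = rng.standard_normal((N, N)) + N * np.eye(N)

s = (np.arange(N) + 0.5) / N
a, ap = np.sin(2 * np.pi * s), np.cos(2 * np.pi * s)  # stand-in right-hand sides

K = np.block([[G1, G2], [G1p, G2p]])                  # eq. (4.33)
f = np.linalg.solve(K, np.concatenate([a, ap]))       # eq. (4.34)
f1, f2 = f[:N], f[N:]

# The solved densities satisfy both discretized boundary equations
assert np.allclose(G1 @ f1 + G2 @ f2, a)
assert np.allclose(G1p @ f1 + G2p @ f2, ap)
```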
4.5 Green Function Interfaces
In this section we'll deal with scattering from an arbitrarily shaped potential step. Though this is not purely a boundary condition, the problem is solved, as in the previous two cases, if sums of solutions to "free-space" equations satisfy certain conditions on a boundary.

We consider an object with potential $V_o$ embedded in free space (we could consider an object with one Green function embedded in a space with another Green function, as long as the asymptotic solutions of the wave equation are known in both regions, but we'll be a bit more specific), where the boundary between the two regions is parameterized by $s$ via $r(s)$, as the Dirichlet boundary was in section 4.3. It is immediately clear that an expansion of the form (4.16) cannot work for the wavefunction inside the dielectric (though it may work for the wavefunction outside), since the Green function used in the expansion does not solve the wave equation inside. We need a separate expansion for the inside wavefunction. This is no surprise, since at a potential step boundary we have twice as many boundary conditions: continuity of $\psi$ and its first derivative. We expand the inside and outside solutions separately to get
$$\psi_{out}(r) = \phi_{out}(r) + \int G_{out}(r,r(s),E)\,t_{out}[\phi_{in},\phi_{out}](s)\,ds \qquad (4.35)$$
$$\psi_{in}(r) = \phi_{in}(r) + \int G_{in}(r,r(s),E)\,t_{in}[\phi_{in},\phi_{out}](s)\,ds \qquad (4.36)$$
where $\phi(r) = \phi_{out}(r) - \phi_{in}(r)$. The incoming wave inside the potential step is puzzling at first. Because the basis in which we are expanding the solutions is not orthogonal, we have some freedom in choosing $\phi_{in}(r)$.

(i) We can choose $\phi_{in}(r) = 0$, which corresponds to the entire inside wavefunction being produced at the boundary sources. This is mathematically correct, but a little awkward when the potential step is small compared to the incident energy. When the step is small, the wavefunction inside and outside will be much like the incoming wave, which leads us to:

(ii) A more physical but harder to define choice is to take $\phi_{in}(r)$ to be a solution of the wave equation inside the step which has the property
$$\lim_{V_o \to 0}\phi_{in}(r(s)) = \phi_{out}(r(s)). \qquad (4.37)$$
Though this seems ad hoc, it is mathematically as valid as choice (i) and has the nice property that
$$\lim_{V_o \to 0} t_{in,out}[\phi](s) = 0 \qquad (4.38)$$
which is appealing. Now we write our boundary conditions:
$$\psi_{out}(r(s)) = \psi_{in}(r(s))$$
$$\partial_{n(s)}\psi_{out}(r(s)) = \partial_{n(s)}\psi_{in}(r(s)) \qquad (4.39)$$
or
$$\phi_{out}(r(s)) + \int G_{out}(r(s),r(s'),E)\,t_{out}[\phi](s')\,ds' = \phi_{in}(r(s)) + \int G_{in}(r(s),r(s'),E)\,t_{in}[\phi](s')\,ds' \qquad (4.40)$$
$$\partial_{n(s)}\phi_{out}(r(s)) + \int\partial_{n(s)}G_{out}(r(s),r(s'),E)\,t_{out}[\phi](s')\,ds' = \partial_{n(s)}\phi_{in}(r(s)) + \int\partial_{n(s)}G_{in}(r(s),r(s'),E)\,t_{in}[\phi](s')\,ds' \qquad (4.41)$$
where $\partial_{n(s)}$ is the normal derivative at the boundary point $r(s)$.
The above is a set of coupled Fredholm equations of the first kind. To make this clearer we define:
$$\begin{aligned}
a(s) &= \phi_{in}(r(s)) - \phi_{out}(r(s))\\
a'(s) &= \partial_{n(s)}\phi_{in}(r(s)) - \partial_{n(s)}\phi_{out}(r(s))\\
v(s) &= t_{out}[\phi](s)\\
w(s) &= t_{in}[\phi](s)\\
G_o(s,s',E) &= G_{out}(r(s),r(s'),E)\\
G_i(s,s',E) &= G_{in}(r(s),r(s'),E)\\
G_o'(s,s',E) &= \partial_{n(s)}G_{out}(r(s),r(s'),E)\\
G_i'(s,s',E) &= \partial_{n(s)}G_{in}(r(s),r(s'),E).
\end{aligned} \qquad (4.42)$$
Now we have the following system of integral equations
$$\int\left[G_o(s,s',E)\,v(s') - G_i(s,s',E)\,w(s')\right]ds' = a(s)$$
$$\int\left[G_o'(s,s',E)\,v(s') - G_i'(s,s',E)\,w(s')\right]ds' = a'(s) \qquad (4.43)$$
with all the $G$'s and $\phi$'s given. We may schematically represent this as a matrix equation:
$$\begin{pmatrix}G_o & -G_i\\ G_o' & -G_i'\end{pmatrix}\begin{pmatrix}v\\ w\end{pmatrix} = \begin{pmatrix}a\\ a'\end{pmatrix} \qquad (4.44)$$
which we may formally solve:
$$\begin{pmatrix}v\\ w\end{pmatrix} = \begin{pmatrix}G_o & -G_i\\ G_o' & -G_i'\end{pmatrix}^{-1}\begin{pmatrix}a\\ a'\end{pmatrix}. \qquad (4.45)$$
This formal solution is not much use except perhaps in a special geometry. However, it does lead directly to a numerical scheme. Simply discretize the boundary by breaking it into $N$ pieces $\{C_i\}$ of length $\Delta$. Label the center of each piece by $s_i$ and change all the integrals in the integral equations to sums over $i$. Now the schematic matrix equation actually becomes a $2N \times 2N$ matrix problem, which can be solved by LU decomposition techniques or the like.

We might also worry about multiple step edges or different steps inside each other. All this will work as well, but we will get a set of equations for each interface, so the problem may get quite costly. This would not be a sensible way to handle a smoothly varying potential. However, as noted at the beginning, the formalism here works for any known $G_{in}$ and $G_{out}$, and so certain smooth potentials may be handled if their Green functions are known.
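A sketch of that $2N \times 2N$ solve, factoring once with LU so that many right-hand sides (e.g., several incident waves) can be solved cheaply. The kernel blocks are random stand-ins; in practice they are boundary samples of $G_{out}$, $G_{in}$ and their normal derivatives, eq. (4.42):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

N = 30
rng = np.random.default_rng(3)
Go  = rng.standard_normal((N, N)) + N * np.eye(N)   # stand-in kernel blocks
Gi  = rng.standard_normal((N, N))
Gop = rng.standard_normal((N, N))
Gip = rng.standard_normal((N, N)) + N * np.eye(N)
a, ap = rng.standard_normal(N), rng.standard_normal(N)

K = np.block([[Go, -Gi], [Gop, -Gip]])              # eq. (4.44)
lu, piv = lu_factor(K)                              # factor once...
vw = lu_solve((lu, piv), np.concatenate([a, ap]))   # ...reuse for each rhs
v, w = vw[:N], vw[N:]

assert np.allclose(Go @ v - Gi @ w, a)              # first row of eq. (4.43)
assert np.allclose(Gop @ v - Gip @ w, ap)           # second row
```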
4.6 Numerical Considerations and Analysis
4.6.1 Discretizing The Boundary Wall Equations
As discussed in Section 4.2, the key idea in our method is to calculate $T\phi$ and/or $\tilde T$ on $C$, and then to perform the integral (4.5). Unfortunately, in the great majority of cases the analytical treatment is too hard to be applied. In such cases we consider the problem numerically. We divide the region $C$ into $N$ parts, $\{C_j\}_{j=1\ldots N}$. Then, we approximate
$$\psi(r) = \phi(r) + \gamma\sum_{j=1}^N\int_{C_j}ds\,G_0(r,r(s))\,\psi(r(s)) \approx \phi(r) + \gamma\sum_{j=1}^N\psi(r(s_j))\int_{C_j}ds\,G_0(r,r(s)) \qquad (4.46)$$
with $s_j$ the middle point of $C_j$ and $r_j = r(s_j)$. Now, considering $r = r_i$, we write $\psi(r_i) = \phi(r_i) + \sum_{j=1}^N M_{ij}\,\psi(r_j)$ (for $\mathbf M$, see discussion below). If $\vec\psi = (\psi(r_1),\ldots,\psi(r_N))$ and $\vec\phi = (\phi(r_1),\ldots,\phi(r_N))$, we have $\vec\psi = \vec\phi + \mathbf M\vec\psi$, and thus $\vec\psi = \mathbf T\vec\phi$, with $\mathbf T = (\mathbf I - \mathbf M)^{-1}$, which is the discrete $T$ matrix. So
$$\psi_i = (\mathbf T\vec\phi)_i = \sum_{j=1}^N\left[(\mathbf I - \mathbf M)^{-1}\right]_{ij}\phi_j \qquad (4.47)$$
and
$$\psi(r) \approx \phi(r) + \gamma\sum_{j=1}^N G_0(r,r_j)\,\Delta_j\,(\mathbf T\vec\phi)_j \qquad (4.48)$$
where we have used a mean value approximation to the last integral in (4.46) and defined $\Delta_j$, the volume of $C_j$.
It follows from (4.46) that
$$M_{ij} = \gamma\int_{C_j}ds\,G_0(r_i,r(s)). \qquad (4.49)$$
We can approximate
$$M_{ij} \approx \gamma\,G_0(r_i,r_j)\,\Delta_j. \qquad (4.50)$$
However, $G_0(r_i,r_j)$ may diverge for $i = j$ (e.g., the free particle Green functions in two or more dimensions). We discuss these approximations in detail in Section 4.6.3. If we consider $\gamma \to \infty$, it is easy to show from the above results that
$$\psi(r) \approx \phi(r) - \sum_{j=1}^N G_0(r,r_j)\,\Delta_j\,(\bar{\mathbf M}^{-1}\vec\phi)_j \qquad (4.51)$$
where $\bar{\mathbf M} = \mathbf M/\gamma$ is independent of $\gamma$. Equation (4.51) is then the approximated wave function of a particle under $H_0$ interacting with an impenetrable region $C$.
4.6.2 The Many Scatterer Limit
The boundary wall method is a sort of multiple scattering approach to building boundaries. In the many scatterer limit, we can make this connection explicit. Recall that the inverse of the multiple scattering tmatrix for point scatterers has the form 8 < 1=Ti (E ) if i = j (Mms )ij = : (4.52) Go(ri rj E ) if i 6= j: We assume that the discretization of the boundary wall method is uniform with spacing i = l 8i. If, for the boundary wall M matrix, we de ne B = ;(1=l)M, we get a simpler version of the discretized equation: (r) (r) +
N X j =1
G0 (r rj ) (B
;
1
)j :
(4.53)
we notice that B has the same o diagonal elements as Mms . The diagonal elements of B p have the form (where k = E )
Bii = 1 l
Z l=2
;
l=2
G (ri r(si + x) E ) dx
1 Z l=2 ln(kx) dx = 1 ln kl ; 1 = 1 ln kl : 2 2 2 2e 0 (4.54)
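The elementary panel integral used in (4.54), averaging the leading logarithm of the 2D free-space Green function over a panel of length $l$, can be checked directly:

```python
from math import e, log
from scipy.integrate import quad

k, l = 1.0, 0.05   # kl << 1, the many-scatterer regime (illustrative values)

avg, _ = quad(lambda x: log(k * x), 0, l / 2)
avg *= 2 / l

assert abs(avg - (log(k * l / 2) - 1)) < 1e-6   # = ln(kl/2) - 1
assert abs(avg - log(k * l / (2 * e))) < 1e-6   # = ln(kl/(2e))
```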
If we want to identify this with a multiple scattering problem, we must have $1/T_i(E) = \frac{1}{2\pi}\ln\frac{kl}{2e}$, which is the low energy form of the point interaction t-matrix discussed in section 2.4 for a scatterer of scattering length $l/2e$. Thus, in the many scatterer limit ($kl \ll 1$), the Dirichlet boundary wall method becomes the multiple scattering of many point-like scatterers along the boundaries, where each scatterer has scattering length $l/2e$.

4.6.3 Quality of the Numerical Method

The numerical solution (4.53) approaches the solution (4.5) as $N \to \infty$. In practice, we choose $N$ to be some finite but large number. In this section we explain how to choose $N$ for a given problem and how the approximation (4.50) affects this choice. In order to analyze the performance of the numerical solution, we must define some measure of the quality of the solution. We measure how well a Dirichlet boundary blocks the flow of current directed at it. Thus we measure the current,
$$\mathbf j = \Im\left\{\psi^*(r)\nabla\psi(r)\right\} \qquad (4.55)$$
behind a straight wall of length $l$. To simplify the analysis, we integrate $\mathbf j\cdot\mathbf n$ over a "detector" located on one side of a wall, with a normally incident plane wave on the other side. We divide this integrated current by the current which would have been incident on the detector without the wall present. We call this ratio, $\mathcal T$, the transmission coefficient of the wall. Instead of $\mathcal T$ as a function of $N$, we consider $\mathcal T$ vs. $\beta$, where $\beta = 2\pi N/(lk)$ is the number of boundary pieces per wavelength. We consider three methods of constructing the matrix $\mathbf M$ for each value of $\beta$. The first is the simplest approximation:
$$ M_{ij} = \begin{cases} \int_{C_i} ds\, G_0\left(r_i, r(s)\right) & i = j \\ G_0(r_i, r_j) & i \neq j \end{cases} \tag{4.56} $$
which we call the "fully-approximated" $M$. The next is a more sophisticated approximation, with
$$ M_{ij} = \begin{cases} \int_{C_j} ds\, G_0\left(r_i, r(s)\right) & |s_i - s_j| < \kappa/k \\ G_0(r_i, r_j) & |s_i - s_j| \geq \kappa/k \end{cases} \tag{4.57} $$
which we call the "band-integrated" $M$ because we perform the integrals only inside a band of $\kappa/(2\pi)$ wavelengths. Finally, we consider
$$ M_{ij} = \int_{C_j} ds\, G_0\left(r_i, r(s)\right) \qquad \forall\, i, j \tag{4.58} $$
which we call the "integrated" $M$. Numerically, the "band-integrated" and "integrated" $M$ require far more computational work than the "fully-approximated" $M$, which requires the fewest integrals. All methods of calculating $M$ scale as $O\!\left(N^2\right)$. The calculation of $T$ from $M$ scales as $O\!\left(N^3\right)$, and the calculation of $\psi(r)$ for a particular $r$ from a given $T$ scales as $O(N)$. Which of these various calculations dominates the computation time depends on what sort of computation is being performed. When computing wavefunctions, computation time is typically dominated by the large number of $O(N)$ vector multiplications. However, when calculating $\psi(r)$ in only a small number of places, e.g., when performing a flux calculation, computation time is often dominated by the $O\!\left(N^3\right)$ construction of $T$. In Figure 4.1 we plot $\log_{10} T$ vs. $\gamma$ for the three methods above and $2 \leq \gamma \leq 30$. We see that all three methods block more than 99% of the current for $\gamma > 5$. However, it is clear from the figure that the "integrated" $M$ and, to a lesser extent, the "band-integrated" $M$ strongly outperform the "fully-approximated" $M$ for all $\gamma$ plotted.
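As a small self-contained sketch of the "fully-approximated" construction, consider a straight Dirichlet wall hit by a normally incident plane wave. This is not the code used for Figure 4.1; it assumes the standard 2D free Green function $G_0(r,r') = -\frac{i}{4}H_0^{(1)}(k|r-r'|)$, and all parameter values (wall length, discretization, detector position) are illustrative. The diagonal elements are integrated with a midpoint rule, and a point half a wavelength behind the wall should sit deep in its shadow.

```python
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi                 # wavevector, so the wavelength is 1
L, N = 8.0, 64                # wall length and pieces: 8 pieces per wavelength
delta = L / N

def G0(dist):
    # outgoing 2D free-space Green function of (grad^2 + k^2)
    return -0.25j * hankel1(0, k * dist)

xs = (np.arange(N) + 0.5) * delta            # wall points on the line y = 0
pts = np.stack([xs, np.zeros(N)], axis=1)

# "fully-approximated" M: off-diagonal entries use the point value of G0 ...
dist = np.abs(xs[:, None] - xs[None, :])
M = delta * G0(np.where(dist == 0.0, 1.0, dist))
# ... while the diagonal integrates G0 across one piece (midpoint rule
# steps over the integrable log singularity at zero separation)
nq = 400
xq = (np.arange(nq) + 0.5) * (delta / 2) / nq
np.fill_diagonal(M, 2.0 * np.sum(G0(xq)) * (delta / 2) / nq)

phi = lambda p: np.exp(1j * k * p[..., 1])   # plane wave travelling in +y
mu = np.linalg.solve(M, -phi(pts))           # impose psi = 0 on the wall

def psi(p):
    d = np.linalg.norm(p - pts, axis=1)
    return phi(p) + delta * np.sum(G0(d) * mu)

detector = np.array([L / 2, 0.5])            # half a wavelength behind the wall
T_point = abs(psi(detector)) ** 2 / abs(phi(detector)) ** 2
assert T_point < 0.1                         # the wall blocks most of the current
```

Replacing more of the off-diagonal point values by integrals, as in (4.57) and (4.58), improves the blocking at the cost of many more quadratures.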
4.7 From Wavefunctions to Green Functions
We pause in the development of these various boundary conditions to explain how to get Green functions (which we'll need in order to use these boundary conditions in more complex scattering problems) from the wavefunctions we've been computing. All the above methods compute wavefunctions from given boundary conditions and incoming waves. In several cases, the idea of an incoming wave is somewhat strange. For example, what is an incoming wave on a periodic boundary? However, we include the incoming wave only to allow the computation of Green functions, as we outline below. When we have an eigenstate of the boundary condition, the incoming wave must become unimportant, and we will show that below. One simple way to form a Green function from a general solution for wavefunctions is to recall that $G(r, r'; E)$ is the wavefunction at $r$ given that the incoming wave is a free-space point source at $r'$, $\phi(r) = G_o(r, r'; E)$. This yields an expression for the Green function
Figure 4.1: Transmission (at normal incidence) through a flat wall via the boundary wall method ("Behavior of $t^2$ for different approximations to $M$"): $\log_{10} T$ vs. $\gamma$ for the fully-approximated, band-integrated, and integrated $M$.
everywhere as a linear function of Go .
4.7.1 Example I: Dirichlet Boundaries
For Dirichlet boundaries we had
$$ \psi(r) = \phi(r) + \int G_o\left(r, r(s); E\right) T(s, s')\,\phi\left(r(s')\right) ds\, ds' $$
where $T(s, s') = -\left[G_o(s, s')\right]^{-1}$. So
$$ G(r, r'; E) = G_o(r, r'; E) + \int G_o\left(r, r(s); E\right) T(s, s')\, G_o\left(r(s'), r'; E\right) ds\, ds'. $$
4.7.2 Example II: Periodic Boundaries
For periodic boundaries we had
$$ \psi(r) = \phi(r) + \int \left[ G_o\left(r, r_1(s); E\right) f_1(s) + G_o\left(r, r_2(s); E\right) f_2(s) \right] ds $$
where $f_1$ and $f_2$ are determined from
$$ \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} G_1 & G_2 \\ G_1' & G_2' \end{pmatrix}^{-1} \begin{pmatrix} a \\ a' \end{pmatrix}. $$
If we define
$$ F_1(s, r') = f_1(s) \ \text{given}\ \phi(r) = G_o(r, r'; E), \qquad F_2(s, r') = f_2(s) \ \text{given}\ \phi(r) = G_o(r, r'; E), $$
we have
$$ G(r, r'; E) = G_o(r, r'; E) + \int \left[ G_o\left(r, r_1(s); E\right) F_1(s, r') + G_o\left(r, r_2(s); E\right) F_2(s, r') \right] ds. $$
Though this looks like we have to solve many more equations than are needed just to get the wavefunction, we note that the operator inverse which we need to get the wavefunction is sufficient to get the Green function, just as in the Dirichlet case. We simply apply that inverse to more vectors. Thus, for all boundary conditions, the Green function requires extra matrix-vector multiplication work but the same amount of matrix inversion work.
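The point about reusing the inverse can be seen in a toy linear-algebra sketch (random matrices stand in for the boundary operator; nothing here is specific to the thesis): solving with a block of right-hand sides reuses one factorization and agrees, column by column, with separate solves.

```python
import numpy as np

# One factorization serves every source: solving A X = B with a matrix of
# right-hand sides costs one factorization plus cheap back-substitutions.
rng = np.random.default_rng(0)
n, m = 50, 8                        # n boundary unknowns, m source points (illustrative)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, m))         # each column: a different incoming point source

X_batch = np.linalg.solve(A, B)     # factor once, solve all m systems
X_cols = np.stack([np.linalg.solve(A, B[:, j]) for j in range(m)], axis=1)
assert np.allclose(X_batch, X_cols)
```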
4.8 Eigenstates
It is also useful to be able to use the above methods to identify eigenenergies and eigenstates (if they exist) of the above boundary conditions. This is actually quite simple. All of the various cases involved inverting some sort of generalized Green function operator on the boundary. This inverse is a generalized tmatrix and its poles correspond to eigenstates. Poles of t correspond to linear zeroes of G and so we may use standard techniques to check for a singular operator. If the operator we are inverting is singular, its nullspace holds the coe cients required to form the eigenstate. A more concrete explanation of this can be found in section 9.1.
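The pole-hunting idea can be illustrated on a deliberately tiny toy problem (a particle on a 1D Dirichlet interval, not the large boundary operators of this chapter): a general solution $A\sin(kx) + B\cos(kx)$ satisfies both boundary conditions only when a $2\times 2$ boundary matrix $M(E)$ is singular. We sweep $E$, watch the smallest singular value of $M(E)$, and read the eigenstate coefficients off the nullspace.

```python
import numpy as np

L = 1.0  # interval length (illustrative)

def M(E):
    # rows enforce psi(0) = 0 and psi(L) = 0 on A sin(kx) + B cos(kx)
    k = np.sqrt(E)
    return np.array([[np.sin(0.0),   np.cos(0.0)],
                     [np.sin(k * L), np.cos(k * L)]])

Es = np.linspace(5.0, 15.0, 20001)
smin = np.array([np.linalg.svd(M(E), compute_uv=False)[-1] for E in Es])
E_star = Es[np.argmin(smin)]
assert abs(E_star - np.pi ** 2) < 1e-2        # ground state at E = (pi/L)^2

# the nullspace row of V^H holds the coefficients (A, B); here B ~ 0 (pure sine)
_, _, Vh = np.linalg.svd(M(E_star))
A_coef, B_coef = Vh[-1]
assert abs(B_coef) < 1e-2 * abs(A_coef)
```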
Chapter 5
Scattering in Wires I: One Scatterer
In this section we consider the renormalization of the scatterer strength due to the presence of infinite length boundaries. The picture we have in mind is that of a two dimensional wire (one dimensional free motion) with periodic boundary conditions in the transverse direction and a single scattering center. This will be our first example of scatterer renormalization by an external boundary.
5.1 One Scatterer in a Wide Wire
It seems intuitively clear that the scattering off a small object in the middle of a big wire should be much like scattering in free space. After all, if the confining walls are farther away than any other length scale in the problem, they ought to play a small role. In this section we attempt to make this similarity explicit by computing the transmission coefficient for a wide wire with one small scatterer in its center. We will simply be making the connection between cross-section and transmission in the ballistic limit.
Figure 5.1: A periodic wire with one scatterer and an incident particle.

We begin with a simple classical argument. Suppose a particle in the wire is incident at angle $\theta$ with respect to the walls, as in figure 5.1. What is the probability that such a particle scatters? For a small scatterer of cross-section $\sigma$, the probability is approximately $P(\theta) = \frac{\sigma}{W\cos\theta}$. Of course, this must break down before $P(\theta) > 1$, but for $\sigma \ll W$ this will happen only over a small range of $\theta$. We now need to know how the various incident angles are populated in a particular scattering process. For this, we must think more carefully about the physical system we have in mind. In our case, this is a "two probe" conductance measurement on our wire, as pictured in figure 5.2. Our physical system involves connecting our wire to contacts which serve as reservoirs of scattering particles (e.g., electrons), running a fixed current, $I$, through the system and then measuring the voltage, $V$. Theoretically, it is the ratio of current to voltage which is interesting; thus we define the unitless conductance of this system, $g = (h/e^2)\,I/V$.
Figure 5.2: "Experimental" setup for a conductance measurement. The wire is connected to ideal contacts and the voltage drop at fixed current is measured.

In such a setup, all of the transverse quantum channels are populated with equal probability. Since the quantum channels are uniformly distributed in momentum, we have, for the probability density of finding a particular transverse wavenumber, $\rho(k_y)\,dk_y = \frac{1}{2\sqrt{E}}\,dk_y$. We also know that $k_y = \sqrt{E}\sin\theta$, and together these give the density of incoming angles in the plane of the scatterer, $\rho(\theta)\,d\theta = \frac{1}{2}\cos\theta\,d\theta$. We'll also assume the scattering is isotropic, so half of the scattered wave scatters backward. So we have
$$ R = \frac{1}{2}\int_{-\pi/2}^{\pi/2} P(\theta)\,\rho(\theta)\,d\theta = \frac{\pi\sigma}{4W}. \tag{5.1} $$
The maximum cross section of a zero range interaction in two dimensions is $4/\sqrt{E}$ (see 2.4) and corresponds to scattering all incoming s-waves. If we put this into (5.1), we get $R = \pi/(\sqrt{E}\,W)$. Interestingly, this is exactly one quantized channel of reflection. The wire has as many channels as half wavelengths fit across it, $N_c = \sqrt{E}\,W/\pi$, and the reflection coefficient is simply $1/N_c$, indicating that one channel of reflection in free space (the s-wave channel) is also one channel of reflection in the wire, though no longer one specific channel. We can check this conclusion numerically (the numerical techniques will be discussed later in the chapter) as a check on the renormalization technique and the numerical method. In figure 5.3 we plot the numerically computed reflection coefficient and the theoretical value of $1/N_c$ for wires of varying widths, from 15 half wavelengths to 150 half wavelengths. The expected behavior is nicely confirmed, although there is a noticeable discrete jump at a couple of widths where new channels have just opened.
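The quasiclassical bookkeeping above is easy to check with a few lines of quadrature (illustrative $E$ and $W$): with the maximal cross section $\sigma = 4/\sqrt{E}$, the integral in (5.1) reproduces exactly $1/N_c$.

```python
import numpy as np

E, W = 100.0, 10.0                   # illustrative energy and wire width
sigma = 4.0 / np.sqrt(E)             # maximal 2D zero-range cross section

n = 200000                           # midpoint rule over incident angles
th = -np.pi / 2 + (np.arange(n) + 0.5) * np.pi / n
P = sigma / (W * np.cos(th))         # scattering probability at angle theta
rho = 0.5 * np.cos(th)               # density of incident angles
R = 0.5 * np.sum(P * rho) * (np.pi / n)   # half the scattered wave goes backward

Nc = np.sqrt(E) * W / np.pi          # open channels: half wavelengths across the wire
assert abs(R - 1.0 / Nc) < 1e-9
```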
Figure 5.3: Reflection coefficient of a single scatterer in a wide periodic wire.

As the wire becomes narrower (at the left of the figure), we see that the agreement between the measured value and the quasiclassical theory is poorer. This is no surprise, since our quasiclassical argument is bound to break down as the width of the wire becomes comparable to the wavelength and scatterer size. This is a hint of what we will see in section 5.6, where the limit of the narrow wire is considered. Before we consider that problem, we develop some necessary machinery. First we compute the Green function of the empty periodic wire; we then consider the renormalization of the scattering amplitude in a wire and the connection between Green functions and transmission coefficients. It is interesting to watch the transition from wide to narrow in terms of scattering channels. Above, we saw that the scattering from one scatterer in a wide wire can be understood classically. As we will see later in the chapter, scattering in the narrow wire ($W < \lambda$) is much more complex. If we shrink the wire from wide to narrow, we can watch this transition occur. This is shown in figure 5.4.
Figure 5.4: Number of scattering channels blocked by one scatterer in a periodic wire of varying width.
5.2 The Green function of an empty periodic wire
We now proceed to do some tedious but necessary mathematical work to apply the methods of chapter 3 to this system. We begin by computing the Green function of the wire without the scatterer. The state $|k, a\rangle$ defined by
$$ \langle x, y\,|\,k, a\rangle = e^{ikx}\,\chi_a(y) \tag{5.2} $$
where $\chi_a(y)$ satisfies
$$ -\frac{d^2}{dy^2}\chi_a(y) = \varepsilon_a\,\chi_a(y) \tag{5.3} $$
$$ \chi_a(0) = \chi_a(W) \tag{5.4} $$
$$ \left.\frac{d\chi_a}{dy}\right|_{y=0} = \left.\frac{d\chi_a}{dy}\right|_{y=W} \tag{5.5} $$
$$ \int_0^W \left|\chi_a(y)\right|^2 dy = 1 \tag{5.6} $$
is an eigenstate of the infinite ordered periodic wire. We can write the Green function of the infinite ordered wire as (see, e.g., [10], or appendix A)
$$ \hat{G}_B^+(z) = \sum_a \int_{-\infty}^{\infty} \frac{|k, a\rangle\langle k, a|}{z - \varepsilon_a - k^2 + i\epsilon}\,dk. \tag{5.7} $$
In order to perform the diagonal subtraction required to renormalize the single scatterer t-matrices (see section 3.2), we need to compute the Green function in the position representation. Equations 5.3–5.6 are satisfied by
$$ \chi^{(0)}(y) = \frac{1}{\sqrt{W}} \ \text{with}\ \varepsilon_0 = 0, \qquad \chi_a^{(1)}(y) = \sqrt{\frac{2}{W}}\sin\frac{2\pi a y}{W}, \quad \chi_a^{(2)}(y) = \sqrt{\frac{2}{W}}\cos\frac{2\pi a y}{W} \ \text{with}\ \varepsilon_a = \left(\frac{2\pi a}{W}\right)^2 \tag{5.8} $$
where the cos and sin solutions are degenerate for each $a$. Since the eigenbasis of the wire is a product basis (the system is separable), we can apply the result of appendix A, section A.4, and we have
$$ \hat{G}_B^+(z) = \sum_a |a\rangle\langle a|\;\hat{g}_o^{(+)}(E - \varepsilon_a) \tag{5.9} $$
or, in the position representation (we will switch between the vector $r$ and the pair $x, y$ frequently in what follows),
$$ G_B^+(r, r'; E) = G_B^+(x, y, x', y'; z) = \sum_a \chi_a(y)\,\chi_a(y')\,g_o^+(x, x'; E - \varepsilon_a) \tag{5.10} $$
where the one dimensional free Green function is
$$ g_o^+(x, x'; z) = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\,\frac{e^{ik(x-x')}}{z - k^2 + i\epsilon} = \begin{cases} -\dfrac{i}{2\sqrt{z}}\exp\left(i\sqrt{z}\,|x-x'|\right) & \text{if } \operatorname{Im}\{\sqrt{z}\} \geq 0,\ \operatorname{Re}\{z\} > 0 \\[1ex] -\dfrac{1}{2\sqrt{|z|}}\exp\left(-\sqrt{|z|}\,|x-x'|\right) & \text{if } z = -|z|. \end{cases} $$
When doing the Green function sum, we have to sum over all of the degenerate states at each energy. Thus, for all but the lowest energy mode (which is nondegenerate), the $y$-part of the sum looks like
$$ \sin\frac{2\pi a y}{W}\sin\frac{2\pi a y'}{W} + \cos\frac{2\pi a y}{W}\cos\frac{2\pi a y'}{W} = \cos\frac{2\pi a (y-y')}{W} \tag{5.11} $$
which is sensible, since the Green function of the periodic wire can depend only on $y - y'$. So, at this point, we have
$$ G_B^+(x, y, x', y'; z) = \frac{1}{W}\,g_o^{(+)}(x, x'; z) + \frac{2}{W}\sum_{a=1}^{\infty}\cos\frac{2\pi a(y-y')}{W}\;g_o^+\!\left(x, x'; z - \frac{4\pi^2 a^2}{W^2}\right). \tag{5.12} $$
As nice as this form for $G_B$ is, we need to do some more work. To renormalize free space scattering matrices we need to perform the diagonal subtraction discussed in section 3.2.2. In order for that subtraction to yield a finite result, $G_B$ must have a logarithmic diagonal singularity. The next bit of work is to make this singularity explicit. It is easy to see where the singularity will come from. Since $g_o^+(x, x; -|\xi|) \sim -\frac{1}{2\sqrt{|\xi|}}$, for $E$ real there exists an $M$ such that
$$ G_B^+(x, y, x, y; E) \approx \text{const.} - \frac{1}{2\pi}\sum_{a=M}^{\infty}\frac{1}{a} \tag{5.13} $$
which diverges. We now proceed to extract the singularity more systematically. We begin by substituting the definition of $g_o$ and explicitly splitting the sum into two parts. The first part is a finite sum and includes energetically allowed transverse modes (open channels). For these modes the energy argument of $g_o$ is positive and waves can propagate down the wire. The rest of the sum is over energetically forbidden transverse modes (closed channels). For these modes, waves in the $x$ direction are evanescent. We define $N$, the greatest integer such that $E - 4\pi^2 N^2/W^2 > 0$, $k_a = \sqrt{E - 4\pi^2 a^2/W^2}$ and $\kappa_a = \sqrt{4\pi^2 a^2/W^2 - E}$, and have
$$ G_B^+(x, y, x', y'; E) = \frac{-i\,e^{i\sqrt{E}|x-x'|}}{2W\sqrt{E}} - \frac{i}{W}\sum_{a=1}^{N}\cos\frac{2\pi a(y-y')}{W}\,\frac{e^{ik_a|x-x'|}}{k_a} - \frac{1}{W}\sum_{a>N}\cos\frac{2\pi a(y-y')}{W}\,\frac{e^{-\kappa_a|x-x'|}}{\kappa_a}. \tag{5.14} $$
In order to extract the singularity, we add and subtract a simpler infinite sum (see D.1),
$$ \frac{1}{2\pi}\sum_{a=1}^{\infty}\frac{1}{a}\cos\frac{2\pi a(y-y')}{W}\exp\left(-\frac{2\pi a}{W}|x-x'|\right) = \frac{1}{4\pi}\ln\frac{\exp\left[2\pi|x-x'|/W\right]}{2\cosh\left[2\pi(x-x')/W\right] - 2\cos\left[2\pi(y-y')/W\right]} \tag{5.15} $$
to $G_B$. This gives
$$
\begin{aligned}
G_B^+(x, y, x', y'; E) ={}& \frac{-i\,e^{i\sqrt{E}|x-x'|}}{2W\sqrt{E}} - \frac{i}{W}\sum_{a=1}^{N}\cos\frac{2\pi a(y-y')}{W}\left[\frac{e^{ik_a|x-x'|}}{k_a} - \frac{W}{2\pi i a}\,e^{-2\pi a|x-x'|/W}\right] \\
& - \frac{1}{W}\sum_{a>N}\cos\frac{2\pi a(y-y')}{W}\left[\frac{e^{-\kappa_a|x-x'|}}{\kappa_a} - \frac{W}{2\pi a}\,e^{-2\pi a|x-x'|/W}\right] \\
& - \frac{1}{4\pi}\ln\frac{\exp\left[2\pi|x-x'|/W\right]}{2\cosh\left[2\pi(x-x')/W\right] - 2\cos\left[2\pi(y-y')/W\right]}.
\end{aligned} \tag{5.16}
$$
This is an extremely useful form for numerical work. For $x \neq x'$ or $y \neq y'$, we have transformed a slowly converging sum into a much more quickly converging sum. This is dealt with in detail in D.2.2. In this form, the singular part of the sum is in the logarithm term and the rest of the expression is convergent for all $x - x'$, $y - y'$. In fact, the remaining infinite sum is uniformly convergent for all $x - x'$, as shown in (D.2.2). We can now perform the diagonal subtraction of $G_o$. We begin by considering the $x \to x'$, $y \to y'$ limit of $G_B$:
$$ \lim_{x\to x',\,y\to y'} G_B^+(x, y, x', y'; E) = \frac{-i}{2W\sqrt{E}} - \frac{i}{W}\sum_{a=1}^{N}\left[\frac{1}{k_a} - \frac{W}{2\pi i a}\right] - \frac{1}{W}\sum_{a>N}\left[\frac{1}{\kappa_a} - \frac{W}{2\pi a}\right] - \frac{1}{4\pi}\lim_{x\to x',\,y\to y'}\ln\frac{e^{2\pi|x-x'|/W}}{2\cosh\left[2\pi(x-x')/W\right] - 2\cos\left[2\pi(y-y')/W\right]}. \tag{5.17} $$
We can use equation D.6 to simplify the limit of the logarithm:
$$ \lim_{x\to x',\,y\to y'}\left\{2\cosh\left[\frac{2\pi(x-x')}{W}\right] - 2\cos\left[\frac{2\pi(y-y')}{W}\right]\right\} = \left(\frac{2\pi}{W}\right)^2\left[(x-x')^2 + (y-y')^2\right] \tag{5.18} $$
so that
$$ \lim_{x\to x',\,y\to y'}\frac{1}{4\pi}\ln\frac{e^{2\pi|x-x'|/W}}{2\cosh\left[2\pi(x-x')/W\right] - 2\cos\left[2\pi(y-y')/W\right]} = \frac{1}{2\pi}\ln\frac{W}{2\pi} - \frac{1}{2\pi}\lim_{r'\to r}\ln|r - r'|. \tag{5.19} $$
Since (from A.5.1)
$$ \lim_{r'\to r} G_o(r, r'; z) = -\frac{i}{4} + \frac{1}{4}Y_o^{(R)}(0) + \frac{1}{2\pi}\ln\sqrt{E} + \frac{1}{2\pi}\ln|r - r'| $$
we have
$$ \bar{G}_B^+(x, y; E) = \lim_{r'\to r}\left[G_B^+(x, y, x', y'; E) - G_o^{(+)}(r, r'; E)\right] = \frac{-i}{2W\sqrt{E}} - \frac{i}{W}\sum_{a=1}^{N}\left[\frac{1}{k_a} - \frac{W}{2\pi i a}\right] - \frac{1}{W}\sum_{a>N}\left[\frac{1}{\kappa_a} - \frac{W}{2\pi a}\right] + \frac{1}{2\pi}\ln\frac{2\pi}{W\sqrt{E}} + \frac{i}{4} - \frac{1}{4}Y_o^{(R)}(0) \tag{5.20} $$
which is independent of $x$ and $y$, as it must be for a translationally invariant system, and finite, as proved in section 2.3.1. The case of a Dirichlet bounded wire is very similar, and so there is no need to repeat the calculation. For the sake of later calculations, we state the results here. We have
$$
\begin{aligned}
G_B^+(x, y, x', y'; E) ={}& -\frac{i}{W}\sum_{a=1}^{N}\sin\frac{\pi a y}{W}\sin\frac{\pi a y'}{W}\left[\frac{e^{ik_a|x-x'|}}{k_a} - \frac{W}{\pi i a}\,e^{-\pi a|x-x'|/W}\right] \\
& - \frac{1}{W}\sum_{a>N}\sin\frac{\pi a y}{W}\sin\frac{\pi a y'}{W}\left[\frac{e^{-\kappa_a|x-x'|}}{\kappa_a} - \frac{W}{\pi a}\,e^{-\pi a|x-x'|/W}\right] \\
& - \frac{1}{4\pi}\ln\left[\frac{\sin^2\frac{\pi(y+y')}{2W} + \sinh^2\frac{\pi(x-x')}{2W}}{\sin^2\frac{\pi(y-y')}{2W} + \sinh^2\frac{\pi(x-x')}{2W}}\right]
\end{aligned} \tag{5.21}
$$
and
$$ \bar{G}_B^+(x, y; E) = -\frac{i}{W}\sum_{a=1}^{N}\sin^2\frac{\pi a y}{W}\left[\frac{1}{k_a} - \frac{W}{\pi i a}\right] - \frac{1}{W}\sum_{a>N}\sin^2\frac{\pi a y}{W}\left[\frac{1}{\kappa_a} - \frac{W}{\pi a}\right] - \frac{1}{4\pi}\ln\sin^2\frac{\pi y}{W} + \frac{1}{2\pi}\ln\frac{\pi}{2W\sqrt{E}} + \frac{i}{4} - \frac{1}{4}Y_o^{(R)}(0). \tag{5.22} $$
5.3 Renormalization of the ZRI Scattering Strength
Recall that the full Green function may be written
$$ G^+(E) = G_B^+(E) + G_B^+(E)\,T^+(E)\,G_B^+(E) \tag{5.23} $$
where $T^+(E)$ is computed via the techniques in chapter 3, namely renormalization of the free space t-matrices.
We begin with a single scatterer in free space at $x = y = 0$. The t-matrix of that scatterer is (see section 2.4)
$$ t^+(E) = s^+(E)\,|0\rangle\langle 0| \tag{5.24} $$
which is renormalized by scattering from the boundaries of the wire:
$$ T^+(E) = S^+(E)\,|0\rangle\langle 0| \tag{5.25} $$
where
$$ S^+(E) = s^+(E)\left[1 - s^+(E)\,\bar{G}_B^+(0, 0; E)\right]^{-1}, \qquad \frac{1}{S^+(E)} = \frac{1}{s^+(E)} - \bar{G}_B^+(0, 0; E). \tag{5.26} $$
Thus, for a periodic wire,
$$ \frac{1}{S^+(E)} = \frac{1}{s^+(E)} + \frac{i}{2W\sqrt{E}} + \frac{i}{W}\sum_{a=1}^{N}\left[\frac{1}{k_a} - \frac{W}{2\pi i a}\right] + \frac{1}{W}\sum_{a>N}\left[\frac{1}{\kappa_a} - \frac{W}{2\pi a}\right] - \frac{1}{2\pi}\ln\frac{2\pi}{W\sqrt{E}} - \frac{i}{4} + \frac{1}{4}Y_o^{(R)}(0) \tag{5.27} $$
and in the Dirichlet bounded wire (for a scatterer at transverse position $y_0$),
$$ \frac{1}{S^+(E)} = \frac{1}{s^+(E)} + \frac{i}{W}\sum_{a=1}^{N}\sin^2\frac{\pi a y_0}{W}\left[\frac{1}{k_a} - \frac{W}{\pi i a}\right] + \frac{1}{W}\sum_{a>N}\sin^2\frac{\pi a y_0}{W}\left[\frac{1}{\kappa_a} - \frac{W}{\pi a}\right] + \frac{1}{4\pi}\ln\sin^2\frac{\pi y_0}{W} - \frac{1}{2\pi}\ln\frac{\pi}{2W\sqrt{E}} - \frac{i}{4} + \frac{1}{4}Y_o^{(R)}(0). \tag{5.28} $$
So we have
$$ G^+(x, y, x', y'; E) = G_B^+(x, y, x', y'; E) + G_B^+(x, y, 0, 0; E)\,S^+(E)\,G_B^+(0, 0, x', y'; E) \tag{5.29} $$
with $S^+(E)$ defined above.
5.4 From the Green function to Conductance
Since we now have the full Green function in the position representation, we can compute it between any two points in the wire. We can use this to compute the unitless conductance, $g$, of the wire via the Fisher-Lee relation [16, 40],
$$ g = \operatorname{Tr}\left(T^{\dagger} T\right) \tag{5.30} $$
where $T$ is the transmission matrix, i.e., $(T)_{ab}$ is the amplitude for transmitting from channel $a$ in the left lead to channel $b$ in the right lead. $T$ is constructed from $G^+(r, r'; E)$ via
$$ (T)_{ab} = -i\,v_a\sqrt{\frac{k_a}{k_b}}\;G_{ab}^+(x, x'; E)\exp\left[-i(k_b x' - k_a x)\right] \tag{5.31} $$
where
$$ G_{ab}^+(x, x'; E) = \left\langle x, a\right|\hat{G}^+(E)\left|x', b\right\rangle = \int \phi_a(y)\,G^+(x, y, x', y'; E)\,\phi_b(y')\,dy\,dy' \tag{5.32} $$
is the Green function projected onto the channels of the leads. Since the choices of $x$ and $x'$ are arbitrary, we can choose them large enough that all the evanescent modes are arbitrarily small, and thus we can ignore the closed channels. So there are only a finite number of propagating modes and the trace in the Fisher-Lee relation is a finite sum. We note that the prefactor $v_a\sqrt{k_a/k_b}$ is there simply because we have normalized our channels via $\int_0^W |\phi_a(y)|^2\,dy = 1$ rather than to unit flux. More detail on this is presented in [16] and even more in the review [40].
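A minimal use of the Fisher-Lee trace (toy matrices, not computed from any Green function): for an ideal wire every open channel transmits perfectly and $g$ counts the channels, while a scatterer that partially blocks one channel (amplitude $0.8$, an illustrative value) removes $1 - 0.8^2$ of a conductance quantum.

```python
import numpy as np

# g = Tr(T^dagger T): ideal wire, then one partially blocked channel
T_ideal = np.eye(5, dtype=complex)
g_ideal = np.trace(T_ideal.conj().T @ T_ideal).real
assert abs(g_ideal - 5) < 1e-12

T_blocked = np.diag([1, 1, 1, 1, 0.8]).astype(complex)
g_blocked = np.trace(T_blocked.conj().T @ T_blocked).real
assert abs(g_blocked - (4 + 0.64)) < 1e-12
```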
5.5 Computing the channel-to-channel Green function
We write the Green function of the infinite ordered wire as (see, e.g., [10])
$$ \hat{G}_B^+(z) = \sum_a \int_{-\infty}^{\infty} \frac{|k, a\rangle\langle k, a|}{z - \varepsilon_a - k^2 + i\epsilon}\,dk \tag{5.33} $$
so
$$ \left\langle x, a\right|\hat{G}_B^+(z)\left|x', b\right\rangle = \delta_{ab}\int_{-\infty}^{\infty}\frac{dk}{2\pi}\,\frac{e^{ik(x-x')}}{z - \varepsilon_a - k^2 + i\epsilon} = \delta_{ab}\,g_o^+(x, x'; z - \varepsilon_a). \tag{5.34} $$
Since
$$ \hat{G}^+(z) = \hat{G}_B^+(z) + \hat{G}_B^+(z)\,\hat{T}^+(z)\,\hat{G}_B^+(z) \tag{5.35} $$
and $T^+(z) = S^+(z)\,|r_s\rangle\langle r_s|$, we have
$$ G_{ab}^+(x, x'; z) = \delta_{ab}\,g_o^+\!\left(x, x'; z - \varepsilon_a\right) + g_o^+(x, x_s; z - \varepsilon_a)\,\phi_a(y_s)\,S^+(z)\,\phi_b(y_s)\,g_o^+\!\left(x_s, x'; z - \varepsilon_b\right). $$
If the t-matrix comes from the multiple scattering of many zero range interactions, the t-matrix can be written
$$ \hat{t}(z) = \sum_{ij} |r_i\rangle\,(t(z))_{ij}\,\langle r_j|. \tag{5.36} $$
In that case we will have a slightly more complicated expression for the channel-to-channel Green function:
$$ G_{ab}^+(x, x'; z) = \delta_{ab}\,g_o^+\!\left(x, x'; z - \varepsilon_a\right) + \sum_{ij} g_o^+(x, x_i; z - \varepsilon_a)\,\phi_a(y_i)\,(t(z))_{ij}\,\phi_b(y_j)\,g_o^+\!\left(x_j, x'; z - \varepsilon_b\right). \tag{5.37} $$
5.6 One Scatterer in a Narrow Wire
Whereas the wide wire can be modeled classically, the narrow wire is essentially quantum mechanical. In fact, what we mean by a narrow wire is a wire with fewer than one half wavelength across it. When the wire becomes narrow, the renormalization of the scattering amplitude due to the presence of the wire walls becomes significant. This is immediately apparent when we consider the case of a wire which is narrower than the scattering length of the scatterer. In essence, we consider reducing a two-dimensional scattering problem to a one-dimensional one by shrinking the width of the wire. We'd like to consider the case where the wavelength, $\lambda$, is much larger than the width of the wire. This cannot be done for a wire with Dirichlet boundary conditions, since there the lowest energy with one open channel gives a half wavelength across the wire. We will instead use a wire with periodic boundary conditions to consider this case. If the wide wire result remained valid, we would expect the transmission coefficient to be zero when there is only one open channel since, in the wide wire, our scatterer reflected exactly one channel. However, as we already saw in figure 5.4, this is not the case. As we see in more detail in figure 5.5, the transmission coefficient has a surprisingly nontrivial behavior when the width of the wire shrinks below a wavelength.
Figure 5.5: Transmission coefficient of a single scatterer in a narrow periodic wire (numerical data vs. renormalized t-matrix theory).

It is possible to define a sort of cross-section in one dimension, for instance via the $d = 1$ optical theorem from section 2.2. In figure 5.6, we plot the cross-section of the scatterer rather than the transmission coefficient.
Figure 5.6: Cross-section of a single scatterer in a narrow periodic wire.

There are various unexpected features of this transmission. In particular, why should the transmission go to 0 at some finite wire width? Also, why should the transmission go to 0 for a zero width wire? A less obvious feature, but one which is perhaps more interesting, is that for very small widths the behavior of the transmission coefficient is independent of the original scattering strength. These features are consequences of the renormalization of the scattering amplitude. To see how this transmission occurs, we compute the full channel-to-channel Green function for one scatterer in the center ($x = 0$) of a narrow ($W \to 0$) periodic wire. Since the wire is narrow, there is only one open channel, so the channel-to-channel Green function has only one term:
!
+ y x y E ) = go (x x E )
0 0 0
+ +go (x 0 E ) p1
W
lim S + (E ) W 0
!
p1W go+(0 x
0
E ):
The small $W$ limit of $S^+(E)$ is straightforward. Using the result (5.27), we have
$$ S^+(E) = \left\{\frac{i}{2W\sqrt{E}} - \frac{1}{2\pi}\ln\frac{2\pi a}{W} + \frac{1}{W}\sum_{a'=1}^{\infty}\left[\frac{1}{\kappa_{a'}} - \frac{W}{2\pi a'}\right]\right\}^{-1} \tag{5.38} $$
where $a$ is the scattering length. For small $W$,
$$ \frac{1}{\kappa_{a'}} \approx \frac{W}{2\pi a'}\left(1 + \frac{EW^2}{8\pi^2 a'^2}\right) \tag{5.39} $$
and so the closed-channel sum may be approximated
$$ \frac{1}{W}\sum_{a'=1}^{\infty}\left[\frac{1}{\kappa_{a'}} - \frac{W}{2\pi a'}\right] \approx \frac{EW^2}{16\pi^3}\sum_{a'=1}^{\infty}\frac{1}{a'^3} = \frac{EW^2}{16\pi^3}\,\zeta(3) \approx 1.2\,\frac{EW^2}{16\pi^3}. \tag{5.40} $$
So
$$ S^+(E) \approx \left[\frac{i}{2W\sqrt{E}} - \frac{1}{2\pi}\ln\frac{2\pi a}{W} + 1.2\,\frac{EW^2}{16\pi^3}\right]^{-1}. \tag{5.41} $$
Using the definition of $g_o$ and factoring out the large $i/(2W\sqrt{E})$ in $S^+$, we have
$$ \lim_{W\to 0} G_{00}^+(x, y, x', y'; E) \approx \frac{-i}{2\sqrt{E}}\,e^{i\sqrt{E}|x-x'|} + \left[\frac{-i}{2\sqrt{E}}\,e^{i\sqrt{E}|x|}\right]\frac{1}{\sqrt{W}}\,\frac{2W\sqrt{E}}{i}\left[1 + \frac{W\sqrt{E}}{\pi i}\ln\frac{2\pi a}{W} - 1.2\,\frac{E^{3/2}W^3}{8\pi^3 i}\right]\frac{1}{\sqrt{W}}\left[\frac{-i}{2\sqrt{E}}\,e^{i\sqrt{E}|x'|}\right] \tag{5.42} $$
which, after the dust settles, can be rewritten as
$$ \lim_{W\to 0} G_{00}^{(+)}(x, y, x', y'; E) \approx \frac{-i}{2\sqrt{E}}\,e^{i\sqrt{E}|x-x'|} + \frac{i}{2\sqrt{E}}\,e^{i\sqrt{E}(|x|+|x'|)}\left[1 + \frac{W\sqrt{E}}{\pi i}\ln\frac{2\pi a}{W} - 1.2\,\frac{E^{3/2}W^3}{8\pi^3 i}\right]. \tag{5.43} $$
Since we are interested only in transmission, we can assume $x < 0$ and $x' > 0$, so $|x - x'| = |x| + |x'|$. With this caveat, we have
$$ \lim_{W\to 0} G_{00}^+(x, y, x', y'; E) \approx e^{i\sqrt{E}(|x|+|x'|)}\left[\frac{W}{2\pi}\ln\frac{2\pi a}{W} - 1.2\,\frac{EW^3}{16\pi^3}\right]. \tag{5.44} $$
Finally, from (5.31), we have
$$ (T)_{00} = -i\left[\frac{W\sqrt{E}}{\pi}\ln\frac{2\pi a}{W} - 1.2\,\frac{E^{3/2}W^3}{8\pi^3}\right] \tag{5.45} $$
since, in units where $\hbar^2/2m = 1$, $v = 2\sqrt{E}$. We have plotted this small $W$ approximation $|(T)_{00}|^2$ in figure 5.5. We see that there is good quantitative agreement for widths of fewer than $1/4$ wavelength. What is perhaps more surprising is the reasonably good qualitative agreement for the entire range of one wavelength.
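The zero of (5.45) at finite width is easy to locate numerically. In the sketch below (the values of $E$ and the scattering length $a$ are illustrative, chosen only so that $W = 2\pi a$ falls inside the scanned range), the bracket in (5.45) vanishes very close to $W = 2\pi a$, where the logarithm changes sign, and $|(T)_{00}|^2$ also goes to zero as $W \to 0$.

```python
import numpy as np

E, a = 0.5, 0.05                     # illustrative energy and scattering length

def t00(W):
    # small-W transmission amplitude, eq. (5.45)
    return -1j * (W * np.sqrt(E) / np.pi * np.log(2 * np.pi * a / W)
                  - 1.2 * E ** 1.5 * W ** 3 / (8 * np.pi ** 3))

Ws = np.linspace(0.01, 1.0, 2000)
Tvals = np.abs(t00(Ws)) ** 2
W_zero = Ws[np.argmin(Tvals)]        # transmission zero near W = 2*pi*a
assert abs(W_zero - 2 * np.pi * a) < 0.05
```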
Chapter 6
Scattering in Rectangles I: One Scatterer
In this section we begin the discussion of closed systems and eigenstates. As with the scatterer in a wire problem, we have quite a bit of preliminary work to do. We first compute the background Green function, $G_B$, and the diagonal difference $\langle r|G_B - G_o|r\rangle$ for the Dirichlet rectangle. We then perform a simple numerical check on this result by using it to compute the ground state energy of a $1\times 1$ square with a single zero range interaction at the center. We compare these energies with those computed by successive over-relaxation (a standard lattice technique [33]) for a hard disk of the same effective radius as specified for the scatterer. This provides the simplest illustration of the somewhat subtle process of extracting discrete spectra from scattering theory. We then compute the background Green function and diagonal difference for the periodic rectangle (a torus), as we will need that in chapter 9.
6.1 Dirichlet boundaries
6.1.1 The Background Green Function
We consider a rectangular domain, $D$, in $\mathbb{R}^2$ defined by
$$ D = [0, l]\times[0, W] = \left\{(x, y)\,\middle|\,0 < x < l,\ 0 < y < W\right\}. \tag{6.1} $$
We wish to solve Schrödinger's equation
$$ \left[\frac{\hbar^2}{2m}\nabla^2 + E\right]\psi(\vec{r}) = 0 \tag{6.2} $$
in the domain $D$ subject to the boundary condition
$$ \vec{r}\in\partial D \Rightarrow \psi(\vec{r}) = 0 \tag{6.3} $$
where $\partial D$ is the boundary of $D$. As in the previous chapter we set $\frac{\hbar^2}{2m} = 1$. Our equation reads
$$ \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + E\right)\psi(x, y) = 0. \tag{6.4} $$
The eigenfunctions of $L = -\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)$ in the above domain with the given boundary condition are
$$ \phi_{nm}(x, y) = \frac{2}{\sqrt{lW}}\sin\frac{n\pi x}{l}\sin\frac{m\pi y}{W} \tag{6.5} $$
which satisfy
$$ -\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)\phi_{nm}(x, y) = \left(\frac{n^2\pi^2}{l^2} + \frac{m^2\pi^2}{W^2}\right)\phi_{nm}(x, y). \tag{6.6} $$
We can thus write down a box Green function in the position representation
$$ G_B(x, y, x', y'; E) = \frac{4}{lW}\sum_{n,m=1}^{\infty}\frac{\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\sin\frac{m\pi y}{W}\sin\frac{m\pi y'}{W}}{E - \frac{n^2\pi^2}{l^2} - \frac{m^2\pi^2}{W^2}} \tag{6.7} $$
which satisfies
$$ \left(z + \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)G_B(x, y, x', y'; z) = \delta(x - x')\,\delta(y - y'). \tag{6.8} $$
6.1.2 Resumming the Dirichlet Green Function
As in the previous chapter, we'd like to rewrite this double sum as a single sum. We can do this by rearranging the sum and recognizing a Fourier series. First we rearrange 6.7 as follows:
$$ G_B(x, y, x', y'; E) = \frac{2}{l}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{2}{W}\sum_{m=1}^{\infty}\frac{\sin\frac{m\pi y}{W}\sin\frac{m\pi y'}{W}}{E - \frac{n^2\pi^2}{l^2} - \frac{m^2\pi^2}{W^2}}. \tag{6.9} $$
We define
$$ k_n = \sqrt{E - \frac{n^2\pi^2}{l^2}} \tag{6.10} $$
$$ \gamma_n = \sqrt{\frac{n^2\pi^2}{l^2} - E} \tag{6.11} $$
$$ N = \left[\frac{l\sqrt{E}}{\pi}\right] \tag{6.12} $$
where $[x]$ is the greatest integer less than or equal to $x$, and then apply a standard trigonometric identity to the product of sines in the inner sum to yield
$$ G_B(x, y, x', y'; E) = \frac{2}{l}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{1}{W}\sum_{m=1}^{\infty}\frac{\cos\frac{m\pi(y-y')}{W} - \cos\frac{m\pi(y+y')}{W}}{k_n^2 - \frac{m^2\pi^2}{W^2}}. \tag{6.13} $$
We now need the Fourier series (see, e.g., [18])
$$ \sum_{n=1}^{\infty}\frac{\cos nx}{a^2 - n^2} = \frac{\pi\cos\left[a(\pi - x)\right]}{2a\sin a\pi} - \frac{1}{2a^2} \tag{6.14} $$
for $0 \leq x \leq 2\pi$. So we have
$$ G_B(x, y, x', y'; E) = \frac{2}{l}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\cos\left[k_n(W - (y - y'))\right] - \cos\left[k_n(W - (y + y'))\right]}{2k_n\sin(k_n W)} $$
and, after applying trigonometric identities (taking $y \leq y'$), we have
$$ G_B(x, y, x', y'; E) = \frac{2}{l}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\sin(k_n y)\sin\left[k_n(y' - W)\right]}{k_n\sin(k_n W)}. $$
We now rewrite our expression for $G_B$ yet again, splitting off the energetically forbidden transverse modes ($n > N$), for which $k_n = i\gamma_n$:
$$ G_B(x, y, x', y'; E) = \frac{2}{l}\sum_{n=1}^{N}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\sin(k_n y)\sin\left[k_n(y' - W)\right]}{k_n\sin(k_n W)} + \frac{2}{l}\sum_{n=N+1}^{\infty}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\sinh(\gamma_n y)\sinh\left[\gamma_n(y' - W)\right]}{\gamma_n\sinh(\gamma_n W)} \tag{6.15} $$
6.1.3 Properties of the Resummed $G_B$

As we will show below, the sum in (6.15) is not uniformly convergent as $y \to y'$. However, this limit is essential for the calculation of renormalized t-matrices. We can, however, add and subtract something from each term so that we are left with convergent sums and singular sums with limits we understand. Since the sum is symmetric under $y \leftrightarrow y'$ (this is obvious from the physical symmetry as well as from the original double sum, equation 6.7), we may choose $y < y'$. We define $\epsilon = y' - y > 0$ and rewrite the sum (6.15) as follows:
$$ G_B(x, y, x', y + \epsilon; E) = \frac{2}{l}\sum_{n=1}^{N}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\sin(k_n y)\sin\left[k_n(y + \epsilon - W)\right]}{k_n\sin(k_n W)} + \frac{1}{l}\sum_{n=N+1}^{\infty} u_n \tag{6.16} $$
where
$$ u_n = 2\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\sinh(\gamma_n y)\sinh\left[\gamma_n(y + \epsilon - W)\right]}{\gamma_n\sinh\gamma_n W}. \tag{6.17} $$
We rewrite $u_n$ to simplify the following analysis:
$$ u_n = -\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{e^{-\gamma_n\epsilon} - e^{-\gamma_n(2y+\epsilon)} + e^{-\gamma_n(2W-\epsilon)} - e^{-\gamma_n(2W-2y-\epsilon)}}{\gamma_n\left(1 - e^{-2\gamma_n W}\right)}. \tag{6.18} $$
We define
$$ g_n(\epsilon) = \sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{e^{-\gamma_n\epsilon}}{\gamma_n\left(1 - e^{-2\gamma_n W}\right)} \tag{6.19} $$
so that $u_n = -\left[g_n(\epsilon) - g_n(2y+\epsilon) + g_n(2W-\epsilon) - g_n(2W-2y-\epsilon)\right]$. We observe that for $M < n$,
$$ |g_n(\epsilon)| \leq \left[1 - e^{-2\gamma_M W}\right]^{-1}\frac{e^{-\gamma_n\epsilon}}{\gamma_n} < \left[1 - e^{-2\gamma_M W}\right]^{-1}\frac{l}{n\pi}\exp\left(\frac{El\epsilon}{2\pi M}\right)\exp\left(-\frac{n\pi\epsilon}{l}\right) \tag{6.20} $$
and therefore
$$ \sum_{n=M}^{\infty} g_n(\epsilon) < \left[1 - e^{-2\gamma_M W}\right]^{-1}\frac{l}{M\pi}\exp\left(\frac{El\epsilon}{2\pi M}\right)\frac{\exp\left(-M\pi\epsilon/l\right)}{1 - \exp\left(-\pi\epsilon/l\right)} \tag{6.21} $$
which converges for $\epsilon > 0$ but not uniformly. Thus we cannot take the $\epsilon \to 0$ limit inside the sum. However, if we can subtract off the diverging part, we may be left with a uniformly converging sum as a remainder. With that in mind, we define
$$ h_n(\epsilon) = \sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{l}{n\pi}\exp\left(-\frac{n\pi\epsilon}{l}\right). \tag{6.22} $$
We first note that the sum $\sum_{n=1}^{\infty} h_n(\epsilon)$ may be performed exactly (see appendix D), yielding
$$ \sum_{n=1}^{\infty} h_n(\epsilon) = \frac{l}{4\pi}\ln\left\{\frac{\sin^2\left[\frac{\pi(x+x')}{2l}\right] + \sinh^2\left[\frac{\pi\epsilon}{2l}\right]}{\sin^2\left[\frac{\pi(x-x')}{2l}\right] + \sinh^2\left[\frac{\pi\epsilon}{2l}\right]}\right\}. \tag{6.23} $$
We now subtract $h_n(\epsilon)$ from $g_n(\epsilon)$ to get
$$ g_n(\epsilon) - h_n(\epsilon) = \sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{l}{n\pi}\left[\left(1 - e^{-2\gamma_n W}\right)^{-1}\left(1 - \frac{El^2}{n^2\pi^2}\right)^{-1/2}\exp\left(-\frac{n\pi\epsilon}{l}\sqrt{1 - \frac{El^2}{n^2\pi^2}}\right) - \exp\left(-\frac{n\pi\epsilon}{l}\right)\right]. \tag{6.24} $$
As we show in appendix D, we can place a rigorous upper bound on $\sum_{n=M}^{\infty}\left[g_n(\epsilon) - h_n(\epsilon)\right]$ for sufficiently large $M$. More precisely, given
$$ M > \max\left\{\frac{l\sqrt{2E}}{\pi},\ \frac{l}{\pi}\sqrt{E + \frac{2}{W}\ln\frac{El^2}{\pi^2}}\right\} \tag{6.25} $$
then
$$ \sum_{n=M}^{\infty}\left[g_n(\epsilon) - h_n(\epsilon)\right] < \frac{El^2 W}{2\pi^2 M^2} + 3l\exp\left(-\frac{M\pi W}{l}\right) \tag{6.26} $$
and $\sum_{n=M}^{\infty}\left[g_n(\epsilon) - h_n(\epsilon)\right]$ is uniformly convergent in $\epsilon \in [0, W]$. We now have
$$
\begin{aligned}
G_B(x, y, x', y'; E) ={}& \frac{2}{l}\sum_{n=1}^{N}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\sin(k_n y)\sin\left[k_n(y' - W)\right]}{k_n\sin(k_n W)} \\
& - \frac{1}{l}\sum_{n=1}^{N}\left[-h_n(\epsilon) + h_n(2y+\epsilon) + h_n(2W-2y-\epsilon) - h_n(2W-\epsilon)\right] \\
& + \frac{1}{l}\sum_{n=1}^{\infty}\left[-h_n(\epsilon) + h_n(2y+\epsilon) + h_n(2W-2y-\epsilon) - h_n(2W-\epsilon)\right] \\
& - \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(\epsilon) - h_n(\epsilon)\right] + \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(2y+\epsilon) - h_n(2y+\epsilon)\right] \\
& + \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(2W-2y-\epsilon) - h_n(2W-2y-\epsilon)\right] - \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(2W-\epsilon) - h_n(2W-\epsilon)\right].
\end{aligned} \tag{6.27}
$$
The infinite sums of $h_n$'s can be performed (see appendix D), so we have
$$
\begin{aligned}
G_B(x, y, x', y'; E) ={}& \frac{2}{l}\sum_{n=1}^{N}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\;\frac{\sin(k_n y)\sin\left[k_n(y' - W)\right]}{k_n\sin(k_n W)} \\
& - \frac{1}{l}\sum_{n=1}^{N}\left[-h_n(\epsilon) + h_n(2y+\epsilon) + h_n(2W-2y-\epsilon) - h_n(2W-\epsilon)\right] \\
& - \frac{1}{4\pi}\ln\left\{\frac{\sin^2\left[\frac{\pi(x+x')}{2l}\right] + \sinh^2\left[\frac{\pi\epsilon}{2l}\right]}{\sin^2\left[\frac{\pi(x-x')}{2l}\right] + \sinh^2\left[\frac{\pi\epsilon}{2l}\right]}\right\}
 + \frac{1}{4\pi}\ln\left\{\frac{\sin^2\left[\frac{\pi(x+x')}{2l}\right] + \sinh^2\left[\frac{\pi(2y+\epsilon)}{2l}\right]}{\sin^2\left[\frac{\pi(x-x')}{2l}\right] + \sinh^2\left[\frac{\pi(2y+\epsilon)}{2l}\right]}\right\} \\
& + \frac{1}{4\pi}\ln\left\{\frac{\sin^2\left[\frac{\pi(x+x')}{2l}\right] + \sinh^2\left[\frac{\pi(2W-2y-\epsilon)}{2l}\right]}{\sin^2\left[\frac{\pi(x-x')}{2l}\right] + \sinh^2\left[\frac{\pi(2W-2y-\epsilon)}{2l}\right]}\right\}
 - \frac{1}{4\pi}\ln\left\{\frac{\sin^2\left[\frac{\pi(x+x')}{2l}\right] + \sinh^2\left[\frac{\pi(2W-\epsilon)}{2l}\right]}{\sin^2\left[\frac{\pi(x-x')}{2l}\right] + \sinh^2\left[\frac{\pi(2W-\epsilon)}{2l}\right]}\right\} \\
& - \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(\epsilon) - h_n(\epsilon)\right] + \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(2y+\epsilon) - h_n(2y+\epsilon)\right] \\
& + \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(2W-2y-\epsilon) - h_n(2W-2y-\epsilon)\right] - \frac{1}{l}\sum_{n=N+1}^{\infty}\left[g_n(2W-\epsilon) - h_n(2W-\epsilon)\right].
\end{aligned} \tag{6.28}
$$
When using these expressions we truncate the infinite sums at some finite $M$. The analysis in appendix D allows us to bound the truncation error. With this in mind we define a quantity which contains the nonsingular part of $G_B$ truncated at the $M$th term:
$$
\begin{aligned}
S_M(x,y,x',y';E) = {} & \frac{2}{l}\sum_{n=1}^{N}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}\,\frac{\sin(k_n y_<)\,\sin[k_n(y_>-W)]}{k_n\sin(k_nW)} \\
& + \frac{1}{l}\sum_{n=1}^{N}\left[h_n(\epsilon)-h_n(2y_<+\epsilon)-h_n(2W-2y_>+\epsilon)+h_n(2W-\epsilon)\right] \\
& + \frac{1}{4\pi}\left[L(2y_<+\epsilon)+L(2W-2y_>+\epsilon)-L(2W-\epsilon)\right] \\
& - \frac{1}{l}\sum_{n=N+1}^{M}\left[g_n(\epsilon)-h_n(\epsilon)\right]+\frac{1}{l}\sum_{n=N+1}^{M}\left[g_n(2y_<+\epsilon)-h_n(2y_<+\epsilon)\right] \\
& + \frac{1}{l}\sum_{n=N+1}^{M}\left[g_n(2W-2y_>+\epsilon)-h_n(2W-2y_>+\epsilon)\right]-\frac{1}{l}\sum_{n=N+1}^{M}\left[g_n(2W-\epsilon)-h_n(2W-\epsilon)\right].
\end{aligned}\qquad(6.30)
$$
So
$$G_B(x,y,x',y';E)\approx S_M(x,y,x',y';E)-\frac{1}{4\pi}\ln\left\{\frac{\sin^2\left[\frac{\pi(x+x')}{2l}\right]+\sinh^2\left[\frac{\pi(y'-y)}{2l}\right]}{\sin^2\left[\frac{\pi(x-x')}{2l}\right]+\sinh^2\left[\frac{\pi(y'-y)}{2l}\right]}\right\}.\qquad(6.31)$$
We now have an approximate form for $G_B$ which involves only finite sums and straightforward function evaluations. This will be analytically useful when we subtract $G_o$ from $G_B$. It will also prove numerically useful in computing $G_B$.
6.1.4 Explicit Expression for $\Delta G_B(r,k^2)$
Within the dressed-t formalism, we often need to calculate $\Delta G_B(r,z)\equiv G_B(r,r;z)-G_o(r,r;z)$. This quantity deserves special attention because it involves a very sensitive cancellation of infinities. Recall that
$$\lim_{r'\to r}G_o(r,r';k^2)=\frac{1}{2\pi}\ln k|r-r'|+\frac14Y_o^{(R)}(0)-\frac i4\qquad(6.32)$$
where $Y_o^{(R)}$ is the regular part of $Y_o$ as defined in equation A.48 of section A.5.1. If $G_B-G_o$ is to be finite, we need a canceling logarithmic singularity in $G_B$. Equation 6.31 makes it apparent that just such a singularity is present in $G_B$: the logarithm term in that equation has a denominator which goes to 0 as $r'\to r$. We carefully manipulate the logarithms and write an explicitly finite expression for $G_B-G_o$:
$$\Delta G_B(r,E)=S_M(x,y,x,y;E)-\frac{1}{2\pi}\ln\frac{2kl}{\pi}-\frac{1}{2\pi}\ln\sin\frac{\pi x}{l}-\frac14Y_o^{(R)}(0)+\frac i4.\qquad(6.33)$$
6.1.5 Ground State Energies
One nice consequence of equation 3.44 is that it predicts a simple formula for the ground state energy of one scatterer with a known Green function background. Specifically, (3.44) implies that poles of $T(E)$ occur when
$$\frac{1}{s(E)}=\Delta G_B(r_s,E).\qquad(6.34)$$
From section 2.4 we have
$$\frac{1}{s(E)}=\frac14\left[i-\frac{Y_o(ka)}{J_o(ka)}\right]\qquad(6.35)$$
and for $ka\ll1$ we have
$$\frac{1}{s(E)}\approx\frac i4-\frac{1}{2\pi}\ln\frac{ka}{2}-\frac14Y_o^{(R)}(ka)=\frac i4-\frac{1}{2\pi}\ln kl-\frac{1}{2\pi}\ln\frac{a}{2l}-\frac14Y_o^{(R)}(ka).\qquad(6.36)$$
We define $F(E)=\frac{1}{s(E)}-\Delta G_B(r_s,E)$, so
$$F(E)=-\frac14\frac{Y_o(ka)}{J_o(ka)}+\frac{1}{2\pi}\ln\frac{2kl}{\pi}+\frac14Y_o^{(R)}(0)-S_M(x,y,x,y;E)+\frac{1}{2\pi}\ln\sin\frac{\pi x}{l}.$$
We can use the $ka\ll1$ expansion of the Neumann function to simplify $F$ a bit:
$$F(E)=\frac{1}{2\pi}\ln\frac{4l}{\pi a}-\frac14\left[Y_o^{(R)}(ka)-Y_o^{(R)}(0)\right]-S_M(x,y,x,y;E)+\frac{1}{2\pi}\ln\sin\frac{\pi x}{l}.$$
In figure 6.1 we compare the numerical solution of the equation $F(E)=0$ with a numerical simulation performed with a standard method (Successive Over-Relaxation [33]).
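Locating the ground state amounts to finding the lowest zero of F(E), which a grid scan plus bisection handles reliably since F is cheap to evaluate. A sketch of that generic procedure (the cosine in the usage example is a stand-in test function, not the dressed-t F):

```python
def find_root(F, a, b, tol=1e-12):
    """Bisection; assumes F(a) and F(b) have opposite signs."""
    fa = F(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = F(m)
        if fa * fm <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
        if b - a < tol:
            break
    return 0.5 * (a + b)

def scan_and_solve(F, Emin, Emax, steps=1000):
    """Scan an energy grid for sign changes of F and bisect each bracket."""
    roots = []
    Es = [Emin + (Emax - Emin) * i / steps for i in range(steps + 1)]
    for e1, e2 in zip(Es, Es[1:]):
        if F(e1) * F(e2) < 0:
            roots.append(find_root(F, e1, e2))
    return roots
```

For example, `scan_and_solve(math.cos, 0.0, 10.0)` returns the three zeros of the cosine in (0, 10), at approximately π/2, 3π/2, and 5π/2; the lowest root found for the actual F is the ground state energy.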
6.2 Periodic boundaries
6.2.1 The Background Green Function
We consider a rectangular domain, $\Omega$, in $\mathbb{R}^2$ defined by
$$\Omega=[0,l]\times[0,W]=\{(x,y)\,|\,0<x<l,\ 0<y<W\}.\qquad(6.37)$$
We wish to solve Schrödinger's equation
$$\left[\frac{\hbar^2}{2m}\nabla^2+E\right]\psi(\vec r)=0\qquad(6.38)$$
in the domain $\Omega$ subject to the periodic boundary conditions
$$\psi(0,y)=\psi(l,y),\qquad\left.\frac{\partial\psi}{\partial x}\right|_{x=0}=\left.\frac{\partial\psi}{\partial x}\right|_{x=l},\qquad
\psi(x,0)=\psi(x,W),\qquad\left.\frac{\partial\psi}{\partial y}\right|_{y=0}=\left.\frac{\partial\psi}{\partial y}\right|_{y=W}.$$
We set $\frac{\hbar^2}{2m}=1$. Our equation reads
$$\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+E\right)\psi(x,y)=0.\qquad(6.39)$$
The eigenfunctions of $L=-\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)$ in the above domain with the given boundary conditions are
$$\psi^{(1)}_{nm}(x,y)=\frac{2}{\sqrt{lW}}\sin\frac{2n\pi x}{l}\sin\frac{2m\pi y}{W}\qquad(6.40)$$
$$\psi^{(2)}_{nm}(x,y)=\frac{2}{\sqrt{lW}}\cos\frac{2n\pi x}{l}\cos\frac{2m\pi y}{W}\qquad(6.41)$$
[Figure 6.1: Comparison of Dressed-t Theory with Numerical Simulation. Upper panel: "Successive Over-Relaxation and Dressed-t Matrix: Ground State Energies" — E versus scattering length (radius), comparing SOR and Dressed-t Matrix results. Lower panel: "Numerical Simulation and Dressed-t Theory for Ground State Energy Shifts (small a)" — simulation points compared with the full theory and $E_g$.]
$$\psi^{(3)}_{nm}(x,y)=\frac{2}{\sqrt{lW}}\sin\frac{2n\pi x}{l}\cos\frac{2m\pi y}{W}\qquad(6.42)$$
$$\psi^{(4)}_{nm}(x,y)=\frac{2}{\sqrt{lW}}\cos\frac{2n\pi x}{l}\sin\frac{2m\pi y}{W}\qquad(6.43)$$
with eigenvalues
$$E_{nm}=\frac{4n^2\pi^2}{l^2}+\frac{4m^2\pi^2}{W^2},\qquad(6.44)$$
all of which satisfy
$$-\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)\psi_{nm}(x,y)=E_{nm}\,\psi_{nm}(x,y).\qquad(6.45)$$
Note that $m=0$ and $n=0$ are permissible, but only the cosine state survives there, so there is no degeneracy. We can thus write down a box Green function in the position representation,
$$G_B(x,y,x',y';E)=\frac{1}{lWE}+\frac{4}{lW}\sum_{n,m=1}^{\infty}\frac{\cos\left(\frac{2n\pi|x-x'|}{l}\right)\cos\left(\frac{2m\pi|y-y'|}{W}\right)}{E-\frac{4n^2\pi^2}{l^2}-\frac{4m^2\pi^2}{W^2}}\qquad(6.46)$$
where trigonometric identities have been applied to collapse the sines and cosines into just two cosines. We note that this Green function depends only on $|x-x'|$ and $|y-y'|$, as it must.
6.2.2 Resumming GB
As with the Dirichlet case, we'd like to resum this Green function to make it a single sum, both to create an easier form for numerical use and for the $G_o-G_B$ subtraction. We begin by reorganizing $G_B$ as follows:
$$G_B(x,y,x',y';E)=\frac{1}{lWE}+\frac{4}{lW}\sum_{n=1}^{\infty}\cos\left(\frac{2n\pi|y-y'|}{W}\right)\sum_{m=1}^{\infty}\frac{\cos\left(\frac{2m\pi|x-x'|}{l}\right)}{E-\frac{4n^2\pi^2}{W^2}-\frac{4m^2\pi^2}{l^2}}\qquad(6.47)$$
and then apply the following Fourier series identity (see, e.g., [18]) to the inner sum:
$$\sum_{k=1}^{\infty}\frac{\cos kx}{a^2-k^2}=\frac{\pi\cos[a(\pi-x)]}{2a\sin\pi a}-\frac{1}{2a^2}\qquad(6.48)$$
for $0\le x\le2\pi$. We define
$$N=\left[\frac{W\sqrt E}{2\pi}\right]\qquad(6.49)$$
$$k_n=\sqrt{E-\frac{4n^2\pi^2}{W^2}}\qquad(6.50)$$
$$\kappa_n=\sqrt{\frac{4n^2\pi^2}{W^2}-E}\qquad(6.51)$$
$$X=|x-x'|\qquad(6.52)$$
$$Y=|y-y'|\qquad(6.53)$$
where $[x]$ is the greatest integer less than or equal to $x$. We now have
$$G_B(X,Y;E)=\frac{\cos\left[\sqrt E\left(\frac l2-X\right)\right]}{2W\sqrt E\sin\left(\frac{\sqrt E\,l}{2}\right)}+\frac1W\sum_{n=1}^{N}\cos\left(\frac{2\pi nY}{W}\right)\frac{\cos\left[k_n\left(\frac l2-X\right)\right]}{k_n\sin\left(\frac{k_nl}{2}\right)}-\frac1W\sum_{n=N+1}^{\infty}\cos\left(\frac{2\pi nY}{W}\right)\frac{\cosh\left[\kappa_n\left(\frac l2-X\right)\right]}{\kappa_n\sinh\left(\frac{\kappa_nl}{2}\right)}.\qquad(6.54)$$
We now follow a similar derivation to the one for Dirichlet boundaries. We choose an $M>N$ such that for $n>M$ we may approximate $\kappa_n\approx\frac{2\pi n}{W}$. We can then approximate $G_B$ by a finite sum plus a logarithm term arising from the highest terms in the sum. That tail looks like
$$-\frac{1}{2\pi}\sum_{n=M+1}^{\infty}\frac1n\cos\left(\frac{2\pi nY}{W}\right)e^{-2\pi n\bar X/W}\qquad(6.55)$$
where $\bar X=X\bmod l$. We can sum this using (see, e.g., [18])
$$\sum_{k=1}^{\infty}\frac{x^k}{k}=\ln\frac{1}{1-x}.\qquad(6.56)$$
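Both series identities, (6.48) and (6.56), are easy to check numerically before trusting the resummation; a sketch (the loose tolerance on the first reflects its slow 1/k² tail):

```python
import math

def log_series(x, terms=400):
    """(6.56): sum_{k>=1} x^k / k -> ln(1/(1-x)) for |x| < 1."""
    return sum(x ** k / k for k in range(1, terms + 1))

def cosine_sum(x, a, terms=200000):
    """Left side of (6.48): sum_{k>=1} cos(k x) / (a^2 - k^2)."""
    return sum(math.cos(k * x) / (a * a - k * k) for k in range(1, terms + 1))

def cosine_sum_closed(x, a):
    """Right side of (6.48); valid for 0 <= x <= 2*pi and non-integer a."""
    return (math.pi / (2 * a)) * math.cos(a * (math.pi - x)) / math.sin(math.pi * a) \
        - 1.0 / (2 * a * a)
```

The geometric-log identity converges to machine precision in a few hundred terms; the cosine sum's 1/k² tail means direct summation only reaches ~10⁻⁵, which is precisely why the analytic resummation is worth the trouble numerically.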
We define
$$S^{(p)}_M(X,Y;E)=\frac{\cos\left[\sqrt E\left(\frac l2-X\right)\right]}{2W\sqrt E\sin\left(\frac{\sqrt E\,l}{2}\right)}+\frac1W\sum_{n=1}^{N}\cos\left(\frac{2\pi nY}{W}\right)\frac{\cos\left[k_n\left(\frac l2-X\right)\right]}{k_n\sin\left(\frac{k_nl}{2}\right)}-\frac1W\sum_{n=N+1}^{M}\cos\left(\frac{2\pi nY}{W}\right)\frac{\cosh\left[\kappa_n\left(\frac l2-X\right)\right]}{\kappa_n\sinh\left(\frac{\kappa_nl}{2}\right)}\qquad(6.57)$$
and write our approximate $G_B$ as
$$G_B(X,Y;E)\approx S^{(p)}_M(X,Y;E)+\frac{1}{4\pi}\ln\left[1-2e^{-2\pi\bar X/W}\cos\left(\frac{2\pi Y}{W}\right)+e^{-4\pi\bar X/W}\right]+\frac{1}{2\pi}\sum_{n=1}^{M}\frac1n\cos\left(\frac{2\pi nY}{W}\right)e^{-2\pi n\bar X/W}.\qquad(6.58)$$
6.2.3 Explicit Expression for $\Delta G_B(r,E)$
With this resummed version of $G_B$ we can write an explicitly finite expression for
$$\Delta G_B(r,E)=G_B(r,r;E)-G_o(r,r;E)\qquad(6.59)$$
even though both $G_B(r,r;E)$ and $G_o(r,r;E)$ are (logarithmically) infinite. As with the Dirichlet case, all we need to do is carefully manipulate the logarithm in $G_B$ in the $X,Y\to0$ limit. Our answer is
$$\Delta G_B(r,E)=S^{(p)}_M(0,0;E)-\frac{1}{2\pi}\ln\frac{kW}{2\pi}+\frac{1}{2\pi}\sum_{n=1}^{M}\frac1n-\frac14Y_o^{(R)}(0)+\frac i4.\qquad(6.60)$$
Chapter 7
Disordered Systems
7.1 Disorder Averages
7.1.1 Averaging
When working with disordered systems we are rarely interested in a particular realization of the disorder but instead in average properties. The art in calculating quantities in such systems is cleverly approximating these averages in ways which are appropriate for specific questions. In this section we will consider only the average Green function of a disordered system, $\langle G\rangle$ (rather than, for instance, higher moments of $G$). Good treatments of these approximations and more powerful approximation schemes may be found in [10, 34, 9]. Suppose, at fixed energy, we have $N$ ZRI's with individual t-matrices $\hat t_i(E)=s_i|r_i\rangle\langle r_i|$. Any property of the system depends, at least in principle, on all the variables $s_i$, $r_i$. We imagine that there are no correlations between different scatterers (location or strength) and that each has the same distribution of locations and strengths. Thus we can define the ensemble average of the Green function operator $G(E)$:
$$\langle G\rangle=\int_V\left[\prod_{i=1}^{N}dr_i\,ds_i\,\rho_r(r_i)\,\rho_s(s_i)\right]G.\qquad(7.1)$$
We will typically use uniformly distributed scatterers and fixed scattering strengths. In this case $\rho_r(r_i)=\frac1V$, where $V$ is the volume of the system, and $\rho_s(s_i)=\delta(s_i-s_o)$, where $s_o$ is the fixed scatterer strength. We can write $G$ as $G_o+G_oTG_o$ and, since $G_o$ is independent of the disorder,
$$\langle G\rangle=G_o+G_o\langle T\rangle G_o.\qquad(7.2)$$
Thus we must compute $\langle T\rangle$. This is such a useful quantity that there is quite a bit of machinery developed just for this computation.
7.1.2 Self-Energy
The self-energy, $\Sigma$, is a sort of average potential (though it is not $\langle V\rangle$) defined via
$$\langle G\rangle=G_o+G_o\Sigma\langle G\rangle\qquad(7.3)$$
and thus
$$\langle G\rangle^{-1}=G_o^{-1}-\Sigma.\qquad(7.4)$$
This last equation explains why we call $\Sigma(E)$ the self-energy:
$$G_o^{-1}-\Sigma=[E-\Sigma]-H_o.\qquad(7.5)$$
Within the first two approximations we discuss, the self-energy is just proportional to the identity operator, so it can be thought of as just shifting the energy. We can also use (7.3) to find $\Sigma$ in terms of $\langle T\rangle$:
$$\Sigma=\langle T\rangle\left(1+G_o\langle T\rangle\right)^{-1}\qquad(7.6)$$
or $\langle T\rangle$ in terms of $\Sigma$:
$$\langle T\rangle=\Sigma\left(1-G_o\Sigma\right)^{-1}.\qquad(7.7)$$
Thus knowledge of either $\langle T\rangle$ or $\Sigma$ is equivalent. Recall that $G=G_o+G_oTG_o=G_o+G_oVG_o+G_oVG_oVG_o+\cdots$ means that the amplitude for a particle to propagate from one point to another is the sum of the amplitude for it to propagate from the initial point to the final point without interacting with the potential and the amplitude for it to propagate from the initial point to the potential, interact with the potential one or more times, and then propagate to the final point. We can illustrate this diagrammatically:
G = [free propagation] + [one interaction] + [two interactions with one or two impurities] + [three interactions] + ⋯  (7.8)
where solid lines represent free propagation ($G_o$) and a dashed line ending in an "×" indicates an interaction with the impurity potential ($V$). Each different "×" represents an interaction with the impurity potential at a different impurity. When multiple lines connect
to the same interaction vertex, the particle has interacted with the same impurity multiple times. An irreducible diagram is one which cannot be divided into two subdiagrams just by cutting a solid line (a free propagator). The self-energy, $\Sigma$, is equivalent to a sum over only irreducible diagrams (with the incoming and outgoing free propagators removed):

Σ = [sum of all irreducible diagrams]  (7.9)
which is enough to evaluate G since we can build all the diagrams from the irreducible ones by adding free propagators:
$$\langle G\rangle=G_o+G_o\Sigma G_o+G_o\Sigma G_o\Sigma G_o+\cdots=G_o\left(1-\Sigma G_o\right)^{-1}.\qquad(7.10)$$
There are a variety of standard techniques for evaluating the self-energy. The simplest approximation used is known as the "Virtual Crystal Approximation" (VCA). This is equivalent to replacing the sum over irreducible diagrams by the first diagram in the sum, i.e., $\Sigma=\langle V\rangle$. Since we don't use the potential itself, this approximation is actually more complicated to apply than the more accurate "Average t-Matrix Approximation" (ATA). We note that, in a system where the impurity potential is known, $\langle V\rangle$ is just a real number and so the VCA just shifts the energy by the average value of the potential. The ATA is a more sophisticated approximation that replaces the sum (7.9) by the sum of all terms that involve a single impurity,

Σ = [sum of all single-impurity irreducible diagrams]  (7.11)

but this is, up to averaging, the same as the single scatterer t-matrix, $t_i$. Thus the ATA is equivalent to $\Sigma=\langle\sum_i t_i\rangle$. This approximation neglects diagrams which involve scattering from two or more impurities. We note that scattering from two or more impurities is included in $G$, just not in $\Sigma$. Of course, while scattering from several impurities is accounted for in $G$, interference between scattering from various impurities is
neglected, since diagrams which scatter from one impurity, then other impurities, and then the first impurity again are dropped. That is, repeated returns to the same impurity separated by visits to other impurities are included in $G$ but not in $\Sigma$. At low concentrations, such terms are quite small. However, as the concentration increases, these diagrams contribute important corrections to $G$. One such correction comes from coherent backscattering, which we'll discuss in greater detail in section 7.5. We will use the ATA below to show that the classical limit of the quantum mean free path is equal to the classical mean free path. For $N$ uniformly distributed fixed-strength scatterers, the average is straightforward:
$$\left\langle\sum_{i=1}^{N}\hat t_i\right\rangle=\left[\prod_{i=1}^{N}\frac1V\int dr_i\,ds_i\,\delta(s_i-s_o)\right]\sum_{i=1}^{N}s_i\,\delta(r_i-r).\qquad(7.12)$$
For each term in the sum, the $r$ delta function will do one of the volume integrals and the rest will simply integrate to $V$, canceling the factors of $1/V$ out front. The $s$ delta functions will do all of the $s$ integrals, leaving
$$\left\langle\sum_{i=1}^{N}\hat t_i\right\rangle=\frac{Ns_o}{V}=ns_o.\qquad(7.13)$$
Thus the self-energy is simply proportional to the scattering strength multiplied by the concentration. We note that $s_o$ is in general complex and this will make the poles of the Green function complex, implying an exponential decay of amplitude as a wave propagates. We will interpret this decay in terms of the wave-mechanical mean free time.
7.2 Mean Free Path
7.2.1 Classical Mechanics
Consider a domain of volume $V$ in $d$ dimensions with reflective walls. $V$ has units of $[\text{length}]^d$. Suppose we place $N$ scattering centers in this domain at random (uniformly distributed), each with classical cross section $\sigma$. $\sigma$ has units of $[\text{length}]^{d-1}$. Now suppose we have a point particle that has just scattered off of one of the scattering centers. It now points in a random direction. What is the probability that it can travel a distance $\ell$ without scattering again? If the particle travels a distance $x$ without scattering, then there must be a tube of volume $x\sigma$ which is empty of scattering centers. The probability of that is given by the product of the chances that each of the $N$ scatterers (without the reflective walls this would be $N-1$, but since we will take $N$ large and we do have reflective walls, we'll leave it as $N$) is not in the volume $x\sigma$. That chance is $1-\frac{x\sigma}{V}$, so
$$P^{(N)}(x)=\left(1-\frac{x\sigma}{V}\right)^N.\qquad(7.14)$$
More precisely, $1-P^{(N)}(x)$ is the probability that the free path length is less than or equal to $x$. We define $n=N/V$, the concentration. So
$$P^{(N)}(x)=\left(1-\frac{x\sigma n}{N}\right)^N.\qquad(7.15)$$
We take the $N\to\infty$ limit with $n=\text{const.}$, which is valid for infinite systems and a good approximation when the mean free path is smaller than the system size. We have
$$P(x)=\lim_{N\to\infty}P^{(N)}(x)=e^{-n\sigma x}\qquad(7.16)$$
and thus the quasi-classical mean free path is
$$\ell_{qc}=\langle x\rangle=\int_0^\infty x\,\frac{\partial}{\partial x}\left(1-P(x)\right)dx=\int_0^\infty P(x)\,dx=\frac{1}{n\sigma}.\qquad(7.17)$$
Quasi-classical (indicated by the subscript "qc") here means that the transport between scattering events is classical but the cross-section of each scatterer is computed from quantum mechanics.
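The finite-N survival probability (7.14), its limit (7.16), and the mean free path integral (7.17) can all be sanity-checked in a few lines; a sketch (parameter values in the test are arbitrary):

```python
import math

def survival_finite(x, N, sigma, V):
    """(7.14): chance that none of N scatterers sits in the tube of volume sigma*x."""
    return max(0.0, 1.0 - sigma * x / V) ** N

def survival_limit(x, n, sigma):
    """(7.16): the N -> infinity limit at fixed concentration n = N/V."""
    return math.exp(-n * sigma * x)

def mean_free_path_numeric(n, sigma, xmax=200.0, steps=200000):
    """(7.17): ell = integral_0^infinity P(x) dx, by the trapezoid rule.
    xmax must be large compared with 1/(n*sigma) for the tail to be negligible."""
    dx = xmax / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        total += w * survival_limit(i * dx, n, sigma)
    return total * dx
```

With, say, n = 0.5 and σ = 2, the numeric integral reproduces ℓ = 1/(nσ) = 1 to high accuracy, and the finite-N form already matches the exponential limit for N ~ 10⁶.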
7.2.2 Quantum Mechanics
The mean free path is not so simple to define for a quantum mechanical system. After all, a particle no longer has a trajectory or a well defined distribution of path lengths. What then do we mean by the mean free path in a quantum mechanical system? One possibility is simply to replace the classical cross-section in the expression (7.17) with the quantum mechanical cross-section (we'll refer to this as the "quasi-classical"
mean free path). In what follows we'll show that this is equivalent to the low-density weak-scattering approximation to the self-energy discussed above. We begin by noting that the free Green function takes a particularly simple form in the momentum representation:
$$G_o(p,p';E)=\frac{\delta(p-p')}{E-E_p}.\qquad(7.18)$$
From this and the low-density weak-scattering approximation to $\Sigma$ we have
$$G(p,p';E)=\frac{\delta(p-p')}{E-\Sigma-E_p}.\qquad(7.19)$$
If we write $\Sigma=\Delta-i\Gamma$ we have
$$G(p,p';E)=\frac{\delta(p-p')}{E-\Delta+i\Gamma-E_p}.\qquad(7.20)$$
Now we consider the Fourier transform of this Green function with respect to energy, which gives us the time-domain Green function in the momentum representation (we are ignoring the energy dependence of $\Sigma$ only for simplicity):
$$G(p,p';t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}dE\,e^{-iEt}\,G(p,p';E)=-i\,\delta(p-p')\,e^{-i(E_p+\Delta)t}\,e^{-\Gamma t}\qquad(t>0)$$
which implies an exponential attenuation of the wave if $-\Gamma=\mathrm{Im}\{\Sigma\}$ is negative. For the ATA, we have $\Sigma=ns_o$ which, for a two-dimensional ZRI with scattering length $a$, is $\Sigma=\frac{4nJ_o(\sqrt Ea)}{iH_o(\sqrt Ea)}$, and thus
$$-\Gamma=-4n\,\frac{J_o^2(\sqrt Ea)}{\left|H_o(\sqrt Ea)\right|^2}\qquad(7.21)$$
which is manifestly negative. We can associate the damping with a mean free time, $\tau$, via $\Gamma=1/2\tau$. Since, at fixed energy, the velocity (in units where $\hbar^2/2m=1$) is $v=2\sqrt E$, we have for the mean free path, $\ell$,
$$\ell=v\tau=\frac{\sqrt E\left|H_o(\sqrt Ea)\right|^2}{4n\,J_o^2(\sqrt Ea)}=\frac{1}{n\sigma}\qquad(7.22)$$
reproducing the quasi-classical result.
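Since the ATA mean free path involves only J₀ and Y₀, it is simple to evaluate; the sketch below codes the standard small-argument Bessel series inline to stay self-contained (`GAMMA` is the Euler-Mascheroni constant; function names are ours):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def j0(x, terms=30):
    """Series J0(x) = sum_k (-1)^k (x^2/4)^k / (k!)^2; fine for small/moderate x."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(x * x / 4) / ((k + 1) ** 2)
    return s

def y0(x, terms=30):
    """Y0(x) = (2/pi)[(ln(x/2)+gamma) J0(x) + sum_k (-1)^{k+1} H_k (x^2/4)^k/(k!)^2]."""
    s, term, H = 0.0, 1.0, 0.0
    for k in range(1, terms):
        term *= -(x * x / 4) / (k * k)   # (-1)^k (x^2/4)^k / (k!)^2
        H += 1.0 / k                     # harmonic number H_k
        s += -term * H                   # (-1)^{k+1} H_k (x^2/4)^k / (k!)^2
    return (2 / math.pi) * ((math.log(x / 2) + GAMMA) * j0(x) + s)

def mean_free_path_ata(E, n, a):
    """(7.22): ell = sqrt(E)|H0|^2/(4 n J0^2) at ka = sqrt(E)*a, hbar^2/2m = 1."""
    ka = math.sqrt(E) * a
    J, Y = j0(ka), y0(ka)
    gamma_decay = 4 * n * J * J / (J * J + Y * Y)  # Gamma = -Im Sigma (ATA)
    tau = 1.0 / (2 * gamma_decay)                  # mean free time
    v = 2 * math.sqrt(E)                           # velocity
    return v * tau
```

As (7.22) requires, ℓ scales as 1/n: doubling the concentration halves the mean free path.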
7.3 Properties of Randomly Placed ZRI's as a Disordered Potential
In this section we will be concerned with averages of $T$ in the momentum representation. As in 7.2.2 we will assume the validity of the low-density weak-scattering approximation to $T$:
$$T\approx\sum_{i=1}^{N}s_i|r_i\rangle\langle r_i|.\qquad(7.23)$$
In momentum space, this can be written
$$\langle k|T|k'\rangle=T(k,k')\approx\sum_{i=1}^{N}s_i\,e^{-i(k-k')\cdot r_i}.\qquad(7.24)$$
Using the average (7.1), we have
$$\langle T(k,k')\rangle\approx N\langle s\rangle\,f(k-k')\qquad(7.25)$$
where
$$f(q)=\frac{1}{V}\int_\Omega e^{-iq\cdot r}\,dr.\qquad(7.26)$$
The function $f$ has two important properties. Firstly, $f(0)=1$, which implies, as we saw in 7.1.1, that $\langle T\rangle=N\langle s\rangle$. Also, when the bounding region, $\Omega$, is all of space we have
$$f(q)=\delta_{q,0}.\qquad(7.27)$$
Together, these properties imply that the average ATA t-matrix cannot change the momentum of a scattered particle except insofar as the system is finite. A finite system will give a region of low momentum where momentum transfer can occur, but for momenta larger than $1/L_o$, momentum transfer will still be suppressed. We now consider the second moment of $T$ or, more specifically,
$$\left\langle\left|T(k,k')\right|^2-\left|\langle T(k,k')\rangle\right|^2\right\rangle.\qquad(7.28)$$
Since
$$\left|T(k,k')\right|^2\approx\sum_{i,j=1}^{N}s_is_j^*\,e^{-i(k-k')\cdot r_i}\,e^{i(k-k')\cdot r_j}\qquad(7.29)$$
we have
$$\left\langle\left|T(k,k')\right|^2\right\rangle=\sum_{i=1}^{N}\left\langle|s|^2\right\rangle+\sum_{i\ne j}^{N}\left|\langle s\rangle\right|^2\left|f(k-k')\right|^2=N\left\langle|s|^2\right\rangle+\left(N^2-N\right)\left|\langle s\rangle\right|^2\left|f(k-k')\right|^2.\qquad(7.30)$$
Thus
$$\left\langle\left|T(k,k')\right|^2\right\rangle-\left|\langle T(k,k')\rangle\right|^2\approx N\left[\left\langle|s|^2\right\rangle-\left|\langle s\rangle\right|^2\left|f(k-k')\right|^2\right].\qquad(7.31)$$
We note that if $s_i=s_o\ \forall i$ we have
$$\left\langle\left|T(k,k')\right|^2\right\rangle-\left|\langle T(k,k')\rangle\right|^2\approx N|s_o|^2\left[1-\left|f(k-k')\right|^2\right].\qquad(7.32)$$
In this case,
$$\left\langle\left|T(k,k)\right|^2\right\rangle-\left|\langle T(k,k)\rangle\right|^2\approx0.\qquad(7.33)$$
At this point it is worth considering our geometry and computing $f$ explicitly. We'll assume we are placing scatterers in a rectangle with length ($x$-direction) $a$ and width ($y$-direction) $b$. Then we have, up to an arbitrary phase,
$$f(q)=\mathrm{sinc}\,\frac{q_xa}{2}\,\mathrm{sinc}\,\frac{q_yb}{2}\qquad(7.34)$$
where
$$\mathrm{sinc}\,x=\frac{\sin x}{x}.\qquad(7.35)$$
Thus $1-\left|f(k-k')\right|^2$ is zero for $k=k'$ and then grows to 1 for larger momentum transfer. The zero-momentum-transfer "hole" in the second moment of $T$ is an artifact of a potential made up of a fixed number of fixed-size scatterers. To make contact with the standard condensed matter theory of disordered potentials, we should allow those fixed numbers to vary, thus making a more nearly constant second moment of $T$. We can do this easily enough by allowing the size of the scatterers to vary as well. Then $\langle|s|^2\rangle-|\langle s\rangle|^2\ne0$. In fact we should choose a distribution of scatterer sizes such that $\langle|s|^2\rangle-|\langle s\rangle|^2\approx\langle|s|^2\rangle$. Of course, the scatterer strength, $s$, is not directly proportional to the scattering length. Suppose, for example, that the scattering length varies uniformly over a small range $\Delta a$. It is straightforward to show that, for small $\Delta a$,
$$\langle|s|^2\rangle-|\langle s\rangle|^2\approx\frac{(k\Delta a)^2}{12}\left|\frac{ds}{d(ka)}\right|^2.\qquad(7.36)$$
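The sinc form (7.34) can be checked against a direct Monte Carlo average of e^{−iq·r} over scatterer positions (positions measured from the rectangle's centre, which fixes the arbitrary phase and makes f real); a sketch:

```python
import math, random

def sinc(u):
    """sinc x = sin(x)/x with the removable singularity filled in."""
    return 1.0 if u == 0 else math.sin(u) / u

def f_exact(qx, qy, a, b):
    """(7.34): f(q) = sinc(qx a/2) sinc(qy b/2) for a uniform a-by-b rectangle."""
    return sinc(qx * a / 2) * sinc(qy * b / 2)

def f_monte_carlo(qx, qy, a, b, samples=500000, seed=1):
    """<e^{-i q.r}> over uniform positions; imaginary part vanishes by symmetry."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = rng.uniform(-a / 2, a / 2)
        y = rng.uniform(-b / 2, b / 2)
        total += math.cos(qx * x + qy * y)
    return total / samples
```

The agreement is at the ~10⁻³ level for half a million samples, and `f_exact(0, 0, ...)` returns exactly 1, the property that fixes ⟨T⟩ = N⟨s⟩ above.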
7.4 Eigenstate Intensities and the Porter-Thomas Distribution
One interesting quantity in a bounded quantum system with discrete spectrum (e.g., a Dirichlet square with random scatterers inside) is the distribution of wavefunction intensities, $t$, at fixed energy $E$,
$$P(t)=\Delta\left\langle\sum_\nu\delta\!\left(t-\left|\psi_\nu(r_o)\right|^2\right)\delta(E-E_\nu)\right\rangle\qquad(7.37)$$
where $\Delta$ is the mean level spacing and $r_o$ is a specific point in the system. Since $P(t)$ is a probability distribution, we have $\int_0^\infty P(t)\,dt=1$, and since the integrated square wavefunction is normalized to 1, we also have $\int_0^\infty tP(t)\,dt=1/V$. Though computing this quantity in general is quite hard, we can, as a baseline, compute it in one special case. As a naive guess at the form of the wavefunction in a disordered system, we conjecture that the wavefunction is a Gaussian random variable at each point in the system. That is, we assume that the distribution of values of $\psi$, $\rho(\psi)$, is
$$\rho(\psi)=ae^{-b\psi^2}\qquad(7.38)$$
where $a$ and $b$ are constants to be determined. We note that in a bounded system, we can make all the wavefunctions real. We proceed to determine the constants $a$ and $b$. First, we use the fact that $\rho$ is a probability distribution, so $\int_{-\infty}^{\infty}\rho(x)\,dx=1$, which gives
$$1=a\int_{-\infty}^{\infty}e^{-bx^2}\,dx=a\sqrt{\frac\pi b}\qquad(7.39)$$
implying $a=\sqrt{b/\pi}$. We also know that the normalization of $\psi$ implies $\int_{-\infty}^{\infty}x^2\rho(x)\,dx=1/V$, which implies
$$\frac1V=a\int_{-\infty}^{\infty}x^2e^{-\pi a^2x^2}\,dx=\frac{1}{2\pi a^2}\qquad(7.40)$$
implying $b=\pi a^2=V/2$. So we have
$$\rho(\psi)=\sqrt{\frac{V}{2\pi}}\,e^{-V\psi^2/2}.\qquad(7.41)$$
From this, we can compute $P(t)$ via
$$P(t)=\int_{-\infty}^{\infty}\delta\!\left(t-x^2\right)\rho(x)\,dx=\int_{-\infty}^{\infty}\frac{\delta(x-\sqrt t)+\delta(x+\sqrt t)}{2|x|}\,\rho(x)\,dx.\qquad(7.42)$$
We use the delta functions to do the integral and have
$$P(t)=\frac{\rho(\sqrt t)}{\sqrt t}.\qquad(7.43)$$
After substituting our previous result for $\rho$, we have
$$P(t)=\sqrt{\frac{V}{2\pi t}}\,e^{-Vt/2}\qquad(7.44)$$
which is known as the "Porter-Thomas" distribution. If time-reversal symmetry is broken, e.g., by a magnetic field, the wavefunction will, in general, be complex. In that case we can use the same argument where the real part and imaginary part of $\psi$ are each Gaussian random variables. In that case we get
$$P(t)=Ve^{-Vt}.\qquad(7.45)$$
This difference between time-reversal symmetric systems and their non-symmetric counterparts is a recurring motif in disordered quantum systems. A derivation of the above results from Random Matrix Theory (using the assumption that the Hamiltonian is a random matrix with the symmetries of the system) is available in many places, for example [14]. In this language, the Hamiltonians of time-reversal invariant systems are part of the "Gaussian Orthogonal Ensemble" (GOE) whereas Hamiltonians for systems without time-reversal symmetry are part of the "Gaussian Unitary Ensemble" (GUE). We will adopt this bit of terminology in what follows. Of course, most systems do not behave exactly as the appropriate random matrix ensemble would indicate. These differences manifest themselves in a variety of properties of the system. In the numerical simulations which follow in chapters 8 and 9 we will see these departures quite clearly. For disordered systems, GOE behavior is expected when there are many weak scattering events in every path which traverses the disordered region, guaranteeing diffusive transport without significant quantum effects from scattering phases. More precisely, to see GOE behavior, we expect to need a mean free path which is much smaller than the system size ($\ell\ll L$) but much larger than the wavelength ($\ell\gg\lambda$). We will see this limit emerge in wavefunction statistics in chapter 9.
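The Gaussian ansatz behind (7.44) is easy to test by direct sampling (units with V = 1, so ⟨t⟩ = 1); a sketch:

```python
import math, random

def porter_thomas(t):
    """(7.44) with V = 1: P(t) = e^{-t/2} / sqrt(2 pi t)."""
    return math.exp(-t / 2) / math.sqrt(2 * math.pi * t)

def sample_intensities(num, seed=2):
    """t = psi^2 with psi a real unit-variance Gaussian (the random-wave ansatz)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) ** 2 for _ in range(num)]

def fraction_above(ts, cut):
    """Empirical tail probability P(t > cut)."""
    return sum(1 for t in ts if t > cut) / len(ts)
```

The sampled intensities have mean 1 (the normalization constraint) and a tail P(t > 1) ≈ 0.317, both in agreement with the Porter-Thomas density.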
7.5 Weak Localization
In the previous section, we discussed one consequence of the assumption of a random wavefunction. Of course, the wavefunction is not random; it shows the consequences of the underlying classical dynamics. In the next few sections we explore some consequences of the underlying dynamics for the quantum properties. Weak localization is a generic name for enhanced probability of finding a particle in a given region due to the short-time classical return probability. In quantized classically chaotic (but not disordered) systems, wavefunction scarring [20] is the best known form of weak localization. In disordered systems, the most important consequence of weak localization is the reduction of conductance due to coherent backscattering. It is not difficult to estimate the coherent backscattering correction to the conductance. We begin by noting the conductance we expect for a wire with no coherent backscattering. Specifically, when $L\gg\ell\gg\lambda$ we expect the DC conductivity of a disordered wire to satisfy the Einstein relation
$$\sigma=e^2\rho_d\,D\qquad(7.46)$$
where $\sigma$ is the conductivity, $e$ is the charge of the electron, $\rho_d$ is the $d$-dimensional density of states per unit volume, and $D$ is the classical diffusion constant. The DC conductivity, $\sigma$, is proportional to $P(r_1,r_2)$, the probability that a particle starting at point $r_1$ on one side of the system reaches $r_2$ on the other side. Quantum mechanically, this quantity can be evaluated semiclassically by a sum over classical paths, $p$,
$$P(r_1,r_2)=\left|\sum_pA_p\right|^2\qquad(7.47)$$
where $A_p=|A_p|e^{iS_p}$ and $S_p$ is the integral of the classical action over the path. The quantum probability differs from the classical in the interference terms:
$$P(r_1,r_2)=P(r_1,r_2)_{\text{classical}}+\sum_{p\ne p'}A_pA_{p'}^*.\qquad(7.48)$$
Typically, disorder averaging washes out the interference term. However, when $r_1=r_2$, the terms arising from paths which are time-reversed partners will have strong interference even after averaging, since they will always have canceling phases. Since every path has a time-reversed partner, we have
$$\langle P(r,r)\rangle=2P(r,r)_{\text{classical}}.\qquad(7.49)$$
But this enhanced return probability implies a suppressed conductance, since $\int P(r,r')\,dr'=1$ by conservation of probability. Thus $\sigma$ must be smaller by an amount proportional to $P(r,r)_{\text{classical}}$ due to this interference effect. But $P(r,r)_{\text{classical}}$ is something we can compute straightforwardly. If we define $R(t)$ to be the probability that a particle which left the point $r$ at time $t=0$ returns at
time $t$, we have
$$P(r,r)_{\text{classical}}=\int_\tau^{t_c}R(t)\,dt.\qquad(7.50)$$
The lower cutoff, $\tau=\ell/v$, is there since our particle must scatter at least once to return and that takes a time of order the mean free time. The upper cutoff is present since we have only a finite disordered region and so, after a time $t_c=L_o^2/D$, the particle has diffused out and will not return. For a square sample, $L_o$ is ambiguous up to a factor of $\sqrt2$. The upper cutoff can also be provided by a phase coherence time $\tau_\phi$. If particles lose phase coherence, for instance by interaction with a finite temperature heat bath, only paths which take less time than $\tau_\phi$ will interfere. In this case the expression for the classical return probability is slightly modified:
$$P(r,r)_{\text{classical}}=\int_\tau^\infty e^{-t/\tau_\phi}R(t)\,dt.\qquad(7.51)$$
The return probability, $R(t)dt$, can be estimated for a diffusive system. Of all the trajectories that scatter, only those that pass within a volume $\sigma v\,dt$ of the origin contribute. The probability that a scattered particle falls within that volume is just the ratio of it to the total volume of diffusing trajectories, $(Dt)^{d/2}V_d$, where $d$ is the effective number of dimensions (the number of dimensions of the disordered sample $\gg\ell$) and $V_d=\pi^{d/2}/(d/2)!$ is the volume of the unit sphere in $d$ dimensions (this is easily calculated using products of Gaussian integrals; see, e.g., [32] pp. 501-2) and $D=v\ell/d$. So
$$R(t)\,dt=\frac{\sigma v\,dt}{(Dt)^{d/2}V_d}.\qquad(7.52)$$
With this expression for $R(t)dt$ in hand, we can do the integral (7.50) and get
$$P(r,r)_{\text{classical}}=\frac{\sigma v}{D^{d/2}V_d}\times\begin{cases}2\left(\sqrt{t_c}-\sqrt\tau\right)&d=1\\[2pt]\ln(t_c/\tau)&d=2\\[2pt]\dfrac{2}{d-2}\left(\tau^{1-d/2}-t_c^{1-d/2}\right)&d>2.\end{cases}\qquad(7.53)$$
For future reference we state the specific results for one and two dimensions. Rather than state the result of the estimation above, we give the correct leading-order results (computed by diagrammatic perturbation theory; see, e.g., [4, 12]). These results have the same dependence on $\ell$ and $L_o$ but slightly different prefactors than our estimate:
$$\delta\sigma=-\frac{e^2}{\pi\hbar}\times\begin{cases}\sqrt2\,L_o-\ell&d=1\\[2pt]\ln\left(\sqrt2\,L_o/\ell\right)&d=2.\end{cases}\qquad(7.54)$$
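The integral (7.50) with the estimate (7.52) can be evaluated numerically and compared with the closed forms in (7.53); a sketch (trapezoid rule; the values of σ, v, D, τ, t_c in the test are arbitrary):

```python
import math

def return_prob(d, sigma, v, D, tau, tc, steps=200000):
    """(7.50) with (7.52): integral_tau^tc sigma*v dt / ((D t)^{d/2} V_d),
    where V_d = pi^{d/2} / (d/2)! is the d-dimensional unit-sphere volume."""
    Vd = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    dt = (tc - tau) / steps
    total = 0.0
    for i in range(steps + 1):
        t = tau + i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * sigma * v / ((D * t) ** (d / 2) * Vd)
    return total * dt
```

For d = 2 (V₂ = π) the result is (σv/πD) ln(t_c/τ), and for d = 1 (V₁ = 2) it is (σv/2√D)·2(√t_c − √τ), matching (7.53) to numerical-quadrature accuracy.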
7.6 Strong Localization
Strong localization, characterized by the exponential decay of the wavefunction around some localization site, has dramatic consequences for both transport and wavefunction statistics. Strong enough disorder can always exponentially localize the wavefunction. Weak disorder can sometimes localize the wavefunction, but this depends on the strength of the disorder and the dimensionality of the system. Below, we sketch out an argument, originally due to Thouless, which clarifies how strong localization occurs as well as its dimensional dependence. The idea is simple. As a wavepacket diffuses we consider, at each instant of time, a $d$-dimensional box of volume $\Omega=Cr^d=C(Dt)^{d/2}$ which surrounds it. At each moment we can project the wavepacket onto the eigenstates of the surrounding box. The average energy level spacing in that box is
$$\Delta E=\frac{1}{\rho\,\Omega}\propto(Dt)^{-d/2}E^{1-\frac d2}.\qquad(7.55)$$
However, the wavepacket has been diffusing for a time $t$ and so we can use its autocorrelation function, $A(t)$, defined by
$$A(t)=\langle\psi(0)|\psi(t)\rangle=\sum_n\left|\langle\psi(0)|n\rangle\right|^2e^{-iE_nt}\qquad(7.56)$$
to look at the spectrum of the wavepacket. If we can completely resolve the spectrum, no more dynamics occurs except the phase evolution of the eigenstates of the box. Thus the wavepacket has localized. After a time $t$, we can resolve levels with spacing $\delta E=h/t$. We define the "Thouless conductance" $g$ via
$$g=\frac{\delta E}{\Delta E}=\frac{h/t}{\Delta E}\propto hD^{d/2}E^{\frac d2-1}t^{\frac d2-1}.\qquad(7.57)$$
If $g<1$ we can resolve the levels of the wavepacket and it localizes. Conversely, if $g>1$, we cannot resolve the eigenstates making up the wavepacket and diffusion continues. The first conclusion we can draw from this argument is the dimensional dependence of the $t\to\infty$ limit of $g$. In one dimension, it is apparent that $\lim_{t\to\infty}g=0$ and thus all states in a weakly disordered one-dimensional system localize. In two dimensions, this argument is inconclusive and seems to depend on the strength of the disorder. In fact, it is believed that all states in weakly disordered two-dimensional systems localize as well, but with exponentially large localization lengths. For $d>2$, $\lim_{t\to\infty}g=\infty$ and we expect the states to be extended. When measuring conductance, the difference between localized and extended states in the disordered region is dramatic. If the state is exponentially localized in the disordered region, it will not couple well to the leads and the conductance will be suppressed. We can look for this effect by looking at the conductance of a disordered wire as a function of the length of the disordered region. If the states are extended, we expect the conductance to vary as $1/L$, whereas if the states are localized we expect the conductance to vary as $e^{-L/\xi}$ where $\xi$ is the localization length. The effect of exponential localization on wavefunction statistics is equally dramatic. For instance, in two dimensions, since we know that
$$\psi(r)=\sqrt{\frac{2}{\pi\xi^2}}\,e^{-|r-r_o|/\xi}\qquad(7.58)$$
we can compute
$$P(t)=2\pi\int r\,dr\,\delta\!\left(t-|\psi(r)|^2\right)=2\pi\int r\,dr\,\delta\!\left(t-\frac{2}{\pi\xi^2}e^{-2r/\xi}\right)=\frac{\pi\xi^2}{2t}\ln\frac{2}{\pi\xi^2t}.\qquad(7.59)$$
In figure 7.1 we plot the Porter-Thomas distribution and the exponential localization distribution for localization lengths $\xi=L/10$ and $\xi=L/100$ so we can see just how stark this effect is.
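As a quick consistency check, the localized state (7.58) can be sampled over a finite disk to confirm the normalization underlying the P(t) computation above; a Monte Carlo sketch, valid for ξ ≪ R (function names are ours):

```python
import math, random

def psi_sq(r, xi):
    """|psi(r)|^2 = (2/(pi xi^2)) e^{-2r/xi}: the exponentially localized state (7.58)."""
    return (2 / (math.pi * xi * xi)) * math.exp(-2 * r / xi)

def check_normalization(xi, R, samples=400000, seed=3):
    """Monte Carlo integral of |psi|^2 over a disk of radius R; ~1 when xi << R."""
    rng = random.Random(seed)
    area = math.pi * R * R
    total = 0.0
    for _ in range(samples):
        # uniform point in the disk: r = R*sqrt(u) gives uniform area density
        r = R * math.sqrt(rng.random())
        total += psi_sq(r, xi)
    return area * total / samples
```

The estimate converges to 1 (minus the exponentially small tail outside the disk), confirming that the heavy small-ξ tail of (7.59) comes entirely from the tiny region near the localization center.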
7.7 Anomalous Wavefunctions in Two Dimensions
The description above does not explain much about how strong localization occurs, either in the time domain or as a function of disorder. This transition is particularly interesting in two dimensions since even the existence of strongly localized states in weakly disordered systems is subtle. While it is not at all obvious that large fluctuations in wavefunction intensity in extended states are related to strong localization, it seems an interesting place to look. While this was perhaps the original reason these large fluctuations were studied, they have spawned an independent set of questions. The simplest of those questions is the one answered by $P(t)$, namely, how often do these large fluctuations occur? Various field-theoretic techniques have been brought to bear on this problem [39, 15, 29]. All predict the same qualitative behavior of $P(t)$. Namely, that for large values of
Chapter 7: Disordered Systems
Figure 7.1: PorterThomas and exponential localization distributions compared.
1 PorterThomas Strong localization, localization length = L/10 Strong localization, localization length = L/100
0.1
0.01
0.001
P(t)
0.0001
1e05
1e06
1e07
1e08 0 5 10 15 t 20 25 30
108
Chapter 7: Disordered Systems
109
C2 ln2 t :
t, P (t) has a lognormal form:
P (t) e
;
(7.60)
Various workers have argued for di erent forms for C2 which depends on the energy E , the mean free path, `, the system size, Lo and the symmetry class of the system (existence of timereversal symmetry). The lognormal distribution looks strikingly di erent from either the PorterThomas distribution or the strong localization distribution. In gure 7.2 we plot all of these distributions for an example choice of for t > 20. We only consider large values of t because for small values of t, P (t) will have a di erent form. For small enough values of t these calculations predict that P (t) has the PorterThomas form. This allows them to be trivially distinguished from the stronglocalization form which is very di erent from PorterThomas for small t. We will focus in particular on two di erent calculations. The rst, appearing in 39], uses the direct optimal uctuation method (DOFM) and predicts where D1 is an O(1) constant. Another calculation, appearing in 15] and using the supersymmetric sigma model (SSSM) predicts
(2) C2 = 4 ln(F k` =`) L 2 o (1) C2 = 4 ln(Fk` ) kL 1
o
(7.61)
(7.62)
where = 1 for timereversal invariant systems and = 2 for systems without timereversal symmetry and D2 is an O(1) constant. (1 In order to see the di erences between C2 2) , in gure 7.3 we plot both of these coe cients versus both wavelength and mean free path for various values of D1 and D2 .
7.8 Conclusions
In this chapter we reviewed material on quenched disorder in open and closed metallic systems. In the chapters that follow we will often compare to these results or try to verify them with numerical calculations.
Figure 7.2: PorterThomas,exponential localization and lognormal distributions compared.
Chapter 7: Disordered Systems
0.0001 1e06 1e08 1e10 1e12 P(t) 1e14 1e16 1e18 1e20 1e22 1e24 20 30 40 50 60 t 70 80 90 100 PorterThomas Strong localization, localization length = L/100 Lognormal form
110
Chapter 7: Disordered Systems
14 12
111
3 333 33 3 33 33 33 333 10 333 33 33 333 333 8 33 333 333 333 33 333 6 333 333 3 333 333 333 333 4 333 3333 3333 3333 3333 3333 2 3333 3333 333 333
0 0.04 0.06 0.08 0.1 0.12 0.14
DOFM SSSM
`
18
16 3 14 12 10 8 6 4 2
3 3 3 3 3 3 3 3 3 3 33 33 33 33 333 333 3333 3333 33333 333333 333333333 333333333 3333333333333333 333333333333333333 3333 3
0.01 0.015 0.02 0.025 0.03 0.035 0.04 0.045 0.05
DOFM SSSM
0 0.005
Figure 7.3: Comparison of lognormal coe cients for the DOFM and SSSM.
Chapter 8
Quenched Disorder in 2D Wires
We consider an in nite wire of width W with a disordered segment of length L as in Fig. 8.1. This wire will be taken to have periodic boundary conditions in the transverse (y) direction.
1111 0000 1111 0000 Disordered Region Leads 1111 0000 0000 1111 11111111111111111111111111111111111111111111111111 00000000000000000000000000000000000000000000000000 111111111111111 000000000000000 11111111111 00000000000 11111 00000 11111 00000 11111 00000 11111 00000 11111 00000
L
W
Figure 8.1: The wire used in open system scattering calculations. To connect with the language of mesoscopic systems, we may think of the disordered region as a mesoscopic sample and the semiin nite ordered regions on each side as perfectly conducting leads. For example, one can imagine realizing this system with an AlGaAs \quantum dot." 27]. We can measure many properties of this system. For instance, we have used renormalized scattering techniques to model a particular quantum dot 23]. Typical quantum 112
Chapter 8: Quenched Disorder in 2D Wires
113
dot experiments involve measuring the conductance of the quantum dot as a function of various system parameters (e.g., magnetic eld, dot shape or temperature). Thus we should consider how to extract conductance from a Green function. In this section we discuss numerically computed transport coe cients. This allows us to verify that our disorder potential has the properties that we expect classically. Since we are interested in intensity statistics and how they depend on transport properties, it is important to compute these properties in the same model we use to gather statistics. For instance, as discussed in the previous chapter, the ATA breaks down when coherent backscattering contributes a signi cant weak localization correction to the di usion coe cient. In this regime it is useful to verify that the corrections to the transport are still small enough to use an expansion in =`. When the disorder is strong enough, strong localization occurs and a di erent approximation is appropriate. Of course, transport in disordered systems is interesting in its own right. Our method allows a direct exploration of the theory of weak localization in a disordered two dimensional systems.
8.1 Transport in Disordered Systems
8.1.1 Di usion from Conductance
Computing the conductance of a disordered wire is not much di erent than computing the conductance of the one scatterer system. We use (5.37) to compute the channeltochannel Green function, Gab , from the multiple scattering tmatrix, nd the transmission matrix Tab from (5.31) and compute the conductance, ; = (e2 =h) using the FisherLee relation (5.30). In chapter 5, we assumed the (classical) transport to and from the scatterer was ballistic and derived an expression which related the intensity re ection coe cient (which can be trivially related to the observed conductance) to the scattering cross section of an obstacle in the wire. When many scatterers are placed randomly in the wire, this assumption is no longer valid. Most scattering events occur after other scattering events and thus the probability of various incident angles is uniform rather than determined by the channels of the wire. Also, there will typically be many scattering events. So, when we have many randomly placed scatterers, we will assume that the transport is di usive. This will give
Chapter 8: Quenched Disorder in 2D Wires
114
us a di erent relationship between the re ection coe cient and the crosssection of each obstacle. It is instructive to compute the expected re ection coe cient as a function of the concentration and crosssection under the di usion assumption and compare that result, RD , to RB = 4W computed under the ballistic assumption. We begin from the relation between the intensity transmission coe cient, TD , ; and the number of open channels, Nc: 2 T = (h=e ); (8.1)
D
i.e., the transmission coe cient is just the unitless conductance per channel. From this we see that ; does not go to 1 for an empty wire as we might expect. Only a nite amount of ux can be carried in a wire with a nite number of open channels and this gives rise to a socalled \contact resistance" 8]. We thus split the unitless conductance into a disordered region dependent part (;s ) and a contact part: ! h ; = e2 =h + 1 1 (8.2) e2 ;s Nc where the contact part is chosen so that limT 1 ;s = 1. At this point we invoke the assumption of di usive transport. This allows us to L use the Einstein relation 8] to relate the conductivity of the sample W ;s to the di usion constant via L 2 (8.3) W ;s = e D where is the density of states per unit volume = 1=(4 ) in two dimensions. The L=W in front of ;s relates conductivity to conductance in two dimensions. We now have h;= L + 1 1 = hW DNc : (8.4) e2 hW D Nc LNc + hW D If we substitute this into (8.1) we have
; ! ;
Nc
D TD = LN hWhW D : (8.5) c+ As in the previous chapters, we choose units where h = 1 and m = 1=2, so h2 =(2m) = 1. We also choose units where the electron charge, e = 1. We now use D = v`=d = k` 37] and Nc kW= and get ` (8.6) TD = 2 ` L+ 2
Chapter 8: Quenched Disorder in 2D Wires
115
and
RD = 1 ; TD = RD =
Finally, we use l = LW=(N ) to get
L+
L : `
2
(8.7) (8.8)
For very few scatterers we usually have N << W=2 and thus R =N 2 :
D
1 : W 1 + 2N
Compare this to the ballistic result and we see that they are related by RB =RD = 2 =8. This factor arises from the di erent distributions of the incoming angles. In practice, there can be a large crossover region between these two behaviors, when the nonuniform distribution of the incoming angles can make a signi cant di erence in the observed conductance. We note that (8.7) can be rearranged to yield: (8.10) ` = 2L 1 ; 1 = L 2 TD
W
(8.9)
RD
RD
which we will use as a way to compare numerical results with the assumption of quasiclassical di usion.
8.1.2 Numerical Results
As our rst numerical calculation we look at the mean free path as a function of scatterer concentration at xed energy. In gure 8.2 we plot ` vs. n for a periodic wire of width 1 with a length 1 disordered region. There are about 35 wavelengths across the wire is and thus the wire has 71 open channels. We use scatterers with maximum swave cross section of 2 . We consider concentrations from 50 to 700 scatterers per unit area. At each concentration we compute the conductance of 20 realizations of the scatterers and average them. This range of concentrations corresponds to a range of quasiclassical mean free path from 1:1 to :069. We plot both the quasiclassical mean free path and the numerical result computed via (8.10). From the gure, we see that the numerical result is noticeably smaller than the quasiclassical result, though it di ers by at most 20% over the range of concentrations studied. This di erence is partially explained by two things. At low concentrations, the numerically observed mean free path is smaller than the quasiclassical because the transport
Chapter 8: Quenched Disorder in 2D Wires
700
116
quasiclassical numerical (diffusive)
Figure 8.2: Numerically observed mean free path and the classical expectation.
0.05 100 0.45 0.35 0.25 0.15 0.4 0.3 0.2 0.1 mean free path
0.55
0.5
200
300
400 concentration
500
600
Chapter 8: Quenched Disorder in 2D Wires
700
117
quasiclassical numerical (corrected diffusive)
Figure 8.3: Numerically observed mean free path after rstorder coherent backscattering correction and the classical expectation.
0.05 100 0.45 0.35 0.25 0.15 0.4 0.3 0.2 0.1 mean free path
0.55
0.5
200
300
400 concentration
500
600
Chapter 8: Quenched Disorder in 2D Wires
118
is not completely di usive. As we saw at the beginning of this chapter, ballistic scattering leads to a larger re ection coe cient than di usive ones. This leads to an apparently smaller mean free path. More interestingly, there is coherent backscattering at all concentrations (see section 7.5), though its e ect is larger at higher concentration since the change in conductance due to weak localization is proportional to =`. We correct the conductance, via (7.54), to rst order in =` and then plot the corrected mean free path and the classical expectation in gure 8.3. The agreement is clearly better, though there is clearly some other source of reduced conductance. At the lowest concentrations there is still a ballistic correction as noted above but this cannot account for the lower than expected conductance at thigher concentrations where the motion is clearly di usive. As =` increases, the di erence between the classical and quantum behavior does as well. For large enough =` this will lead to localization. In order to verify that the transport is still not localized, we compute the transmission coe cient vs. the length of the disordered region for xed concentration. If the transport is di usive, T will satisfy (8.5) which predicts T 1=L for large L. If instead the wavefunctions in the disordered region are exponentially localized, T will fall exponentially with distance, i.e., T e L= . In gure 8.4 we plot T versus L for two di erent concentrations and energies. In both plots, 5 realizations of the disorder potential are averaged at each point. In gure 8.4a there are 35 wavelengths across the width of the wire, as in the previous plot, and the concentration is 250 scatterers per unit area. T is clearly more consistent with the di usive expectation than the strong localization prediction. We compare this to gure 8.4b where the wavelength and mean free path are comparable and the wire is only a few wavelengths wide. 
What we see is probably quasione dimensional strong localization. Consequently, T does not satisfy (8.5) but rather has an exponential form. We note that the data in gure 8.4b is rather erratic but still much more strongly consistent with strong localization than di usion. Numerical observation of exponential localization in a true two dimensional system would be very di cult since the two dimensional localization length is exponentially long in the mean free path.
;
8.1.3 Numerical Considerations
Numerical computation using the techniques outlined in chapters 3 and 5 is quite simple. For each set of parameters computation proceeds as follows:
Chapter 8: Quenched Disorder in 2D Wires
1 Numerical Diffusive Strong localization (fitted)
119
a)
0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 0.5 1 L 1.5 T
2
1
b)
0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 0.5 1 L 1.5 T
Numerical Diffusive Strong localization (fitted)
2
Figure 8.4: Transmission versus disordered region length for (a) di usive and (b) localized wires.
Chapter 8: Quenched Disorder in 2D Wires
120
1. Compute the random locations of N scatterers. 2. Compute the renormalized tmatrix of each scatterer. 3. Compute the scattererscatterer Green functions for all pairs of scatterers. 4. Construct the inverse multiple scattering matrix, T 1 .
;
5. Invert T 1 , giving T , using the Singular Value Decomposition.
;
6. Use formula (5.37) to compute the channeltochannel Green function. 7. Use (5.31) and the FisherLee formula to nd the conductance ;. The bottleneck in this computation can be either the O(N 3 ) SVD or the O(N 2 Nc2 ) application of (5.37) depending the concentration and the energy. The computations appearing above were done in one to four hours on a fast desktop workstation (DEC Alpha 500/500). There are faster matrix inversion techniques than the SVD but few are as stable when operating on nearsingular matrices. Though that is crucial for the nite system eigenstate calculations of the next chapter, it is nonessential here. If one were to switch to an LU decomposition or a QR decomposition (see appendix C) we could speed up the inversion stage by a factor of four or two respectively. For most of the calculations we performed, the computation of the channeltochannel Green function computation was more time consuming and so such a switch was never warranted. 8. Repeat for as many realizations as required to get h;i.
Chapter 9
Quenched Disorder in 2D Rectangles
While analytic approaches to disordered quantum systems abound, certain types of experimental or numerical data is di cult to come by. There is a particular dearth of wavefunction intensity statistics for 2D disordered systems. Experimentally, wavefunctions are di cult quantities to measure. The application of atomic force microscopy to quantum dot systems is a promising technique 11]. An atomic force microscope (AFM) can be capacitively coupled to a quantum dot in such a way that the at the point in the twodimensional electron gas below the tip, electrons are e ectively excluded. The size of the excluded region depends on a variety of factors but is typically of the order of one wavelength square. One scenario for measuring the wavefunction involves moving the tip around and measuring the shift in conductance of the dot due to the location of the tip. For a small enough excluded region, this conductance shift should be proportional to the square of the wavefunction. Even for a wavelength sized excluded region, this technique should be capable of resolving most of the nodal structure of the wavefunction from which the wavefunction could be reconstructed. Preliminary numerical calculations suggest that this technique could work and at least one experimental group is working on this scheme or a near variant. One successful experimental approach has been using microwave cavities as an analog computer to solve the Helmholtz equation in disordered and chaotic geometries 24]. This technique has led to some of the only data available on disordered 2D systems. How121
Chapter 9: Quenched Disorder in 2D Rectangles
122
ever, experimental limitations make it di cult to consider enough ensembles to do proper averaging. Instead, the statistics are gathered from states at several di erent energies. Since the mean free path and wavelength both depend on the energy, the mixing of di erent distributions makes analysis of the data di cult. Still, the data does suggest that large uctuations in wavefunction intensities are possible in two dimensional weakly disordered systems. Numerical methods, which would seem a natural way to do rapid ensemble averaging, have not been applied or, at least, not been successful at reaching the necessary parameter regimes and speed requirements to consider the tails of the intensity distribution. There are some results for the tightbinding Anderson model 30]. However, the nature of the computations required for the Anderson model makes it similarly di cult to gather su cient intensity statistics to t the tails of the distribution. It is the purpose of the section to illustrate how the techniques discussed in this thesis can provide the data which is so sorely needed to move the theory (see section 7.7) forward. We begin with an abstract treatment of the extraction of discrete eigenenergies and corresponding eigenstates from tmatrices. This is a topic worth considering carefully since it is the basis for all the calculations which follow. The tmatrix has a pole at the eigenenergies so the inverse of the tmatrix is nearly singular. Once the preliminaries are out of the way, we discuss the di culties of studying intensity statistics in the parameter regime and with the methods of 24]. In particular, we consider the impact of Dirichlet boundaries on systems of this size. We consider the possibility that dynamics in the sides and corners of a Dirichlet rectangle can have a strong e ect on the intensity statistics and perhaps mask the e ects predicted by the FieldTheory. 
We also discuss eigenstate intensity statistics for a periodic rectangle (a torus) with various choices of disorder potential (i.e. various ` and ). We'll brie y discuss the tting procedure used to show that the distributions are well described by a lognormal form and extract coe cients. We'll then compare these coe cients to the predictions of eld theory. The computation of intensity statistics in the di usive weak disorder regime is the most di cult numerical work in this thesis. It also produces results which are at odds with existing theory. Thus we spend some time exploring the stability of the numerics and possible explanations for the di erences between the results and existing theory. Finally, as in the last chapter, we'll discuss the numerical techniques more speci cally and outline the algorithm. Here, some less than obvious ideas are necessary to gather
Chapter 9: Quenched Disorder in 2D Rectangles
123
statistics at the highest possible speed.
9.1 Extracting eigenstates from tmatrices
We want to apply the results of Chapter 3 to nite systems, i. e., systems with discrete spectrum. In some sense this is very similar to section 6.1.5 but in this case we have many scatterers instead of one what was simply root nding becomes linear algebra. ^ ^ ^ Suppose t 1 (z ) has a nontrivial nullspace when z = En . De ne PN and PR as ^ projectors onto the nullspace and range of t 1 (En ) respectively. Since we know that En is ^ a pole of T (z ), we know that, for 0 < j j << 1 and 8v 2 S we have
; ;
^ We de ne a pseudoinverse (on the range of T 1 ) BR , via
;
^ ^ ^ ^ T 1 (En + ) jvi T 1 (En)PR jvi + C PN jvi
; ;
(9.1)
^ ^ ^ ^ t 1 (En)BR PR jvi = PR jvi
; ; ;
(9.2)
^ ^ ^ which exists since t 1 (En ) is explicitly nonnull on PR jvi. We can now invert t 1 (z ) in the neighborhood of z = En : ^ ^ ^ ^ t(En + ) jvi BR PR jvi + C PN jvi :
;
(9.3)
^ ^ Thus, the residue of t(z ) at z = En projects any vector onto the nullspace of t 1 (En ). Recall that the full wavefunction is written ^ ^ ^ j ij i + GB t^j i : / lim0 GB C PN j i
!
(9.4)
Since the tmatrix term has a pole, the incident wave term is irrelevant and the wavefunction ^ ^ is (up to a constant) GB PN j i. When the state at En is nondegenerate (which is generic in disordered systems), ^ there exists a vector j i the projector may be written PN = j ih j and thus ^ j ni = N GB (En) j i where N is a normalization constant. In position representation,
n (r) =
(9.5)
N
Z
S
GB (r r En) (r ) dr :
0 0 0
(9.6)
Chapter 9: Quenched Disorder in 2D Rectangles
124
P ( ^ If the state is mfold degenerate, PN = m j j ih j j and we have the solutions nj ) = j =1 ^ Nj GB j j i Thus, the task of nding eigenenergies of a multiple scattering system is equivalent ^ to nding En such that t 1 (En ) has a nontrivial null space. Finding the corresponding eigenstates is done by nding a basis for that null space. Suppose that our tmatrix is generated by the multiple scattering of N zero range scatterers (see 3.1). In this case, the procedure outlined above can be done numerically using the Singular Value Decomposition (SVD). We have
;
E
^ t 1=
;
X
ij
jrii (A)ij hrj j
(9.7)
We can decompose A via A = U VT where U and V are orthogonal and is diagonal. The elements of are called the \singular values" of A. We can detect rank de ciency (the existence of a nullspace) in A by looking for zero singular values. Far more detail on numerical detection of rankde ciency is available in 17]. P Once zero singular values are found, the vector, (where jalphai = i = i jri i), needed to apply (9.5), sits in the corresponding column of V. It is important to note that we have extracted the eigenstate without ever actually inverting A which would incur tremendous numerical error. Thus our wavefunction is written (r) = N
X
i
GB (r ri En ) i:
(9.8)
We normalize the wavefunction by sampling at many points and determining N numerically. In practice, we may use this procedure in a variety of ways. Perhaps the simplest is looking at parts of the spectra of speci c con gurations of scatterers. We de ne SN (E ) ^ as the smallest singular value of t 1 (E ) (standard numerical techniques for computing the SVD give the smallest singular value as ( )NN ). Computing SN (E ) is O(N 3 ). Then we 2 use standard numerical techniques (e.g., Brent's method, see 33]) to minimize SN (E ). We then check that minimum found is actually zero (within our numerical tolerance of zero, to be precise). These standard numerical techniques are more e cient when the minima are quadratic which is why we square the smallest singular value. We have to be careful to consider SN (E ) at many energies per average level spacing so we catch all the levels in the spectrum. Since we know the average level spacing (see 2.27) is 1 = h2 4 (9.9) V %2(E) 2m V
;
Chapter 9: Quenched Disorder in 2D Rectangles
125
we know approximately how densely to search in energy. The generic level repulsion in disordered systems 14] helps here, since the possibility of nearby levels is smaller.
9.2 Intensity Statistics in Small Disordered Dirichlet Bounded Rectangles
Our rst attempt to gather intensity statistics will be performed on systems with relatively few scatterers (72) and thus large mean free paths. This regime is chosen for comparison with the experimental results of 24]. When gathering intensity statistics we need to nd eigenstates near a particular energy. This is most simply done by choosing an energy and then hunting for a zero singular value in some window about that energy. For windows smaller than the average level spacing, we will frequently get no eigenstate at all. This method works but is very ine cient since we frequently perform the timeconsuming SVD on matrices which will not yield a state. However, for this number of scatterers, this ine ciency is tolerable. In the next section we will consider improvements on this technique. In gure 9.1 we plot j j2 on a grayscale (black for high intensity and white for low) in a 72 scatterer system along with the scatterers plotted as black dots for a set of typical low energy wavefunctions in order to give the reader a picture of the sort of scatterer density and wavelength of the system we are considering. In the experiments of 24], higher energies than those depicted in gure 9.1 are used. Some sample wavefunctions in the relevant energy range are shown in gure 9.2. In the original work, this energy range was chosen because must be signi cantly smaller than the system size for various eldtheoretic techniques to be applicable. Unfortunately, there are not enough scatterers in the system to make the motion di usive. This is clear from the large mean free paths in the states in 9.2, where `=Lo varies from 1=4 to 1=2. This problem was overlooked in the original work due to a confusion about the relevant mean free path. So, rather than ` Lo , we have < ` < Lo . This means that we expect the boundaries to play a large role in the dynamics of the system. In gure 9.3 we plot the intensity statistics for a 1 1 square with Dirichlet boundaries, ` = :31 and = :07. 
In these and subsequent intensity statistics plots, we nd P (t) by computing wavefunctions, accumulating a histogram of values of j j2 and then dividing the number of counts in each
Chapter 9: Quenched Disorder in 2D Rectangles
126
Figure 9.1: Typical low energy wavefunctions (j j2 is plotted) for 72 scatterers in a 1 1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots.For the top left wavefunction ` = :12 = :57 whereas ` = :23 = :11 for the bottom wavefunction. ` increases from left to right and top to bottom whereas decreases in the same order.
Chapter 9: Quenched Disorder in 2D Rectangles
127
Figure 9.2: Typical medium energy wavefunctions (j j2 is plotted) for 72 scatterers in a 1 1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots. For the top left wavefunction ` = :25 = :09 whereas ` = :48 = :051 for the bottom wavefunction. ` increases from left to right and top to bottom whereas decreases in the same order.
Chapter 9: Quenched Disorder in 2D Rectangles
30
128
PorterThomas center sides corners
0.001
0.0001
1e05
1e06
Figure 9.3: Intensity statistics gathered in various parts of a Dirichlet bounded square. Clearly, larger uctuations are more likely in at the sides and corners than in the center. The (statistical) error bars are di erent sizes because four times as much data was gathered in the sides and corners than in the center.
1e07 0.01 P(t)
1
0.1
0
5
10
15 t
20
25
Chapter 9: Quenched Disorder in 2D Rectangles
129
bin by the total number of counts and the width of each bin. As is clear from the gure, anomalous peaks are more likely near the sides and most likely near the corners. This large boundary e ect makes it unlikely that existing theory can be t to data which is, e ectively, an average over all regions of the rectangle.
9.3 Intensity Statistics in Disordered Periodic Rectangle
In order to minimize the e ect of boundary conditions, we change to periodic boundary conditions and decrease `=Lo while holding =Lo constant. While periodic boundary conditions won't eliminate nite size system e ects they do remove the enhanced localization near the Dirichlet boundary. Decreasing `=Lo at constant =Lo comes at a cost. Since the number of scatterers required to hold the concentration constant scales as the volume of the system, a larger system means more scatterers and a larger multiple scattering matrix. The method used in the previous section will not work quickly enough to be of practical use here (at least not on a workstation). This leads us to some very speci c numerical considerations which we address below.
9.3.1 Numerical Considerations
The most serious ine ciency in computing wavefunctions are all the wasted SVD's which do not result in a state. However, we can overcome this bottleneck by looking more closely at the small singular values. We will see that small singular values correspond to states of a system with slightly di erent size scatterers. First, we recall that the renormalized scattering strength of a ZRI in a periodic rectangle is independent of position and thus the scattering strengths of all the scatterers are the same, just as they were in freespace. This means that the denominator of the multiple scattering tmatrix, 1 ; S (E )GB , is symmetric since S (E ), the real scalar renormalized scattering strength, is the same for all scatterers. If there is an eigenstate at energy E , there exists some vector, w, such that
;1 ; S (E )G w = 0: B
(9.10) 1, then there exists
If we have no zero eigenvalue but instead a small one,
Chapter 9: Quenched Disorder in 2D Rectangles
130
vector w such that which implies and so
(1 ; S (E )GB )w = w 1 GB w = S (; ) w E
(9.11) (9.12) (9.13)
~ Thus there exists a nearby S (E ) = S (E )=(1 ; ) such that we do have an eigenstate at E . ~ Since S (E ) parameterizes the renormalized scatterer strength, S (E ) corresponds to a nearby scatterer strength. Thus, in order to include states corresponding to small eigenvalues we must choose an initial scattering strength such that S (E )=(1 ; ) is still a possible scattering strength. As discussed in section C.3, for a symmetric matrix, the singular values are, up to a sign, the eigenvalues. Thus we may use the columns of V corresponding to small singular values to compute eigenstates. This provides a huge gain in e ciency. We pick a particular energy, choose a realization of the potential, nd the SVD of the tmatrix and then compute a state from every vector corresponding to a small singular value. Small here means small enough that the resultant scatterer strength is in some speci ed range. We frequently get more than one state per realization. With this technique in place, there is little we can do to further optimize each use of the SVD. The other bottleneck in state production is the computation of the background Green function. Each Green function requires performing a large sum of trigonometric and hyperbolic trigonometric functions. Thus for moderate size systems (e.g., 500 scatterers), lling the inverse multiple scattering matrix takes far longer than inverting it. Green function computation time is also a bottleneck when computing the wavefunction from the multiple scattering tmatrix. We could try to improve the convergence of the sums and thus require fewer terms per Green function. That would bring a marginal improvement in performance. Of course, when function computation time is a bottleneck a frequent approach is to tabulate function values and then use lookups into the table to get function values later. 
A naive application of this is of no use here since our scatterers move in every realization and thus tabulated Green functions for one realization are useless for the next. There is a simple way to get
E 1 ; S (; ) GB w = 0: 1
Chapter 9: Quenched Disorder in 2D Rectangles
131
the advantages of tabulated functions and changing realizations. Instead of tabulating the Green functions for one realization of scatterers, we tabulate them for a larger number of scatterers and then choose realizations from the large number of precomputed locations. For example, if we need 500 scatterers per realization, we precompute the Green functions for 1000 scatterers and choose 500 of them at a time for each realization. In order to check that this doesn't lead to any signi cant probability of getting physically similar ensembles, we sketch here an argument from 4]. Consider a random potential of size L with mean free path `. A particle di using through this system typically undergoes L2 =`2 collisions. The probability that a particular scatterer is involved in one of these collisions is roughly this number divided by the total number of scatterers, nLd , where n is the concentration and d is the dimension of the system. Thus a shift of one scatterer can, e.g., shift an energy level by about a level spacing when
$$\frac{\delta E}{\Delta} = \frac{L^2}{\ell^2}\,\frac{1}{nL^d} = \frac{1}{n\ell^2 L^{d-2}} \qquad (9.14)$$

is of order one. That is, we must move $n\ell^2 L^{d-2}$ scatterers. In particular, in two dimensions we must move $n_o = n\ell^2$ scatterers to completely change a level.

There are $\binom{M}{N}$ ways to choose N scatterers from M possible locations and $\binom{M}{N}^2$ ways to independently choose two sets. Of those pairs of sets,

$$\binom{M}{N-n_o}\binom{M-N+n_o}{n_o}^2 \qquad (9.15)$$

have $N - n_o$ or more scatterers in common. The first factor is the number of ways to choose the common $N - n_o$ scatterers and the second is the number of ways to choose the rest independently. Thus the probability, $p_2$, that two independently chosen sets of scatterers have $N - n_o$ or more in common is

$$p_2 = \frac{\binom{M}{N-n_o}\binom{M-N+n_o}{n_o}^2}{\binom{M}{N}^2}. \qquad (9.16)$$
We will be looking at thousands or millions of realizations. The probability that some pair among R realizations shares all but $n_o$ scatterers is

$$p_R = 1 - (1 - p_2)^{R(R-1)}. \qquad (9.17)$$
For large M and N, this probability is extremely small. For example, the fewest scatterers we use will be 500, which we'll choose from 1000 possible locations. Our simulations are in a 1×1 square and the mean free path is .08, implying that $n_o < 4$. The chance that all but 4 scatterers are common between any pair of one million realizations of 500 scatterers chosen from a pool of 1000 is less than $10^{-268}$. Thus we need not really concern ourselves with the possibility of oversampling a particular configuration of the disorder potential. The combination of getting one or more states from nearly every realization and precomputing the Green functions leads to an improvement of more than three orders of magnitude over the method used for the smaller systems of section 9.2. This allows us to consider systems with more scatterers and thus get closer to the limit $\ell \ll L_o$. With this improvement we can proceed to look at the distribution of intensities for various values of $\ell$ and $\lambda$ in an $L_o = 1$ square.
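The estimate above is easy to check numerically. The sketch below (an illustration, not the thesis code) evaluates (9.15)–(9.17) in log space using `math.lgamma` to avoid overflow; the values M = 1000, N = 500, $n_o$ = 4 and R = 10⁶ are taken from the text.

```python
import math

def log_binom(n: int, k: int) -> float:
    """Natural log of the binomial coefficient C(n, k)."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def log10_p_pair(M: int, N: int, n_o: int) -> float:
    """log10 of p2, eq. (9.16): two independently chosen sets of N
    scatterers (from M locations) share N - n_o or more of them."""
    ln_p2 = (log_binom(M, N - n_o)
             + 2 * log_binom(M - N + n_o, n_o)
             - 2 * log_binom(M, N))
    return ln_p2 / math.log(10)

def log10_p_any_pair(M: int, N: int, n_o: int, R: int) -> float:
    """log10 of eq. (9.17): some pair among R realizations shares
    N - n_o or more scatterers.  For tiny p2 the union bound
    1 - (1 - p2)^(R(R-1)) ~ R(R-1) p2 applies."""
    return log10_p_pair(M, N, n_o) + math.log10(R * (R - 1))

# Numbers from the text: 500 scatterers from a pool of 1000 locations,
# n_o = 4, one million realizations.
lg = log10_p_any_pair(M=1000, N=500, n_o=4, R=10**6)
```

For these parameters the result comes out below $10^{-268}$, consistent with the bound quoted above.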
9.3.2 Observed Intensity Statistics
In figure 9.4 we plot the numerically observed P(t) for various $\lambda$ with $\ell = .081$ in a 1×1 square ($L_o = 1$). It is clear that large values of $|\psi|^2$ are more likely at larger $\lambda/\ell$. In order to compare these distributions with theoretical calculations we must fit them to various functional forms. Since we are counting independent events, the expected distribution of events in each bin is Poisson. This needs to be taken into account when fitting the numerically computed P(t) to various analytic forms. While there may be many forms of P(t) which the data could fit, among the various ones motivated by existing theory only some product of lognormal and power-law fits reasonably well. That is, we will fit P(t) to

$$P(t) = e^{-C_o - C_1 \ln t - C_2 \ln^2 t} \qquad (9.18)$$

where $C_o$ is just a normalization constant, $C_1$ is the power in the power-law and $C_2$ is the lognormal coefficient. We note that the expectation is that lognormal behavior will occur in the tail of the distribution. In order to account for this we fit each numerically computed
Figure 9.4: Intensity statistics P(t) gathered in a periodic square (torus) for $\lambda$ = .063, .031 and .0016, compared with the Porter-Thomas distribution. Larger fluctuations are more likely for larger $\lambda/\ell$. The erratic nature of the smallest-wavelength data is due to poor statistics.
Figure 9.5: Illustrations of the fitting procedure. We look at the reduced $\chi^2$ as a function of the starting value of t in the fit (top: reduced $\chi^2$ on a log scale vs. minimum t), then choose the $C_2$ with the smallest confidence interval (bottom: $C_2$ and 95% confidence interval vs. minimum t) and stable reduced $\chi^2$. In this case we would choose the $C_2$ from the fit starting at t = 10.
P(t) beginning at various values of t. We then look at the reduced $\chi^2$, $\tilde\chi^2 = \chi^2/(N_d - D)$ (where there are D fitting parameters and $N_d$ data points), for each fit. A plot of a typical sequence of $\tilde\chi^2$ values is plotted in figure 9.5 (top). Once $\tilde\chi^2$ settles down to a near constant value, we choose the fit with the smallest confidence interval for the fitted $C_2$. A typical sequence of $C_2$'s and confidence intervals for one fit is plotted in figure 9.5 (bottom). The behavior of $\tilde\chi^2$ is consistent with the assumption that P(t) does not have the form (9.18) until we reach the tail of the distribution.

As discussed in section 7.7, there are two field theoretic computations which give two different forms for $C_2$:

$$C_2^{(1)} = \frac{kL_o}{4\ln(Fk\ell)} \qquad (9.19)$$

$$C_2^{(2)} = \frac{k\ell}{4\ln(FL_o/\ell)}. \qquad (9.20)$$

We can attempt to fit our observed $C_2$'s to these two forms. We find that neither form works very well at all. In figure 9.6 we compare these fits to the observed values of $C_2$ as we vary k at fixed $\ell = .081$ (top) and vary $\ell$ at fixed k = 200 ($\lambda = .031$, bottom). Thus, while the numerically computed intensity statistics are well fitted by a lognormal distribution as predicted by theory, the coefficients of the lognormal do not seem to be explained by existing theory.
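Since (9.18) makes ln P(t) quadratic in ln t, the fit at a given minimum t reduces to polynomial least squares. A minimal sketch (with hypothetical coefficients, and without the Poisson bin weights used in the real fit):

```python
import numpy as np

# Hypothetical "true" tail coefficients for synthetic data
# (illustrative values, not the thesis results).
c0, c1, c2 = 1.0, 2.05, 1.34

t = np.linspace(2.0, 20.0, 200)          # tail region, t >= minimum t
ln_t = np.log(t)
ln_P = -(c0 + c1 * ln_t + c2 * ln_t**2)  # eq. (9.18) in log form

# ln P is quadratic in ln t, so least squares recovers the
# coefficients directly.  A real fit would weight each histogram
# bin by its Poisson error; the synthetic data here are noiseless.
a2, a1, a0 = np.polyfit(ln_t, ln_P, deg=2)
C2_fit, C1_fit, C0_fit = -a2, -a1, -a0
```

Repeating this fit for a sequence of minimum-t values and tracking the reduced $\chi^2$ reproduces the procedure of figure 9.5.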
9.3.3 Anomalous Wavefunctions
It is interesting to look at the anomalous wavefunctions themselves. In figure 9.7 we plot two typical wavefunctions and in figure 9.8 we plot two anomalously peaked wavefunctions. For each we show a contour plot of $\psi$ and a density plot of $|\psi|^2$. The scale is different on each wavefunction so that each is shown with maximum visual dynamic range. This necessarily obscures the fact that the typical wavefunctions have a maximum $|\psi|^2$ of 20, as opposed to 40 or 60 for the anomalous states. It is difficult to determine much information from the entire wavefunction. In order to simplify things a bit, we look at the average scaled intensity at various distances from the peak, $\mathbf r_o$, via

$$R(r) = \frac{1}{2\pi\,|\psi(\mathbf r_o)|^2} \int_0^{2\pi} \left|\psi\!\left(\mathbf r_o + r\left[\hat{\mathbf x}\cos\theta + \hat{\mathbf y}\sin\theta\right]\right)\right|^2 d\theta. \qquad (9.21)$$
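The angular average in (9.21) is straightforward to discretize; the sketch below samples $\theta$ uniformly (the sampling density and the plane-wave test function are illustrative assumptions, not the thesis code):

```python
import math

def radial_average(psi, r_o, r, n_theta=256):
    """Average scaled intensity a distance r from the peak at r_o,
    discretizing the angular integral of eq. (9.21)."""
    x0, y0 = r_o
    peak = abs(psi(x0, y0)) ** 2
    total = 0.0
    for i in range(n_theta):
        th = 2.0 * math.pi * i / n_theta
        total += abs(psi(x0 + r * math.cos(th),
                         y0 + r * math.sin(th))) ** 2
    # mean over theta == (1 / 2 pi) * integral d(theta)
    return total / (n_theta * peak)

# Sanity check on a plane wave: |psi|^2 = 1 everywhere, so R(r) = 1.
k = 2.0 * math.pi / 0.061          # wavelength from the text
plane = lambda x, y: complex(math.cos(k * x), math.sin(k * x))
```

On a real state, scanning r over a few wavelengths produces curves like those in figure 9.9.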
Figure 9.6: Numerically observed lognormal coefficients $C_2$ (fitted from numerical data) and the fitted theoretical expectations $C_2^{(1)}$ and $C_2^{(2)}$, plotted (top) as a function of wavenumber k at fixed $\ell = .081$ and (bottom) as a function of $\ell$ at fixed k = 200.
Figure 9.7: Typical wavefunctions for 500 scatterers in a 1×1 periodic square (torus) with $\ell = .081$, $\lambda = .061$. The density of $|\psi|^2$ is shown.
Figure 9.8: Anomalous wavefunctions for 500 scatterers in a 1×1 periodic square (torus) with $\ell = .081$, $\lambda = .061$. The density of $|\psi|^2$ is shown. We note that the scale here is different from the typical states plotted previously.
Figure 9.9: The average radial intensity centered on two typical peaks (peak $|\psi|^2$ = 19.3 and 22.4, top) and two anomalous peaks (peak $|\psi|^2$ = 34.1 and 60.1, bottom); the scaled average radial wavefunction $\langle\psi^2\rangle$ is plotted against $kr/2\pi$.
In figure 9.9 we plot R(r) for two typical peaks (one from each wavefunction in figure 9.7) and the two anomalous peaks from the wavefunctions in figure 9.8. Here we see that each set of peaks has a very similar behavior in its average decay and oscillation. The anomalous peaks have a more quickly decaying envelope, as they must to reach the same Gaussian random background value. This is predicted in [39], although we have not yet confirmed the quantitative prediction of those authors.
9.3.4 Numerical Stability
Throughout this work we have checked the accuracy and stability of our numerics in several ways. In chapter 5 we checked that the numerical procedure we use gives the correct classical limit cross-section for one scatterer. In chapter 8 we checked that for an appropriate choice of parameters, many scatterers in a wire give the classically expected mean free path. In chapter 6, we checked that the analytic structure of renormalized t-matrices correctly predicts ground state energies in Dirichlet bounded squares. However, we do not have a direct way to check higher energy eigenenergies or eigenstates. Thus we do the next best thing and check the stability of the numerical procedure which produces them. That is, we check that small perturbations of the input to the method (scatterer locations, scatterer strengths, Green functions, etc.) do not result in large changes to the computed wavefunction.

As a simple perturbation we will consider random changes in scatterer location. We consider one realization of the scattering potential, compute the wavefunction, $\psi$, and then proceed to move the scatterers each by some small amount in a random direction. We then compute the new wavefunction, $\tilde\psi$, and compare the two wavefunctions. We repeat this for ever larger perturbations of the scatterers until we have completely decorrelated the wavefunctions. We consider two measures for comparison. We define $\delta\psi(\mathbf r) = \psi(\mathbf r) - \tilde\psi(\mathbf r)$ and we consider both

$$\sqrt{V\left\langle |\delta\psi(\mathbf r)|^2 \right\rangle} \qquad (9.22)$$

where we average over $\mathbf r$, and

$$\sqrt{\frac{\max\{|\delta\psi|^2\}}{\max\{|\psi|^2\}}}. \qquad (9.23)$$

The former is a standard measure of the difference of functions and the latter we expect to be more sensitive to changes in anomalously large peaks.
In order to see if a small perturbation in scatterer locations produces a small change in the wavefunction, we need an appropriate scale for the scatterer perturbation. If Random Matrix Theory applies, we know that a single state can be shifted by one level (and thus completely decorrelated) by moving one scatterer with cross-section approximately the wavelength by one wavelength. From that argument, and the fact that the motion of each scatterer is uncorrelated with the motion of the others, we can see that the appropriate parameter is approximately

$$\epsilon = \sqrt{N}\,\delta x \qquad (9.24)$$

where $\delta x$ is the average distance moved by a single scatterer. That is, if the error in the wavefunction is comparable to $\epsilon$, we can assume it comes from physical changes in the wavefunction, not numerical error. In figure 9.10 we plot our two measures of wavefunction deviation against $\epsilon$ (on a log-log scale) for several $\epsilon$ between $3.6\times10^{-7}$ and 3 for a state with an anomalously large peak. Since our deviations are no larger than expected from Random Matrix Theory, we can assume that the numerical stability is sufficiently high for our purposes.

Though not exactly a source of numerical error, we might worry that including states that result from small but nonzero singular values has an influence on the statistics. If this were the case, we would need to carefully choose our singular value cutoff in order to match the field theory. However, the influence on the statistics is minimal, as summarized in table 9.1.
                    Maximum singular value
                      .1              .01
    C1            -2.05 ± .03     -2.2 ± .1
    C2            1.335 ± .006    1.38 ± .03

Table 9.1: Comparison of lognormal tails of P(t) for different maximum allowed singular value.
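The comparison measures (9.22)–(9.24) used above translate directly into array operations. A minimal sketch, assuming the wavefunctions are sampled on a common grid:

```python
import numpy as np

def deviation_measures(psi, psi_tilde, volume=1.0):
    """The two wavefunction-difference measures, eqs. (9.22)-(9.23),
    for wavefunctions sampled on the same grid."""
    dpsi = psi - psi_tilde
    norm_measure = np.sqrt(volume * np.mean(np.abs(dpsi) ** 2))  # (9.22)
    peak_measure = np.sqrt(np.max(np.abs(dpsi) ** 2)
                           / np.max(np.abs(psi) ** 2))           # (9.23)
    return norm_measure, peak_measure

def scaled_perturbation(n_scatterers, dx):
    """eq. (9.24): the scale against which deviations are judged."""
    return np.sqrt(n_scatterers) * dx

# Identical wavefunctions give zero deviation by both measures.
psi = np.array([1.0 + 0.0j, 0.0 + 2.0j, 3.0 + 0.0j])
d1, d2 = deviation_measures(psi, psi.copy())
```

Plotting both measures against `scaled_perturbation(N, dx)` for a range of dx values reproduces a comparison like figure 9.10.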
9.3.5 Conclusions
In contrast to the well understood phenomena observed in disordered wires, we have observed some rather more surprising things in disordered squares. While the various theoretical computations of the expected intensity distribution appear to correctly predict the shape of the tail of the distribution, none of them seems to correctly predict the dependence of that shape on wavelength or mean free path.
Figure 9.10: Wavefunction deviation under small perturbation for 500 scatterers in a 1×1 periodic square (torus), $\ell = .081$, $\lambda = .061$. Both $\sqrt{V\langle\delta\psi^2\rangle}$ and $\sqrt{\max\{\delta\psi^2\}/\max\{\psi^2\}}$ are plotted against the scaled perturbation, together with the Random Matrix Theory expectation.
We have considered a variety of explanations for the discrepancies between the field theory and our numerical observations. There is one way in which our potential differs drastically from the potential which the field theory assumes. We have a finite number of discrete scatterers, whereas the field theory takes the concentration to infinity while taking the scatterer strength to zero, holding the mean free path constant. Thus it seems possible that there are processes which can cause large fluctuations in $|\psi|^2$ which depend on there being finite size scatterers. In order to explore this possibility we consider the dependence of P(t) on the scatterer strength at constant wavelength and mean free path. Specifically, at two different wavelengths ($\lambda = .06$ and $\lambda = .03$) and fixed $\ell = .08$, we double the concentration and halve the cross-section of the scatterers. The results are summarized in table 9.2. While changes in concentration and scatterer strength do influence the coefficients of the distribution, they do not do so enough to explain the discrepancy with the field theory.

                          lambda = .061                  lambda = .031
                      C1              C2             C1             C2
Strong scatterers  -2.05 ± .03    1.335 ± .006   -4.1 ± .1      1.84 ± .02
Weak scatterers    -1.96 ± .09    1.27 ± .01     -4.1 ± .34     1.83 ± .066

Table 9.2: Comparison of lognormal tails of P(t) for strong and weak scatterers at fixed $\lambda$ and $\ell$.
9.4 Algorithms
We have used several different algorithms in different parts of this chapter. We have gathered spectral information about particular realizations of scatterers, gathered statistics in small systems where we only used realizations with a state in a particular energy window, gathered statistics from nearly every realization by allowing the scatterer size to change, and computed wavefunctions for particular realizations and energies. Below, we sketch the algorithms used to perform these various tasks. We will frequently use the smallest singular value of a particular realization at a particular energy, which we denote $S_N^{(i)}(E)$, where i labels the realization. When only one realization is involved, the i will be suppressed. To compute $S_N^{(i)}(E)$ we do the following:
1. Compute the renormalized t-matrix of each scatterer.

2. Compute the scatterer-scatterer Green functions for all pairs of scatterers.

3. Construct the inverse multiple scattering matrix, $T^{-1}$.

4. Find the singular value decomposition of $T^{-1}$ and assign $S_N^{(i)}(E)$ the smallest singular value.

In order to find spectra from $E_i$ to $E_f$ for a particular realization of scatterers:

1. Load the scatterer locations and sizes.

2. Choose a $\Delta E$ less than the average level spacing.

3. Set $E = E_i$.

(a) If $E < E_f$ find $S_N(E)$, $S_N(E + \Delta E)$ and $S_N(E + 2\Delta E)$; otherwise end.

(b) If the smallest singular value at $E + \Delta E$ is not smaller than for $E$ and $E + 2\Delta E$, increase E by $\Delta E$ and repeat from (a).

(c) Otherwise, apply a minimization algorithm to $S_N(E)$ in the region near $E + \Delta E$. Typically, minimization algorithms will begin from a triplet as we have calculated above.

(d) If the minimum is not near zero, increment E and repeat from (a).

(e) If the minimum coincides with an energy where the renormalized t-matrices are extremely small, it is probably a spurious zero brought on by a state of the empty background. Increment E and repeat from (a).

(f) The energy $E_o$ at which the minimum was found is an eigenenergy. Save it, increment E by $\Delta E$ and repeat from (a).

The bottleneck in this computation is the filling of the inverse multiple scattering matrix and computation of the $O(N^3)$ SVD. Performance can be improved by a clever choice of $\Delta E$, but too large a $\Delta E$ can lead to states being missed altogether. The optimal $\Delta E$ can only be chosen by experience or by understanding the width of the minima in $S_N(E)$. The origin of the width of these minima is not clear.

The simpler method for computing intensity statistics, by looking for states in an energy window of size $2\Delta E$ about an energy E, goes as follows:
1. Choose a realization of scatterer locations.

2. Use the method above to check if there is an eigenstate in the desired energy window about E. If not, choose a new realization and repeat.

3. If there is an eigenstate, use the singular vector corresponding to the small singular value to compute the eigenstate. Make a histogram of the values of $|\psi|^2$ sampled on a grid in position space with spacing approximately one half-wavelength in each direction.

4. Combine this histogram with previously collected data and repeat with a new realization.

The bottlenecks in this computation are the same as in the previous computation. In this case, a clever choice of window size can improve performance.

The more complex method for computing intensity statistics, by looking only at energy E, is a bit different:

1. Randomly choose a number of locations (approximately twice the number of scatterers).

2. Compute and store all renormalized t-matrices and the Green functions from each scatterer to each other scatterer.

3. Choose a grid on which to compute the wavefunction.

4. Compute and store the Green function from each scatterer to each location on the wavefunction grid.

5. Choose a subset of the precomputed locations, construct the inverse multiple scattering matrix, $T^{-1}$, and find the SVD of $T^{-1}$.

6. For each singular value smaller than some cutoff (we've tried both .1 and .01), compute the associated eigenstate on the grid, count the values of $|\psi|^2$ and combine with previous data. Choose a new subset and repeat.

For this computation the bottlenecks are somewhat different. The $O(N^3)$ SVD is one bottleneck, as is computation of individual wavefunctions, which is $O(SN^2)$ where S is the number of points on the wavefunction grid.
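The triplet-bracketing energy scan of the first algorithm can be sketched as follows. The smallest-singular-value curve here is a mock analytic function with zeros at known energies, standing in for the real $S_N(E)$ of the multiple scattering matrix:

```python
def golden_min(f, a, b, tol=1e-10):
    """Golden-section minimization of a unimodal f on [a, b]."""
    g = (5 ** 0.5 - 1) / 2
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

def scan_spectrum(s_n, e_i, e_f, de, zero_tol=1e-6):
    """Scan s_n(E) in steps de; wherever the middle of a triplet
    (E, E + de, E + 2 de) is a local minimum, refine it and keep it
    only if the refined minimum is near zero (an eigenenergy)."""
    spectrum, e = [], e_i
    while e + 2 * de <= e_f:
        if s_n(e + de) < s_n(e) and s_n(e + de) < s_n(e + 2 * de):
            e_o = golden_min(s_n, e, e + 2 * de)
            if s_n(e_o) < zero_tol:
                spectrum.append(e_o)
        e += de
    return spectrum

# Mock smallest-singular-value curve with zeros at E = 1.0 and 2.5.
mock = lambda e: abs((e - 1.0) * (e - 2.5))
levels = scan_spectrum(mock, 0.0, 3.0, 0.1)
```

As the text notes, the step de must be smaller than the width of the minima or states are skipped; the near-zero test plays the role of rejecting spurious minima.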
For all of these methods, near-singular matrices are either sought, frequently encountered, or both. This requires a numerical decomposition which is stable in the presence of small eigenvalues. The SVD is an ideal choice. The SVD is usually computed via transformation to a symmetric form and then a symmetric eigendecomposition. Since the matrix we are decomposing can be chosen to be symmetric, we could use the symmetric eigendecomposition directly. We imagine some marginal improvement might result from the substitution of such a decomposition for the SVD.
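Both points can be illustrated on a toy matrix: the small-singular-value vectors that serve as candidate states, and the equivalence of singular values and eigenvalue magnitudes for a symmetric matrix. The matrix below is a synthetic stand-in for $T^{-1}$, not a physical realization:

```python
import numpy as np

# Synthetic symmetric stand-in for T^{-1} with one deliberately
# small singular value (hypothetical numbers).
rng = np.random.default_rng(0)
q = np.linalg.qr(rng.standard_normal((6, 6)))[0]   # random orthogonal
lam = np.array([3.0, -2.5, 2.0, -1.5, 1.0, 1e-4])  # eigenvalues
t_inv = (q * lam) @ q.T                            # symmetric matrix

# SVD route: right singular vectors with small singular values are
# the candidate eigenstate weight vectors (step 6 above).
u, sv, vh = np.linalg.svd(t_inv)
states = [vh[i] for i in range(len(sv)) if sv[i] < 0.1]

# For a symmetric matrix the singular values equal the eigenvalue
# magnitudes, so a symmetric eigendecomposition could replace the SVD.
ev = np.linalg.eigvalsh(t_inv)
```

Each retained vector w satisfies $\|T^{-1}\mathbf w\| \approx$ its singular value, which is the "nearby eigenstate" criterion discussed in section 9.3.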
Chapter 10
Conclusions
Scattering theory can be applied to some unusual problems and in some unexpected ways. Several ideas of this sort have been developed and applied in this work. All the methods developed here are related to the fundamental idea of scattering theory, namely the separation between propagation and collision. The applications range from the disarmingly simple single scatterer in a wire to the obviously complex problem of intensity statistics in weakly disordered two dimensional systems. These ideas allow calculation of some quantities which are difficult to compute in other ways, for example the scattering strength of a scatterer in a narrow two dimensional wire, as discussed in section 5.6. They also allow simpler calculation of some quantities which have been computed in other ways, e.g., the eigenstates of one zero range interaction in a rectangle, also known as the "Šeba billiard." The methods developed here also lead to vast numerical improvements in calculations which are possible but difficult by other means, for example the calculation of intensity statistics in closed weakly disordered two dimensional systems, as demonstrated in section 9.3. The results of these intensity statistics calculations are themselves quite interesting. They appear to contradict previous theoretical predictions about the likelihood of large fluctuations in the wavefunctions of such systems. At the same time, some qualitative features of these theoretical predictions have been verified for the first time.

There are a variety of foreseeable applications of these techniques. One of the most exciting is the possible application to superconductor normal-metal junctions. For instance, a disordered normal metal region with a superconducting wall will have different
dynamics because of the Andreev reflection from the superconductor. The superconductor energy gap can be used to probe various features of the dynamics in the normal metal. Also, the field of quantum chaos has, so far, been focused on systems with an obvious chaotic classical limit. Systems with purely quantum features very likely have different and interesting behavior. A particle-hole system like the superconductor is only one such example. A different sort of application is to renormalized scattering in atomic traps. The zero range interaction is a frequently used model for atom-atom interactions in such traps. The trap itself renormalizes the scattering strengths, as does the presence of other scatterers. Some of this can be handled, at least in an average sense, with the techniques developed here. We would also like to extend some of the successes with point scatterers to other shapes. There is an obvious simplicity to the zero range interaction which will not be shared by any extended shape. However, other shapes, e.g., finite length line segments, have simple scattering properties which can be combined in much the way we have combined single scatterers in this work.
Bibliography
[1] M. Abramowitz and I.A. Stegun, editors. Handbook of Mathematical Functions. Dover, London, 1965.

[2] S. Albeverio, F. Gesztesy, R. Høegh-Krohn, and H. Holden. Solvable Models in Quantum Mechanics. Springer-Verlag, Berlin, 1988.

[3] S. Albeverio and P. Šeba. Wave chaos in quantum systems with point interaction. J. Stat. Phys., 64(1/2):369–83, 1991.

[4] Boris L. Altshuler and B.D. Simons. Universalities: From Anderson localization to quantum chaos. In E. Akkermans, G. Montambaux, J.-L. Pichard, and J. Zinn-Justin, editors, Les Houches 1994: Mesoscopic Quantum Physics (LXI), pages 1–98. Elsevier Science B.V., Amsterdam, 1994.

[5] G.E. Blonder, M. Tinkham, and T.M. Klapwijk. Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion. Phys. Rev. B, 25(7):4515–32, April 1982.

[6] M. Born. Zur Quantenmechanik der Stossvorgänge. Zeitschrift für Physik, 37:863–67, 1926. Translated into English by J.A. Wheeler and W.H. Zurek (1981) and reprinted in Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek, eds., Princeton University Press (1983).

[7] A. Cabo, J.L. Lucio, and H. Mercado. On scale invariance and anomalies in quantum mechanics. Am. J. Phys., 66(6):240–6, March 1998.

[8] S. Datta. Electronic Transport in Mesoscopic Systems. Number 3 in Cambridge Studies in Semiconductor Physics and Microelectronic Engineering. Cambridge University Press, Cambridge, 1995.

[9] S. Doniach and E.H. Sondheimer. Green's Functions for Solid State Physicists. Number 44 in Frontiers in Physics Lecture Note Series. The Benjamin/Cummings Publishing Company, Inc., 1982.

[10] E.N. Economou. Green's Functions in Quantum Physics. Number 7 in Solid-State Sciences. Springer-Verlag, New York, 2nd edition, 1992.

[11] J.D. Edwards, A.S. Lupu-Sax, and E.J. Heller. Imaging the single electron wavefunction in mesoscopic structures. To be published, 1998.

[12] F. Evers, D. Belitz, and Wansoo Park. Density expansion for transport coefficients: Long-wavelength versus Fermi surface nonanalyticities. Phys. Rev. Lett., 78(14):2768–2771, April 1997.

[13] P. Exner and P. Šeba. Point interactions in two and three dimensions as models of small scatterers. Phys. Lett. A, 222:1–4, October 1996.

[14] F. Haake. Quantum Signatures of Chaos, volume 54 of Springer Series in Synergetics. Springer-Verlag, 1992.

[15] V.I. Fal'ko and K.B. Efetov. Statistics of prelocalized states in disordered conductors. Phys. Rev. B, 52(24):17413–29, December 1995.

[16] Daniel S. Fisher and Patrick A. Lee. Relation between conductivity and transmission matrix. Phys. Rev. B, 23(12):6851–4, June 1981.

[17] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition, 1996.

[18] I.S. Gradshteyn and I.M. Ryzhik. Table of Integrals, Series, and Products. Academic Press, San Diego, 5th edition, 1994.

[19] C. Grosche. Path integration via summation of perturbation expansions and applications to totally reflecting boundaries, and potential steps. Phys. Rev. Lett., 71(1), 1993.

[20] E.J. Heller. Bound-state eigenfunctions of classically chaotic Hamiltonian systems: scars of periodic orbits. Phys. Rev. Lett., 55:1515–8, 1984.

[21] E.J. Heller, M.F. Crommie, C.P. Lutz, and D.M. Eigler. Scattering and absorption of surface electron waves in quantum corrals. Nature, 369:464, 1994.

[22] K. Huang. Statistical Mechanics. Wiley, New York, 1987.

[23] J.A. Katine, M.A. Eriksson, A.S. Adourian, R.M. Westervelt, J.D. Edwards, A.S. Lupu-Sax, E.J. Heller, K.L. Campman, and A.C. Gossard. Point contact conductance of an open resonator. Phys. Rev. Lett., 79(24):4806–9, December 1997.

[24] A. Kudrolli, V. Kidambi, and S. Sridhar. Experimental studies of chaos and localization in quantum wave functions. Phys. Rev. Lett., 75(5):822–5, July 1995.

[25] L.D. Landau and E.M. Lifshitz. Mechanics. Pergamon Press, Elmsford, NY, 3rd edition, 1976.

[26] L.D. Landau and E.M. Lifshitz. Quantum Mechanics. Pergamon Press, Elmsford, NY, 3rd edition, 1977.

[27] C. Livermore. Coulomb Blockade Spectroscopy of Tunnel-Coupled Quantum Dots. PhD thesis, Harvard University, May 1998.

[28] A. Messiah. Quantum Mechanics, volume II. John Wiley & Sons, 1963.

[29] A.D. Mirlin. Spatial structure of anomalously localized states in disordered conductors. J. Math. Phys., 38(4):1888–917, April 1997.

[30] K. Muller, B. Mehlig, F. Milde, and M. Schreiber. Statistics of wave functions in disordered and in classically chaotic systems. Phys. Rev. Lett., 78(2):215–8, January 1997.

[31] M. Olshanii. Atomic scattering in the presence of an external confinement and a gas of impenetrable bosons. Phys. Rev. Lett., 81(5):938–41, August 1998.

[32] R.K. Pathria. Statistical Mechanics, volume 45 of International Series in Natural Philosophy. Pergamon Press, Tarrytown, NY, 1972.

[33] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, 2nd edition, 1992.

[34] G. Rickayzen. Green's Functions and Condensed Matter. Number 5 in Techniques of Physics. Academic Press, London, 1980.

[35] L.S. Rodberg and R.M. Thaler. Introduction to the Quantum Theory of Scattering. Academic Press, New York, 1st edition, 1970.

[36] J.J. Sakurai. Modern Quantum Mechanics. Addison-Wesley, 1st edition, 1985.

[37] P. Sheng. Introduction to Wave Scattering, Localization, and Mesoscopic Phenomena. Academic Press, San Diego, 1995.

[38] T. Shigehara. Conditions for the appearance of wave chaos in quantum singular systems with a pointlike scatterer. Phys. Rev. E, 50(6):4357–70, December 1994.

[39] I.E. Smolyarenko and B.L. Altshuler. Statistics of rare events in disordered conductors. Phys. Rev. B, 55(16):10451–10466, April 1997.

[40] A. Douglas Stone and Aaron Szafer. What is measured when you measure a resistance? The Landauer formula revisited. IBM Journal of Research and Development, 32(3):384–412, May 1988.

[41] D. Zwillinger. Handbook of Integration. Jones & Bartlett, Boston, 1992.
Appendix A
Green Functions
A.1 Definitions
Green functions are solutions to a particular class of inhomogeneous differential equations of the form

$$\left[z - L(\mathbf r)\right] G(\mathbf r, \mathbf r'; z) = \delta(\mathbf r - \mathbf r'). \qquad (A.1)$$

G is determined by (A.1) and boundary conditions for $\mathbf r$ and $\mathbf r'$ lying on the surface S of the domain. Here z is a complex variable while $L(\mathbf r)$ is a differential operator which is time-independent, linear and Hermitian. $L(\mathbf r)$ has a complete set of eigenfunctions $\{\psi_n(\mathbf r)\}$ which satisfy

$$L(\mathbf r)\,\psi_n(\mathbf r) = \lambda_n\,\psi_n(\mathbf r). \qquad (A.2)$$

Each of the $\psi_n(\mathbf r)$ satisfy the same boundary conditions as $G(\mathbf r, \mathbf r'; z)$. The functions $\{\psi_n(\mathbf r)\}$ are orthonormal,

$$\int \psi_n^*(\mathbf r)\,\psi_m(\mathbf r)\, d\mathbf r = \delta_{nm} \qquad (A.3)$$

and complete,

$$\sum_n \psi_n(\mathbf r)\,\psi_n^*(\mathbf r') = \delta(\mathbf r - \mathbf r'). \qquad (A.4)$$
In Dirac notation we can write
$$(z - \hat L)\,\hat G(z) = 1 \qquad (A.5)$$

$$\hat L\,|\psi_n\rangle = \lambda_n\,|\psi_n\rangle \qquad (A.6)$$

$$\langle \psi_n | \psi_m \rangle = \delta_{nm} \qquad (A.7)$$

$$\sum_n |\psi_n\rangle\langle\psi_n| = 1. \qquad (A.8)$$
In all of the above, sums over n may be integrals in continuous parts of the spectrum. For $z \neq \lambda_n$ we can formally solve equation (A.5) to get

$$\hat G(z) = \frac{1}{z - \hat L}. \qquad (A.9)$$

Multiplying by (A.8) we get

$$\hat G(z) = \frac{1}{z - \hat L} \sum_n |\psi_n\rangle\langle\psi_n| = \sum_n \frac{|\psi_n\rangle\langle\psi_n|}{z - \lambda_n}. \qquad (A.10)$$

We recover the r-representation by multiplying on the left by $\langle\mathbf r|$ and on the right by $|\mathbf r'\rangle$:

$$G(\mathbf r, \mathbf r'; z) = \sum_n \frac{\psi_n(\mathbf r)\,\psi_n^*(\mathbf r')}{z - \lambda_n}. \qquad (A.11)$$

In order to find (A.11) we had to assume that $z \neq \lambda_n$. When $z = \lambda_n$ we can write a limiting form for $\hat G(z)$:

$$\hat G^\pm(z) = \lim_{\epsilon\to 0} \frac{1}{z - \hat L \pm i\epsilon} \qquad (A.12)$$

and its corresponding form in position space,

$$G^\pm(\mathbf r, \mathbf r'; z) = \lim_{\epsilon\to 0} \sum_n \frac{\psi_n(\mathbf r)\,\psi_n^*(\mathbf r')}{z - \lambda_n \pm i\epsilon} \qquad (A.13)$$

where $\hat G^+(z)$ is called the "retarded" or "causal" Green function and $\hat G^-(z)$ is called the "advanced" Green function. These names are reflections of properties of the corresponding time-domain Green functions. Consider the Fourier transform of (A.13) with respect to z:

$$G^\pm(\mathbf r, \mathbf r'; \tau = t - t') = \lim_{\epsilon\to 0} \frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_n \frac{\psi_n(\mathbf r)\,\psi_n^*(\mathbf r')}{E - \lambda_n \pm i\epsilon}\, e^{-iE\tau/\hbar}\, dE. \qquad (A.14)$$

We switch the order of the energy sums (integrals) and have

$$G^\pm(\mathbf r, \mathbf r'; \tau) = \frac{1}{2\pi} \lim_{\epsilon\to 0} \sum_n \psi_n(\mathbf r)\,\psi_n^*(\mathbf r') \int_{-\infty}^{\infty} \frac{e^{-iE\tau/\hbar}}{E - \lambda_n \pm i\epsilon}\, dE. \qquad (A.15)$$

We can perform the inner integral with contour integration by closing the contour in either the upper or lower half plane. We are forced to choose the upper or lower half plane by the sign of $\tau$. For $\tau > 0$ we must close the contour in the lower half-plane so that the exponential forces the integrand to zero on the part of the contour not in the original integral. However, if the contour is closed in the lower half plane, only poles in the lower half plane will be picked up by the integral. Thus $G^+(\mathbf r, \mathbf r'; \tau)$ is zero for $\tau < 0$ and therefore corresponds only to propagation forward in time. $G^-(\mathbf r, \mathbf r'; \tau)$, on the other hand, is zero for $\tau > 0$ and corresponds to propagation backwards in time. $\hat G^-$ is frequently useful in formal calculations.
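The spectral representation (A.11) is easy to verify numerically by letting a finite Hermitian matrix stand in for $\hat L$; a minimal sketch:

```python
import numpy as np

# Let a finite Hermitian matrix stand in for L-hat and compare the
# spectral sum (A.11) with the direct inverse (A.9).
rng = np.random.default_rng(1)
c = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
L = 0.5 * (c + c.conj().T)               # Hermitian "operator"

lam, psi = np.linalg.eigh(L)             # eigenvalues and eigenvectors
z = 0.7 + 0.3j                           # off the (real) spectrum

g_direct = np.linalg.inv(z * np.eye(5) - L)
g_spectral = sum(np.outer(psi[:, n], psi[:, n].conj()) / (z - lam[n])
                 for n in range(5))
```

Because the spectrum of a Hermitian L is real, any z with nonzero imaginary part keeps both expressions well defined.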
A.2 Scaling L
We will find it useful to relate the Green function of the operator $\hat L$ to the Green function of $\alpha\hat L$, where $\alpha$ is a complex constant. Suppose

$$\left[z - \alpha\hat L\right] \hat G_\alpha(z) = 1. \qquad (A.16)$$

We note that $\hat L$ and $\alpha\hat L$ have the same eigenfunctions but different eigenvalues, i.e.,

$$\alpha\hat L\,|\psi_n\rangle = \alpha\lambda_n\,|\psi_n\rangle \qquad (A.17)$$

so

$$\hat G_\alpha(z) = \sum_n \frac{|\psi_n\rangle\langle\psi_n|}{z - \alpha\lambda_n} = \frac{1}{\alpha} \sum_n \frac{|\psi_n\rangle\langle\psi_n|}{z/\alpha - \lambda_n}. \qquad (A.18)$$

So we have

$$\hat G_\alpha(z) = \frac{1}{\alpha}\, \hat G_1\!\left(\frac{z}{\alpha}\right). \qquad (A.19)$$
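The scaling relation (A.19) can likewise be checked with a matrix stand-in for $\hat L$:

```python
import numpy as np

# Matrix check of (A.19): G_alpha(z) = (1/alpha) G_1(z/alpha).
rng = np.random.default_rng(2)
d = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
L = 0.5 * (d + d.conj().T)               # Hermitian "operator"
alpha = 2.0 - 0.5j                       # complex scaling constant
z = 1.1 + 0.2j

g_alpha = np.linalg.inv(z * np.eye(4) - alpha * L)
g_scaled = np.linalg.inv((z / alpha) * np.eye(4) - L) / alpha
```

The identity follows from factoring $z - \alpha L = \alpha\,(z/\alpha - L)$, which the matrix computation confirms.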
A.3 Integration of Energy Green functions
Claim:

$$I = \int G(\mathbf r_1, \mathbf r; z)\, G(\mathbf r, \mathbf r_2; z)\, d\mathbf r = -\frac{d}{dz} G(\mathbf r_1, \mathbf r_2; z). \qquad (A.20)$$

Proof:

$$\int G(\mathbf r_1, \mathbf r; z)\, G(\mathbf r, \mathbf r_2; z)\, d\mathbf r = \int \langle\mathbf r_1|\hat G(z)|\mathbf r\rangle \langle\mathbf r|\hat G(z)|\mathbf r_2\rangle\, d\mathbf r \qquad (A.21)$$

or

$$I = \langle\mathbf r_1|\, \hat G(z) \left[\int |\mathbf r\rangle\langle\mathbf r|\, d\mathbf r\right] \hat G(z)\, |\mathbf r_2\rangle \qquad (A.22)$$

but

$$\int |\mathbf r\rangle\langle\mathbf r|\, d\mathbf r = 1 \qquad (A.23)$$

so

$$I = \langle\mathbf r_1|\, \hat G(z)\,\hat G(z)\, |\mathbf r_2\rangle. \qquad (A.24)$$

We expand the Green functions:

$$\hat G(z) = \sum_n \frac{|n\rangle\langle n|}{z - E_n} \qquad (A.25)$$
so

$$I=\langle r_{1}|\sum_{n}\frac{|n\rangle\langle n|}{z-E_{n}}\sum_{m}\frac{|m\rangle\langle m|}{z-E_{m}}|r_{2}\rangle. \qquad (A.26)$$

But, since $\langle n|m\rangle=\delta_{nm}$,

$$I=\langle r_{1}|\sum_{n}\frac{|n\rangle\langle n|}{(z-E_{n})^{2}}|r_{2}\rangle, \qquad (A.27)$$

which is very like the Green function except that the denominator is squared. We can get around this by taking a derivative:

$$I=\langle r_{1}|\sum_{n}\left(-\frac{d}{dz}\right)\frac{|n\rangle\langle n|}{z-E_{n}}|r_{2}\rangle. \qquad (A.28)$$

We move the derivative outside the sum and matrix element to get

$$I=-\frac{d}{dz}\langle r_{1}|\hat G(z)|r_{2}\rangle=-\frac{d}{dz}\,G(r_{1},r_{2};z), \qquad (A.29)$$

as desired.
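The identity has a finite-dimensional analog that is easy to check numerically. A minimal sketch, using a random Hermitian stand-in for the Hamiltonian and a central-difference derivative:

```python
import numpy as np

# Sketch: finite-dimensional analog of (A.20). For G(z) = (z - H)^(-1),
# G(z) @ G(z) equals -dG/dz; the derivative is checked by central
# differences. H is a random Hermitian stand-in.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2
I = np.eye(4)

def G(z):
    return np.linalg.inv(z * I - H)

z, h = 0.5 + 1.0j, 1e-6
lhs = G(z) @ G(z)                        # matrix analog of the r integral
rhs = -(G(z + h) - G(z - h)) / (2 * h)   # -dG/dz by central difference
assert np.allclose(lhs, rhs, atol=1e-5)
```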
A.4 Green functions of separable systems
Suppose we have a system which is separable; that is, the eigenstates are products of functions in each of the domains into which the system can be separated. For example, a system in which an electron can move freely in the $x$-direction but is confined to the region $|y|<h/2$ has eigenfunctions which are products of plane waves in $x$ and quantized modes in $y$. The Green function of such a system is written

$$\hat G(z)=\sum_{i}\sum_{j}\frac{|\phi_{i}\,\chi_{j}\rangle\langle\phi_{i}\,\chi_{j}|}{z-\varepsilon_{j}-\varepsilon_{i}}, \qquad (A.30)$$

where the $|\phi_{i}\rangle$ (energies $\varepsilon_{i}$) and $|\chi_{j}\rangle$ (energies $\varepsilon_{j}$) are the eigenstates in the two domains. We can move either of the sums inside to get

$$\hat G(z)=\sum_{j}|\chi_{j}\rangle\langle\chi_{j}|\sum_{i}\frac{|\phi_{i}\rangle\langle\phi_{i}|}{z-\varepsilon_{j}-\varepsilon_{i}}, \qquad (A.31)$$

but now the inner sum is just another Green function (albeit of lower dimension), so we have

$$\hat G(z)=\sum_{j}|\chi_{j}\rangle\langle\chi_{j}|\,\hat G^{(\phi)}(z-\varepsilon_{j})=\sum_{i}|\phi_{i}\rangle\langle\phi_{i}|\,\hat G^{(\chi)}(z-\varepsilon_{i}). \qquad (A.32)$$
Consider the example mentioned at the beginning of this section. We'll label the eigenstates in the $x$-direction by their wavenumber $k$ and label the transverse modes by a channel index $n$. So we have

$$\hat G(E)=\int_{-\infty}^{\infty}dk\;|k\rangle\langle k|\,\hat G^{(y)}(E-\varepsilon(k))=\sum_{n}|n\rangle\langle n|\,\hat g_{o}^{(1)}(E-\varepsilon_{n}), \qquad (A.33)$$

where $\hat g_{o}^{(1)}(z)$ is the one-dimensional free Green function. In the position representation this latter equality is rewritten as

$$G(x,y,x',y';E)=\sum_{n}\chi_{n}(y)\chi_{n}^{*}(y')\,g_{o}^{(1)}(x,x';E-\varepsilon_{n}). \qquad (A.34)$$
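The channel decomposition can be checked on a finite grid, where a separable Hamiltonian is a Kronecker sum. A minimal sketch with random Hermitian stand-ins for the two one-dimensional pieces:

```python
import numpy as np

# Sketch: finite-dimensional analog of (A.32). For a Kronecker sum
# H = Hx (x) I + I (x) Hy, the full Green function is a sum over
# transverse eigenmodes of lower-dimensional Green functions.
# Hx, Hy are random Hermitian stand-ins.
rng = np.random.default_rng(2)
def herm(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

Hx, Hy = herm(4), herm(3)
H = np.kron(Hx, np.eye(3)) + np.kron(np.eye(4), Hy)
z = 0.3 + 0.5j

G_full = np.linalg.inv(z * np.eye(12) - H)

eps, chi = np.linalg.eigh(Hy)       # transverse modes chi_j, energies eps_j
G_sum = np.zeros((12, 12), dtype=complex)
for j in range(3):
    Gx = np.linalg.inv((z - eps[j]) * np.eye(4) - Hx)  # G^(x)(z - eps_j)
    P = np.outer(chi[:, j], chi[:, j])                 # |chi_j><chi_j|
    G_sum += np.kron(Gx, P)
assert np.allclose(G_full, G_sum)
```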
A.5 Examples
Below we examine two examples of the explicit computation of Green functions. We begin with the mundane but useful free-space Green function in two dimensions. Then we consider the more esoteric Gor'kov Green function, the Green function of a single electron in a superconductor.
A.5.1 The Free-Space Green Function in Two Dimensions
In two dimensions, we can derive the free-space Green function of the operator $L=-\nabla^{2}$ (corresponding to 2D free quantum mechanics with $\hbar^{2}/2m=1$) using symmetry and limiting properties. See, e.g., [10]. The Green function $G_{o}$ satisfies

$$\left(z+\nabla^{2}\right)G_{o}(r,r';z)=\delta(r-r'). \qquad (A.35)$$

By translational symmetry of $L$, $G_{o}(r,r';z)$ is a function only of $\rho=|r-r'|$. For $\rho\neq0$, $G(\rho;z)$ satisfies the homogeneous differential equation

$$\left(z+\nabla^{2}\right)G(\rho;z)=0. \qquad (A.36)$$

Recall that

$$\delta(r-r')=\frac{1}{2\pi\rho}\,\delta(\rho), \qquad (A.37)$$

so we may rewrite (A.35), integrated over a disk of radius $\varepsilon$ about the origin, as

$$\int_{0}^{\varepsilon}z\,G_{o}(\rho;z)\,2\pi\rho\,d\rho+\int_{0}^{\varepsilon}\nabla^{2}G_{o}(\rho;z)\,2\pi\rho\,d\rho=1. \qquad (A.38)$$
In radial polar coordinates $(\rho,\theta)$,

$$\nabla^{2}=\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial}{\partial\rho}\right)+\frac{1}{\rho^{2}}\frac{\partial^{2}}{\partial\theta^{2}}, \qquad (A.39)$$

so, for a function of $\rho$ alone,

$$\nabla^{2}f(\rho)=\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial f}{\partial\rho}\right). \qquad (A.40)$$

Thus

$$\int_{0}^{\varepsilon}\nabla^{2}G_{o}(\rho;z)\,2\pi\rho\,d\rho=\int_{0}^{\varepsilon}2\pi\frac{\partial}{\partial\rho}\left(\rho\frac{\partial G_{o}}{\partial\rho}\right)d\rho=2\pi\varepsilon\left.\frac{\partial G_{o}}{\partial\rho}\right|_{\rho=\varepsilon}, \qquad (A.41)$$

where the last equality follows from Gauss' theorem (in this case, the fundamental theorem of calculus). So we have

$$\int_{0}^{\varepsilon}z\,G_{o}(\rho;z)\,2\pi\rho\,d\rho+2\pi\varepsilon\left.\frac{\partial G_{o}}{\partial\rho}\right|_{\rho=\varepsilon}=1, \qquad (A.42)$$

which, as $\varepsilon\to0$, gives

$$2\pi\varepsilon\,\frac{\partial G_{o}(\varepsilon;z)}{\partial\varepsilon}=1\;\Rightarrow\;G_{o}(\rho;z)\to\frac{1}{2\pi}\ln(\rho)+\text{const.} \qquad (A.43)$$

Also,

$$\lim_{\rho\to\infty}G_{o}(\rho;z)=0. \qquad (A.44)$$
General solutions of (A.35) are linear combinations of Hankel functions of the first and second kind, of the form (see, e.g., [1])

$$\left[A_{n}H_{n}^{(1)}(\sqrt{z}\,\rho)+B_{n}H_{n}^{(2)}(\sqrt{z}\,\rho)\right]e^{in\theta}. \qquad (A.45)$$

Since $H_{0}^{(2)}(\sqrt{z}\,\rho)$ blows up as $\rho\to\infty$, we must have $B_{n}=0$. Since we are looking for a $\theta$-independent solution, we must have $A_{n}=0$ for $n\neq0$. The boundary condition (A.43) fixes $A_{0}=-i/4$. So

$$G_{o}(r,r';z)=-\frac{i}{4}\,H_{o}^{(1)}(\sqrt{z}\,|r-r'|), \qquad (A.46)$$

where $H_{o}^{(1)}$ is the Hankel function of zero order of the first kind:

$$H_{o}^{(1)}(x)=J_{o}(x)+iY_{o}(x), \qquad (A.47)$$

where $J_{o}(x)$ is the zero order Bessel function and $Y_{o}(x)$ is the Neumann function of zero order.

It will be useful to identify some properties of $Y_{o}(x)$ for use elsewhere. We will often be interested in $Y_{o}(x)$ for small $x$. As $x\to0$,

$$Y_{o}(x)=Y_{o}^{(R)}(x)+\frac{2}{\pi}J_{o}(x)\ln(x). \qquad (A.48)$$
$Y_{o}^{(R)}(x)$ is called the "regular" part of $Y_{o}(x)$. We note that $Y_{o}^{(R)}(0)\neq0$. Ordinarily, the specific value of this constant is irrelevant since it is overwhelmed by the logarithm. However, we will have occasion to subtract the singular part of $G_{o}$, and then the constant $Y_{o}^{(R)}(0)$ will be important.
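The closed form (A.46) is easy to test directly: away from the origin it must solve the radial Helmholtz equation. A minimal sketch with finite differences (the values of $z$ and $\rho$ are arbitrary):

```python
import numpy as np
from scipy.special import hankel1

# Sketch: check that G_o(rho; z) = -(i/4) H_0^(1)(sqrt(z) rho) satisfies
# the radial Helmholtz equation z G + G'' + G'/rho = 0 away from rho = 0.
# z and rho are arbitrary test values.
z = 2.0
k = np.sqrt(z)
G = lambda rho: -0.25j * hankel1(0, k * rho)

rho, h = 1.3, 1e-5
d1 = (G(rho + h) - G(rho - h)) / (2 * h)
d2 = (G(rho + h) - 2 * G(rho) + G(rho - h)) / h**2
residual = z * G(rho) + d2 + d1 / rho
assert abs(residual) < 1e-4
```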
A.6 The Gor'kov (bulk superconductor) Green Function
In this appendix we will derive the Green function for particles and holes in a superconductor. We plan to use this Green function to simulate Andreev scattering systems. We will find an explicit solution in the case of a uniform bulk superconductor.

Andreev scattering takes place at normal-metal superconductor (NS) boundaries. When an electron in the normal metal hits the superconductor boundary it can scatter as a hole with the time-reversed momentum of the electron. That is, the electron disappears at the boundary and a hole appears which goes back along the electron's path at the same speed as the electron came in. This type of scattering leads to a class of trajectories which interfere strongly (even in chaotic/disordered systems) and produce weak-localization effects.

The simplest way to deal with the coexistence of particles and holes is to solve a set of coupled Schrödinger-like equations known as the Bogoliubov equations (see, e.g., [5]):
$$i\hbar\frac{\partial}{\partial t}|f\rangle=\left[\hat H_{o}-\hat\mu\right]|f\rangle+\hat\Delta\,|g\rangle, \qquad (A.49)$$

$$i\hbar\frac{\partial}{\partial t}|g\rangle=-\left[\hat H_{o}-\hat\mu\right]|g\rangle+\hat\Delta^{\dagger}|f\rangle, \qquad (A.50)$$

where $\hat H_{o}$ is the single particle Hamiltonian, $\hat\mu=\int|r\rangle\,\mu(r)\,\langle r|\,dr$ is the (possibly position dependent) chemical potential and $\hat\Delta=\int|r\rangle\,\Delta(r)\,\langle r|\,dr$ is the (possibly position dependent) superconductor energy gap. In the $\hat\Delta=0$ case, we have the Schrödinger equation for $|f\rangle$ (the electron state) and the time-reversed Schrödinger equation for $|g\rangle$ (the hole state). If we form the spinor

$$|\Psi\rangle=\begin{pmatrix}|f\rangle\\|g\rangle\end{pmatrix}, \qquad (A.52)$$
we can write the Bogoliubov equations as

$$i\hbar\frac{\partial}{\partial t}|\Psi\rangle=\begin{pmatrix}\hat H_{o}-\hat\mu & \hat\Delta\\ \hat\Delta^{\dagger} & -\left(\hat H_{o}-\hat\mu\right)\end{pmatrix}|\Psi\rangle=\mathcal H\,|\Psi\rangle. \qquad (A.53)$$

In order to compute the Green function operator $\hat G(z)$ for this system we form

$$\hat G^{-1}(z)=z-\mathcal H=\begin{pmatrix}z-\hat H_{o}+\hat\mu & -\hat\Delta\\ -\hat\Delta^{\dagger} & z+\hat H_{o}-\hat\mu\end{pmatrix}, \qquad (A.54)$$
which, at first, looks difficult to invert. However, there are some nice techniques we can apply to a matrix with this form. To understand this, we need a brief review of $2\times2$ quantum mechanics. Recall the Pauli spin matrices are defined

$$\sigma_{1}=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad \sigma_{2}=\begin{pmatrix}0&-i\\i&0\end{pmatrix},\qquad \sigma_{3}=\begin{pmatrix}1&0\\0&-1\end{pmatrix},$$

and that the set $\{I,\sigma_{1},\sigma_{2},\sigma_{3}\}$ is a basis for the vector space of complex $2\times2$ matrices. The Pauli matrices satisfy

$$\left[\sigma_{i},\sigma_{j}\right]=\sigma_{i}\sigma_{j}-\sigma_{j}\sigma_{i}=2i\,\epsilon_{ijk}\,\sigma_{k}, \qquad (A.55)$$

$$\left\{\sigma_{i},\sigma_{j}\right\}=\sigma_{i}\sigma_{j}+\sigma_{j}\sigma_{i}=2\,\delta_{ij}\,I, \qquad (A.56)$$
where $\epsilon_{ijk}$ is the antisymmetric three-symbol ($\epsilon_{ijk}$ is $1$ if $ijk$ is a cyclic permutation of $123$, $-1$ if $ijk$ is a noncyclic permutation of $123$, and $0$ otherwise) and $\delta_{ij}$ is the Kronecker delta.

To simplify the later manipulations we rewrite $\hat G^{-1}$:

$$\hat G^{-1}(z)=\hat g^{-1}(z)\,\sigma_{3}\;\Rightarrow\;\hat G(z)=\sigma_{3}\,\hat g(z), \qquad (A.57)$$

where

$$\hat g^{-1}(z)=\begin{pmatrix}z-\hat H_{o}+\hat\mu & \hat\Delta\\ -\hat\Delta^{\dagger} & -z-\hat H_{o}+\hat\mu\end{pmatrix}. \qquad (A.58)$$
We now write

$$\hat g^{-1}(z)=\hat a\,I+i\,\hat{\mathbf b}\cdot\boldsymbol\sigma, \qquad (A.59)$$

where

$$\hat a=-\left(\hat H_{o}-\hat\mu\right),\qquad \hat{\mathbf b}=\left(\mathrm{Im}\,\hat\Delta,\ \mathrm{Re}\,\hat\Delta,\ -iz\right)=\left(\hat b_{1},\hat b_{2},\hat b_{3}\right).$$

Let's make the additional assumptions that

$$\left[\hat a,\hat b_{i}\right]=0\ \ \forall i,\qquad \left[\hat b_{i},\hat b_{j}\right]=0\ \ \forall i,j.$$

For our problem, as long as $\hat\Delta=\Delta\,I$ with $\Delta$ constant, we satisfy these assumptions; that is, we are in a uniform superconductor. So

$$\hat g^{-1}(z)=\begin{pmatrix}\hat a+i\hat b_{3} & i(\hat b_{1}-i\hat b_{2})\\ i(\hat b_{1}+i\hat b_{2}) & \hat a-i\hat b_{3}\end{pmatrix}. \qquad (A.60)$$

Since all the operators commute, we can invert this like a $2\times2$ matrix of scalars:

$$\hat g(z)=\frac{1}{\hat a^{2}+\hat b_{1}^{2}+\hat b_{2}^{2}+\hat b_{3}^{2}}\begin{pmatrix}\hat a-i\hat b_{3} & -i(\hat b_{1}-i\hat b_{2})\\ -i(\hat b_{1}+i\hat b_{2}) & \hat a+i\hat b_{3}\end{pmatrix}=\frac{\hat a\,I-i\,\hat{\mathbf b}\cdot\boldsymbol\sigma}{\hat a^{2}+\hat b^{2}}, \qquad (A.61)$$

where $\hat b^{2}\equiv\hat b_{1}^{2}+\hat b_{2}^{2}+\hat b_{3}^{2}$. We clarify this by explicit multiplication:

$$\hat g(z)\,\hat g^{-1}(z)=\frac{\left(\hat a\,I-i\,\hat{\mathbf b}\cdot\boldsymbol\sigma\right)\left(\hat a\,I+i\,\hat{\mathbf b}\cdot\boldsymbol\sigma\right)}{\hat a^{2}+\hat b^{2}}=\frac{\hat a^{2}I+\left(\hat{\mathbf b}\cdot\boldsymbol\sigma\right)^{2}}{\hat a^{2}+\hat b^{2}}. \qquad (A.62)$$

Now we use the relation (A.56) to simplify $(\hat{\mathbf b}\cdot\boldsymbol\sigma)^{2}$:

$$\left(\hat{\mathbf b}\cdot\boldsymbol\sigma\right)^{2}=\sum_{i,j=1}^{3}\hat b_{i}\hat b_{j}\,\sigma_{i}\sigma_{j}=\sum_{i=1}^{3}\hat b_{i}^{2}\,I+\sum_{i<j}\hat b_{i}\hat b_{j}\left\{\sigma_{i},\sigma_{j}\right\}=\hat b^{2}\,I, \qquad (A.63)$$

since $\{\sigma_{i},\sigma_{j}\}=0$ for $i\neq j$. So we have

$$\hat g(z)\,\hat g^{-1}(z)=\frac{\hat a^{2}I+\hat b^{2}I}{\hat a^{2}+\hat b^{2}}=I. \qquad (A.64)$$
At this point we have an expression for $\hat g(z)$, but it is not obvious how to evaluate $\left(\hat a^{2}+\hat b^{2}\right)^{-1}$. We use another trick, and factor $\hat g(z)$ as follows (defining $\hat b=|\hat{\mathbf b}|$):

$$\hat g(z)=\frac{\hat a\,I-i\,\hat{\mathbf b}\cdot\boldsymbol\sigma}{\hat a^{2}+\hat b^{2}}=\frac{1}{2}\sum_{s=\pm1}\frac{I+s\,\hat{\mathbf b}\cdot\boldsymbol\sigma/\hat b}{\hat a+is\hat b}. \qquad (A.65)$$

Why does this help? We've replaced the problem of inverting $\hat a^{2}+\hat b^{2}$ with the problem of inverting $\hat a\pm i\hat b$. We recall that $\hat a=-(\hat H_{o}-\hat\mu)$ and $\hat b=\sqrt{\hat b_{1}^{2}+\hat b_{2}^{2}+\hat b_{3}^{2}}=\sqrt{|\Delta|^{2}-z^{2}}$, so

$$\hat a\pm i\hat b=-\left[\hat H_{o}-\hat\mu\mp i\sqrt{|\Delta|^{2}-z^{2}}\right], \qquad (A.66)$$

which means

$$\left(\hat a\pm i\hat b\right)^{-1}=\hat G(\pm\Omega),\qquad \Omega\equiv i\sqrt{|\Delta|^{2}-z^{2}}=\sqrt{z^{2}-|\Delta|^{2}}, \qquad (A.67)$$

where

$$\hat G(\zeta)=\frac{1}{\zeta-\hat H_{o}+\hat\mu}. \qquad (A.68)$$

We note that if $\hat\mu=E_{\rm fermi}={\rm const.}$, then $\hat G(\zeta)=\hat G_{o}(E_{\rm fermi}+\zeta)$, where $\hat G_{o}$ is the free Green function. We define

$$f_{\pm}(z)=\frac{1}{2}\left[1\pm\frac{z}{\sqrt{z^{2}-|\Delta|^{2}}}\right]=\frac{1}{2}\left[1\pm\frac{z}{\Omega}\right]$$

and then write out the matrix elements of (A.65):

$$\hat g(z)=\begin{pmatrix}\hat G(\Omega)f_{+}(z)+\hat G(-\Omega)f_{-}(z) & \dfrac{\hat\Delta}{2\Omega}\left[\hat G(\Omega)-\hat G(-\Omega)\right]\\[2ex] -\dfrac{\hat\Delta^{\dagger}}{2\Omega}\left[\hat G(\Omega)-\hat G(-\Omega)\right] & \hat G(-\Omega)f_{+}(z)+\hat G(\Omega)f_{-}(z)\end{pmatrix}. \qquad (A.69)$$

So, finally, we have a simple closed form expression for $\hat G(z)$:

$$\hat G(z)=\sigma_{3}\,\hat g(z)=\begin{pmatrix}\hat G(\Omega)f_{+}(z)+\hat G(-\Omega)f_{-}(z) & \dfrac{\hat\Delta}{2\Omega}\left[\hat G(\Omega)-\hat G(-\Omega)\right]\\[2ex] \dfrac{\hat\Delta^{\dagger}}{2\Omega}\left[\hat G(\Omega)-\hat G(-\Omega)\right] & -\hat G(-\Omega)f_{+}(z)-\hat G(\Omega)f_{-}(z)\end{pmatrix}. \qquad (A.70)$$

Various Limits

$\Delta=0$:
When $\Delta=0$ we have

$$f_{+}(z;\Delta=0)=1,\qquad f_{-}(z;\Delta=0)=0,$$

and thus

$$\hat G(z)=\begin{pmatrix}\hat G(\Omega) & 0\\ 0 & -\hat G(-\Omega)\end{pmatrix}, \qquad (A.71)$$

as we would expect for a normal metal.
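The key algebraic step, the $2\times2$ inversion (A.61), can be checked directly when the entries commute. A minimal sketch with scalar $a$ and $b_{i}$ (the values are arbitrary):

```python
import numpy as np

# Sketch: the 2x2 inversion identity used in (A.61), checked with scalar
# (hence trivially commuting) a and b_i; values are arbitrary.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

a = 0.8
b = np.array([0.3, -0.5, 0.2])
b_dot_sigma = b[0] * s1 + b[1] * s2 + b[2] * s3
I = np.eye(2)

M = a * I + 1j * b_dot_sigma
Minv = (a * I - 1j * b_dot_sigma) / (a**2 + b @ b)
assert np.allclose(M @ Minv, I)
```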
Appendix B
Generalization of the Boundary Wall Method
Handling the general boundary condition

$$\alpha(s)\,\psi(r(s))+\left[1-\alpha(s)\right]\partial_{n(s)}\psi(r(s))=0,\qquad r(s)\in C, \qquad (B.1)$$

requires a more complex potential,

$$V(r)=\int_{C}ds\,\gamma(s)\,\delta(r-r(s))\left\{\alpha(s)-\left[1-\alpha(s)\right]\partial_{n(s)}\right\}, \qquad (B.2)$$
and thus is somewhat more difficult than the case of Dirichlet boundary conditions considered in section 4.2. First, we assume $n(s)$ is a unit vector normal to $C$ at each point $s$, and

$$\partial_{n(s)}f(r(s))=n(s)\cdot\nabla f(r(s)). \qquad (B.3)$$

Second, we insert (B.2) into (4.3) to get

$$\psi(r)=\phi(r)+\int_{C}ds'\,\gamma(s')\,G_{0}(r,r(s'))\left\{\alpha(s')+\left[1-\alpha(s')\right]\partial_{n(s')}\right\}\psi(r(s')), \qquad (B.4)$$

which we then consider at a point $r(s'')$ on $C$ (with the same notational abbreviation used in section 4.2):

$$\psi(s'')=\phi(s'')+\int ds'\,\gamma(s')\,G_{0}(s'',s')\left\{\alpha(s')+\left[1-\alpha(s')\right]\partial_{n(s')}\right\}\psi(s'). \qquad (B.5)$$

As it stands, (B.5) is not a linear equation in $\psi$. To fix this, we multiply both sides by $\left\{\alpha(s'')+\left[1-\alpha(s'')\right]\partial_{n(s'')}\right\}$ and define

$$\psi_{B}(s'')=\left\{\alpha(s'')+\left[1-\alpha(s'')\right]\partial_{n(s'')}\right\}\psi(s''), \qquad (B.6)$$
$$\phi_{B}(s'')=\left\{\alpha(s'')+\left[1-\alpha(s'')\right]\partial_{n(s'')}\right\}\phi(s''), \qquad (B.7)$$

$$G_{0}^{B}(s'',s')=\left\{\alpha(s'')+\left[1-\alpha(s'')\right]\partial_{n(s'')}\right\}G_{0}(s'',s'). \qquad (B.8)$$

This yields

$$\psi_{B}(s'')=\phi_{B}(s'')+\int ds'\,\gamma(s')\,G_{0}^{B}(s'',s')\,\psi_{B}(s'),$$

a linear equation in $\psi_{B}$, solved by

$$\tilde\psi_{B}=\left[\tilde I-\tilde G_{0}^{B}\,\tilde\gamma\right]^{-1}\tilde\phi_{B},$$

where again the tildes emphasize that the equation is defined only on $C$. The diagonal operator $\tilde\gamma$ is

$$\tilde\gamma f(s)=\gamma(s)f(s). \qquad (B.9)$$

We define

$$T^{B}=\tilde\gamma\left[\tilde I-\tilde G_{0}^{B}\,\tilde\gamma\right]^{-1}, \qquad (B.10)$$

which solves the original problem,

$$\psi(r)=\phi(r)+\int ds'\,G_{0}(r,r(s'))\left(T^{B}\phi_{B}\right)(r(s')), \qquad (B.11)$$

for

$$\left(T^{B}\phi_{B}\right)(r(s'))=\int ds\,T^{B}(s',s)\,\phi_{B}(s). \qquad (B.12)$$

As in section 4.2, in the limit $\gamma(s)=\gamma\to\infty$, $T^{B}$ converges to

$$T^{B}\to-\left[\tilde G_{0}^{B}\right]^{-1}, \qquad (B.13)$$

which, when inserted into the linear equation above, gives

$$\left\{\alpha(s)+\left[1-\alpha(s)\right]\partial_{n(s)}\right\}\psi(s)=\psi_{B}(s)=\tilde\phi_{B}(s)-\tilde G_{0}^{B}\left[\tilde G_{0}^{B}\right]^{-1}\tilde\phi_{B}(s)=0,$$

the desired boundary condition (B.1). For completeness, we expand $T^{B}$ in a power series,

$$T^{B}=\tilde\gamma+\tilde\gamma\sum_{j=1}^{\infty}\left[\tilde G_{0}^{B}\,\tilde\gamma\right]^{j}, \qquad (B.14)$$

so

$$T^{B}(s'',s')=\gamma(s'')\,\delta(s''-s')+\gamma(s'')\sum_{j=1}^{\infty}\left[T^{B}\right]^{(j)}(s'',s'), \qquad (B.15)$$

where

$$\left[T^{B}\right]^{(j)}(s'',s')=\int ds_{1}\cdots ds_{j}\;G_{0}^{B}(s'',s_{j})\,\gamma(s_{j})\cdots G_{0}^{B}(s_{2},s_{1})\,\gamma(s_{1})\,\delta(s_{1}-s'), \qquad (B.16)$$

allowing one, at least in principle, to compute $T^{B}(s'',s')$, and thus the wavefunction everywhere.
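The operator identity behind (B.10) and its power series (B.14) is pure linear algebra and can be checked numerically on a small discretization. A minimal sketch; the matrices below are random placeholders, not an actual boundary discretization:

```python
import numpy as np

# Sketch: T^B = gamma (I - G0B gamma)^(-1) versus its power series (B.14),
# checked on placeholder matrices (not a real boundary discretization).
rng = np.random.default_rng(3)
n = 6
G0B = 0.05 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
gamma = np.diag(rng.uniform(0.5, 1.0, n))
I = np.eye(n)

TB = gamma @ np.linalg.inv(I - G0B @ gamma)

# partial sums of gamma + gamma * sum_j (G0B gamma)^j
series = gamma.astype(complex)
term = np.eye(n, dtype=complex)
for _ in range(60):
    term = term @ (G0B @ gamma)
    series = series + gamma @ term

assert np.allclose(TB, series)
```

The small prefactor on G0B keeps the spectral radius of $G_{0}^{B}\gamma$ below one, so the series converges; for strong scatterers one uses the closed form directly.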
Appendix C
Linear Algebra and Null-Space Hunting
In this appendix we deal with the linear algebra involved in implementing various methods discussed above. We begin with the standard techniques for solving equations of the type Ax = v when A is of full rank; we do this mostly to establish notation. In many of the techniques above, we had a matrix which was a function of a real parameter (usually a scaled energy), A(t), and we wanted to look for t such that A(t)x = 0 has nontrivial solutions (x ≠ 0). The standard technique for handling this sort of problem is the Singular Value Decomposition (SVD), which we discuss in section C.3. There are other methods to extract null-space information from a matrix, and they are typically faster than the SVD. We'll discuss one such method, the QR Decomposition (QRD). [17] is a wonderful reference for all that follows.
C.1 Standard Linear Solvers
A system of N linear equations in N unknowns:
N X i=1
aij xi = bj for j 2 f1:::N g
(C.1)
may be written as the matrix equation
Ax = b
166
(C.2)
where $(A)_{ij}=a_{ij}$. This implies that a formal solution is available if the matrix inverse $A^{-1}$ exists, namely

$$x=A^{-1}b. \qquad (C.3)$$

Most techniques for solving (C.2) do not actually invert A but rather "decompose" A into a form in which we can compute $A^{-1}b$ efficiently for a given b. One such form is the LU decomposition,

$$A=LU, \qquad (C.4)$$

where L is a lower triangular matrix and U is an upper triangular matrix. Since it is simple to solve a triangular system (see [17], section 3.1), we can solve our original equations with a two step process: we find a y which solves Ly = b and then find the x which solves Ux = y.

This is an abstract picture of the familiar process of Gaussian elimination. Essentially, there exists a product of (unit diagonal) lower triangular matrices which will make A upper triangular. Each of these lower triangular matrices is a Gauss transformation which zeroes all the elements below the diagonal in one column of A at a time. The LU factorization returns the inverse of the product of the lower triangular matrices as L and the resulting upper triangular matrix as U. It is easy to show that the product of lower (upper) triangular matrices is lower (upper) triangular, and the same for the inverse. That is, L represents a sequence of Gauss transformations and U represents the result of those transformations. For large matrices, the LUD requires approximately $2N^{3}/3$ flops (floating point operations) to compute.

The LUD has several drawbacks. The computation of the Gauss transformations involves division by $a_{ii}$ as the ith column is zeroed. This means that if $a_{ii}$ is zero for any i the computation will fail. This can happen in two ways. If a "leading principal submatrix" of A is singular, i.e., det(A(1:i, 1:i)) = 0, then $a_{ii}$ will be zero when the ith column is zeroed. If A is nonsingular, pivoting techniques can still successfully find an LUD of a row-permuted version of A, and row permutation of A is harmless in terms of finding the solution. However, if A is singular, then we can only chase the small pivots of A down r columns, where r = rank(A). At that point we will encounter small pivots and numerical errors will destroy the solution. Thus we are led to look for methods which are stable even when A is singular.
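In code, the factor-once, solve-many pattern looks like the following sketch (using SciPy's LU routines on a random full-rank stand-in matrix):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Sketch: decompose once, then solve cheaply for many right-hand sides,
# which is the point of factoring rather than inverting A. The matrix
# here is a random full-rank stand-in.
rng = np.random.default_rng(4)
N = 100
A = rng.standard_normal((N, N))
lu, piv = lu_factor(A)              # one O(N^3) factorization
for _ in range(3):                  # then O(N^2) per solve
    b = rng.standard_normal(N)
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)
```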
C.2 Orthogonalization and the QR Decomposition
Suppose we wanted to keep the basic idea of the LU decomposition, zeroing the columns of A, but avoid the pivot problem. We have to be careful, because the inverse of the transformation matrix (the L in LU) has to be easy to invert or the decomposition won't be much help. One class of transformations which will work are orthogonal transformations, which include reflections and rotations. Orthogonal transformations do not rescale (so they are stable in the presence of small pivots) and they are easy to invert (the inverse of an orthogonal transformation is its transpose). The simplest such method is the QR decomposition,

$$A=QR, \qquad (C.5)$$

where Q is orthogonal and R is upper triangular. Performing this decomposition is straightforward. Consider the first column of A. A rotation (or reflection about a correctly chosen plane) will zero all but the first element (simply rotate a basis vector into the first column of A and then invert that rotation). Now go to the second column. To zero everything below the diagonal it is sufficient to consider a problem of one dimension smaller and rotate in that smaller space. This leaves the previously zeroed elements alone (since those vectors are null in the smaller space). We do this one column at a time and accomplish our goal.

A wonderful consequence of the QRD is that the rank of A can be read off from the diagonal elements of R: since Q is orthogonal, $|\det A|=\prod_{i}|r_{ii}|$, so A is singular exactly when a diagonal element of R vanishes. This makes the QRD ideal for null-space hunting, since we can do QRDs and look for small diagonal elements. With column pivoting, the diagonal elements of R appear in decreasing order of absolute value, so rank deficiency shows up in the last rows. If A is singular with rank N − 1, a typical QRD algorithm will leave the zero in the last row of R; that is, the last row of R is all zeroes. A modified back-substitution algorithm will immediately return a vector in the null space. Since the null space is one dimensional, this vector spans it. A similar approach will work with multidimensional null spaces but requires a bit more work.

The QRD is not magic. It does not provide a solution where none exists. While the algorithm is stable, the attempt to solve Ax = b will still fail for singular A and nonzero b; it will simply do so in the back-substitution phase rather than during the decomposition. There is a fix for this back-substitution problem: we can pivot the columns of A as we do the QRD and then, though we still cannot solve an ill-posed problem, we can extract a least squares solution from this column-pivoted QRD (QRD CP). For large N, the QRD requires approximately $4N^{3}/3$ flops (twice as many as LU) and the QRD CP requires $4N^{3}$ flops to compute.
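The null-space hunt described above can be sketched with SciPy's column-pivoted QR; the rank-deficient matrix here is constructed artificially for illustration:

```python
import numpy as np
from scipy.linalg import qr

# Sketch: null-space hunting with a column-pivoted QR. A is built rank
# deficient on purpose; the last diagonal entry of R then collapses, and
# back substitution on R recovers a null vector.
rng = np.random.default_rng(5)
A = rng.standard_normal((8, 8))
A[:, -1] = A[:, :-1] @ rng.standard_normal(7)   # force rank 7

Q, R, piv = qr(A, pivoting=True)
assert abs(R[-1, -1]) < 1e-10 * abs(R[0, 0])    # numerically zero pivot

y = np.linalg.solve(R[:-1, :-1], -R[:-1, -1])   # solve R v = 0 with v[-1] = 1
v = np.append(y, 1.0)
x = np.empty(8)
x[piv] = v                                      # undo the column pivoting
assert np.allclose(A @ x, 0, atol=1e-8)
```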
C.3 The Singular Value Decomposition
It is possible to further reduce R by post-multiplying it by a sequence of orthogonal transformations. The SVD is one such "complete orthogonalization": it reduces R to a diagonal matrix with non-negative entries. That is, the SVD gives

$$A=U\Sigma V^{T}, \qquad (C.6)$$

where U and V are orthogonal and $\Sigma=\mathrm{diag}(\sigma_{1},\dots,\sigma_{N})$. In this formulation, zero singular values correspond to zero eigenvalues, and the corresponding vectors may be extracted from V. Since the SVD is computed entirely with orthogonal transformations, it is stable even when applied to singular matrices.

The SVD of a symmetric matrix is an eigendecomposition. That is, the singular values of a symmetric matrix are the absolute values of its eigenvalues, and the singular vectors are the eigenvectors. If a symmetric matrix has two eigenvalues with the same absolute value but opposite sign, the SVD cannot distinguish these eigenvalues and may mix them. For nondegenerate eigenvalues, the sign information is encoded in $UV^{T}$, which is a diagonal matrix made up of ±1. All of this follows from a careful treatment of the uniqueness of the decomposition.

The SVD can be applied to compute only the singular values (SVD S), the singular values and the matrix V (SVD SV), or the singular values, the matrix V, and the matrix U (SVD USV). The flop count is different for these three versions (we include the LUD and QRD for comparison), and we summarize this in table C.1.
Algorithm   flops     seconds
SVD USV     20N^3     13.51
SVD SV      12N^3      9.55
SVD S        4N^3      2.13
QRD CP       4N^3      1.73
QRD         4N^3/3     0.43
LUD         2N^3/3     N/A

Table C.1: Comparison of flop counts and timing for various matrix decompositions. The timing test was the decomposition of a 500 x 500 matrix on a DEC Alpha 500/500 workstation. We include this table to point out that the choice of algorithm can have a dramatic effect on computation time. For instance, when looking for a t such that A(t) is singular, we may use either the SVD S or the QRD to examine the rank of A(t). However, using the QRD will be at least 4 times faster than using the SVD S.
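For comparison with the QR approach, the SVD route to a null vector looks like the following sketch (again on an artificially rank-deficient matrix):

```python
import numpy as np

# Sketch: the right singular vector belonging to the smallest singular
# value spans the null space; the SVD stays stable for singular A.
rng = np.random.default_rng(6)
A = rng.standard_normal((8, 8))
A[-1] = A[:-1].sum(axis=0)          # dependent rows: rank 7

U, s, Vt = np.linalg.svd(A)
assert s[-1] < 1e-12 * s[0]         # one numerically zero singular value
x = Vt[-1]                          # spans the null space
assert np.allclose(A @ x, 0, atol=1e-10)
```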
Appendix D
Some important infinite sums
D.1 Identities from $\sum x^{n}/n$
Recall that

$$\sum_{n=1}^{\infty}\frac{x^{n}}{n}=-\ln\left(1-x\right)\quad\text{for }|x|<1. \qquad (D.1)$$

Thus

$$\sum_{n=1}^{\infty}\frac{e^{in\theta}e^{-n\alpha}}{n}=-\ln\left(1-e^{i\theta-\alpha}\right)\quad\text{for real }\alpha>0. \qquad (D.2)$$

So

$$\sum_{n=1}^{\infty}\frac{\cos(n\theta)\,e^{-n\alpha}}{n}=-\mathrm{Re}\,\ln\left(1-e^{i\theta-\alpha}\right)=-\frac{1}{2}\ln\left[\left(2\cosh\alpha-2\cos\theta\right)e^{-\alpha}\right] \qquad (D.3)$$

and

$$\sum_{n=1}^{\infty}\frac{\sin(n\theta)\,e^{-n\alpha}}{n}=-\mathrm{Im}\,\ln\left(1-e^{i\theta-\alpha}\right)=\arctan\left[\frac{e^{-\alpha}\sin\theta}{1-e^{-\alpha}\cos\theta}\right]. \qquad (D.4)$$

Since

$$\sin(n\theta)\sin(n\theta')=\frac{1}{2}\left[\cos n(\theta-\theta')-\cos n(\theta+\theta')\right], \qquad (D.5)$$

we find

$$\sum_{n=1}^{\infty}\frac{\sin(n\theta)\sin(n\theta')\,e^{-n\alpha}}{n}=\frac{1}{4}\left\{\ln\left[\left(2\cosh\alpha-2\cos(\theta+\theta')\right)e^{-\alpha}\right]-\ln\left[\left(2\cosh\alpha-2\cos(\theta-\theta')\right)e^{-\alpha}\right]\right\}$$

$$=\frac{1}{4}\ln\left[\frac{\cosh\alpha-\cos(\theta+\theta')}{\cosh\alpha-\cos(\theta-\theta')}\right]=\frac{1}{4}\ln\left[\frac{\sinh^{2}\frac{\alpha}{2}+\sin^{2}\frac{\theta+\theta'}{2}}{\sinh^{2}\frac{\alpha}{2}+\sin^{2}\frac{\theta-\theta'}{2}}\right],$$

using $\cosh\alpha-\cos x=2\sinh^{2}\frac{\alpha}{2}+2\sin^{2}\frac{x}{2}$. We also note that for $\alpha,\theta\ll1$,

$$\ln\left[\left(2\cosh\alpha-2\cos\theta\right)e^{-\alpha}\right]=\ln\left[1+e^{-2\alpha}-e^{i\theta-\alpha}-e^{-i\theta-\alpha}\right]\approx\ln\left(\alpha^{2}+\theta^{2}\right), \qquad (D.6)$$

since, in the small argument expansion of the exponentials, the constant and linear terms cancel. Similarly, for $\alpha,|\theta-\theta'|\ll1$,

$$\ln\left[\frac{\sinh^{2}\frac{\alpha}{2}+\sin^{2}\frac{\theta+\theta'}{2}}{\sinh^{2}\frac{\alpha}{2}+\sin^{2}\frac{\theta-\theta'}{2}}\right]\approx\ln\left[\sin^{2}\frac{\theta+\theta'}{2}\right]-\ln\left[\frac{\alpha^{2}+(\theta-\theta')^{2}}{4}\right]. \qquad (D.7)$$
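The closed forms above are easy to test against direct summation. A minimal sketch for the cosine identity (D.3), with arbitrary test values:

```python
import numpy as np

# Sketch: numerical check of the cosine identity (D.3); alpha and theta
# are arbitrary test values.
alpha, theta = 0.4, 1.1
n = np.arange(1, 2001)
lhs = np.sum(np.cos(n * theta) * np.exp(-n * alpha) / n)
rhs = -0.5 * np.log((2 * np.cosh(alpha) - 2 * np.cos(theta)) * np.exp(-alpha))
assert abs(lhs - rhs) < 1e-12
```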
D.2 Convergence of Green Function Sums
D.2.1 Dirichlet Boundary Green Function Sums
Consider

$$a_{n}(\alpha)=\sin(nx)\sin(nx')\left(1-e^{-2\kappa_{n}h}\right)^{-1}\kappa_{n}^{-1}\,e^{-\kappa_{n}\alpha}, \qquad (D.8)$$

where $\kappa_{n}=\sqrt{n^{2}\pi^{2}/l^{2}-E}$, and

$$b_{n}(\alpha)=\sin(nx)\sin(nx')\,\frac{l}{n\pi}\,e^{-n\pi\alpha/l}. \qquad (D.9)$$

In this case, we may perform $\sum_{n=1}^{\infty}b_{n}(\alpha)$ for $\alpha\geq0$ using the identities above (section D.1). We need to show that $\sum_{n=1}^{\infty}\left[a_{n}(\alpha)-b_{n}(\alpha)\right]$ converges and that it converges uniformly with respect to $\alpha$ for all $\alpha\geq0$. Since $|\sin(nx)\sin(nx')|\leq1$, we have

$$\left|a_{n}(\alpha)-b_{n}(\alpha)\right|\leq\frac{l}{n\pi}\left|\left(1-e^{-2\kappa_{n}h}\right)^{-1}\left(1-\frac{El^{2}}{n^{2}\pi^{2}}\right)^{-1/2}\exp\left(-\frac{n\pi\alpha}{l}\sqrt{1-\frac{El^{2}}{n^{2}\pi^{2}}}\right)-\exp\left(-\frac{n\pi\alpha}{l}\right)\right|.$$

For $0<x<1$, $(1-x)^{-1/2}<(1-x)^{-1}$ and $\exp(-a\sqrt{1-x})<\exp(-a(1-x))$ for $a>0$, so

$$\left|a_{n}(\alpha)-b_{n}(\alpha)\right|<\frac{l}{n\pi}\left[\left(1-e^{-2\kappa_{n}h}\right)^{-1}\left(1-\frac{El^{2}}{n^{2}\pi^{2}}\right)^{-1}e^{-n\pi\alpha/l}\,e^{El\alpha/n\pi}-e^{-n\pi\alpha/l}\right]. \qquad (D.10)$$

Since

1. $n>l\sqrt{2E}/\pi$ implies $\left(1-El^{2}/n^{2}\pi^{2}\right)^{-1}<1+2El^{2}/n^{2}\pi^{2}$ (as $(1-x)^{-1}<1+2x$ for $0<x<\tfrac{1}{2}$),

2. $n>l\sqrt{2E}/\pi$ implies $El/n\pi<n\pi/2l$, so $e^{-n\pi\alpha/l}\,e^{El\alpha/n\pi}<e^{-n\pi\alpha/2l}$, \qquad (D.11)

3. $n>\frac{l}{\pi}\sqrt{E+\left(\frac{\ln2}{2h}\right)^{2}}$ implies $e^{-2\kappa_{n}h}<\tfrac{1}{2}$, so $\left(1-e^{-2\kappa_{n}h}\right)^{-1}<1+2e^{-2\kappa_{n}h}$, \qquad (D.12)

4. $e^{x}-1\leq x\,e^{x}$ for $x\geq0$,

we have, for $n>\max\left\{\frac{l\sqrt{2E}}{\pi},\ \frac{l}{\pi}\sqrt{E+\left(\frac{\ln2}{2h}\right)^{2}}\right\}$,

$$\left|a_{n}(\alpha)-b_{n}(\alpha)\right|<\frac{l}{n\pi}\,e^{-n\pi\alpha/2l}\left[4e^{-n\pi h/l}+\frac{2El^{2}}{n^{2}\pi^{2}}+\frac{El\alpha}{n\pi}\right], \qquad (D.13)$$

where we have used $\kappa_{n}>n\pi/\sqrt{2}\,l>n\pi/2l$ for such $n$. Since $\alpha\,e^{-n\pi\alpha/2l}\leq2l/(en\pi)<l/n\pi$ for all $\alpha\geq0$, and $e^{-x}\leq6/x^{3}$ for $x>0$, each term admits an $\alpha$-independent bound:

$$\left|a_{n}(\alpha)-b_{n}(\alpha)\right|<\frac{3El^{3}}{n^{3}\pi^{3}}+\frac{24\,l^{4}}{n^{4}\pi^{4}h^{3}}\equiv f_{n}. \qquad (D.14)$$

Since $\sum_{n}f_{n}$ converges, $\sum_{n=1}^{\infty}\left[a_{n}(\alpha)-b_{n}(\alpha)\right]$ converges for all $\alpha\geq0$, and, because $f_{n}$ is independent of $\alpha$, the convergence is uniform with respect to $\alpha$ for $\alpha\geq0$ (Weierstrass M-test).
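The $1/n^{3}$ decay of the remainder is visible numerically. A minimal sketch; the parameters $l$, $h$, $E$, $x$, and $\alpha$ are arbitrary illustrative choices (with $\pi^{2}/l^{2}>E$ so every $\kappa_{n}$ is real):

```python
import numpy as np

# Sketch: the channel-sum remainder a_n - b_n decays like 1/n^3, so the
# slowly convergent part of the mode sum can be done in closed form via
# section D.1. All parameters here are arbitrary illustrative choices.
l, h, E, alpha = 1.0, 1.0, 5.0, 0.0
x = xp = 0.3
n = np.arange(1, 5001)
kappa = np.sqrt((n * np.pi / l) ** 2 - E)      # real since pi^2/l^2 > E here
a = (np.sin(n * x) * np.sin(n * xp)
     / (1.0 - np.exp(-2.0 * kappa * h)) / kappa * np.exp(-kappa * alpha))
b = np.sin(n * x) * np.sin(n * xp) * l / (n * np.pi) * np.exp(-n * np.pi * alpha / l)
diff = a - b
assert abs(diff[-1]) < 5 * E * l**3 / (n[-1] ** 3 * np.pi**3)
```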
D.2.2 Periodic Boundary Green Function Sums
Consider

$$a_{n}(\alpha)=\cos(n|x-x'|)\,\kappa_{n}^{-1}\,e^{-\kappa_{n}\alpha}, \qquad (D.15)$$

where $\kappa_{n}=\sqrt{n^{2}\pi^{2}/l^{2}-E}$, and

$$b_{n}(\alpha)=\cos(n|x-x'|)\,\frac{l}{n\pi}\,e^{-n\pi\alpha/l}. \qquad (D.16)$$

In this case, we may perform $\sum_{n=1}^{\infty}b_{n}(\alpha)$ for $\alpha\geq0$ using the identities above (section D.1). We need to show that $\sum_{n=1}^{\infty}\left[a_{n}(\alpha)-b_{n}(\alpha)\right]$ converges and that it converges uniformly with respect to $\alpha$ for all $\alpha\geq0$. Since $|\cos(n|x-x'|)|\leq1$, we have

$$\left|a_{n}(\alpha)-b_{n}(\alpha)\right|\leq\frac{l}{n\pi}\left|\left(1-\frac{El^{2}}{n^{2}\pi^{2}}\right)^{-1/2}\exp\left(-\frac{n\pi\alpha}{l}\sqrt{1-\frac{El^{2}}{n^{2}\pi^{2}}}\right)-\exp\left(-\frac{n\pi\alpha}{l}\right)\right|.$$

Proceeding exactly as in section D.2.1 (the only difference being the absence of the $\left(1-e^{-2\kappa_{n}h}\right)^{-1}$ factor), we find, for $n>l\sqrt{2E}/\pi$,

$$\left|a_{n}(\alpha)-b_{n}(\alpha)\right|<\frac{l}{n\pi}\,e^{-n\pi\alpha/2l}\left[\frac{2El^{2}}{n^{2}\pi^{2}}+\frac{El\alpha}{n\pi}\right], \qquad (D.17)$$

and therefore the $\alpha$-independent bound

$$\left|a_{n}(\alpha)-b_{n}(\alpha)\right|<\frac{3El^{3}}{n^{3}\pi^{3}}\equiv f_{n}. \qquad (D.18)$$

Since $\sum_{n}f_{n}$ converges, $\sum_{n=1}^{\infty}\left[a_{n}(\alpha)-b_{n}(\alpha)\right]$ converges uniformly with respect to $\alpha$ for $\alpha\geq0$.
Appendix E
Mathematical Miscellany for Two Dimensions
E.1 Polar Coordinates
$$\nabla=\hat r\,\frac{\partial}{\partial r}+\frac{\hat\theta}{r}\frac{\partial}{\partial\theta} \qquad (E.1)$$

$$\nabla^{2}=\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial}{\partial r}\right)+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}} \qquad (E.2)$$
E.2 Bessel Expansions
$$e^{iky}=e^{ikr\sin\theta}=\sum_{l=-\infty}^{\infty}J_{l}(kr)\,e^{il\theta} \qquad (E.3)$$

$$e^{ikx}=e^{ikr\cos\theta}=\sum_{l=-\infty}^{\infty}(i)^{l}J_{l}(kr)\,e^{il\theta}=J_{0}(kr)+2\sum_{l=1}^{\infty}(i)^{l}J_{l}(kr)\cos(l\theta) \qquad (E.4)$$

$$\sin ky=\sin(kr\sin\theta)=2\sum_{l=0}^{\infty}J_{2l+1}(kr)\sin\left([2l+1]\theta\right) \qquad (E.5)$$

$$\sin kx=\sin(kr\cos\theta)=2\sum_{l=0}^{\infty}(-1)^{l}J_{2l+1}(kr)\cos\left([2l+1]\theta\right) \qquad (E.6)$$
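These expansions converge rapidly once $l$ exceeds $kr$, so a modest truncation suffices in practice. A minimal sketch checking (E.3) with arbitrary test values:

```python
import numpy as np
from scipy.special import jv

# Sketch: truncated numerical check of the plane-wave expansion (E.3);
# k, r, theta are arbitrary test values.
k, r, theta = 2.0, 1.5, 0.7
ls = np.arange(-40, 41)
series = np.sum(jv(ls, k * r) * np.exp(1j * ls * theta))
assert abs(series - np.exp(1j * k * r * np.sin(theta))) < 1e-12
```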
E.3 Asymptotics as kr → ∞
$$J_{n}(kr)\sim\sqrt{\frac{2}{\pi kr}}\,\cos\left(kr-\frac{n\pi}{2}-\frac{\pi}{4}\right) \qquad (E.9)$$

$$Y_{n}(kr)\sim\sqrt{\frac{2}{\pi kr}}\,\sin\left(kr-\frac{n\pi}{2}-\frac{\pi}{4}\right) \qquad (E.10)$$

$$H_{n}^{(1)}(kr)\sim\sqrt{\frac{2}{\pi kr}}\,e^{i\left(kr-\frac{n\pi}{2}-\frac{\pi}{4}\right)} \qquad (E.11)$$

$$H_{n}^{(2)}(kr)=\left[H_{n}^{(1)}(kr)\right]^{*}\sim\sqrt{\frac{2}{\pi kr}}\,e^{-i\left(kr-\frac{n\pi}{2}-\frac{\pi}{4}\right)} \qquad (E.12)$$
E.4 Limiting Form for Small Arguments (kr ! 0)
$$J_{n}(kr)\sim\frac{1}{\Gamma(n+1)}\left(\frac{kr}{2}\right)^{n}\quad\text{for }n\neq-1,-2,-3,\dots \qquad (E.14)$$

$$Y_{o}(kr)\sim\frac{2}{\pi}\ln(kr) \qquad (E.15)$$

$$H_{o}^{(1)}(kr)\sim1+\frac{2i}{\pi}\ln(kr) \qquad (E.16)$$

$$Y_{n}(kr)\sim-\frac{\Gamma(n)}{\pi}\left(\frac{kr}{2}\right)^{-n}\quad\text{for }\mathrm{Re}\,n>0 \qquad (E.17)$$

$$H_{n}^{(1)}(kr)\sim-\frac{i\,\Gamma(n)}{\pi}\left(\frac{kr}{2}\right)^{-n}\quad\text{for }\mathrm{Re}\,n>0 \qquad (E.18)$$