# MM-Theory - Quantum Mechanics

http://www.mm-theory.com/qm/qm.htm


Quantum Mechanics
ABSTRACT: This paper constitutes a general overview of quantum mechanics. It is divided into three sections. The first section outlines, from a historical perspective, the major ideas and experiments that contributed to the development of quantum mechanics. The second section outlines the major interpretations that, in accounting for the results of quantum mechanical experiments, have made their way into the mainstream over the years. The third section evaluates these interpretations in order to assess their worth for further consideration, the goal being to decide upon one as the formal position we will take on this website.

Introduction

Classical Mechanics

what the states and nature of these phenomena are when they are not being measured - that is, when we aren't observing them. Obviously, any answer one conjures up to this question cannot be determined scientifically, since science demands observation and measurement as the basis upon which answers can be drawn. Therefore, such answers are never more than speculation and guesswork, and thus it becomes clear that we can do nothing more than interpret the data. There are many such interpretations today, but for the sake of brevity, we will only look at the few major ones taken seriously by experts in the field. We will do this in the second part of this paper. First, however, let's cover the basics of quantum mechanics so that we understand the data that these interpretations aim to account for. We will avoid mathematics as much as possible (primarily because I don't understand it myself), and stick to a chronological description of the subject, touching on each of the major contributions to the field as they made their mark in history.

The Basics
The world was not exposed to quantum mechanics overnight. It was not presented as one whole theory in the way Darwin's theory of evolution or Einstein's theory of special relativity was. All told, quantum mechanics was a body of experimental work, theoretical insight, and mathematical development that evolved at the hands of numerous thinkers over the course of almost thirty years. It began in 1901 with a simple idea and culminated in 1927 with the formal doctrine of what we now call quantum mechanics.

The evolution of quantum mechanics can be divided into two major eras - the pre-war era and the post-war era. The pre-war era features Planck's energy quantization hypothesis, Einstein's application of the latter to various problems in physics, and Bohr's revised model of the atom. The post-war era features de Broglie's hypothesis that material particles travel as waves, Heisenberg's Uncertainty Principle, the Davisson-Germer experiment, and Heisenberg and Bohr's overall interpretation of the above in what they called the Copenhagen Interpretation. It is really the post-war era that set quantum mechanics apart from the rest of physics, and in which we find principles so counterintuitive that they shake the foundations of even a layman's understanding of how the everyday world works.

The pre-war era features some pretty revolutionary ideas as well, but they weren't enough to be compartmentalized into a whole new discipline of their own. Nevertheless, the complexities of these pre-war insights are plenty and go deep. In fact, they stem from several centuries of accumulated knowledge that likewise goes deep, and it would be difficult, if not impossible, to explain the pre-war developments without briefly touching on these pre-twentieth-century concepts. Therefore, we will have to attempt a brief but thorough walkthrough of the relevant physics as it was understood at the turn of the century and through the following decade and a half.
For some readers, this may be too much for such a brief overview, and for this reason, I have supplied a list of links (below) to some very good introductory websites for non-experts. Nonetheless, if the reader feels confident in delving right into the subject matter, then let's focus on the pre-war era first, beginning with the idea that started it all - Planck's energy quantization hypothesis.

http://freedocumentaries.net/media/123/Uncertainty_Principle/
http://msc.phys.rug.nl/quantummechanics/
http://phys.educ.ksu.edu/
http://www.hi.is/~hj/QuantumMechanics/quantum.html
http://theory.uwinnipeg.ca/physics/quant/node1.html
http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html

Energy Quanta
The history of quantum mechanics begins with Max Planck, a physicist whose interest lay in the phenomenon of black body radiation. This term refers to solid objects that absorb all the electromagnetic radiation that falls upon them. Light is a kind of electromagnetic radiation, as shown in figure 1, and so if a black body absorbs all the radiation incident upon it, then it absorbs all light, and is therefore rendered completely black - hence the name "black body". Although no radiation is reflected, black bodies do, nevertheless, emit electromagnetic radiation. Before 1901, when Planck grabbed the attention of the scientific community, physicists used a particular formula to calculate the number of "modes" corresponding to a particular frequency of black body radiation. What a mode is is not important for this discussion; in fact, for our purposes, we can use the word "energy" in place of "modes", since the energy corresponding to a specific frequency of radiation is proportional to the number of modes.

This formula was problematic: at high frequencies, it led to what was called the "ultraviolet catastrophe". When the frequency of radiation emitted by a black body is high enough (around the ultraviolet range and higher), the amount of energy (or number of modes) this formula yields is infinite. To physicists, this was clearly an absurd result. It meant that all black bodies everywhere, and even other objects that only approximate the description of black bodies, were emitting infinite energy, and we should all be doused with it (and consequently singed to death). Physicists longed for a solution to this problem, and when Planck came along, he proved to be just what the doctor ordered.

What he proposed was that for a given frequency, there is a minimum amount of energy that can correspond to that frequency, and any other quantity of energy can only come in integer multiples of that minimum. For example, if we represent the frequency by f, and multiply it by h = 6.626×10⁻³⁴ J·s (known as Planck's constant), the energy carried by a wave of electromagnetic radiation can be E = hf, E = 2hf, E = 3hf... but never E = ½hf, E = 1½hf, or E = 0hf. This theory was the key to resolving the ultraviolet catastrophe because it meant that the formula needed revising in such a way that it no longer computed an infinite amount of energy for high frequencies of radiation.
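The quantization rule can be sketched numerically. The following is a minimal illustration in SI units; the helper names (`allowed_energies`, `is_allowed`) are invented for this sketch and are not part of any formal treatment:

```python
# Planck's quantization: for radiation of frequency f, energy comes only in
# integer multiples of h*f. A sketch of the allowed values (in joules).
h = 6.626e-34  # Planck's constant, J*s

def allowed_energies(f, n_max=5):
    """Return the first n_max allowed energies E = n*h*f for n = 1..n_max."""
    return [n * h * f for n in range(1, n_max + 1)]

def is_allowed(E, f, tol=1e-9):
    """True if E is (within tolerance) a positive integer multiple of h*f."""
    n = E / (h * f)
    return round(n) >= 1 and abs(n - round(n)) < tol

f = 5.0e14                    # a visible-light frequency, in Hz
quantum = h * f               # the smallest allowed energy, hf
print(allowed_energies(f, 3))         # [hf, 2hf, 3hf]
print(is_allowed(2.5 * quantum, f))   # a half-multiple is forbidden -> False
```

The point of the sketch is simply that the energy scale is a ladder with rungs spaced hf apart; E = ½hf or E = 0hf falls between or below the rungs.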

Figure 1: The electromagnetic spectrum

The quantization of energy was not initially intended to revolutionize physics, but scientists soon realized that the implications this subtle move had for physics in general were momentous. In fact, Planck himself was doubtful that the quantization of energy had any significant meaning beyond a mathematical formality - that is, he considered his solution to the ultraviolet catastrophe "fudging the math" in order to make the formula fit the data. It was Albert Einstein who saw the real potential in the idea of energy quanta to solve various conceptual problems that had been haunting physicists for a while. He proposed, in 1905, that the reason the new formula for the modes of black body radiation worked so well was that the radiation emitted by black bodies was actually composed of particles of energy - the fundamental quanta of energy that Planck had hypothesized only as a mathematical accommodation. Later dubbed "photons" by Gilbert Lewis, these particles now made energy seem very much like matter in that it could not be divided indefinitely - that is, just as matter can be repeatedly divided until one reaches the fundamental and indivisible particles that compose it, so it is with energy. Energy was no longer seen as the smooth and continuous thing that classical physics had assumed. One couldn't just have any arbitrary amount.


frequencies (colors) appear when light from these elements, when heated, is put through a prism. Figure 2, for example, shows the atomic line spectra for hydrogen, helium, and oxygen.

Figure 2: The atomic line spectra for hydrogen, helium, and oxygen.

For the longest time, scientists couldn't understand why the light from these elements was refracted in such discrete strips, and so uniquely for each element. Planck's quantization hypothesis, along with Einstein's proposal that electrons absorb and emit energy as photons containing specific amounts of energy, offered a plausible explanation for this, and in 1913, Niels Bohr seized the opportunity to propose it. He suggested that these strips come about by electrons in the elements relinquishing their energy - which occurs more readily the more the element is heated - only in a small set of discrete amounts, and these amounts are emitted as whole photons. Because the amount of energy carried by a photon corresponds to a specific frequency of electromagnetic radiation, these discrete amounts correspond only to a finite set of specific frequencies. Thus, the strips we get represent the amounts of energy that the element in question can relinquish as individual photons. These energy amounts determine the color and position of the strips. They do so by determining the frequency of the emitted radiation, and as we have seen, the frequency determines the color and the angle of refraction, and thus its position on the spectrum.

Bohr's theory was actually more than just an idea about electrons emitting photons. In a metaphorical sense, he saved the atom. He saved it by replacing the older Rutherford model, which had its share of problems, with his own model. The major problem the Rutherford model suffered was that, if it were true, the atom shouldn't exist. The Rutherford model depicts atoms as a tight cluster of protons at the center (neutrons hadn't been discovered at the time) with the electrons orbiting this cluster a certain distance away (like the planets around the Sun). The problem with this model is that it predicts that the atom should collapse in a fraction of a second after it is created.
This is because the electrons should be constantly radiating energy, thereby losing the energy required to keep them in their orbits. Consequently, they should crash into the nucleus, effectively ending the life of the atom. What Bohr postulated was that electrons only lose energy by way of radiation when they drop from a higher energy level to a lower one (explained below). This drop is accompanied by the emission of a photon carrying the energy difference between the two levels, and in accordance with Planck's hypothesis, this energy corresponds to a specific frequency - the frequency of the emitted photon. Likewise, electrons can jump to a higher energy level by absorbing photons. But when the electron is not jumping between energy levels, Bohr says, it radiates no energy. Furthermore, to remain perfectly consistent with Planck, there must be a minimum energy level. Planck's hypothesis says that no particle can have zero energy (E = 0hf, remember, is not an option). This is the key to saving the atom from the Rutherford model. Bohr tells us that electrons radiate energy only when they drop between energy levels, and that they cannot drop below a minimum level. Being at this minimum level allows the electrons to remain in orbit around the nucleus, and thus the atom's life is preserved.

But what is an "energy level"? Bohr explains this with his concept of "orbitals". Unlike in the Rutherford model, electrons in the Bohr model can't orbit their nucleus at any arbitrary distance. To deviate from their orbit even slightly, moving either away from the nucleus or towards it, would mean either acquiring the energy to do so or losing it. But according to the Bohr model, electrons only acquire or lose energy in full quantum amounts, and these amounts must correspond to the allowed distances at which the electron can orbit the nucleus. In other words, an electron cannot orbit its nucleus anywhere between these discrete distances. The few orbits made possible by this restriction Bohr called "orbitals". Because electrons need a specific amount of energy to be in a particular orbital, each orbital corresponds to a specific "energy level" that the electron is said to be at. Dropping or rising to lower or higher energy levels is essentially equivalent to dropping or rising to lower or higher orbitals.
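Bohr's scheme can be made concrete with the standard textbook numbers for hydrogen. The level formula E_n = −13.6/n² eV is the usual result of Bohr's model; it is not derived in the text above, so treat this as an illustrative sketch:

```python
# Bohr's picture for hydrogen (standard textbook values): the electron sits
# at discrete energy levels E_n = -13.6/n^2 eV, and a drop from level n_hi
# to n_lo emits one photon carrying the energy difference, whose frequency
# follows from Planck's relation f = E_photon / h.
h = 6.626e-34    # Planck's constant, J*s
eV = 1.602e-19   # joules per electron-volt

def level_energy(n):
    """Energy of hydrogen level n, in eV (Bohr model)."""
    return -13.6 / n**2

def emitted_photon_frequency(n_hi, n_lo):
    """Frequency (Hz) of the photon emitted when dropping from n_hi to n_lo."""
    dE = (level_energy(n_hi) - level_energy(n_lo)) * eV  # joules
    return dE / h

# The n=3 -> n=2 drop produces the red line in hydrogen's spectrum.
f = emitted_photon_frequency(3, 2)
print(f)  # ~4.57e14 Hz, i.e. a wavelength of roughly 656 nm
```

Each allowed drop gives one such frequency, which is why the spectrum in figure 2 consists of discrete strips rather than a continuous band.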
The difference in energy between orbitals depends not only on how high up the orbitals in question are, but on the idiosyncrasies of the atom. For example, the number of protons and electrons belonging to the atom in question has an effect on the energy difference between orbitals. Also, the electrons have effects on each other, and this affects the amounts of energy they can emit or absorb, which in turn determines the energy levels they are capable of acquiring. All these factors make for a unique atomic signature, and this explains not only the presence of atomic line spectra but also their uniqueness for each and every element.

The reader can see how useful the quantization of energy really was to the scientific community. It is rare that a scientific hypothesis like Planck's bears so many remedies. It therefore earned the esteem of physicists the world over, and a major shift took place in how they came to see nature - nature is quantized. But this shift wasn't without problems of its own, and we will now take a look at the major difficulty physicists had to grapple with if they were to accept this shift in perspective wholeheartedly.


Wave/Particle Duality
Quantum theory, as it eventually came to be known, although widely regarded as a revolutionary idea, did not formally split physics into two mutually exclusive camps - namely, what we now call classical and quantum mechanics. Despite opening the scientific community up to a new understanding of the nature of light, what really made quantum theory mind-bogglingly strange was what was discovered after in-depth experiments on this fundamental particle of energy. But these experiments weren't conducted until after the war, and, in my opinion, they contributed to the zany character of the turbulent twenties. Between 1905 and the twenties, however, not much further development of Planck's quantum theory unfolded (with the exception of Bohr's atomic model in 1913, of course). During this time, the new corpuscular model of light, along with the solutions it afforded the ultraviolet catastrophe, the photoelectric effect, atomic line spectra, and other such enigmas, was still conceivable or intelligible - that is, it could still be imagined - and technically didn't violate the central tenets of classical physics (or physics as understood up until that time). One could still visualize light traveling as a stream of particles, as well as the inner workings of the photoelectric effect and atomic line spectra.

But before the twenties were over, scientists were beginning to realize that the results yielded by quantum mechanical experiments were pointing in the direction of the unimaginable. The most plausible interpretations were extremely difficult, if not impossible, to visualize. It was this realization that truly prompted the schism between classical and quantum physics - it was only at the end of the twenties that scientists realized they had embarked on a whole new discipline of science that had never been dreamt of before.



electrons are streaming. This experiment shows that a stream of electrons will produce the same interference pattern as that seen when light is used instead. It gets even stranger when the gun is set up to fire only one electron at a time, with lengthy rest periods in between. The same interference pattern shows up. This is indeed strange because it implies that a single electron will travel in the form of a wave and pass through both slits at the same time. Physicists call this phenomenon "superposition" - as in having more than a singular position. Furthermore, as a wave, it will interfere with itself, amplifying its own crests and troughs at key points, thus creating the interference pattern. The Davisson-Germer experiment has been done with a whole slew of material particles, including whole atoms (see sidenote), and they all exhibit the same interference pattern. In other words, de Broglie had hypothesized that all matter, at least as individual particles (and sometimes atoms), travels as waves, and Davisson and Germer proved it experimentally.

Of course, findings like these fly directly in the face of, not only the expectations of experts in the field, but the basic intuition of average laymen about the way the physical world works. If matter travels as waves in these experiments, why don't we experience matter that way in everyday life? Why is it that when a pitcher throws a baseball, the baseball doesn't end up diffusing itself in the form of a wave? Well, the kinds of experiments conducted on material particles were never performed on large objects like baseballs, so the notion that material things travel as waves could only be asserted of individual particles (and sometimes atoms).
However, when physicists put their heads together to come up with some plausible interpretations of what was going on in these experiments, one possibility they agreed upon was that all material objects, no matter how large, travel in waves, except that the larger the object, the more difficult it is to notice its wave-like properties. In other words, a baseball does diffuse itself in the form of a wave, but unlike in the double-slit experiment, wherein the wave spans several times the width of the particle in its point-like form, the wave of the baseball spans very little beyond the width of the baseball in its solid/spherical form. Consequently, we only see the baseball traveling along the single (and virtual) path that leads it to the batter.

But before physicists could come to any consensus like this, they had to clarify exactly what constituted these waves. That is, they had to answer the question of what it meant for a material particle - and an energy particle, for that matter - to take the form of a wave. Was the single electron in the double-slit experiment literally passing through both slits at the same time, or did it, in becoming a wave, take a different form, like a mechanical wave, such that it was only different points along the crest of this wave that passed through the two slits? And how is it that, in experiments like the double-slit one, material particles traveled as a wave that could span, at least, the two slits and the area on the screen covered by the interference pattern, yet stick to the confined space local to the nucleus of an atom, traveling as a planet orbits its sun? After all, if it's possible for things the size of whole atoms to travel as waves, what prevents them from dispersing themselves in all directions when they serve as the building blocks of macroscopic objects?
To complicate the matter, other experiments conducted in the nineteen twenties revealed another dumbfounding quirk about the way material particles work, perhaps the most dumbfounding quirk of quantum mechanics - even physics in general. It was the discovery of randomness in nature, or at least what appeared to be randomness. As it turned out, however, this discovery shed just the right light on the question of the form material particles took when they traveled as waves - which would carry over to energy particles as well - and also the question of what suppresses this form when they are bound to each other.

Uncertainty and Randomness
To understand the discovery of randomness, we first need to understand the Heisenberg Uncertainty Principle. In 1925, Werner Heisenberg, along with his collaborator Niels Bohr, invented a mathematical system that described the fundamental workings of particle behavior, energy and material alike, and how they interacted. Heisenberg called his system "matrix mechanics". At around the same time, another very similar mathematical system describing exactly the same phenomena, but in a different way, was being developed by Erwin Schrödinger. He called his "wave mechanics". For a time, there was some dispute between Heisenberg and Schrödinger over whose system was superior, but in the



Even so, there is nothing in this formulation that makes reference to terms like "superposition", "randomness", or anything equivalent. One can comprehend Heisenberg with nothing but classical concepts. The quantization of energy was a relatively new theory in Heisenberg's time, and the diffraction of long-wavelength photons was also known, but new theories abound all the time, whether in classical mechanics or any other branch of science, without upsetting the groundwork upon which they stand. Soon enough, however, the Heisenberg Uncertainty Principle gained a much more in-depth perspective that did incorporate concepts like superposition and randomness, preserving it as one of the cornerstones of quantum mechanics as it came into its own.

What really drove the point of uncertainty home were the Davisson-Germer experiments, which not only confirmed the de Broglie hypothesis in 1927 but also demonstrated very convincingly that nature indeed has the capacity for randomness - and not just in the epistemic sense. The Davisson-Germer experiment, as you will recall, is a double-slit experiment. The inference they drew from seeing the interference pattern - that particles can exist in multiple positions at the same time - was one thing; that these positions are selected randomly was another. But this random selection of positions could, nonetheless, be observed in the same experiment. Although the electron, by virtue of its wave-like form, can pass through both slits at the same time, it does not remain in the form of such a wide-reaching wave when it hits the electrosensitive screen, despite the fact that an interference pattern still emerges. As shown in figure 3, the interference pattern builds up after a whole population of white specks appears on the screen.
These specks are where each electron - now, apparently, in the form of a point-like particle - makes its mark upon hitting the screen, and the great majority of them are concentrated within the regions covered by the bright bands, with fewer of them sporadically scattered between these regions. What this tells us is that, until the electron hits the screen, it travels as a wave, but upon hitting the screen, the wave "collapses" (a term we'll get to later) back into a point-like particle. But how does the electron know where to collapse? How does it know where on the screen to hit? That's the million-dollar question. Davisson and Germer's experiment shows absolutely no discernible pattern in how the interference pattern evolves, except that there is a greater probability that the electron will hit the regions within the bright bands than the regions between and outside these bands.

Figure 3: Buildup of the interference pattern.

Randomness can also be shown by setting up a particle detection device at each slit. That is, if, in both slits, we place a device that signals the presence of a particle (in this case, an electron) when it comes close, then we can test the notion that the particle indeed passes through both slits at the same time. But what happens instead is that only one of the devices detects the particle - that is, it seems as though the particle passes through only one slit. Which slit this turns out to be appears to be random. Furthermore, only when such detection devices are set up like this does the interference pattern disappear; normal, rectangular-shaped blotches of points show up on the screen instead. Their positions are, of course, in line with the slits - that is, it is as if the particle was point-like all along, and thereby could stream only towards the region on the screen that it had access to via whichever slit it passed through.

What are we to make of these findings? The current interpretation of this phenomenon - an interpretation that surfaced not long after 1927, when these kinds of experiments were conducted - is that a particle can only exist in multiple locations simultaneously when its position is not being measured. Measure its position with a screen, a detection device, or any other means, however, and the particle will settle upon that position and no other (what we call "collapse"), and cease to travel as a wave (or at least, propagate as a wave starting over from a much more confined region). Now, what "measurement" means in this interpretation is also subject to interpretation. The most conservative interpretations take it to mean human observation only, but other, more encompassing interpretations take it to mean any physical interaction whatsoever - such as the electron bumping into something like the detection device.
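The buildup in figure 3 can be mimicked with a toy simulation. The cos² intensity pattern used here is an idealized two-slit interference profile chosen for illustration (it ignores the single-slit envelope and all physical scales); only the qualitative behavior - individually random hits that collectively form bands - is the point:

```python
# Toy simulation of the speck-by-speck buildup: each electron's landing
# position is drawn at random, weighted by an idealized interference
# intensity I(x) = cos^2(4*pi*x) on a screen spanning x in [-1, 1].
import math
import random

def hit_position(rng):
    """Sample one landing position by rejection sampling against I(x)."""
    while True:
        x = rng.uniform(-1.0, 1.0)
        if rng.random() < math.cos(4 * math.pi * x) ** 2:
            return x

rng = random.Random(0)          # fixed seed so the run is repeatable
hits = [hit_position(rng) for _ in range(20000)]

def count_near(xs, center, half_width=0.02):
    """Count hits within a small window around a point on the screen."""
    return sum(1 for x in xs if abs(x - center) < half_width)

# x = 0 sits on a bright band (I = 1); x = 0.125 sits on a dark band (I = 0).
print(count_near(hits, 0.0), count_near(hits, 0.125))
```

No individual hit is predictable, yet the window on the bright band collects far more specks than the window on the dark band, which is exactly the probabilistic structure the text describes.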
Whatever the interpretation of "measurement", one thing was certain - what these measurements turned up could not be predicted - it was random. At this point, it still wasn't fully clear what the essence of these waves was. Were they energy waves? Were they waves of "stuff"? Were they multiple instances of the particle under investigation propagating as a wave? Were they beyond our comprehension? The most all-encompassing answer to these questions came in 1927, after Heisenberg and Bohr convened their conference in Copenhagen, Denmark. The purpose of this meeting was to come up with a workable and thorough interpretation of what the thoughts, experiments, and mathematical breakthroughs in quantum theory of the 1920s had so far availed. Their final consensus was dubbed the "Copenhagen Interpretation", and its claim was that these waves were waves of probable positions. That is, if we take the double-slit experiment, what was happening to the particle wasn't so much that its position was dispersing


itself in the form of a wave, but that it was becoming undetermined. That is, unlike the regular, macroscopic objects that we see every day, things at the subatomic scale are never at one particular position in space, nor are they in multiple positions all at once - rather, they have "lost" their position - that is, their position has gone from definite to undefined. The freer they are from measurement - and remember that "measurement" is subject to interpretation - the more undetermined their position. Their positions are never completely undetermined, of course, since there is still a fuzzy region in space where they are more likely to show up when measured (the region swept by the wave) than other regions, but they are never fully determined either - that is, a particle, although it may make a point-like mark on the screen, never really collapses to an infinitely precise point. The collapse is random, of course, and this is what justifies the model of the probability wave. Because the particle isn't really "somewhere", yet neither is it "nowhere", the random collapse can be taken as an indication that its position is probable, and the region of space that the wave sweeps represents the region of highest probability - that is, the region in which, if one were to measure the particle's position, it will most likely show up.

This was the most revolutionary idea physics had ever subsumed, and as soon as the whole of the scientific community got wind of it - and got used to it - a new discipline was born. Quantum mechanics was on the scene. The dawning of this new science marked an undeniable break from classical mechanics. Schrödinger, with his new equations, was comparable to Newton, who set classical mechanics on a stable footing with his kinematic equations. The term most frequently cited from Schrödinger's work is the "wavefunction", which describes the state of a particle as it takes on the form of a wave.
Another term that is often thrown around, usually accompanying the term "wavefunction", is "collapse". Together, they are commonly expressed as the "collapse of the wavefunction", which essentially refers to the mathematical description of a particle that has gone from a state of superposition to a more localized state (i.e., going from a multitude of possible positions to fewer possible positions). It is easy to misinterpret these terms as something physical or conceptual, when really they are mathematical. It is typical for amateurs to think of the "wavefunction" as the physical wave itself, and "collapse" as the physical process of a particle becoming less wave-like and more particle-like. But one has to keep in mind that the "wave" is only a region in space where the probability of finding a particle is highest - in other words, there's nothing really there in the utmost physical sense. The "wave" is an abstract concept that finds its best expression in mathematics. This is a crucial point to keep in mind, for quantum mechanics was born from mathematics, unlike classical mechanics, which was born from philosophy. Quantum mechanics, therefore, is first and foremost a mathematical system for describing and predicting observable results of particle behavior. The primary difference between the math of quantum mechanics and that of classical mechanics is that quantum mechanics gives us the probabilities that certain outcomes will occur, whereas classical mechanics gives us certainties that these outcomes will occur.

Now that we have elucidated the role superposition and randomness play in quantum mechanics, we can rephrase the Heisenberg Uncertainty Principle in its most formal articulation. The way nature localizes a particle (i.e., gives it a more precise location) is by superimposing various waves of different wavelengths on top of one another. The greater the range of wavelengths, the more precisely the particle will be localized.
When waves of varying wavelengths are superimposed in such a manner, they tend to cancel each other out except at a specific point - the point at which the particle has been localized. In other words, a particle that has been localized does not have an exact wavelength; instead, it has a range whose boundary values taper off. The top portion of figure 4 offers a graphical representation of this process. This range of wavelengths constitutes superposition with regard to momentum. That is, because a specific momentum corresponds to a specific wavelength, a range such as this corresponds to momentum in a state of superposition - in other words, the particle's momentum, when its position is given a high degree of precision, is all the more undetermined. By similar reasoning, when a particle's momentum is given a high degree of precision, its position goes into a heightened state of superposition. This is because in order for momentum to acquire a more precise value, it must be constituted by a more precise wavelength - but this results in the particle taking on more of a wave-like form, such that its breadth spans a greater region of space, thereby diffusing its position throughout that region. The bottom portion of figure 4 offers a graphical representation of this process. As the reader can now see, momentum and position really are inherently mutually exclusive - so much for pointing the finger at measurement.
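The superposition-of-wavelengths argument can be checked numerically. The sketch below adds up cosines whose wavenumbers span a range delta_k (all units and grid sizes are arbitrary choices for illustration, not physical values) and measures how spread out the resulting bump is:

```python
# The text's claim in a sketch: summing waves whose wavenumbers span a range
# delta_k yields a localized bump, and the wider the range of wavenumbers
# (i.e., wavelengths, i.e., momenta), the narrower the bump in position.
import math

def packet_width(delta_k, n_waves=101, x_max=50.0, n_x=1001):
    """Superpose n_waves cosines with wavenumbers in [1-delta_k, 1+delta_k]
    and return the rms width of the resulting squared amplitude about x=0."""
    ks = [1 - delta_k + 2 * delta_k * i / (n_waves - 1) for i in range(n_waves)]
    xs = [-x_max + 2 * x_max * j / (n_x - 1) for j in range(n_x)]
    amp2 = []
    for x in xs:
        a = sum(math.cos(k * x) for k in ks) / n_waves  # superposed amplitude
        amp2.append(a * a)
    norm = sum(amp2)
    mean_x2 = sum(p * x * x for p, x in zip(amp2, xs)) / norm
    return math.sqrt(mean_x2)

narrow_spread = packet_width(0.05)  # small range of wavelengths
wide_spread = packet_width(0.5)     # large range of wavelengths
print(narrow_spread, wide_spread)   # the wide-spread packet is far narrower
```

A tight spread of wavenumbers leaves the bump smeared over a wide region, while a broad spread of wavenumbers squeezes it down - precision in position bought at the price of precision in wavelength, just as the text argues.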


Figure 4: The Heisenberg Uncertainty Principle understood as inherent.

Another point this enlightens us about is that position is not the only property that can go into superposition. Although the word "superposition" sounds like a reference to position, it is a misnomer in this respect. There are many other properties that can go into superposition - momentum is just one. These properties usually come in pairs, and sometimes in triples. We call these pairs and triples "conjugate variables". Position and momentum are the first variables we have seen to be conjugate. Another pair is energy and time. A third is angular position and angular momentum. One triple is the spin of a particle around the x-, y-, and z-axes. That is, the more precisely one measures the spin of a particle around a chosen axis, the less precisely one can measure its spin around the other two. It is important to note, however, that "spin", in this context, does not have quite the same meaning as spin in the classical sense - that is, as a baseball or a planet might spin - but since this makes no difference to our purposes, we can think of spin as though the particle in question were a nanoscopic sphere resembling a billiard ball rotating about the x-, y-, and z-axes. Interestingly, Einstein, along with Podolsky and Rosen, found that these conjugate variables led to a paradox, but this also involved the phenomenon of quantum entanglement, which we will get to later. Suffice it to say, almost anything about a particle that one can measure bears a certain degree of uncertainty - almost anything - for there are some properties that have never varied across different measurements, such as a particle's mass or its charge - but other than that, uncertainty plagues a great deal of things we used to take for granted as having definite values without our even knowing.
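The spin triple just mentioned can be made concrete with the standard spin-1/2 (Pauli) matrices. This example is not in the original text; it is a small sketch of why spin components along different axes cannot all be sharp at once: their operators fail to commute.

```python
import numpy as np

# Pauli matrices: spin-1/2 operators along the x-, y-, and z-axes
# (in units of half the reduced Planck constant)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Two properties can hold simultaneous sharp values only if their
# operators commute. For spin around different axes, they do not:
commutator = sx @ sy - sy @ sx
print(commutator)        # equals 2i times sz - not the zero matrix

# Concretely: prepare a state with definite spin-z ("up")...
up = np.array([1, 0], dtype=complex)
# ...then a spin-x measurement is maximally uncertain: the two
# possible outcomes are equally likely.
eigvals, eigvecs = np.linalg.eigh(sx)
probs = np.abs(eigvecs.conj().T @ up) ** 2
print(probs)             # both outcomes have probability 0.5
```

Pinning down spin along one axis thus forces the other two axes into superposition, exactly the triple of conjugate variables described above.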
The lesson to be learnt here, a lesson that carried through the decades and still rings loudly within physicist circles today, is that as we conduct our experiments and take our measurements, the objective being to gain ever more precise and plentiful knowledge of the state of our world, we unavoidably change this state. The answers we seek in this endeavor are true only for the instant these measurements are taken, and thereafter that which we measured has been changed to something unknown. This is a radical shift from the classical worldview, in which it is taken for granted that the state of the world remains the same before, during, and after we conduct our experiments and take our measurements. We used to fancy ourselves to be independent, autonomous beings whose relation to the world is to observe it from a non-participating standpoint. We used to think that however we involve ourselves in the world in order to measure it and test it experimentally, we do so without disturbing it in the least. Indeed, from this vantage point, so long as our measurements are sufficiently precise and our experiments conducted with the utmost care, the knowledge gleaned from these labors would be perfect. The legacy quantum mechanics leaves us with, however, is that our knowledge of the natural world can never be perfect. As soon as it's acquired, something else in the world is changed. What we knew beforehand, we no longer know thereafter.


Entanglement, Cats, and Other Paradoxes
This was the view taken up by scientists as the twenties merged into the thirties, and for the next few decades, this view held its own. Debates carried on and challenges arose, of course, but quantum mechanics survived it all. One challenge worth noting, proposed by Schrödinger of all people, was the paradox known as "Schrödinger's cat". The term "paradox" is a bit of an overstatement in this case since it technically doesn't demonstrate an impossibility


Interpretation, the first attempt, by Heisenberg and Bohr, to account for all these things. But over the years, other interpretations have surfaced, interpretations that contend with the Copenhagen one, and we need to look at the most salient ones before closing the subject. But even the Copenhagen Interpretation needs elaboration, and so we will begin the next section with this one.

Interpretations


the case, however, this interpretation, like many others we will touch on, can't be tested scientifically. It is perhaps best, therefore, that the Copenhagen Interpretation was the first to emerge, and was promoted by some of the most ardent positivists in the field. Positivism is a semantic theory which states that the meaning of our statements and concepts can, and ought to, be put in terms of how one would go about verifying them empirically. So, for example, what it means to say that a wire has an electric current running through it is that if you applied a voltage meter to it, you would see it spike. Needless to say, positivism and empiricism, an epistemic theory that says we know the world by observing it, go hand-in-hand. If it cannot be observed, in other words, we not only have no knowledge of it, but it hasn't even a meaning, and thus it makes little sense to talk of its reality. In this way, quantum mechanics, and thus the rest of science, escaped possible degeneration into what hard-nosed positivists abhor most: metaphysics and crackpottery.

The findings of the nineteen twenties that led to the birth of modern quantum mechanics could easily have led elsewhere. These discoveries were so bewildering and mind-blowing that they opened the floodgates for all sorts of novel and outlandish speculation to rush in. It was clear that classical mechanics was being overturned, but what would take its place was not immediately obvious. There was ample opportunity for something more ontological in its orientation, as compared to the Copenhagen Interpretation, to come first - something that spoke to the more deep-seated preferences of most people, including many scientists, as much as they might deny it, to know reality as it actually is. If such an interpretation did come to light before the Copenhagen one, it might have been latched onto and quickly ushered into the position of the formal stance the scientific community would take on the question of what quantum mechanics meant.
If this happened, it might have been too late for the Copenhagen Interpretation, and it's reasonable to doubt that it would have made much headway. This would have been a disaster for positivists and empiricists the world over, and they wouldn't have been overreacting - not necessarily at least. Science is at her best the more she abstains from unfalsifiable speculation. Not that speculation doesn't have its place - it has brought science through in times of need - but when such speculation is unfalsifiable (i.e. cannot be tested), to accept it as science is to devalue science, and it leans all the closer to metaphysics and opinion. Although there is nothing wrong with metaphysics and opinion, not by my standards anyway, they are not science. Science should remain science. It is one of our most important and productive institutions, and serves a unique and vital role for humanity. If science ceases to be science, we lose an essential tool, and we take an enormous step backward after all the progress it has helped us achieve. Therefore, although the Copenhagen Interpretation leaves something to be desired when questions about reality are raised, we ought to respect it and be grateful that it was the first and most tenacious of the interpretations to account for the anomalies of quantum mechanics.

Having said that, we do want to press on with our inquiries into the nature of the real world, and looking at a few of the most prominent interpretations in the field, other than the Copenhagen one, will surely help us in this task. Following close behind the Copenhagen Interpretation in popularity is the "Many Worlds Interpretation". Hugh Everett, who first proposed it in 1957, called it the "Relative State Formulation" - hinging on the relation between an observer and the phenomenon observed - and it was some years later that Bryce DeWitt gave it the title "Many Worlds".
The central difference between the Many Worlds Interpretation and the Copenhagen one is that the former attempts to do away with non-determinism. It does so by replacing the collapse of the wavefunction with "decoherence". To understand decoherence, it is useful to imagine that superposition consists of multiple instances of the object under consideration. That is, for example, if it is a particle's position that is in a superposition state, then we can imagine that multiple instances of the particle coexist, each taking a unique position in, and exhausting, the region covered by the superposition state (we imagine this with caution, of course, for it too could pass as a speculative and unsupported interpretation). So long as each and every instance coexists in the same superposition state, we'd say that they "cohere" with each other. The moment one takes a position measurement, however, the superposition state "decoheres" - in a crude sense, the instances break from each other. More specifically, at least one instance, the one whose position was obtained by the measurement, decoheres from the rest. What happens at this point is that the universe "splits" into multiple copies of itself. One offspring universe inherits the instance whose position was captured by the measurement, while all others inherit the remaining instances, one for each. In essence, decoherence is the branching of the universe whereby each branch houses a different measurement outcome, and no instances are left in a state of superposition. Therefore, by this interpretation, all possible outcomes actually do occur - not all in the same universe, of course, but in a greater realm of existence that some call the "multiverse". So the wavefunction never collapses - it decoheres instead. With no collapse, the outcomes aren't really random. If all outcomes occur, no particular outcome is being selected at random.
There is the question of what the observer taking the measurements observes, and at first it may seem as though he/she is being randomly paired up with one particular outcome, but if
we keep in mind that even the observer is split into each offspring universe, each instance of him/herself asking the same question - "Why this outcome?" - then it seems much less random after all. That is, since each and every instance of the observer gets a unique outcome, and all such outcomes are exhaustively assigned to each observer instance, there's nothing blatantly random about one particular observer getting one particular outcome. That outcome must necessarily be measured by at least one observer.

The Many Worlds Interpretation has been extended to other unexplained phenomena. In particular, it has been suggested that not only does our universe split into offspring, but that these offspring can exchange particles and energy with each other. One phenomenon this accounts for nicely is quantum tunneling. Quantum tunneling can be seen when a particle that, given the low amount of energy it has, would not ordinarily be able to penetrate a barrier - say, a thick sheet of metal - is suddenly found on the other side of it. By the principles of classical mechanics, the only way the particle could do this is if it were given the extra energy it needed to overcome the forces holding the barrier together. But classical mechanics would also rule this out if no source was available to provide this energy - nothing acquires energy spontaneously or out of nothingness, it would say. Nonetheless, this is indeed what seems to be happening with quantum tunneling. The particle, suddenly and randomly, acquires the energy necessary to pass through the barrier. An interpretation based on the Many Worlds view could say that the particle "borrowed" this energy from a parallel universe, and after putting it to use, returned it from whence it came. Another phenomenon this exchange concept accounts for is the apparently spontaneous creation and destruction of "virtual particles".
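Whatever interpretation one favors for tunneling, its quantitative side is well established. The sketch below is not from the original text; it uses the textbook thick-barrier estimate T ≈ e^(-2κL) for a rectangular barrier, and the particular energies and widths are arbitrary choices for illustration:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34     # reduced Planck constant, J*s
m_e = 9.1093837015e-31     # electron mass, kg
eV = 1.602176634e-19       # joules per electron-volt

def tunneling_probability(E_eV, V_eV, width_m):
    """Approximate probability that an electron of energy E tunnels
    through a rectangular barrier of height V (> E) and given width,
    using the thick-barrier estimate T ~ exp(-2 * kappa * L)."""
    E, V = E_eV * eV, V_eV * eV
    kappa = math.sqrt(2 * m_e * (V - E)) / hbar   # decay constant inside the barrier
    return math.exp(-2 * kappa * width_m)

# A 1 eV electron facing a 2 eV barrier:
print(tunneling_probability(1.0, 2.0, 1e-10))   # ~1 angstrom wide: tunneling is common
print(tunneling_probability(1.0, 2.0, 1e-9))    # ~1 nm wide: vastly less likely
```

The exponential drop with barrier width is why tunneling dominates at atomic scales yet never shows up for, say, a ball thrown at a wall.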
These are particles that seem to come into existence and disappear as swiftly as they came. They are called "virtual" because they exist far too briefly for anyone to measure or confirm their existence by experimental means (although scientists do have ways of testing for their effects). If parallel universes did exist alongside ours, and if they can exchange particles and energy with ours, it is not unthinkable that these virtual particles are simply passing through our universe on their way to another. These are just some of the ways the Many Worlds Interpretation proves its versatility - a versatility that accounts for the longevity and prevalence it has enjoyed among thinkers, scientists and non-scientists alike.

One variant on the Many Worlds Interpretation is the "Many Minds Interpretation". According to this interpretation, it is consciousness that splits rather than the universe itself. The Many Minds Interpretation is midway between the Many Worlds Interpretation and the radical form of the Copenhagen Interpretation wherein consciousness alone collapses the wavefunction. The Many Minds Interpretation holds, like the latter, that everything is constantly in a state of superposition - but it differs in the role it attributes to consciousness. Consciousness doesn't collapse the wavefunction, according to the Many Minds Interpretation - nothing really does. The entire universe is always in a maximal superposition state. What splits instead are the individual minds of each and every observer. They split for every measurement they make - and in this context, any observation of the physical world counts as a measurement. When consciousness observes one particular outcome to the exclusion of all others, it does so parallel to an infinitude of copies of itself, each one observing a different outcome drawn from the same superposition state of that which it observes.
What this view capitalizes on is the lack of need to assume that any sort of split occurs in the physical world. If one simply assumes that the universe is one grand wavefunction (i.e. it is always in a state of absolute superposition), then every observer in every time and place is observing all possible outcomes simultaneously. Although it is impossible to imagine observing all possible outcomes simultaneously, one need not imagine that all observations are taken in by the same consciousness. Each observation instance is matched up with one whole consciousness. It follows that each consciousness would be unconscious of any of its counterparts in the superposition of all minds observing the same physical phenomenon. If we take any one of these mind instances and follow its path as it goes on to observe other physical phenomena - phenomena that, unbeknownst to the mind in question, exist in superposition states - we would find that, the moment it finally observes the
outcomes of these other phenomena, it splits yet again. This is not the physical phenomena themselves splitting, as the Many Worlds Interpretation would have it, but the consciousness observing them, each offspring pairing up with one instance among the superposition of physical phenomena.

Some "single world" interpretations, as they might be called, also employ the concept of decoherence. It has been suggested that decoherence occurs with any physical interaction whatsoever, not just human measurement. This contrasts sharply with the ontological version of the Copenhagen Interpretation - the one that attributes the collapse to consciousness in a fully causal sense. If there were no conscious beings, the latter interpretation says, all properties of all things capable of going into superposition would be in superposition to the utmost extreme - that is, the Sun and the Moon would be absolutely everywhere, as would all other planets and stars, all particles and midsized objects, and every other physical thing in the universe. And even with the existence of conscious beings, so long as none of them are aware of the states of all these things, those things persist in their extreme form of superposition. This is a hard mouthful to swallow for many, as it goes against every fiber of common sense we have. For this reason, many welcome interpretations that endorse decoherence for any physical interaction whatsoever as fresh alternatives. One drawback to these interpretations, however, is that they are often vague on what constitutes an interaction. After all, if position is one of the most salient properties to go into superposition, it's hard to imagine how physical interactions ever take place. That is, for instance, if one particle enters the vicinity of another particle, and the positions of both are in a heightened state of superposition, there is no precise point at which they can be said to be in contact with each other.
Therefore, what are we to say? Are they impinging on each other? Are they just whizzing by each other? Are they interacting in some other manner? The fact is, because of the "fuzziness" that quantum mechanics introduces into physics, the exact meaning of a physical "interaction" must be reconsidered. It is not clear what it consists of or what brings it about. In fact, the most plausible interpretation is that, like the collapse of the wavefunction due to measurement, it occurs randomly. Nature whimsically decides: "Yes, these particles will interact." It is assumed that particle interactions are what keep electrons in their orbitals around nuclei. By way of the attractive force between protons and electrons, the electron's position collapses, or decoheres, to within the small orbitals that stick ever so close to the bundle of protons that make up the nucleus. Other forces would also be involved in the collapse, or decoherence, of these protons and the accompanying neutrons to within the nucleus. This would be the case for all such forces. Therefore, this interpretation explains very well why particles don't go propagating in all directions when they are bound together by the many forces that hold atomic structures together.

And what happens, in single world interpretations, to the many particle instances after decoherence - that is, the instances that are measured by one observer and those that aren't? There are no extra worlds for the non-measured instances to go into. Well, it is often said that the non-measured instances "dissipate" into the environment. What "dissipate" means in this context depends on whose interpretation you consult, but in general it means that all other instances of the property or state you're interested in measuring have become lost or inaccessible to measurement.
The wave has become "diluted", so to speak, in the surrounding environment - it has become mixed up and blended into the wavefunction of all other particles constituting the immediate environment.

So far, none of these interpretations have dealt with the wave/particle duality of matter and energy in a way other than from a probabilistic perspective - that is, a model that explains the wave-like nature of particles in terms of probable positions spread throughout a region of space. There is one interpretation, however, whose account of wave/particle duality brings quantum mechanics back to the old classical view wherein particles are just particles, waves are just waves, and never the twain shall meet (well, maybe I should say never the twain shall be one, for they do meet). This is the Bohm interpretation, presented in 1952 by David Bohm. Bohm suggested that there isn't just one entity that sometimes exists as a particle and other times as a wave, but two entities - one always existing as a particle and one always existing as a wave. Every particle, Bohm says, is always accompanied by a "pilot wave". A pilot wave is a wave that guides the particle in its trajectory. It thus lays out the options the particle has for taking on one position or another. The particle can exist anywhere within the region covered by the wave. Although Bohm proponents claim that their model captures the same deterministic flavor as classical theories, they have not been able to make predictions, vis-à-vis the outcomes of our measurements, with any more reliability than with any other interpretation. It is simply
posited that the pilot wave carries out some obscure algorithm when deciding the properties of the particles it guides. On the other hand, the Bohm interpretation deals with superposition quite effectively. All there is, the Bohm interpretation says, is a particle and a wave, and the form these take - at least the particle - is perfectly consistent with the classical worldview.

Finally, there is the Orchestrated Objective Reduction Interpretation, or Orch OR for short. First proposed by Roger Penrose in 1989, this interpretation says that there is an upper limit to how extreme a state of superposition can be. That is, the possible values a property like position, momentum, spin, energy, time, etc., can take on can never be equally probable across an infinite range. Quantum physicists generally agree that these properties can take on any value whatsoever, but that the more extreme these values, the less probable. For example, in the double-slit experiment, although there is a chance that the particle might be found on the opposite side of the galaxy from the laboratory, the probability of this is infinitesimally small, and the bulk of the probability lies between the particle emitter and the screen. The Orch OR model adds that the universe enforces this inequality of probability distribution by setting an upper limit on how far and wide the wave can propagate. This limit is taken to be a universal constant much like the speed of light or the charge of an electron. If superposition ever hits this ceiling, it automatically collapses to a more precise state. So, for example, if we removed the screen from the double-slit experiment, allowing the particle to propagate indefinitely, the Orch OR model predicts that it would eventually collapse into a less varied state regardless of whether it interacted with anything or was somehow measured.
Its travels wouldn't end there, of course; it would go on propagating as a wave, but it would have to begin over again, or at least from a state it had previously surpassed.

Penrose extends this idea even further. With his Orch OR model, he builds a bridge between physics and psychology. He says that what causes the wavefunction to collapse is conscious decision making - that is, free will. He is effectively killing two birds with one stone with this assertion, accounting for both superposition and the randomness of collapse. Every instance of an entity in a state of superposition, he says, is actually having a conscious experience. This experience is the impetus for a decision that the entity is about to make, and when it finally makes this decision - either in response to the universal limit or to some interaction with the environment - the collapse corresponds to the decision being made. For instance, a particle in a state of superposition with respect to its position is actually in the midst of contemplating what position to take. Since it hasn't taken a position on the matter, so to speak, it doesn't have a position just yet. The most highly concentrated region of the probability distribution represents the choices it is leaning towards. When it finally settles on a choice, it collapses into the position so chosen. Collaborating with Penrose on this theory is Stuart Hameroff, an anesthesiologist who argues in favor of Penrose's interpretation by pointing to quantum effects in structures within neurons. Essentially, the gist of this argument is that the Orch OR model can explain human consciousness when we consider that neurons exhibit the same quantum processes Penrose's model accounts for. Of course, no one has observed neurons in superposition states - the mere idea is an oxymoron according to quantum theory - but Hameroff claims that there is indirect evidence for this. We will explore this idea in more detail in the paper Determinism and Free-Will.
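Setting the interpretations aside, the one thing all of them must reproduce is the probabilistic rule relating the wavefunction to measurement outcomes. Below is a minimal sketch, not from the original text, of the Born rule and of "collapse" as it has been used throughout this section; the five-site wavefunction and its amplitudes are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy wavefunction over five possible positions: complex amplitudes.
amplitudes = np.array([0.1 + 0.2j, 0.5, 0.6j, 0.5, 0.1 - 0.2j])
amplitudes /= np.linalg.norm(amplitudes)   # normalize the state

# The Born rule: each outcome's probability is the squared magnitude
# of its amplitude. This is the part every interpretation agrees on.
probs = np.abs(amplitudes) ** 2
print(probs)                               # sums to 1 (up to rounding)

# "Collapse": a position measurement randomly selects one outcome...
outcome = rng.choice(len(amplitudes), p=probs)

# ...and the post-measurement state is localized entirely there.
collapsed = np.zeros_like(amplitudes)
collapsed[outcome] = 1.0
print(outcome, np.abs(collapsed) ** 2)
```

The interpretations disagree only about what this last step *is* - a real physical event, a branching, a mental split, or a particle guided by a pilot wave - not about the probabilities it produces.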
The above are just a few of the numerous interpretations of quantum mechanics. If the reader feels I have overlooked some that are just as worthy of note as the ones above, I apologize, my only excuse being that there are just too many, and to go through them all would fall beyond the scope and purpose of this brief overview. What we have touched on above is more than enough background for the reader to take in as he/she moves on to other papers in this website. The only thing that remains is to explicate the position we are taking in this website - that is, which interpretation we deem to be the best. We will do so by evaluating each one on its strengths and weaknesses, and in the end, picking the one that makes the most sense.


Evaluations
The way we will do this - evaluate each interpretation, that is - is by adhering to two criteria. First, we will lean towards the interpretation that best suits MM-Theory. This will not be easy for those interpretations that involve randomness, for randomness, as we will see in the paper Determinism and Free-Will, is detrimental to a theory like ours, depending as it does on a deterministic framework for the universe. We will resolve this problem in the aforementioned paper, but for now we will have to accept the possibility that even the most congruent interpretation will turn out to be troublesome if it includes randomness. Now, just to ensure that we aren't being circular in our reasoning - that is, judging these interpretations based on our theory, and not the other way around as it should be - our second criterion will be that the best interpretation must be backed by the strongest justifications. Therefore, we will judge it as we would any philosophical theory - by its internal consistency and the plausibility of its claims. We ought to recall that these interpretations concern the true nature of the world when we aren't observing it, and therefore to take a philosophical approach to them would be more fruitful than a scientific one. They all agree on the data gathered from mountains of experimental evidence - they differ only in the speculations one is inclined to make after having seen the data. In short, we will assess their pros and cons based on their own merits, but with an
inclination towards supporting our theory. It may seem ironic, therefore, that we will refrain from judging the Copenhagen Interpretation at all. Why is this? Because, as we have seen, the Copenhagen Interpretation says nothing about reality whatsoever - it is simply a mathematical system that describes, quite accurately, the probabilities of the outcomes we observe in quantum mechanical experiments. This is just factual, not speculative. It was quite deliberately set up to allow others (i.e. non-scientists) to venture whatever guesses they felt were most probably true of the real world, which includes us of course, without seriously threatening the coherency and merit of the Copenhagen Interpretation itself. That is, it allows us to posit our theory as a possibility for what might really be the true nature of the universe (albeit still needing reconciliation between determinism and randomness, but we will address that in another paper). Therefore, there is no need whatsoever to evaluate the logical worth of the Copenhagen Interpretation, as it doesn't speak for or against our theory.

So let's look at some of the other interpretations. Let's start with the ontologically oriented version of the Copenhagen Interpretation wherein consciousness causes collapse. Although the difference between the conception of consciousness held by this interpretation and that held by our theory is glaring, we will not argue over whose conception is the right one. Instead, we will examine this interpretation on its own ground, assuming, for the moment, that consciousness is as any classical or conventional notion would have it. The first problem that comes to mind is that, in order for this interpretation to work, consciousness would have to keep track of an enormous amount of information. It could not choose for superposition to collapse into any random state - rather, it seems that consciousness is bent on collapsing the world in accordance with classical mechanics.
For example, suppose that late one night, as I look out my bedroom window, I see a full moon sitting just above the eastern horizon. I go to bed, and the next morning, as I walk out the door to go to work, I look up and see the moon sitting just above the western horizon. According to the ontologically oriented version of the Copenhagen Interpretation, the moon in both cases - when I saw it the previous night and when I saw it this morning - was in a state of absolute superposition prior to my seeing it, taking on all possible positions at once, and it was due solely to my looking at it that it collapsed into the singular positions in which I saw it (of course, there are other people on Earth capable of collapsing the moon's position with their own gaze, but for the sake of this thought experiment, we'll pretend I'm the only one looking at the moon at those moments). Yet, I'll stake my life on the prospect that if I were to do the proper calculations, consulting astronomical charts and all the physics textbooks I can get my hands on, I'll get results confirming the exact location at which I saw the moon this morning. That is, I'll bet the moon's position this morning can be explained accurately enough by the laws of classical mechanics. So if my consciousness was solely responsible for the position of the moon upon collapse, then somehow, perhaps unconsciously, I would have to be doing mountains of math and physics, starting with the initial position I saw last night, to figure out where it should be when looking at it this morning. I would have to be doing this for absolutely everything in the universe I could potentially lay my eyes upon, which would be an enormous task for my consciousness to take on. The volumes of information to keep track of would be staggering - too much for any one mind to handle.
Secondly, it stands to question why the mind would apply this classical or Newtonian scheme to the great majority of run-of-the-mill phenomena, like the moon, but not to the occasional quantum experiment like the double-slit one. Asking this question another way, if consciousness collapses the wavefunction in virtual accordance with classical mechanics, why does it not do so in the experiments that led to this very interpretation? Well, one obvious reply to this is that the wavefunction always favors certain outcomes over others. In the case of the double-slit experiment, the wavefunction favors collapse within the regions covered by the light bands of the interference pattern. Putting this in terms of the consciousness-causes-collapse interpretation, it is somehow in the nature of consciousness itself that the more probable outcomes are preferred over the less probable ones. We can carry this reply over to the case of the moon's position, or any other phenomenon in the universe, saying that outcomes in accordance with classical mechanics are just extremely probable and that any outcome deviating from this by even minute amounts becomes extremely improbable. The problem with this reply, however, is that it presupposes other factors besides consciousness involved in the collapse. If consciousness really is the only thing responsible for the collapse, then when no one's looking at it, the moon should proceed in its orbit exactly like a subatomic particle, propagating in various directions like a wave. The probability of where we will find it should be spread out just as it is for a particle - perhaps even exhibiting an interference pattern if there so happen to be two gigantic slits and an enormous screen in the cosmos. In other words, we shouldn't always see the moon in locations consistent with classical mechanics - not all the time. So what keeps it from straying from its straight, or rather arched, path across the heavens? There must be other factors.
The gravitational pull of the Earth is one that comes to mind. The molecular and subatomic bonds that keep the rocks, dust, and the moon as a whole intact are another. All these things count as other kinds of interactions, some between particles and others between macroscopic material bodies, in which consciousness plays no part. But by this account, the interpretation under consideration ends up sounding more like one of the decoherence interpretations,


and so, in essence, we leave the consciousness-causes-collapse interpretation when we bring in other factors besides consciousness. At this point, then, we will postpone any further comment until we get to these other interpretations. What about the Many Worlds and Many Minds interpretations? They suffer the same problems as the consciousness-causes-collapse interpretation. The problem, namely, is that we typically experience the world as though classical mechanics were the best description of it. Why don't we ever see the moon in places where it shouldn't be according to the laws of planetary motion? The Many Worlds and Many Minds interpretations say that we should, and for exactly the same reasons as the consciousness-causes-collapse interpretation. If every time we make an observation of some physical object, whether it's a particle or something as big as the moon, we split the universe into as many offspring as there are instances of that object making up its superposition state, then which instance we get paired up with should be just as likely, and seem just as random, as in the consciousness-causes-collapse case. The same applies to the Many Minds interpretation, except there would be no universe splitting - just instances of objects being paired with instances of minds. So in a few universes, or a few pairings of mind instances to object instances, we will see the moon in very awkward positions in the sky - awkward by the standards of classical mechanics, that is. Is it just coincidence, then, that we never see this, or anything equally awkward to do with other observable objects? Are we just lucky to be perpetually allocated to universes, or mind-object pairings, that play out in accordance with the conventional laws that seem so natural and intuitively predictable in our everyday world?
Well, we could say that any awkward turn of events, such as (say) the moon hanging around Jupiter, is, although possible, extremely unlikely, and that it would only be once in a trillion universe splittings (or mind-object pairings) that we would witness such an anomalous spectacle. But this is no different from the defense we gave for the consciousness-causes-collapse interpretation, which we thereafter refuted. We refuted it on the grounds that the entire reason these awkward outcomes are so improbable is that something has already restricted the wavefunction - that is, the possible regions in space where physical objects are most likely to be found - to those possibilities that are typical only of classical mechanics. Otherwise, everything should travel as a wave - planets, particles, even people. In other words, the reason we'll never see the moon hanging around Jupiter is that forces like the Earth's gravity, or the atomic and subatomic bonds holding the moon together and preventing any part of it from veering away from any other part, do indeed cause the wavefunction to collapse. It collapses (or decoheres) independently of our consciousness, any universe splitting, or any mind-object pairing. The collapse/decoherence must be independent of these because the restricted possibilities that such collapse/decoherence results in are what we're given to begin with. The probability of always finding the same electron close to its nucleus, for example, rather than dispersed off into space as a wave would be, is far greater than it would be if the electron really did disperse as a wave, and this greater probability is given before we have any chance to observe an outcome and thereby trigger a universe split or a new mind-object pairing. Thus, there has to be more to collapse (or decoherence) than simply what universe, or what pairing, our minds get allocated to. Let's now look at the Bohm Interpretation.
Although the Bohm Interpretation does away with superposition, it leaves something to be desired in its claim that it does the same with randomness. It is said that the pilot wave guides the particle in the properties it manifests (such as position, momentum, spin, etc.), but when we inquire further into how the wave does this, we find this account to be vacuous. To claim that a pilot wave "guides" the particle without elaborating on how it does so is no more enlightening than the claim that "some mechanism" determines the outcomes of any quantum mechanical experiment. In other words, the Bohm Interpretation adds nothing informative to quantum mechanics - at least, not where randomness is concerned. Furthermore, Bell's Theorem put a damper on the Bohm Interpretation when it proved that nothing local could account for the randomness of quantum phenomena. If there were any mechanism determining the states resulting from collapse, it would have to be non-local. Therefore, if proponents of the Bohm Interpretation wanted to carry on with their view, they would have to forgo the image of a local pilot wave and opt for a non-local one. This, in fact, is what happened. Many Bohm proponents adapted their view such that the pilot wave became a "universal wavefunction". It didn't exist locally to the particles it determined, but remained in the ubiquitous background. In other words, it was as if the universe itself were guiding all particles. But this is even worse than the local version of the theory, for not only is no light shed on the means by which this wave guides all particles, but one can't even conceptualize it as a wave anymore. It remains a wave in name only - the "universal wavefunction" - but what kind of mechanism this obscure term refers to is anyone's guess.
The fact is, anyone can submit the proposal that "something" determines the outcomes of quantum phenomena, local or otherwise, but doing so would miss the point - namely, to contribute something informative to the questions surrounding quantum mechanics. So let's move on to single-world decoherence theories. These are actually not bad interpretations. Their greatest feature, in my opinion, is their simplicity. By granting that any particle interaction can decohere the wavefunction, they don't complicate the matter by positing extreme forms of superposition, like the ontologically oriented version of the Copenhagen Interpretation. Likewise, they don't chop the universe up into several copies, like the Many Worlds Interpretation, and this adds to their simple character. Superposition remains unaccounted for, however, as does randomness, and we will have to address these if we are to adopt an interpretation like this. Another bonus of these interpretations is that they allow us to imagine the universe as persisting in much the same states as it would under the classical view. These states are not exactly as the classical view would have them, but they are a convenient approximation. There is so much going on in the universe, so many physical interactions. Even distant stars in neighboring galaxies have small effects on each other by way of gravity, solar energy, and perhaps other


mechanisms. Material objects, even ones as rarefied as hydrogen gas, are held together by their atomic bonds, which are just interactions between electrons and protons. All these things count as physical interactions, and according to single-world decoherence interpretations, this means that these interactions are constantly reinforcing, as much as possible, the classical states that our common sense notions are in the habit of holding onto. Therefore, single-world decoherence interpretations don't veer much from these common sense notions, and that adds to their parsimony - a real advantage when they're up against competing interpretations. The Orch OR model is an extended type of single-world decoherence theory. If any particle interaction decoheres the wavefunction, then the brain - a highly condensed and chemically active organ - should have plenty of decoherence going on inside itself. This works well with the Orch OR model since it permits conscious decision making to be associated with these decoherence events. Therefore, if we deemed single-world decoherence interpretations impressive for their simplicity, then, as far as they take us, we should regard the Orch OR model in a similar light. But the Orch OR model takes us beyond decoherence interpretations - into a theory of consciousness and free-will. This comes as a blessing and a curse. It is a blessing in the sense that, as we pointed out above, it accounts for superposition and randomness in the same stroke. It is a curse, however, in the sense that it is a competing theory to ours. But rather than attack the Orch OR model head-on, we will find a way to reconcile it with our theory. In fact, I intend to show, in Determinism and Free-Will, how the two theories - ours and the Orch OR model (or rather, the theory of "Quantum Consciousness" as presented by Stuart Hameroff) - can actually complement each other.
Needless to say, we will be adopting the single-world decoherence model as our official stance on quantum mechanics. Although we have yet to deal with its shortcomings - the proper conceptualization of superposition and the randomness of decoherence - the proper place to address these issues is in Determinism and Free-Will. Its simplicity is a very strong advantage, making for a very elegant interpretation. The single-world decoherence interpretation is a more recent idea, and so at the time of its advent, quantum physicists were quite used to the concepts of superposition and randomness. These concepts did little, therefore, to take away from its elegance, and so it is not surprising that, aside from the Copenhagen Interpretation, the idea of single-world decoherence is gaining in popularity. It is safe to say that, today, it is fairly mainstream. This is another good reason to opt for this interpretation - that is, it is always safe to go with a model that is held in high regard by a great many professionals in the field. But what about the merits of the other interpretations? As we've seen, the consciousness-causes-collapse and the Bohm Interpretation didn't fare so well in our assessment, and the Many Worlds and Many Minds Interpretations are plagued with the same shortcomings as the consciousness-causes-collapse interpretation. It might be noteworthy to point out, however, that these shortcomings, or at least a few of them, are only problematic insofar as the interpretation plagued by them is judged on its own grounds. That is to say, just as we promised at the beginning of this section, we judged each interpretation not only on its compatibility with MM-Theory, but on its own grounds as a scientific (or as close to scientific as they get) account of quantum mechanics. We could have, for example, endorsed the Bohm Interpretation.
It nicely does away with superposition, and where randomness is concerned, although it hardly satisfies a scientifically/materialistically minded person, it doesn't bother the more metaphysically oriented thinker quite as much. In other words, whereas a non-local account like the "universal wavefunction" is too much new-age mumbo-jumbo for a keen scientist, it doesn't conflict in the least with a more metaphysical view like MM-Theory. We would simply posit that the "universal wavefunction" is a sort of algorithm (mimicking wave mechanics) that the Universal Mind carries out when deciding how to move particles. Technically, even the Many Worlds Interpretation doesn't conflict with MM-Theory. It too does away with randomness, and although superposition and the splitting of universes are things that MM-Theory would have to grapple with, they are not logically inconsistent with it. This is what the principle of the Unassailability of Science is all about. It tells us that no matter what the discoveries of science, MM-Theory claims that those discoveries are physical representations of experiences being had elsewhere in the universe. Superposition is no exception to this, and we will do this principle justice by giving an account of superposition in the paper Determinism and Free-Will. The multiverse, unfortunately, will not be given an equally decisive account, but that's no reason to suppose none could be given. But we have settled on an interpretation that doesn't work well with MM-Theory. Single-world decoherence interpretations leave a lot on our plate, for MM-Theory has yet to account for superposition and randomness, and although the principle of the Unassailability of Science promises us that superposition can be accounted for, randomness is an exception to this. It actually does conflict with our theory.
We will deal with this in Determinism and Free-Will, and the fact that we are accepting this burden shows that we have not taken the easy route - we have accommodated the scientific community more than our own theory.


© Copyright 2008, 2009 Gibran Shah.


Appendix
Yet Another Model

The diagram on the left shows the currently held model of atomic orbitals. This means that the model Bohr forwarded was, yet again, replaced by a better one. The key difference is in the shape the orbitals take. In the Bohr model, the shapes are similar to those of the Rutherford model - namely, circular or elliptical paths surrounding the nucleus - but in today's model, these shapes are noticeably different. The basic orbitals (top/right), which correspond to the lowest energy levels, are not so different from the Bohr model except that they take a spherical shape rather than a circular or elliptical one. Other orbitals higher up take on shapes reminiscent of balloons (top/left and bottom/left), toruses (top/left), or kidney beans (bottom/right). Another major difference, which will be explained as we get further into quantum mechanics, is that the electrons that occupy these orbitals are not literally orbiting the nucleus - at least, not in the conventional sense. Rather, they form what some have called an "electron cloud". What this phrase means to convey is that the electrons don't take a definite position within these orbitals - rather, they take a "fuzzy" position, and fill out these orbitals somewhat analogously to a cloud of gas filling a chamber. Depending on which orbital the electron is in, the shape this "cloud" takes conforms to the shape of the orbital. This may sound confusing to the reader, and at this point, the reader should feel confused. To really grasp what it means for an electron to take on the form of a "cloud", or for its position not to be definite, we need a more in-depth understanding of the nature of quantum mechanics. Hopefully, by the end of this paper, the reader will have such an understanding, and therefore, it might be worth his/her while to return to this sidenote at that point.

Buckyballs!

The largest thing ever to exhibit the interference pattern in the double-slit experiment is the buckyball.
As seen below, a buckyball is a somewhat large molecule made entirely of carbon atoms.


Superposition Without Randomness

Physicists will tell us that a few particle properties, like mass or charge, never go into superposition states. But technically, this cannot be known. What is known, at the very least, is that if they do go into superposition, the states they collapse into when measured are not random. That is, it's quite possible that although an electron's mass and charge, when measured, are always 9.11×10⁻³¹ kg and −1.6×10⁻¹⁹ coulombs respectively, they nevertheless go into superposition at all other times. When they collapse, they would simply acquire the same value or state every time. We have to distinguish between superposition and randomness - they are not the same thing. Although it makes good intuitive sense that superposition ultimately leads to random collapse, this connection is not necessary in a strictly logical sense.

The Proper Account of Quantum Tunneling

The animation to the right is actually a poor depiction of how quantum tunneling works. The proper account of quantum tunneling has very little to do with "borrowing" energy. The proper account is as follows. When a particle is confined to a very small region, enclosed there by a nearly impenetrable barrier, although its position has been narrowed down to that small region, it still exists in a state of superposition to some degree. The wave that constitutes this superposition state is capable of spanning just beyond the barrier, as shown in figure 5, and this means that there is a minute chance that the particle's position, when measured, will turn up beyond the barrier. Simply put, the particle can end up on the other side of the barrier because its position is not fully determined - not because it burst through it.


Figure 5: Quantum Tunneling
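To give a feel for just how "minute" this chance typically is, here is a rough numerical sketch of my own (not part of the original sidenote), using the standard wide-barrier approximation for a rectangular barrier from introductory quantum mechanics; the particular energies and barrier width below are hypothetical.

```python
import math

# Sketch (mine, not from the text): transmission probability for a particle
# of energy E tunneling through a rectangular barrier of height V0 and
# width L, using the standard wide-barrier approximation
#   T ~ 16 (E/V0)(1 - E/V0) exp(-2 * kappa * L),
#   kappa = sqrt(2 m (V0 - E)) / hbar.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.109e-31          # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def tunneling_probability(E_eV, V0_eV, L_m, m=M_E):
    """Approximate chance the particle turns up on the far side of the barrier."""
    E, V0 = E_eV * EV, V0_eV * EV
    if E >= V0:
        return 1.0  # classically allowed; no tunneling needed
    kappa = math.sqrt(2 * m * (V0 - E)) / HBAR
    prefactor = 16 * (E / V0) * (1 - E / V0)
    return prefactor * math.exp(-2 * kappa * L_m)

# Hypothetical case: a 5 eV electron hitting a 10 eV barrier 0.5 nm wide.
p = tunneling_probability(5.0, 10.0, 0.5e-9)
print(f"{p:.3e}")  # small but decidedly nonzero
```

Note how the probability falls off exponentially with barrier width: doubling L shrinks the chance enormously, which is why tunneling only matters at atomic scales.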

One might ask, therefore, why the need exists to bring in additional accounts, such as the borrowing of energy, for a full explanation of quantum tunneling. The answer is that some physicists feel that, although many of the fundamental concepts of classical mechanics can be abandoned in light of the discoveries of quantum mechanics, this is not true for all classical concepts. That a particle can exist somewhere within a confined space enclosed by a barrier at one point in time and then somewhere outside that space at a later point in time still violates certain principles of classical mechanics - principles that one cannot justify abandoning by appeal to quantum mechanics. Namely, it violates certain conservation laws (of energy and momentum), and quantum mechanics, despite its overturning of classical mechanics, has not ruled these laws out. Therefore, we still need to account for how a particle can go from one side of a barrier to the other without sufficient energy to do so. But the crucial question is just whether extra energy is even needed for quantum tunneling to occur. The indeterminacy of a particle's position is seen, by some, as simply an alternative to the "borrowing" account, one that doesn't suffer the insufficiency that the "borrowing" account was introduced to remedy. As noted many times already in this paper, these are matters of speculation, and so one is free to choose either interpretation without clashing with scientific fact.

Particles Don't Touch

Technically, particles never come in contact with each other - not even in the classical paradigm of physics. Instead, they affect each other with their charges. The fact remains, however, that how these charges affect the particles they subjugate depends greatly on where the particles are relative to each other. The closer they are, the stronger the force of the charge.
And the direction in which one particle is pushed or pulled depends on where the other sits relative to it. So if their positions are undetermined, it is hard to fathom how their charges can affect each other in any definite manner. The more undetermined their positions - as in the double-slit experiment - the more unfathomable it becomes what, for instance, determined that the particle would hit the screen at the precise location it did.
