
Quantum Computing

Abstract
Imagine a computer whose memory is exponentially larger than its apparent physical
size; a computer that can manipulate an exponential set of inputs simultaneously; a
computer that computes in the twilight zone of Hilbert space. You would be thinking of a
quantum computer. Relatively few and simple concepts from quantum mechanics are
needed to make quantum computers a possibility. The subtlety has been in learning to
manipulate these concepts. Is such a computer an inevitability, or will it be too difficult to build?

The subject of quantum computing brings together ideas from classical information
theory, computer science, and quantum physics. This review aims to summarize not just
quantum computing, but the whole subject of quantum information theory. It turns out
that information theory and quantum mechanics fit together very well. In order to explain
their relationship, the review begins with an introduction to classical information theory
and computer science, including Shannon's theorem, error correcting codes, Turing
machines and computational complexity. The principles of quantum mechanics are then
outlined, and the EPR experiment described. The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum
from classical information theory, and, arguably, quantum from classical physics. Basic
quantum information ideas are described, including key distribution, teleportation, data
compression, quantum error correction, the universal quantum computer and quantum
algorithms. The common theme of all these ideas is the use of quantum entanglement as a
computational resource. Experimental methods for small quantum processors are briefly
sketched, concentrating on ion traps, high Q cavities, and NMR. The review concludes
with an outline of the main features of quantum information physics, and avenues for
future research.

Introduction
What is quantum computing? It's something that could have been thought up a long time
ago - an idea whose time has come. For any physical theory one can ask: what sort of
machines will do useful computation? or, what sort of processes will count as useful
computational acts? Alan Turing thought about this in 1936 with regard (implicitly) to
classical mechanics, and gave the world the paradigm classical computer: the Turing
machine.

But even in 1936 classical mechanics was known to be false. Work is now under way -
mostly theoretical, but tentatively, hesitantly groping towards the practical - in seeing
what quantum mechanics means for computers and computing.

The basic idea behind quantum computing is the merging of computer science, information theory and quantum mechanics, moving beyond classical physics for better efficiency in the operation of systems. Information is the most important element of any system.
Information can be represented in many ways. It can be analyzed provided we know how
it was encoded. For example, the two statements ``the quantum computer is very
interesting'' and ``l'ordinateur quantique est tres interessant'' have something in common,
although they share no words. The thing they have in common is their information
content. Essentially the same information could be expressed in many other ways, for
example by substituting numbers for letters in a scheme such as a -> 97, b -> 98, c -> 99
and so on, in which case the English version of the above statement becomes 116 104
101 32 113 117 97 110 116 117 109... . It is very significant that information can be
expressed in different ways without losing its essential nature, since this leads to the
possibility of the automatic manipulation of information: a machine need only be able to
manipulate quite simple things like integers in order to do surprisingly powerful
information processing, from document preparation to differential calculus, even to
translating between human languages. We are familiar with this now, because of the
ubiquitous computer, but even fifty years ago such a widespread significance of
automated information processing was not foreseen.
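
To make the letter-to-number scheme concrete, here is a minimal Python sketch (the statement string and the a -> 97 mapping are taken from the text above; everything else is purely illustrative):

    statement = "the quantum computer is very interesting"

    # Re-encode the sentence as character codes using the a -> 97, b -> 98, ...
    # scheme described above (i.e. standard ASCII codes).
    codes = [ord(ch) for ch in statement]
    print(codes[:11])   # [116, 104, 101, 32, 113, 117, 97, 110, 116, 117, 109] ...

    # The translation is lossless: the numbers can be turned back into the text,
    # which is exactly why information survives a change of representation.
    assert "".join(chr(c) for c in codes) == statement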

However, there is one thing that all ways of expressing information must have in
common: they all use real physical things to do the job. Spoken words are conveyed by
air pressure fluctuations, written ones by arrangements of ink molecules on paper, even
thoughts depend on neurons (Landauer 1991). The rallying cry of the information
physicist is ``no information without physical representation!'' Conversely, the fact that
information is insensitive to exactly how it is expressed, and can be freely translated from
one form to another, makes it an obvious candidate for a fundamentally important role in
physics, like energy and momentum and other such abstractions. However, until the
second half of the twentieth century, the precise mathematical treatment of information, especially
information processing, was undiscovered, so the significance of information in physics
was only hinted at in concepts such as entropy in thermodynamics. It now appears that
information may have a much deeper significance. Historically, much of fundamental
physics has been concerned with discovering the fundamental particles of nature and the
equations which describe their motions and interactions. It now appears that a different
programme may be equally important: to discover the ways that nature allows, and
prevents, information to be expressed and manipulated, rather than particles to move. For
example, the best way to state exactly what can and cannot travel faster than light is to
identify information as the speed-limited entity. In quantum mechanics, it is highly
significant that the state vector must not contain, whether explicitly or implicitly, more
information than can meaningfully be associated with a given system. Among other
things this produces the wave function symmetry requirements which lead to Bose-Einstein and Fermi-Dirac statistics, the periodic structure of atoms, and so on.

The history of computer technology has involved a sequence of changes from one type of
physical realization to another --- from gears to relays to valves to transistors to
integrated circuits and so on. Today's advanced lithographic techniques can squeeze logic gates and wires a fraction of a micron wide onto the surface of silicon chips. Soon they
will yield even smaller parts and inevitably reach a point where logic gates are so small
that they are made out of only a handful of atoms. On the atomic scale matter obeys the
rules of quantum mechanics, which are quite different from the classical rules that
determine the properties of conventional logic gates. So if computers are to become
smaller in the future, new, quantum technology must replace or supplement what we have
now. The point is, however, that quantum technology can offer much more than
cramming more and more bits onto silicon and multiplying the clock-speed of microprocessors. It can support an entirely new kind of computation with qualitatively new
algorithms based on quantum principles!

A bit is a fundamental unit of information, classically represented as a 0 or 1 in your digital computer. Each classical bit is physically realized through a macroscopic physical system, such as the magnetization on a hard disk or the charge on a capacitor. A document, for example, comprising n characters stored on the hard drive of a typical computer is accordingly described by a string of 8n zeros and ones. Herein lies a key
difference between your classical computer and a quantum computer. Where a classical
computer obeys the well understood laws of classical physics, a quantum computer is a
device that harnesses physical phenomena unique to quantum mechanics (especially quantum interference) to realize a fundamentally new mode of information processing.
In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not confined to a single binary value. This property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state
corresponding to the logical state 0 or 1 as in a classical bit, but also in states
corresponding to a blend or superposition of these classical states. In other words, a qubit
can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient
representing the probability for each state. This may seem counterintuitive because everyday phenomena are governed by classical physics, not quantum mechanics -- which takes over at the atomic level.
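
As a minimal numerical sketch of a qubit state (Python with NumPy; the particular amplitudes chosen here are arbitrary and purely illustrative):

    import numpy as np

    # A qubit |psi> = a|0> + b|1> is a normalized pair of complex amplitudes.
    a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)        # arbitrary illustrative choice
    psi = np.array([a, b])
    assert np.isclose(np.linalg.norm(psi), 1.0)   # |a|^2 + |b|^2 = 1

    # A measurement forces the qubit to 0 or 1, with probabilities |a|^2 and |b|^2.
    probs = np.abs(psi) ** 2
    print(probs)                                  # [0.5 0.5]
    outcome = np.random.choice([0, 1], p=probs)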

To explain what makes quantum computers so different from their classical counterparts
we begin by having a closer look at a basic chunk of information, namely one bit. From a physical point of view a bit is a physical system which can be prepared in one of two different states representing two logical values --- no or yes, false or true, or simply 0 or 1. For example, in digital computers, the voltage between the plates in a capacitor
represents a bit of information: a charged capacitor denotes bit value 1 and an uncharged
capacitor bit value 0. One bit of information can also be encoded using two different polarizations of light or two different electronic states of an atom. However, if we choose an atom as a physical bit then quantum mechanics tells us that, apart from the two distinct electronic states, the atom can also be prepared in a coherent superposition of the two states. This means that the atom is both in state 0 and state 1.

Experimental validation

In an experiment like that in figure a, where a photon is fired at a half-silvered mirror, it can be shown that the photon does not actually split by verifying that if one detector
registers a signal, then no other detector does. With this piece of information, one might
think that any given photon travels vertically or horizontally, randomly choosing between
the two paths. However, quantum mechanics predicts that the photon actually travels
both paths simultaneously, collapsing down to one path only upon measurement. This
effect, known as single-particle interference, can be better illustrated in a slightly more
elaborate experiment, outlined in figure b.

Figure b depicts an interesting experiment that demonstrates the phenomenon of single-particle interference. In this case, the experiment shows that the photon always reaches
detector A, never detector B! If a single photon travels vertically and strikes the mirror,
then, by comparison to the experiment in figure a, there should be an equal probability
that the photon will strike either detector A or detector B. The same goes for a photon
traveling down the horizontal path. However, the actual result is drastically different.
The only conceivable conclusion is therefore that the photon somehow traveled both paths simultaneously, creating interference at the point of intersection that destroyed the possibility of the signal reaching B. This is known as quantum interference and results
from the superposition of the possible photon states, or potential paths. So although only
a single photon is emitted, it appears as though an identical photon exists and travels the
'path not taken,' only detectable by the interference it causes with the original photon
when their paths come together again. If, for example, either of the paths is blocked with an absorbing screen, then detector B begins registering hits again, just as in the first
experiment! This unique characteristic, among others, makes the current research in
quantum computing not merely a continuation of today's idea of a computer, but rather an
entirely new branch of thought. It is because quantum computers harness these special characteristics that they have the potential to be incredibly powerful computational devices.
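
The figure-b arrangement can be mimicked in a few lines (a hedged sketch in Python/NumPy: the half-silvered mirror is modelled by a standard symmetric beam-splitter matrix, the two vector entries stand for the amplitudes on the two paths, and the detector labels are only illustrative):

    import numpy as np

    photon = np.array([1.0, 0.0], dtype=complex)   # starts on one definite path

    # A common matrix convention for a 50/50 half-silvered mirror.
    B = np.array([[1, 1j],
                  [1j, 1]], dtype=complex) / np.sqrt(2)

    out = B @ (B @ photon)            # two mirrors with both paths open
    print(np.abs(out) ** 2)           # [0. 1.] -> the same detector (A) fires every time

    blocked = B @ photon              # now block one path after the first mirror
    blocked[1] = 0.0                  # the absorbing screen removes that amplitude
    print(np.abs(B @ blocked) ** 2)   # [0.25 0.25] -> detector B starts registering hits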

The most exciting really new feature of quantum computing is quantum parallelism. A
quantum system is in general not in one "classical state", but in a "quantum state"
consisting (crudely speaking) of a superposition of many classical or classical-like states.
This superposition is not just a figure of speech, covering up our ignorance of which
classical-like state it's "really" in. If that were all the superposition meant, you could drop
all but one of the classical-like states (maybe only later, after you deduced retrospectively
which one was "the right one") and still get the time evolution right. But actually you
need the whole superposition to get the time evolution right. The system really is in some
sense in all the classical-like states at once! If the superposition can be protected from
unwanted entanglement with its environment (known as decoherence), a quantum
computer can output results depending on details of all its classical-like states. This is
quantum parallelism - parallelism on a serial machine. And if that wasn't enough,
machines that would already, in architectural terms, qualify as parallel can benefit from
quantum parallelism too - at which point the mind begins to seriously boggle!

The Potential and Power of Quantum Computing

Let us start by describing the problem at hand: factoring a number N into its prime factors (e.g., the number 51688 may be decomposed as 51688 = 2^3 x 7 x 13 x 71). A convenient way to quantify how quickly a particular algorithm may solve a problem is to ask how the
number of steps to complete the algorithm scales with the size of the ``input'' the
algorithm is fed. For the factoring problem, this input is just the number N we wish to factor; hence the length of the input is log N. (The base of the logarithm is determined by
our numbering system. Thus a base of 2 gives the length in binary; a base of 10 in
decimal.) `Reasonable' algorithms are ones which scale as some small-degree polynomial
in the input size (with a degree of perhaps 2 or 3).

On conventional computers the best known factoring algorithm runs in roughly exp[c (ln N)^(1/3) (ln ln N)^(2/3)] steps, for some constant c. This algorithm, therefore, scales exponentially with the input size log N. For instance, in 1994 a 129 digit number
(known as RSA129) was successfully factored using this algorithm on approximately
1600 workstations scattered around the world; the entire factorization took eight months.
Using this to estimate the prefactor of the above exponential scaling, we find that it
would take roughly 800,000 years to factor a 250 digit number with the same computer
power; similarly, a 1000 digit number would require a time significantly longer than the estimated age of the universe. The difficulty of factoring large numbers is crucial for public-
key cryptosystems, such as ones used by banks. There, such codes rely on the difficulty
of factoring numbers with around 250 digits.

Recently, an algorithm was developed for factoring numbers on a quantum computer which runs in O((log N)^(2+ε)) steps, where ε is small. This is roughly quadratic in the input size, so factoring a 1000 digit number with such an algorithm would require only a
few million steps. The implication is that public key cryptosystems based on factoring
may be breakable.
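
To see what these two scalings mean in practice, the following Python sketch evaluates them for a few input sizes (the constant c, the choice of a quadratic exponent, and the neglect of all prefactors and lower-order terms are assumptions made purely for illustration):

    import math

    c = (64 / 9) ** (1 / 3)     # constant usually quoted for the number field sieve

    def classical_steps(digits):
        ln_n = digits * math.log(10)                  # ln N for a d-digit number N
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    def quantum_steps(digits):
        bits = digits * math.log2(10)                 # input size log N in bits
        return bits ** 2                              # roughly quadratic, as stated above

    for d in (129, 250, 1000):
        print(f"{d:5d} digits  classical ~{classical_steps(d):.1e}"
              f"  quantum ~{quantum_steps(d):.1e}")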

To give you an idea of how this exponential improvement might be possible, we review
an elementary quantum mechanical experiment that demonstrates where such power may
lie hidden. The two-slit experiment is prototypic for observing quantum mechanical
behavior: A source emits photons, electrons or other particles that arrive at a pair of slits.
These particles undergo unitary evolution and finally measurement. We see an
interference pattern, with both slits open, which wholly vanishes if either slit is covered.
In some sense, the particles pass through both slits in parallel. If such unitary evolution
were to represent a calculation (or an operation within a calculation) then the quantum
system would be performing computations in parallel. Quantum parallelism comes for
free. The output of this system would be given by the constructive interference among the
parallel computations.

In a traditional computer, information is encoded in a series of bits, and these bits are
manipulated via Boolean logic gates arranged in succession to produce an end result.
Similarly, a quantum computer manipulates qubits by executing a series of quantum
gates, each a unitary transformation acting on a single qubit or pair of qubits. In applying
these gates in succession, a quantum computer can perform a complicated unitary
transformation to a set of qubits in some initial state. The qubits can then be measured,
with this measurement serving as the final computational result. This similarity in
calculation between a classical and quantum computer means that, in theory, a classical
computer can accurately simulate a quantum computer. In other words, a classical
computer would be able to do anything a quantum computer can. So why bother with
quantum computers? Although a classical computer can theoretically simulate a quantum
computer, it is incredibly inefficient, so much so that a classical computer is effectively
incapable of performing many tasks that a quantum computer could perform with ease.
The simulation of a quantum computer on a classical one is a computationally hard
problem because the correlations among quantum bits are qualitatively different from
correlations among classical bits, as first explained by John Bell. Take, for example, a system of only a few hundred qubits: it exists in a Hilbert space of dimension ~10^90, so simulating it would require a classical computer to work with exponentially large matrices (performing calculations on each individual state, which is itself represented as a matrix). This would take exponentially longer than the computation would take on even a primitive quantum computer.
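
A back-of-the-envelope sketch of why the brute-force simulation is hopeless (Python; the 16 bytes per amplitude assumes double-precision complex storage):

    # An n-qubit state vector has 2**n complex amplitudes (~16 bytes each).
    for n in (10, 30, 50, 300):
        amplitudes = 2.0 ** n
        gigabytes = amplitudes * 16 / 1e9
        print(f"{n:4d} qubits: {amplitudes:.2e} amplitudes, ~{gigabytes:.2e} GB")
    # 30 qubits already need ~17 GB; a few hundred qubits give ~10^90 amplitudes,
    # which is the dimension quoted above and far beyond any conceivable memory.
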
Richard Feynman was among the first to recognize the potential in quantum
superposition for solving such problems much faster. For example, a system of 500 qubits, which is impossible to simulate classically, represents a quantum superposition of as many as 2^500 states. Each state would be classically equivalent to a single list of 500 1's and 0's. Any quantum operation on that system -- a particular pulse of radio waves, for instance, whose action might be to execute a controlled-NOT operation on the 100th and 101st qubits -- would simultaneously operate on all 2^500 states. Hence with one fell swoop, one tick of the computer clock, a quantum operation could compute not just on one machine state, as serial computers do, but on 2^500 machine states at once! Eventually, however, observing the system would cause it to collapse into a single quantum state corresponding to a single answer, a single list of 500 1's and 0's, as dictated by the measurement axiom of quantum mechanics. The reason this is an exciting result is because this answer, derived from the massive quantum parallelism achieved through superposition, is the equivalent of performing the same operation on a classical supercomputer with ~10^150 separate processors (which is of course impossible)!
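
The "one tick of the clock" picture can be reproduced for a toy register (a sketch in Python/NumPy with obvious stand-ins: 3 qubits instead of 500, and the controlled-NOT written out as an explicit 8x8 permutation matrix):

    import numpy as np

    n = 3                                   # toy stand-in for the 500 qubits above
    dim = 2 ** n

    # Put the register in a superposition with distinct (arbitrary) amplitudes.
    amps = np.arange(1, dim + 1, dtype=complex)
    state = amps / np.linalg.norm(amps)

    # Controlled-NOT: qubit 0 (most significant bit) controls, qubit 1 is flipped.
    cnot = np.zeros((dim, dim))
    for x in range(dim):
        y = x ^ (1 << (n - 2)) if x & (1 << (n - 1)) else x
        cnot[y, x] = 1

    # A single application updates every one of the 2**n amplitudes at once.
    new_state = cnot @ state
    print(np.round(np.abs(new_state) ** 2, 3))
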
Early investigators in this field were naturally excited by the potential of such immense
computing power, and soon after realizing its potential, the hunt was on to find something
interesting for a quantum computer to do. Peter Shor, a research scientist at AT&T's Bell Laboratories in New Jersey, provided such an application by devising the first quantum computer algorithm. Shor's algorithm harnesses the power of quantum superposition to rapidly factor very large numbers (on the order of 10^200 and greater) in a matter of seconds. The premier application of a quantum computer capable of implementing this algorithm lies in the field of encryption, where one common and widely trusted encryption scheme, known as RSA, relies heavily on the difficulty of factoring very large
composite numbers into their primes. A computer which can do this easily is naturally of
great interest to numerous government agencies that use RSA -- previously considered to
be "uncrackable" -- and anyone interested in electronic and financial privacy.
Encryption, however, is only one application of a quantum computer. In addition,
Shor has put together a toolbox of mathematical operations that can only be performed on
a quantum computer, many of which he used in his factorization algorithm. Furthermore,
Feynman asserted that a quantum computer could function as a kind of simulator for
quantum physics, potentially opening the doors to many discoveries in the field.
Currently the power and capability of a quantum computer are primarily a matter of theoretical
speculation; the advent of the first fully functional quantum computer will undoubtedly
bring many new and exciting applications.

Now we push the idea of superposition of numbers a bit further. Consider a register
composed of three physical bits. Any classical register of that type can store in a given
moment of time only one out of eight different numbers i.e. the register can be in only
one out of eight possible configurations such as 000, 001, 010, ... 111. A quantum register
composed of three qubits can store in a given moment of time all eight numbers in a
quantum superposition (Fig. 2). It is quite remarkable that all eight numbers are physically present in the register, but it should be no more surprising than a qubit being both in state 0 and 1 at the same time. If we keep adding qubits to the register we increase its storage capacity exponentially, i.e. three qubits can store 8 different numbers at once, four qubits can store 16 different numbers at once, and so on; in general L qubits can store 2^L numbers at once. Once the register is prepared in a superposition of different
numbers we can perform operations on all of them. For example, if qubits are atoms then suitably tuned laser pulses affect atomic electronic states and evolve initial superpositions of encoded numbers into different superpositions. During such evolution each number in the superposition is affected, and as a result we generate a massive parallel computation,
albeit in one piece of quantum hardware. This means that a quantum computer can in
only one computational step perform the same mathematical operation on 2^L different input numbers encoded in coherent superpositions of L qubits. In order to accomplish the same task any classical computer has to repeat the same computation 2^L times or one has to use 2^L different processors working in parallel. In other words a quantum computer
offers an enormous gain in the use of computational resources such as time and memory.
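
A minimal classical simulation of this idea (Python/NumPy; the particular operation x -> 3x mod 8 is an arbitrary reversible map, chosen only because any reversible map on the register corresponds to a valid quantum operation):

    import numpy as np

    L = 3
    dim = 2 ** L                                   # 2**L numbers held at once

    # Register in an equal superposition of the numbers 0..7 (the Fig. 2 situation).
    register = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

    # The reversible operation x -> 3*x mod 8, written as a permutation matrix.
    U = np.zeros((dim, dim))
    for x in range(dim):
        U[(3 * x) % dim, x] = 1

    register = U @ register    # one step acts on all 2**L encoded numbers together

    # A classical machine would need 2**L separate evaluations (or 2**L processors)
    # to apply the same function to every input.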

But this, after all, sounds like just another piece of purely technological progress. It looks as if classical computers can do the same computations as quantum computers but simply need
more time or more memory. The catch is that classical computers need exponentially
more time or memory to match the power of quantum computers and this is really asking
for too much because an exponential increase is really fast and we run out of available
time or memory very quickly. Let us have a closer look at this issue.

In order to solve a particular problem computers follow a precise set of instructions that
can be mechanically applied to yield the solution to any given instance of the problem. A
specification of this set of instructions is called an algorithm. Examples of algorithms are
the procedures taught in elementary schools for adding and multiplying whole numbers;
when these procedures are mechanically applied, they always yield the correct result for
any pair of whole numbers. Some algorithms are fast (e.g. multiplication), others are very slow (e.g. factorisation, playing chess). Consider, for example, the following factorisation problem:

? x ? = 29083

How long would it take you, using paper and pencil, to find the two whole numbers
which should be written into the two boxes (the solution is unique)? Probably about one
hour. Solving the reverse problem

127 x 229 = ?,

again using the paper and pencil technique, takes less than a minute. All because we know
fast algorithms for multiplication but we do not know equally fast ones for factorisation.
What really counts for a ``fast'' or a ``usable'' algorithm, according to the standard
definition, is not the actual time taken to multiply a particular pair of numbers but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers. The same standard text-book method of multiplication requires little extra work when we switch from two three-digit numbers to two thirty-digit numbers. By contrast, factoring a thirty-digit number using the simplest trial division method is about 10^13 times more time or memory consuming than factoring a three-digit number. The use of
computational resources is enormous when we keep increasing the number of digits. The
largest number that has been factorised as a mathematical challenge, i.e. a number whose
factors were secretly chosen by mathematicians in order to present a challenge to other
mathematicians, had 129 digits. No one can even conceive of how one might factorize, say, thousand-digit numbers; the computation would take much more than the estimated age of the universe.
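
The asymmetry can be seen directly in a few lines of Python (a naive trial-division routine; the step counter is only there to make the contrast visible):

    print(127 * 229)               # 29083 -- the "reverse problem" takes one step

    def smallest_factor(n):
        """Find the smallest non-trivial factor of n by naive trial division."""
        d, steps = 2, 0
        while d * d <= n:
            steps += 1
            if n % d == 0:
                return d, steps
            d += 1
        return n, steps            # n is prime

    print(smallest_factor(29083))  # (127, 126) -- 126 candidate divisors tried
    # The number of candidates grows roughly like sqrt(N), i.e. exponentially in
    # the number of digits, which is why thirty digits are already out of reach
    # for this simplest method.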

Skipping details of computational complexity, we only mention that computer scientists have a rigorous way of defining what makes an algorithm fast (and usable) or
slow (and unusable). For an algorithm to be fast, the time it takes to execute the algorithm
must increase no faster than a polynomial function of the size of the input. Informally, think of the input size as the total number of bits needed to specify the input to the problem, for example, the number of bits needed to encode the number we want to
factorize. If the best algorithm we know for a particular problem has the execution time
(viewed as a function of the size of the input) bounded by a polynomial then we say that
the problem belongs to class P. Problems outside class P are known as hard problems.
Thus we say, for example, that multiplication is in P whereas factorization is not in P, and that is why it is a hard problem. Hard does not mean ``impossible to solve'' or ``non-computable'' --- factorization is perfectly computable using a classical computer; however, the physical resources needed to factor a large number are such that, for all practical purposes, it can be regarded as intractable.
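
A quick numerical illustration of the distinction drawn above (the cubic exponent stands in for "some small-degree polynomial"; the base-2 exponential stands in for a brute-force search):

    # Steps needed as the input size grows, polynomial versus exponential.
    for bits in (10, 50, 100, 500, 1000):
        polynomial = bits ** 3          # a cubic-time algorithm: stays manageable
        exponential = 2.0 ** bits       # an exponential-time algorithm: explodes
        print(f"{bits:5d} bits   poly: {polynomial:>13,}   exp: {exponential:.2e}")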

It is worth pointing out that computer scientists have carefully constructed the definitions of efficient and inefficient algorithms so as to avoid any reference to particular physical hardware. According to the above definition, factorization is a hard problem for any classical computer regardless of its make and clock-speed. Have a look at Fig. 3 and compare a
modern computer with its ancestor of the nineteenth century, the Babbage differential
engine. The technological gap is obvious and yet the Babbage engine can perform the
same computations as the modern digital computer. Moreover, factoring is equally
difficult for both the Babbage engine and a top-of-the-line Connection Machine; the
execution time grows exponentially with the size of the number in both cases. Thus
purely technological progress can only increase the computational speed by a fixed
multiplicative factor which does not help to change the exponential dependence between
the size of the input and the execution time. Such a change requires inventing new, better
algorithms. Although quantum computation requires new quantum technology, its real power lies in new quantum algorithms which exploit quantum superpositions that can contain an exponential number of different terms. Quantum computers can be programmed in a qualitatively new way. For example, a quantum program can incorporate instructions such as `... and now take a superposition of all numbers from the previous operations...'; this instruction is meaningless for any classical data processing device but makes a lot of sense to a quantum computer. As a result we can construct new algorithms for solving problems, some of which can turn difficult mathematical problems, such as factorization, into easy ones!

Brief history
The idea of a computational device based on quantum mechanics was first explored in the
1970's and early 1980's by physicists and computer scientists such as Charles H. Bennett
of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National
Laboratory in Illinois, David Deutsch of the University of Oxford, and the late Richard P.
Feynman of the California Institute of Technology (Caltech). The idea emerged when
scientists were pondering the fundamental limits of computation. They understood that if
technology continued to abide by Moore's Law, then the continually shrinking size of
circuitry packed onto silicon chips would eventually reach a point where individual
elements would be no larger than a few atoms. Here a problem arose because at the
atomic scale the physical laws that govern the behavior and properties of the circuit are
inherently quantum mechanical in nature, not classical. This then raised the question of
whether a new kind of computer could be devised based on the principles of quantum
physics.
Feynman was among the first to attempt to provide an answer to this question by
producing an abstract model in 1982 that showed how a quantum system could be used to
do computations. He also explained how such a machine would be able to act as a
simulator for quantum physics. In other words, a physicist would have the ability to
carry out experiments in quantum physics inside a quantum mechanical computer.
Later, in 1985, Deutsch realized that Feynman's assertion could eventually lead to a
general purpose quantum computer and published a crucial theoretical paper showing that
any physical process, in principle, could be modeled perfectly by a quantum computer.
Thus, a quantum computer would have capabilities far beyond those of any traditional
classical computer. After Deutsch published this paper, the search began to find
interesting applications for such a machine.
Unfortunately, all that could be found were a few rather contrived mathematical
problems, until Shor circulated in 1994 a preprint of a paper in which he set out a method
for using quantum computers to crack an important problem in number theory, namely
factorization. He showed how an ensemble of mathematical operations, designed
specifically for a quantum computer, could be organized to enable such a machine to
factor huge numbers extremely rapidly, much faster than is possible on conventional
computers. With this breakthrough, quantum computing was transformed from a mere academic curiosity into a matter of national and worldwide interest.

Obstacles and Research

The field of quantum information processing has made numerous promising advancements since its conception, including the building of two- and three-qubit
quantum computers capable of some simple arithmetic and data sorting. However, a few
potentially large obstacles still remain that prevent us from "just building one," or more
precisely, building a quantum computer that can rival today's modern digital computer.
Among these difficulties, error correction, decoherence, and hardware architecture are
probably the most formidable. Error correction is rather self-explanatory, but what errors
need correction? The answer is primarily those errors that arise as a direct result of
decoherence, or the tendency of a quantum computer to decay from a given quantum
state into an incoherent state as it interacts, or entangles, with the state of the
environment. These interactions between the environment and qubits are unavoidable,
and induce the breakdown of information stored in the quantum computer, and thus errors
in computation. Before any quantum computer will be capable of solving hard problems, researchers must devise a way to keep decoherence and other potential sources of error at an acceptable level. Thanks to the theory (and now reality) of quantum error
correction, first proposed in 1995 and continually developed since, small scale quantum
computers have been built and the prospects of large quantum computers are looking up.
Probably the most important idea in this field is the application of error correction in
phase coherence as a means to extract information and reduce error in a quantum system
without actually measuring that system. In 1998, researchers at Los Alamos National Laboratory and MIT, led by Raymond Laflamme, managed to spread a single bit of
quantum information (qubit) across three nuclear spins in each molecule of a liquid
solution of alanine or trichloroethylene molecules. They accomplished this using the
techniques of nuclear magnetic resonance (NMR). This experiment is significant because
spreading out the information actually made it harder to corrupt. Quantum mechanics
tells us that directly measuring the state of a qubit invariably destroys the superposition of
states in which it exists, forcing it to become either a 0 or 1. The technique of spreading
out the information allows researchers to utilize the property of entanglement to study the
interactions between states as an indirect method for analyzing the quantum information.
Rather than a direct measurement, the group compared the spins to see if any new
differences arose between them without learning the information itself. This technique
gave them the ability to detect and fix errors in a qubit's phase coherence, and thus
maintain a higher level of coherence in the quantum system. This milestone has provided an argument against skeptics, and hope for believers. Currently, research in quantum error
correction continues with groups at Caltech (Preskill, Kimble), Microsoft, Los Alamos,
and elsewhere.
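
The idea of comparing qubits to locate an error without reading the encoded information can be illustrated with the textbook three-qubit bit-flip code (a hedged Python/NumPy sketch: the Los Alamos/MIT experiment protected phase coherence rather than bit values, and the amplitudes and error location below are arbitrary choices, but the principle of syndrome comparison is the same):

    import numpy as np

    # Encode one logical qubit a|0> + b|1> across three qubits: a|000> + b|111>.
    a, b = 0.6, 0.8                              # arbitrary, with |a|^2 + |b|^2 = 1
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = a, b

    def flip(state, qubit):
        """Apply a bit-flip (X) on one of the three qubits (0 = leftmost)."""
        out = np.zeros_like(state)
        for idx, amp in enumerate(state):
            out[idx ^ (1 << (2 - qubit))] += amp
        return out

    state = flip(state, 1)                       # an error hits the middle qubit

    # Syndrome extraction: compare qubits pairwise (parities of qubits 0,1 and 1,2).
    # This reveals where the error sits but says nothing about a and b themselves.
    idx = int(np.flatnonzero(state)[0])
    s01 = (idx >> 2 & 1) ^ (idx >> 1 & 1)
    s12 = (idx >> 1 & 1) ^ (idx & 1)
    error_qubit = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]

    if error_qubit is not None:
        state = flip(state, error_qubit)         # undo the error
    assert np.isclose(state[0b000], a) and np.isclose(state[0b111], b)
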
At this point, only a few of the benefits of quantum computation and quantum
computers are readily obvious, but before more possibilities are uncovered theory must
be put to the test. In order to do this, devices capable of quantum computation must be
constructed. Quantum computing hardware is, however, still in its infancy. As a result
of several significant experiments, nuclear magnetic resonance (NMR) has become the
most popular component in quantum hardware architecture. Within the past year alone, a group from Los Alamos National Laboratory and MIT carried out the first experimental demonstrations of a quantum computer using nuclear magnetic resonance (NMR) technology. Currently, research is underway to discover methods for battling the
destructive effects of decoherence, to develop optimal hardware architecture for
designing and building a quantum computer, and to further uncover quantum algorithms
to utilize the immense computing power available in these devices. Naturally this pursuit
is intimately related to quantum error correction codes and quantum algorithms, so several groups are doing simultaneous research across these fields. To date,
designs have involved ion traps, cavity quantum electrodynamics (QED), and NMR.
Though these devices have had modest success in performing interesting experiments, the
technologies each have serious limitations. Ion trap computers are limited in speed by
the vibration frequency of the modes in the trap. NMR devices have an exponential
attenuation of signal to noise as the number of qubits in a system increases. Cavity QED
is slightly more promising; however, it still has only been demonstrated with a few
qubits. Seth Lloyd of MIT is currently a prominent researcher in quantum hardware.
The future of quantum computer hardware architecture is likely to be very different from
what we know today; however, the current research has helped to provide insight as to
what obstacles the future will hold for these devices.

Future Outlook

At present, quantum computers and quantum information technology remain in their pioneering stage. At this very moment obstacles are being surmounted that will provide
the knowledge needed to thrust quantum computers up to their rightful position as the
fastest computational machines in existence. Error correction has made promising
progress to date, nearing a point now where we may have the tools required to build a
computer robust enough to adequately withstand the effects of decoherence. Quantum
hardware, on the other hand, remains an emerging field, but the work done thus far
suggests that it will only be a matter of time before we have devices large enough to test Shor's and other quantum algorithms. Quantum computers will then emerge as the superior computational devices at the very least, and perhaps one day make today's modern computer obsolete. Quantum computation has its origins in highly specialized
fields of theoretical physics, but its future undoubtedly lies in the profound effect it will
have on the lives of all mankind.
