Vicente Moret-Bonillo

Adventures in Computer Science
From Classical Bits to Quantum Bits
Vicente Moret-Bonillo
Departamento de Computación
Universidad de A Coruña
A Coruña, Spain
“Would you tell me, please, which way I ought to go from here?” said Alice.
“That depends a good deal on where you want to get to,” said the Cat.
“I don’t much care where,” said Alice.
“Then it doesn’t matter which way you go,” said the Cat.
Lewis Carroll
Preface
Just to reiterate—although this book does not contain anything particularly new,
it is to be hoped that the reader will find novelty in the way the material is presented.
Enjoy!
A Coruña, Spain
Vicente Moret-Bonillo
2017
Chapter 1
The Universe of Binary Numbers

1.1 Looking for the Bit
Before we start reflecting on difficult problems, we will review certain more or less
trivial concepts that are related to the peculiar entities and techniques already
implemented in our computers so that they can compute. Our computers are based
on binary logic, which uses the concept of bit to represent something that is true
(denoted by |1〉) or false (denoted by |0〉). As for the strange bar-and-angle-bracket
symbol, its exact meaning will become clear later. For the moment accept that if
the state of something is true, then the representation of that state is |1〉, and if the
state of something is false, then the representation of that state is |0〉. But let us
pose two questions:
• What kind of a thing is a bit?
• Is there a formal definition for the concept of bit?
X ≡ {P(A) = P(B)}

That is to say, we denote by X the circumstance that events A and B are equally
likely. According to this, the official definition of bit could be translated as the
quantity of information associated with an election between {A, B/X}. In this
expression, {A, B/X} means A or B given X, and, from our point of view, the
problem of the official definition lies in the term ‘election’. The interpretation of
this term might be confusing.
The definition given by the Diccionario de Computación (in Spanish), published
by McGraw-Hill in 1991, establishes that a bit is “the unit of information
equivalent to a binary decision.” This definition is almost the same as the previous
one, with “election” replaced by the stronger term “decision.” Even so, it still
seems ambiguous and imprecise.
A second meaning of the term given in the same dictionary establishes that “a bit
is a non-dimensional unit of the capacity of storage that expresses the capacity of
storage as the base 2 logarithm of X, X being the number of possible states of the
device”. This statement takes us directly into the domain of axiomatic definitions.
But... what “device” is referred to here? And... why base 2 and not, for example,
base 5? The reason for this base will be discussed later on in the text.
Let us leave aside academic sources of definitions for the moment. The author of
this text once ran an informal experiment: he asked some of his closest colleagues,
all professionals in computer science, for their own definitions of bit. Their
definitions are as follows:
• A mathematical representation of the two possible states of a switch that uses a
base 2 number system.
• A minimum unit of information of an alphabet with just two symbols.
• The smallest unit of information in a machine.
• A binary digit.
• A unit of numbering that can take the values 0 or 1.
• The minimum quantity of information that can be stored and transmitted by a
computer system.
The full list of definitions, including the academic ones, can be classified into
three different categories:
(a) Those that emphasize conceptual nuances.
(b) Those that focus on units of information.
(c) Those that highlight the binary nature of the bit.
It is clear that the bit exists by definition, much as the color red is red because
scientists decided that, in the visible spectrum of light, the 635–700 nm wavelength
interval corresponds exactly to the color red. But do we really need scientists to
identify colors? After all, ordinary people recognized colors long before scientists
formally defined them.
Following this rather whimsical line of reasoning, there must be something essential
and intuitive in colors that allows them to be recognized without the need for a
definition or any kind of formal characterization. And this is exactly what we are
going to explore regarding the bit. In particular we will try to find answers to the
following questions:
(a) Is there something apart from its own definition that justifies the concept of bit?
(b) Is there something that allows us to identify the bit as a basic unit of
information?
(c) Is there something that allows us to establish the binary nature of the bit?
But first let us digress briefly. In almost all scientific problems it is possible to
reason following one of two different strategies:
• From data to conclusions. In artificial intelligence, this is data-driven reasoning.
• From a hypothesis to data that confirm the hypothesis. In artificial intelligence,
this is backward reasoning.
In both cases knowledge is necessary, and the only difference is in how this
knowledge is applied.
In data-driven reasoning, suppose that our knowledge base Θ includes five rules
(or chunks of knowledge) as our axioms:

Θ = {R1, R2, R3, R4, R5}

of which the two that matter for the example below are:

R1 (Axiom1): IF A THEN B
R2 (Axiom2): IF B AND C THEN D
and if we also know that the set of data Δ for our problem is

Δ = {A, C, E, F}

then, reasoning forward from the data, we can fire Axiom1 (since A is in Δ) to
deduce B, and then Axiom2 (since B and C now both hold) to deduce D. At this
point we have to stop, since we have no further knowledge that will yield new
information. This is the way data-driven reasoning works.
If, on the other hand, we want to apply a backward-reasoning process, we first
need a working hypothesis, for example D, and then have to use our axioms to look
for information that will confirm or reject this hypothesis.
Using the same example as above, to confirm D we need to use Axiom2 (because
D is in the conclusion part of the axiom). Since all the axioms are true, we require
the condition component of Axiom2 also to be true. Of B and C in the condition
component, only C is in Δ, so we need B in order to confirm our initial working
hypothesis D. B therefore has to be considered as a new working hypothesis.
In order to confirm B, we have to use Axiom1, which needs A in its condition
component. Since A is in Δ, then we can apply Axiom1 to deduce B. Once we know
that B is true, then we can apply Axiom2 and deduce D. This is how backward
reasoning works.
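The two strategies above can be sketched in a few lines of code. This is a minimal sketch, assuming the two rules and the data set of the example; the function names `forward_chain` and `backward_chain` are ours, not from the text:

```python
# Rules from the example: R1: IF A THEN B, R2: IF B AND C THEN D.
# Each rule is a pair (premises, conclusion).
rules = [({"A"}, "B"), ({"B", "C"}, "D")]
data = {"A", "C", "E", "F"}

def forward_chain(rules, facts):
    """Data-driven reasoning: fire rules until nothing new can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(rules, facts, goal):
    """Goal-driven reasoning: confirm the goal from rules and known facts."""
    if goal in facts:
        return True
    # Try every rule that concludes the goal; each premise becomes a subgoal.
    return any(
        all(backward_chain(rules, facts, p) for p in premises)
        for premises, conclusion in rules
        if conclusion == goal
    )

print(forward_chain(rules, data))        # deduces B and then D from the data
print(backward_chain(rules, data, "D"))  # confirms the working hypothesis D
```

Note that both functions use exactly the same rules and data; only the direction in which the rules are applied differs.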
Note that the set of axioms Θ and the initial set of data Δ are exactly the same in
both cases, yet the reasoning path is different. Denoting by Ψ the information
obtained after reasoning with our data and axioms, we can see that the difference
between the two processes is not related to the actual information used, but to the
way in which this information is used. Something similar occurs in
physics with electromagnetism. We can begin with Coulomb’s law and arrive at
Maxwell’s equations, or we can start with Maxwell’s equations and arrive at
Coulomb’s law. Again, the way we apply our knowledge is different.
And what link is there between the above arguments and the concept of bit?
None, in fact. The idea was to highlight how information and knowledge can be
used in many different ways to produce different results (if you have trouble
believing that, ask a politician!).
Returning now to the bit, let us explore some ideas. Suppose that we do not know
what exactly a bit is, that is to say, we have no definition. Could we justify its
existence or even find some kind of proof to demonstrate that there is a formal
definition?
Assume that we have a problem about which, initially, we know almost nothing
other than
1. We know we have a problem.
2. The problem may have a solution.
3. If there is a solution, it has to be one of the solutions si in a given space S of
possible solutions.
Since we do not know the solution to our problem, all si ∈ S are equally likely
(that is to say, any possible solution may be the real solution.) What we can do is
gradually seek out information that can be used to discard some of the initial
options in S. The more relevant the information collected and applied to our
problem, the smaller the remaining space of possible solutions. This situation is
illustrated in Fig. 1.1.
We will now formalize the problem in mathematical terms. Let N be the a priori
number of equally likely possible solutions to our problem that can be represented
with n symbols. We will use information to reduce the size of N, assuming all
information is relevant. Therefore, the more information we use, the more options
can be discarded, thereby reducing the size of N. We define the quantity of
information Ψ using Claude Shannon’s formula:
Ψ = k loge(N)
[Fig. 1.1 Information is applied step by step, and the space of possible solutions
shrinks with each step]
Suppose now that we have two independent systems (or spaces of different
solutions) with N1 and N2 equally probable events, respectively. If we consider
the system as a whole, the entire space of solutions will be

N = N1 · N2
The situation is similar to what happens when we have two different and
independent sets of elements, Ω1 and Ω2.
Although not strictly necessary, we will assume that each element in Ω1 and Ω2
has a given probability of occurring. If we apply the definition

Ψ = k loge(N)

to the whole system, then

Ψ = k loge(N) = k loge(N1 · N2) = k loge(N1) + k loge(N2) = Ψ1 + Ψ2
And that’s it. Thanks to the logarithm we were able to build something coherent
and nicely structured. We will go a step further in our thinking. Above we said “Let
N be the a priori number of equally likely possible solutions to our problem that can
be represented with n symbols.” But if we have n symbols then how many equally
likely states can be represented?
The answer is evident: N = 2^n. Thus, if n = 3 (for example, variables A, B, C such
that A, B, C ∈ {0, 1}), then the N = 8 equally likely states are those reflected in
Table 1.1.
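The count N = 2^n is easy to check by enumerating the states directly; a small sketch that generates the contents of Table 1.1 rather than listing them (variable names are ours):

```python
from itertools import product

n = 3
# All assignments of the three binary variables A, B, C.
states = list(product([0, 1], repeat=n))
for state in states:
    print(state)
print(len(states))  # 2**3 = 8 equally likely states
```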
Thus, if we require that a single two-state system (n = 1, N = 2) carries exactly
one unit of information, then

k loge(2) = 1 → k = 1 / loge(2)

Going back to Ψ = k loge(N), with this value of k we obtain the following result:

Ψ = k loge(N) = loge(N) / loge(2) = log2(N) = n
And here we have the formal definition of the bit! If the reader does not believe
the mathematical equality

loge(N) / loge(2) = log2(N)

here is a short proof. Let

A = loge(N) / loge(2) → loge(N) = A loge(2)

Then, exponentiating both sides, N = e^(A loge(2)) = 2^A, and therefore
A = log2(N), as claimed.
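The choice of k and the resulting identity Ψ = log2(N) = n can also be verified numerically; a quick check (a sketch, with our own variable names):

```python
import math

k = 1 / math.log(2)  # the constant fixed by the condition k * ln(2) = 1
for N in (2, 8, 1024):
    psi = k * math.log(N)                   # Ψ = k ln(N)
    assert math.isclose(psi, math.log2(N))  # Ψ = log2(N) = n
    print(N, psi)

# Additivity for two independent systems: Ψ(N1 * N2) = Ψ(N1) + Ψ(N2)
N1, N2 = 8, 16
assert math.isclose(k * math.log(N1 * N2),
                    k * math.log(N1) + k * math.log(N2))
```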
How can we use a single bit? What is it about bits that justifies their use? Both
questions are academic, of course, since we already know that bits perform their job
in computers, and computers do quite a lot of things. However, it is clear by now
that a bit is nothing more than a binary digit that is expressed in the base 2 number
system. We can also do the same things with bits that can be done with any other
number written in any other number base. The reason for using bits is fundamentally
practical; among many other things, they enable fast computation. In any case, it is
easy to change from one number base to another. In our examples and discussion,
we will focus mainly, although not exclusively, on whole numbers (integers).
Base 10 to Base 2
To convert a base 10 integer to a base 2 integer, we first divide the base 10 number
by two; the remainder of this division is the least significant bit. We then divide
each successive quotient by two until the quotient becomes zero. This is illustrated
below with the example of the base 10 integer 131, which we want to represent in
base 2:

131 ÷ 2 = 65, remainder 1
65 ÷ 2 = 32, remainder 1
32 ÷ 2 = 16, remainder 0
16 ÷ 2 = 8, remainder 0
8 ÷ 2 = 4, remainder 0
4 ÷ 2 = 2, remainder 0
2 ÷ 2 = 1, remainder 0
1 ÷ 2 = 0, remainder 1

Now, reading the remainders from the last one obtained (the most significant bit)
to the first (the least significant bit), we obtain

(131)10 = (10000011)2
An alternative method is to decompose the number into powers of two. Take
(151)10: the largest power of two that fits is 128 (because 128 = 2^7). However,
we still need 23 to obtain 151 (because 151 − 128 = 23). This value is achieved by
distributing 1s and 0s among the remaining powers of two in such a way that the
sum yields the result we are seeking. In this example, the correct powers of two are
4, 2, 1 and 0, and the corresponding values are 16, 4, 2 and 1, respectively. In
other words

(151)10 = 1 · 2^7 + 0 · 2^6 + 0 · 2^5 + 1 · 2^4 + 0 · 2^3
+ 1 · 2^2 + 1 · 2^1 + 1 · 2^0 = (10010111)2
For the fractional part of a number, we instead multiply by two repeatedly; the
integer part of each product is the next bit. For example, for (0.3125)10:

0.3125 × 2 = 0.625 → 0
0.6250 × 2 = 1.250 → 1
0.2500 × 2 = 0.500 → 0
0.5000 × 2 = 1.000 → 1

so that (0.3125)10 = (0.0101)2.
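The two procedures, successive division for the integer part and successive multiplication for the fractional part, can be sketched as follows (the function names are ours, chosen for illustration):

```python
def int_to_binary(n):
    """Repeatedly divide by two; the remainders, read in reverse order,
    give the binary digits (the first remainder is the least significant bit)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)
        bits.append(str(r))
    return "".join(reversed(bits))

def frac_to_binary(f, places=8):
    """Repeatedly multiply by two; each integer part is the next bit."""
    bits = []
    for _ in range(places):
        f *= 2
        bit, f = int(f), f - int(f)
        bits.append(str(bit))
        if f == 0:
            break
    return "".join(bits)

print(int_to_binary(131))      # → 10000011
print(int_to_binary(151))      # → 10010111
print(frac_to_binary(0.3125))  # → 0101
```

Note that a fraction such as 0.1 has no finite binary expansion, which is why `frac_to_binary` caps the number of places it produces.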
Base 2 to Base 10
If we want to do the reverse and convert an integer from base 2 to base 10, we do the
following:
Beginning at the right-hand side of the binary number, multiply each bit by
2 raised to the power given by that bit's position, starting with the power 0. After
completing the multiplications, add all the partial results to obtain the number in
base 10.
By way of an example,

(110101)2 = 1 · 2^5 + 1 · 2^4 + 0 · 2^3 + 1 · 2^2 + 0 · 2^1 + 1 · 2^0
= 32 + 16 + 4 + 1 = 53

Therefore

(110101)2 = (53)10
The procedure is the same for a non-integer, except that the digits to the right of
the point are raised to negative powers: the first digit after the point is multiplied
by 2^−1, the second by 2^−2, and so on.
Note that we can work with many different number bases, although computer
science, for practical and historical reasons, works with base 2, base 8 or base 16.
To conclude this section, Table 1.2 shows conversions between the main number
bases.
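The reverse conversion, including digits after the point with their negative powers, can be sketched like this (the function name is ours):

```python
def binary_to_decimal(s):
    """Multiply each bit by 2 raised to its positional power and sum.
    Digits after the point get negative powers: 2**-1, 2**-2, ..."""
    int_part, _, frac_part = s.partition(".")
    value = sum(int(b) * 2 ** i
                for i, b in enumerate(reversed(int_part)))
    value += sum(int(b) * 2 ** -(i + 1)
                 for i, b in enumerate(frac_part))
    return value

print(binary_to_decimal("110101"))  # → 53
print(binary_to_decimal("0.0101"))  # → 0.3125
```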
1.3 Single Bits Represented with Stickers

We are now going to look at bits from a totally different perspective. Imagine we
have the circuits illustrated in Figs. 1.2 and 1.3 and Table 1.3. According to the
circuit analogy, bits can be represented as column vectors:

Bit 0 → (1, 0)ᵀ
Bit 1 → (0, 1)ᵀ
Let us try to explain what we did. Up to now, we have considered the concept of
bit from a very static point of view:
1. A bit is 1 if something is true.
2. A bit is 0 if something is false.
But bits need to be implemented in some physical device, for example, in
circuits as depicted above. If the circuit is ON the sticker is happy. Conversely, if
the circuit is OFF then the sticker is sad. However, the ON or OFF state applies to
the whole circuit; in other words, it is not possible for half the circuit to be in the ON
state and the other half to be in the OFF state. Let us put two marks, A and B, in the
circuit in such a way that A is before the sticker and B is after the sticker. A and
B must always be in the same state, independently of the state of the whole circuit.
In other words
If A = 0 and B = 0 → the bit is 0 → circuit OFF
If A = 1 and B = 1 → the bit is 1 → circuit ON
If A = 0 and B = 1 → illogical
If A = 1 and B = 0 → illogical
Now, looking at Table 1.3 and remembering that A is located before the sticker,
and B is located after the sticker, the following cases are represented:
Case 1: the circuit is ON, so there is a bit 1
A = 1 and B = 0 is false
A = 1 and B = 1 is true
Case 2: the circuit is OFF, so there is a bit 0
A = 0 and B = 0 is true
A = 0 and B = 1 is false
The above cases can clearly be represented as a matrix. From Table 1.3 we can
verify that this is true, because if a bit is represented by a column matrix, then

Bit is 0 ≡ |0〉 when, given A = 0, then B = 0 ⇒ (1, 0)ᵀ
Bit is 1 ≡ |1〉 when, given A = 1, then B = 1 ⇒ (0, 1)ᵀ
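The column-vector reading of the two bit states can be written down directly; a tiny sketch (the names ket0 and ket1 are ours):

```python
# |0> and |1> as column vectors, written here as plain lists.
ket0 = [1, 0]  # bit is 0: given A = 0, then B = 0
ket1 = [0, 1]  # bit is 1: given A = 1, then B = 1

# The two states are mutually exclusive: their dot product is zero,
# and each state is "all or nothing" (exactly one entry is 1).
dot = sum(a * b for a, b in zip(ket0, ket1))
print(dot)                   # → 0
print(sum(ket0), sum(ket1))  # → 1 1
```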
1.4 Binary Logic

Up to now we have learned something about bits and about how number bases can
be changed in order to be able to work with bits. Now we take things further and
discuss binary logic, without which current computers could not work. Binary logic
was developed in the middle of the nineteenth century by the mathematician
George Boole in order to investigate the fundamental laws of human reasoning.
In binary logic, variables can only have two values, traditionally designated as
true and false, and usually represented as 1 and 0, respectively. At a given moment,
the same variable can only be in one of these states. This is why binary logic
handles logic states, not real quantities. In other words, 0 and 1, even though they
are numbers, do not represent numerical quantities. They are, rather, symbols of
two different states that cannot coexist at the same time (at least in a classical
system; matters are different in quantum systems).
In binary logic systems, variables are represented in base 2. The reason is almost
trivial, since there is an immediate, direct relationship between the numerical
values and their corresponding logic states. Nevertheless, base 2 (or any other
number base) and binary logic are totally different concepts. This is one of the
reasons why, for the moment, we will denote logic values with the notation
|0〉 and |1〉.
An important feature of logic values is that they allow logic operations. A logic
operation assigns a true or false value to a combination of conditions for one or
more factors. The factors in a classical logic operation can only be true or false, and
consequently the result of a logic operation can also only be true or false. Table 1.4
depicts some of these logic operations.
Let us now experiment with these logic operations. Let R be the result of some
logic operation and let x, y, z. . . be the variables involved in the logic operation.
Binary Equality
The result for R after applying binary equality to a variable x is very simple:
If x is true, then R is true.
If x is false, then R is false.
If we use the particular notation introduced at the beginning of this chapter,
Table 1.5 is the truth table that illustrates binary equality.
To visualize how binary equality works, suppose that we have a car with an
automatic light detector: when it is dark the car lights turn on, and when it is not
dark the car lights turn off. In logic terms, the state of the lights simply equals the
state of the darkness detector: |lights〉 = |dark〉.
Binary Negation

The car example can also be used to illustrate binary negation; we only need to
change ‘darkness’ to ‘brightness’: the lights turn on exactly when brightness is
false, |lights〉 = ¬|bright〉.
Binary Disjunction

Binary disjunction is performed using the OR operator (symbol ∨). If x is true, or
y is true, or both x and y are true, then R is true; otherwise R is false.
Table 1.7 Truth table for binary disjunction

|x〉  |y〉  |R〉 = |x〉 ∨ |y〉
|0〉  |0〉  |0〉
|0〉  |1〉  |1〉
|1〉  |0〉  |1〉
|1〉  |1〉  |1〉
Table 1.8 Truth table for binary exclusive disjunction

|x〉  |y〉  |R〉 = |x〉 ⊕ |y〉
|0〉  |0〉  |0〉
|0〉  |1〉  |1〉
|1〉  |0〉  |1〉
|1〉  |1〉  |0〉
Since the logic operation involves two variables there are four possible combi-
nations. Table 1.8 depicts the truth table for binary exclusive disjunction.
To illustrate again using the car example: we are on a long trip and decide to
listen to some music to make the trip less boring. The car has both a radio and a
CD player, and we can choose between the radio and an Eric Clapton CD. We
cannot connect both simultaneously, since we would only hear what would sound
like a swarm of crickets. Thus |music〉 = |radio〉 ⊕ |CD〉: we hear music when
exactly one of the two is playing.
Table 1.9 Truth table for binary conjoint negation

|x〉  |y〉  |R〉 = |x〉 ↓ |y〉
|0〉  |0〉  |1〉
|0〉  |1〉  |0〉
|1〉  |0〉  |0〉
|1〉  |1〉  |0〉
Table 1.10 Truth table for binary conjunction

|x〉  |y〉  |R〉 = |x〉 ∧ |y〉
|0〉  |0〉  |0〉
|0〉  |1〉  |0〉
|1〉  |0〉  |0〉
|1〉  |1〉  |1〉
Binary conjoint negation, the negation of binary disjunction, is performed using
the NOR operator (symbol ↓): R is true only if both x and y are false. The results
are shown in Table 1.9.
Again we use our car to illustrate. We are driving through the desert of Arizona.
It is three o'clock in the afternoon and a merciless sun shines in a cloudless sky.
The temperature outside the car is 110 °F (43 °C). Disastrously, both the
conventional electric fan we have installed in the car and the air conditioning stop
working. Thus |sweating〉 = |fan〉 ↓ |AC〉: we end up sweating precisely when
neither device is working.
Binary Conjunction

Binary conjunction is performed using the AND operator (symbol ∧). The result
R of binary conjunction applied to two variables x and y is as follows: R is true
only if both x and y are true; otherwise R is false. Again we have two variables
with four possible combinations. Table 1.10 shows the truth table for binary
conjunction.
For an example, consider a roadside check for toxic substances while driving.
After a breathalyzer test and a drug test, the police officer records an overall
positive result only when both tests are positive.
Table 1.11 Truth table for binary exclusion

|x〉  |y〉  |R〉 = |x〉 ↑ |y〉
|0〉  |0〉  |1〉
|0〉  |1〉  |1〉
|1〉  |0〉  |1〉
|1〉  |1〉  |0〉
Binary Exclusion
Binary exclusion, the negation of binary conjunction, is performed using the
NAND operator (symbol ↑). The result R of binary exclusion applied to two
variables x and y is as follows: R is false only if both x and y are true; otherwise
R is true.
The two variables again mean four possible combinations. Table 1.11 depicts the
results for binary exclusion.
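All of the two-variable operations in Tables 1.7–1.11 can be checked mechanically; a compact sketch in which Python's bitwise operators on 0/1 stand in for the logic symbols:

```python
from itertools import product

# The two-variable operations of Tables 1.7-1.11, with 1 = true and 0 = false.
ops = {
    "OR":   lambda x, y: x | y,        # disjunction (∨)
    "XOR":  lambda x, y: x ^ y,        # exclusive disjunction (⊕)
    "NOR":  lambda x, y: 1 - (x | y),  # conjoint negation (↓)
    "AND":  lambda x, y: x & y,        # conjunction (∧)
    "NAND": lambda x, y: 1 - (x & y),  # exclusion (↑)
}

# Print each truth table row by row, as in the tables above.
for name, op in ops.items():
    for x, y in product([0, 1], repeat=2):
        print(f"{name}: |{x}> |{y}> -> |{op(x, y)}>")
```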
We have been driving for a long time and are tired, so we decide that now is a
good moment to stop at a gas station, perhaps to refuel or to add water to the
radiator.
Binary Implication

Simple binary implication, performed by the operator IMP (symbol →), is a rather
different logic operation from those described above, as it relies greatly on its own
definition: |x〉 → |y〉 is false only when x is true and y is false; in every other
case it is true.