Vicente Moret-Bonillo

Adventures in Computer Science
From Classical Bits to Quantum Bits
Vicente Moret-Bonillo
Departamento de Computación
Universidad de A Coruña
A Coruña, Spain

ISBN 978-3-319-64806-4 ISBN 978-3-319-64807-1 (eBook)


DOI 10.1007/978-3-319-64807-1

Library of Congress Control Number: 2017948673

© Springer International Publishing AG 2017


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with
regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my family
Preface

Would you tell me, please, which way I ought to go from here? said Alice. That depends a
good deal on where you want to get to, said the Cat. I don’t much care where, said Alice.
Then it doesn’t matter which way you go, said the Cat.
Lewis Carroll

In a long academic career teaching different subjects in the fields of chemistry,


physics, and computer science, I noticed how many of my students finish their
studies knowing how to do things, although sometimes they do not understand the
basic underlying concepts that permit generalization and reasoning by analogy. In
fact, I am convinced that, perhaps, our teaching efforts focus excessively on
specialization. I fear that educators (and I belong to this exciting community)
very frequently overlook some very interesting questions that deserve deep and
abstract thinking. In other words, we center more on problem solving than on the
problem itself. We also assume that our students are capable of thinking by
themselves, and they are, of course, but it is also true that not all of them have
the same skills when trying to find the solution to a problem that has not previously
been explained. However, these skills can be fostered by the teacher and acquired
by the student, and I believe that at least a part of this training is our responsibility
as educators. With these ideas in mind, I decided to write this book. It is an almost
speculative text in which I discuss different areas of science from a computer
science perspective. This fact will be more evident in some chapters of the book
than in others.
The main focus of the text is the basic unit of information and the way in which
our understanding of this basic unit of information has evolved over time. Do not
expect to find anything new in this book; you may even consider the way I have
chosen to explain things to be extravagant. This could be due to the fact that I was
inspired by my readings of two brilliant scientists and communicators: Richard
P. Feynman and George Gamow. I hope that the final outcome will be to pique the
reader’s curiosity and lead him or her to think with the sole objective of better
understanding our world. As I used to say, and according to Richard P. Feynman:
“All right, I have not been the first, but at least I understand it.”

vii
viii Preface

The scientific material of this book covers concepts related to information,


classical computing, logic, reversible computing, quantum mechanics, quantum
computing, thermodynamics, and something of artificial intelligence and even
biology, all approached from the point of view of the computer sciences. In some
chapters, the discussion is extremely informal, even irreverent, but I think that this
way of presenting difficult concepts may stimulate the curiosity of the reader. In
fact, it was my own doctoral students who suggested I write a book based on my
notes for the doctoral course on “Physical Models in Advanced Computing” that I
teach, as I had done previously with the notes of an undergraduate course on
“Artificial Intelligence.” Perhaps this is why the book’s layout may seem to be
rather unusual, although, in the opinion of the author, this is not entirely a bad thing,
as it means the reader can pick and choose chapters in no particular order and still
achieve a reasonable understanding of the material.
The book can be used by graduate, postgraduate, and even doctoral students,
although I suggest that it would most benefit graduate students—and people
suffering from insomnia! The apparent disorganization of this book is deliberate.
In fact, it took me a long time to properly disorganize the text so as
to encourage the reader to embark on reasoning by analogy and also to force them to
verify for themselves some of the stranger ideas and approaches described in
the book.
The material in this book has been organized into eight chapters. Chapter 1
focuses on “classical bits.” Chapter 2 explains “reversibility.” Chapter 3 discusses
“reversible architectures.” Chapter 4 describes basic “quantum mechanics princi-
ples.” Chapter 5 draws on the material presented in Chaps. 2–4 in order to introduce
the “quantum computing paradigm.” Chapter 6 discusses “Feynman’s universal
quantum machine” in depth. Chapter 7 explores key “quantum algorithms.” Finally,
Chap. 8 poses questions for reflection and discussion. Each chapter concludes with
a summary, a glossary of terms, and an explanation of the notation used. Before the
bibliography, Appendix A provides some mathematical background, and Appendix
B brings together the terms used in each individual chapter in a glossary.
The book begins by asking the following question: what is a bit? This apparently
trivial concept is not, in fact, so trivial and we spend several pages thinking about
it. We continue with some fundamental and elementary computations with bits in,
for example, binary arithmetic and binary logic and describe the implementation of
a number of classical operations by means of logic gates. We then introduce the
concept of reversible computing in terms of (1) a detailed description of conven-
tional reversible logic gates, (2) a redesign of conventional logic gates so that they
are reversible, (3) the construction of a simple reversible adder, (4) a fun example of
reversibility, and (5) a brief analysis of the requirements imposed by reversibility.
Next comes a more or less formal description of what we call reversible architec-
tures, and we analyze two examples in depth. In one example, a great deal of
disorder is generated, and in the other, none. This issue of disorder has important
implications for certain phenomena related to the fundamental and basic energy
required for an abstract computation. The discussion of reversibility draws greatly
Preface ix

on the Feynman Lectures on Computation, which, of course, is cited in the


Bibliography.
Next, we try to establish a relationship between three essential questions that
justify quantum approaches in the computer sciences: (1) the energy required to
perform a real-life computation, (2) the size of current processors, and (3) the
reversibility of quantum operations. We then offer a historical perspective on the
antecedents and basic concepts of quantum mechanics, the conundrum implied by
Heisenberg’s uncertainty principle for the concept of measurement, and the
description of quantum states by means of Schrödinger's equation. This all lays
the groundwork for what comes later in the book.
Based on the above concepts, we establish the conditions that justify the use of
quantum techniques for certain kinds of computational tasks. Next, we use formal
descriptions and formal argumentations—rather than a traditional approach—to
introduce key quantum mechanical concepts and approaches. Nevertheless, the
mathematical load is minimized as much as possible, and we adopt the axiomatic
formulation of quantum mechanics. After a review of certain fundamental prelim-
inary questions, we proceed directly to a description of the quantum unit of
information, the definition of certain essential elements in quantum computational
approaches, and the introduction of possible physical representations of quantum
computational systems. We also describe some fundamental properties of quantum
operators, explore the concept of quantum information, and suggest how to build
systems capable of handling several quantum bits (or qubits). Finally, we study the
problem of the collapse of information associated with the measurement of qubits
as a consequence of Heisenberg’s uncertainty principle.
The rest of the book is formally different. Whereas the earlier chapters navigated
the universe of concepts and ideas, we now enter the world of practical issues and
describe the universal quantum architecture proposed by Feynman in detail. We
begin with a discussion of a peculiar kind of matrix and show how these rather
curious matrices can be used to represent the behavior of reversible logic gates. We
also explore the Hamiltonian operator proposed by Feynman for quantum compu-
tation. The detailed approach followed here is illustrated with the original Feynman
example, analyzed in depth from different points of view.
Having explored the basic concepts and a universal architecture, we next
consider some remarkable quantum algorithms. The approach is based on “Quan-
tum Circuit Theory,” which, in our opinion, is more easily understood than, for
example, the “Adiabatic Computation” approach. We begin with Deutsch’s algo-
rithm and its versions, analyzed from several different points of view. We then
generalize this algorithm and introduce the Deutsch-Jozsa algorithm. We next
explore the algorithm proposed by Simon, developing an alternative, more trans-
parent version. We conclude our description of quantum algorithms with a discus-
sion of quantum teleportation, using spies to illustrate it. We close the circle with
something akin to pure speculation, based on developments in 2014 and 2015 in
relation to attempts to develop quantum computers. We also refer briefly to
thermodynamics, biology, and artificial intelligence and conclude with some
thoughts on the material presented in the book.
x Preface

Just to reiterate—although this book does not contain anything particularly new,
it is to be hoped that the reader will find novelty in the way the material is presented.
Enjoy!

A Coruña, Spain    Vicente Moret-Bonillo
2017
Contents

1 The Universe of Binary Numbers
  1.1 Looking for the Bit
  1.2 Number Bases
  1.3 Single Bits Represented with Stickers
  1.4 Binary Logic
  1.5 Logic Gates
  1.6 Some Practice
  1.7 A Peculiar Way to Compute
  1.8 Chapter Summary
  1.9 Glossary of Terms and Notation Used in This Chapter

2 Back and Forth Computing
  2.1 Introducing Reversibility
  2.2 The Toffoli Gate
  2.3 The Fredkin Gate
  2.4 Building Reversible Gates from Reversible Gates
  2.5 Adding in a Reversible Way
  2.6 Back to Billiards
  2.7 A Simple Analysis of Reversibility
  2.8 Chapter Summary
  2.9 Glossary of Terms and Notation Used in This Chapter

3 Reversible Architectures
  3.1 Basic Reversible Logic Gate Architectures
  3.2 The Reversible Full Adder
  3.3 Architecture of the General-Purpose Reversible Computer
  3.4 Energy Repercussions of Reversibility
  3.5 Chapter Summary
  3.6 Glossary of Terms and Notation Used in This Chapter

4 The Principles of Quantum Mechanics
  4.1 History and Basic Concepts of Quantum Mechanics
  4.2 Heisenberg's Uncertainty Principle
  4.3 Schrödinger's Equation
  4.4 Schrödinger's Equation Revisited
  4.5 The Postulates of Quantum Mechanics
  4.6 Some Quantum Operators
  4.7 Chapter Summary
  4.8 Glossary of Terms and Notation Used in This Chapter

5 Introducing Quantum Computing
  5.1 A Brief Overview of Quantum Computing
  5.2 The Formalism of Quantum Computing
  5.3 Systems of Qubits
  5.4 Qubits and Measurement
  5.5 Putting It All Together
  5.6 Constructing Algorithms with Qubits
  5.7 Summary of the Chapter
  5.8 Glossary of Terms and Notation Used in This Chapter

6 Feynman's Quantum Computer Machine
  6.1 Playing with Matrices
  6.2 Quantum Computer Design
  6.3 The Quantum Computer at Work
  6.4 Setting Up the Quantum Computer
  6.5 Chapter Summary
  6.6 Glossary of Terms and Notation Used in This Chapter

7 Some Quantum Algorithms
  7.1 Deutsch's Algorithm
  7.2 The Deutsch-Jozsa Algorithm
  7.3 Simon's Algorithm
  7.4 Quantum Teleportation
  7.5 Chapter Summary
  7.6 Glossary of Terms and Notation Used in This Chapter

8 Concluding Remarks
  8.1 What Are We Talking About?
  8.2 The Situation in 2014
  8.3 Human Brains and Computers
  8.4 Artificial Intelligence, Parallelism and Energy Efficiency
  8.5 Energy Restrictions on Information Processing
  8.6 Some Things to Ponder on
  8.7 Chapter Summary
  8.8 Glossary of Terms and Notation Used in This Chapter

Appendix A: Mathematical Background
  A.1. Complex Numbers
  A.2. Vectors
  A.3. Hermitian and Unitary Matrices

Appendix B: Glossary of Terms Used in the Book

Suggested Reading

Index
Chapter 1
The Universe of Binary Numbers

A large part of mathematics which becomes useful developed


with absolutely no desire to be useful, and in a situation
where nobody could possibly know in what area it would
become useful; and there were no general indications that it
ever would be so. By and large it is uniformly true in
mathematics that there is a time lapse between a
mathematical discovery and the moment when it is useful;
and that this lapse of time can be anything from 30 to
100 years, in some cases even more; and that the whole
system seems to function without any direction, without any
reference to usefulness, and without any desire to do things
which are useful.
John von Neumann

Before we start reflecting on difficult problems, we will review certain more or less
trivial concepts that are related to the peculiar entities and techniques already
implemented in our computers so that they can compute. Our computers are based
on binary logic, which uses the concept of bit to represent something that is true
(denoted by |1〉) or false (denoted by |0〉). As for the strange bar-and-angle-bracket
symbol, its exact meaning will become clear later. For the moment accept that if
the state of something is true, then the representation of that state is |1〉, and if the
state of something is false, then the representation of that state is |0〉. But let us
pose two questions:
• What kind of a thing is a bit?
• Is there a formal definition for the concept of bit?


1.1 Looking for the Bit

A common definition of the term bit is “a unit of measure of information by means


of which the election between two equally likely possibilities can be represented.”
This definition is strictly true but it seems (at least to me) to be rather dense. We will
spend some lines analyzing this definition:
• “Measure of information. . .” (There are no problems with this statement)
• “Election between two equally likely possibilities. . .” (We can go deeper into
this last statement)
The meaning of the phrase “Two equally likely possibilities. . .” is quite obvious.
For example, if we have A and we have B, and only have A and B, then the
probability of A, P(A), is equal to the probability of B, P(B). Assume the following
notation:

X ≡ {P(A) = P(B)}

That is to say, we denote by X the circumstance that events A and B are equally
likely. According to this, the official definition of bit could be translated as follows:

BIT ≡ Election {A, B / X}

In this expression, {A,B/X} means A or B given X, and, from our point of view,
the problem of the official definition lies in the term ‘election’. The interpretation of
this term might be confusing.
The definition given by the Diccionario de Computación (in Spanish), published by
McGraw-Hill in 1991, establishes that a bit is "the unit of information equivalent
to a binary decision." This definition is almost the same as the previous one, with
"election," however, replaced by the stronger term "decision." This definition also
seems ambiguous and imprecise.
A second meaning of the term given in the same dictionary establishes that “a bit
is a non-dimensional unit of the capacity of storage that expresses the capacity of
storage as the base 2 logarithm of X, X being the number of possible states of the
device”. This statement goes directly to the domain of axiomatic definitions. But. . .
what “device” is referred to here? And. . . why “base 2” and not, for example, “base-
5”? The reason for this base will be discussed later on in the text.
Let us leave aside academic sources of definitions for the moment. The author of
this text once carried out an experiment, asking some of his closest colleagues—all
professionals in the computer sciences—for their own definitions of bit. Their
definitions are as follows:
• A mathematical representation of the two possible states of a switch that uses a
base 2 number system.
• A minimum unit of information of an alphabet with just two symbols.
• The smallest unit of information in a machine.

• A binary digit.
• A unit of numbering that can take the values 0 or 1.
• The minimum quantity of information that can be stored and transmitted by a
computer system.
The full list of definitions, including the academic ones, can be classified into
three different categories:
(a) Those that emphasize conceptual nuances.
(b) Those that focus on units of information.
(c) Those that highlight the binary nature of the bit.
It is clear that the bit exists by definition, exactly for the same reason that the
color red is red because scientists decided that, in the visible spectrum of light, the
700–635 nm wavelength interval corresponds exactly to the color red. But do we
really need scientists to identify colors? After all, ordinary people recognized colors
long before scientists formally defined them.
Following this rather crazy line of reasoning, there must be something essential
and intuitive in colors that means that they can be recognized without needing a
definition or some kind of formal characterization. And this is exactly what we are
going to explore regarding the bit. In particular we will try to find answers to the
following questions:
(a) Is there something apart from its own definition that justifies the concept of bit?
(b) Is there something that allows us to identify the bit as a basic unit of
information?
(c) Is there something that allows us to establish the binary nature of the bit?
But first let us digress briefly. In almost all scientific problems it is possible to
reason following one of two different strategies:
• From data to conclusions. In artificial intelligence, this is data-driven reasoning.
• From a hypothesis to data that confirm the hypothesis. In artificial intelligence,
this is backward reasoning.
In both cases knowledge is necessary, and the only difference is in how this
knowledge is applied.
In data-driven reasoning, if we know that our knowledge base Θ includes five
rules (or chunks of knowledge) as our axioms:

Θ = {R1, R2, R3, R4, R5}

and if we know that these rules are

R1 ≡ Axiom1: IF A THEN B
R2 ≡ Axiom2: IF B AND C THEN D
R3 ≡ Axiom3: IF D AND E AND F THEN H
R4 ≡ Axiom4: IF A AND Z THEN V
R5 ≡ Axiom5: IF V AND X AND G THEN W

and if we also know that the set of data Δ for our problem is

Δ = {A, C, E, F}

then we can conclude B, D and H by directly applying Axiom1, Axiom2 and


Axiom3—in that order—to our set of data Δ. Thus

Axiom1(Δ)  → Δ′  = {A, C, E, F, B}
Axiom2(Δ′) → Δ″  = {A, C, E, F, B, D}
Axiom3(Δ″) → Δ‴ = {A, C, E, F, B, D, H}

But now we have to stop since we have no further knowledge that will yield new
information. This is the way data-driven reasoning works.
If, on the other hand, we want to apply a backward-reasoning process, we first
need a working hypothesis, for example D, and then have to use our axioms to look
for information that will confirm or reject this hypothesis.
Using the same example as above, to confirm D we need to use Axiom2 (because
D is in the conclusion part of the axiom). Since all the axioms are true, we require
the condition component of Axiom2 also to be true. But for B and C in the condition
component, C but not B is in Δ, and so we need B in order to confirm our initial
working hypothesis D. B therefore has to be considered as a new working hypoth-
esis. In order to confirm B, we have to use Axiom1, which needs A in its condition
component. Since A is in Δ, then we can apply Axiom1 to deduce B. Once we know
that B is true, then we can apply Axiom2 and deduce D. This is how backward
reasoning works.
Note that the set of axioms Θ and the initial set of data Δ are exactly the same in
both cases, yet the final result is different. Thus, denoting by Ψ the information
obtained after reasoning with our data and axioms, the results are

Ψ(data-driven reasoning) = {A, C, E, F, B, D, H}
Ψ(backward reasoning)    = {A, C, E, F, B, D}

The difference between the two processes is not related to the actual information
used, but to the way in which this information is used. Something similar occurs in
physics with electromagnetism. We can begin with Coulomb’s law and arrive at
Maxwell’s equations, or we can start with Maxwell’s equations and arrive at
Coulomb’s law. Again, the way we apply our knowledge is different.
And what link is there between the above arguments and the concept of bit?
None, in fact. The idea was to highlight how information and knowledge can be
used in many different ways to produce different results (if you have trouble
believing that, ask a politician!).
Returning now to the bit, let us explore some ideas. Suppose that we do not know
what exactly a bit is, that is to say, we have no definition. Could we justify its
existence or even find some kind of proof to demonstrate that there is a formal
definition?
Assume that we have a problem about which, initially, we know almost nothing
other than
1. We know we have a problem.
2. The problem may have a solution.
If there is a solution, it has to be one of the solutions sᵢ in a given space S of
possible solutions.
Since we do not know the solution to our problem, all sᵢ ∈ S are equally likely
(that is to say, any possible solution may be the real solution.) What we can do is
gradually seek out information that can be used to discard some of the initial
options in S. The more relevant the information collected and applied to our
problem, the smaller the remaining space of possible solutions. This situation is
illustrated in Fig. 1.1.
We will now formalize the problem in mathematical terms. Let N be the a priori
number of equally likely possible solutions to our problem that can be represented
with n symbols. We will use information to reduce the size of N, assuming all
information is relevant. Therefore, the more information we use, the more options
can be discarded, thereby reducing the size of N. We define the quantity of
information Ψ using Claude Shannon’s formula:

Ψ = k loge(N)

where k is a constant whose value is determined by the unit of information used.


This is an assumption almost as strong as defining the bit without justification, but
we merely want to obtain results by means of a different way of thinking.
Let us analyze what has just been stated. In the definition of the quantity of
information Ψ, using the logarithm can be justified by considering two independent

Fig. 1.1 Increasingly smaller spaces of possible solutions



systems (or spaces of different solutions) with N1 and N2 equally probable events,
respectively. If we consider the system as a whole, the entire space of solutions will
be

N = N1 × N2

The situation is similar to what happens when we have two different and
independent sets of elements:

Ω1 = {a, b, c, d},   Ω2 = {x, y, z, t, v}

Although not strictly necessary, we will assume that each element in Ω1 and Ω2
has a given probability of occurring:

P(a), P(b), P(c), P(d) for the elements in Ω1
P(x), P(y), P(z), P(t), P(v) for the elements in Ω2

The joint probability of (a, z), for example, will thus be

P(a, z) = P(a) × P(z)

We go back to the justification of the logarithm, which requires us to make
{N = N1 × N2} compatible with the fact that information is additive (that is, the
quantity of information increases as new information appears). Thus, since

Ψ = k loge(N)

then

Ψ = k loge(N) = k loge(N1 × N2)
  = k loge(N1) + k loge(N2) = Ψ1 + Ψ2

And that’s it. Thanks to the logarithm we were able to build something coherent
and nicely structured. We will go a step further in our thinking. Above we said “Let
N be the a priori number of equally likely possible solutions to our problem that can
be represented with n symbols.” But if we have n symbols then how many equally
likely states can be represented?
The answer is evident: N = 2^n. Thus, if n = 3 (for example, A, B, C such that A,
B, C ∈ {0, 1}), then the N equally likely states are those reflected in Table 1.1.
Thus, if we take N = 2^n to be equivalent to the quantity of information in a given
system, then

Ψ = k loge(N) = k loge(2^n) = k × n × loge(2) = n[k loge(2)]



Table 1.1 Eight equally likely states that can be represented with three symbols

     A  B  C
1    0  0  0
2    0  0  1
3    0  1  0
4    0  1  1
5    1  0  0
6    1  0  1
7    1  1  0
8    1  1  1

Simplifying this equation in order to yield Ψ = n (remember that we have
defined k as a constant whose value is determined by the unit of information
used), then

k loge(2) = 1  →  k = 1 / loge(2)

Going back to Ψ = k loge(N), with some further work we obtain the following
result:

Ψ = k loge(N) = loge(N) / loge(2) = log2(N) = n

And here we have the formal definition of the bit! If the reader does not believe
the mathematical equality

loge(N) / loge(2) = log2(N)

then we can try this. Assume that

A = loge(N) / loge(2)  →  loge(N) = A × loge(2)

Then

N = 2^A  →  log2(N) = A × log2(2) = A



1.2 Number Bases

How can we use a single bit? What is it about bits that justifies their use? Both
questions are academic, of course, since we already know that bits perform their job
in computers, and computers do quite a lot of things. However, it is clear by now
that a bit is nothing more than a binary digit that is expressed in the base 2 number
system. We can also do the same things with bits that can be done with any other
number written in any other number base. The reason for using bits is fundamen-
tally practical; among many other things, they enable fast computation. In any case,
it is easy to change from one to another number base. In our examples and
discussion, we will focus mainly, although not exclusively, on whole numbers
(integers).
Base 10 to Base 2

To convert a base 10 integer to a base 2 integer, we first divide the base 10 number
by two. The remainder of this division is referred to as the least significant bit. We
divide the resulting integer by two successively until the quotient becomes zero.
This is illustrated below with the example of the base 10 integer 131, which we
want to represent in base 2:

131 ÷ 2 = 65 and remainder = 1
65 ÷ 2 = 32 and remainder = 1
32 ÷ 2 = 16 and remainder = 0
16 ÷ 2 = 8 and remainder = 0
8 ÷ 2 = 4 and remainder = 0
4 ÷ 2 = 2 and remainder = 0
2 ÷ 2 = 1 and remainder = 0
1 ÷ 2 = 0 and remainder = 1

Now, starting with the most significant bit and ending with the least significant
bit, we can write the remainders to obtain

(131)₁₀ = (10000011)₂

The eight-bit representation of the number 131 in base 2 is thus

1 0 0 0 0 0 1 1

Another conversion procedure is to use the numbers 1 and 0 with successive
powers of two (that is, 2^0, 2^1, 2^2, etc.) so that the resulting sum is the number we
want to convert. For example, to represent the base 10 number 151 in base 2, we
first have to look for the largest power of two below 151, in this case 2^7 = 128.
However, we still need 23 to reach 151 (because 151 - 128 = 23). This value will be
achieved by distributing more 1s and 0s among the lower powers of two, in such a
way that the sum yields the result we are seeking. In the example, the correct powers
of two are 4, 2, 1 and 0, and the resulting numbers are 16, 4, 2 and 1, respectively.
In other words

(151)₁₀ = 1×2^7 + 0×2^6 + 0×2^5 + 1×2^4 + 0×2^3 + 1×2^2 + 1×2^1 + 1×2^0 = (10010111)₂

The procedure for transforming a base 10 non-integer between 0 and 1 into


binary is as follows:
1. Multiply the number by two; if the integer part is greater than 0, the
corresponding bit will be 1 and otherwise it will be 0.
2. Discard the integer part and repeat step 1 with the non-integer part just
obtained. When there are only 0s to the right of the decimal point, stop.
3. Finally, arrange the resulting bits in the same order in which they were
obtained (note that some numbers produce periodic binary expansions).
By way of an example, the conversion (0.3125)₁₀ = (0.0101)₂ will look like this:

0.3125 × 2 = 0.625 → 0
0.6250 × 2 = 1.250 → 1
0.2500 × 2 = 0.500 → 0
0.5000 × 2 = 1.000 → 1

Base 2 to Base 10

If we want to do the reverse and convert an integer from base 2 to base 10, we do the
following:
Beginning on the right side of the binary number, multiply each bit by 2 raised to
the consecutive power, according to the relative position of the corresponding bit
and beginning with the power 0. After completing the multiplications, add all the
partial results to obtain the number in base 10.
By way of an example,

(110101)₂ = 1×2^0 + 0×2^1 + 1×2^2 + 0×2^3 + 1×2^4 + 1×2^5
          = 1 + 0 + 4 + 0 + 16 + 32 = 53

Therefore

(110101)₂ = (53)₁₀

Table 1.2 Number equivalences in base 10, base 2, base 8 and base 16

Base 10   Base 2   Base 8   Base 16
0         0        0        0
1         1        1        1
2         10       2        2
3         11       3        3
4         100      4        4
5         101      5        5
6         110      6        6
7         111      7        7
8         1000     10       8
9         1001     11       9
10        1010     12       A
11        1011     13       B
12        1100     14       C
13        1101     15       D
14        1110     16       E
15        1111     17       F
16        10000    20       10
The procedure is the same for a non-integer, except that we need to take into
account that the digits to the right of the binary point are raised to negative
powers. For example,

(110.101)₂ = 1×2^2 + 1×2^1 + 0×2^0 + 1×2^(-1) + 0×2^(-2) + 1×2^(-3) = (6.625)₁₀

Note that we can work with many different number bases, although computer
science, for practical and historical reasons, works with base 2, base 8 or base 16.
To conclude this section, Table 1.2 shows conversions between the main number
bases.

1.3 Single Bits Represented with Stickers

We are now going to look at bits from a totally different perspective. Imagine we
have the circuits illustrated in Figs. 1.2 and 1.3 and Table 1.3.
According to the circuit analogy, bits can be represented as follows:

Bit 0 ≡ [ 1 ]
        [ 0 ]

Bit 1 ≡ [ 0 ]
        [ 1 ]

Fig. 1.2 An OFF circuit with a sticker

Fig. 1.3 An ON circuit with a sticker

Table 1.3 Tabular representation of Figs. 1.2 and 1.3

                      OFF circuit (A = 0)   ON circuit (A = 1)
OFF circuit (B = 0)   TRUE (1)              FALSE (0)
ON circuit (B = 1)    FALSE (0)             TRUE (1)

Let us try to explain what we did. Up to now, we have considered the concept of
bit from a very static point of view:
1. A bit is 1 if something is true.
2. A bit is 0 if something is false.
But bits need to be implemented in some physical device, for example, in
circuits as depicted above. If the circuit is ON the sticker is happy. Conversely, if
the circuit is OFF then the sticker is sad. However, the ON or OFF state applies to
the whole circuit; in other words, it is not possible for half the circuit to be in the ON
state and the other half to be in the OFF state. Let us put two marks, A and B, in the
circuit in such a way that A is before the sticker and B is after the sticker. A and
B must always be in the same state, independently of the state of the whole circuit.
In other words
If A = 0 and B = 0 → Bit is 0 → Circuit OFF
If A = 1 and B = 1 → Bit is 1 → Circuit ON
If A = 0 and B = 1 → Illogical
If A = 1 and B = 0 → Illogical
Now, looking at Table 1.3 and remembering that A is located before the sticker,
and B is located after the sticker, the following cases are represented:
Case 1: Circuit is ON ≡ There is a Bit 1
  A = 1 and B = 0 is false
  A = 1 and B = 1 is true
Case 2: Circuit is OFF ≡ There is a Bit 0
  A = 0 and B = 0 is true
  A = 0 and B = 1 is false
The above cases can clearly be represented as a matrix. From Table 1.3 we can
verify that this is true, because if a bit is represented by a column matrix then

Bit is 0 ≡ |0〉 when, given A = 0, then B = 0  ⇒  [ 1 ]
                                                  [ 0 ]

Bit is 1 ≡ |1〉 when, given A = 1, then B = 1  ⇒  [ 0 ]
                                                  [ 1 ]

This notation—somewhat similar to Paul Dirac’s proposed bra-ket notation for


the quantum arena—is particularly useful for representing systems with two possi-
ble states. We need to remember that single bits, whether single classical bits or
single quantum bits, are units of information that need to be stored in physical
systems, for example, in a circuit (in the case of a classical system) or in the spin of
an electron (in the case of a quantum system). Thus

Classical system ⇒ |state of a circuit〉 = |OFF〉 = |0〉
Classical system ⇒ |state of a circuit〉 = |ON〉 = |1〉
Quantum system ⇒ |spin of an electron〉 = |↑〉 = |0〉
Quantum system ⇒ |spin of an electron〉 = |↓〉 = |1〉

This represents an interesting way to introduce bra-ket notation from a classical


perspective.

1.4 Binary Logic

Up to now we have learned something about bits and about how number bases can
be changed in order to be able to work with bits. Now we take things further and
discuss binary logic, without which current computers could not work. Binary logic
was developed in the middle of the nineteenth century by the mathematician
George Boole in order to investigate the fundamental laws of human reasoning.
In binary logic, variables can only have two values, traditionally designated as
true and false, and usually represented as 1 and 0, respectively. At a given moment,
the same variable can only be in one of these states. This is why binary logic
handles logic states, not real quantities. In other words, 0 and 1, even though they
are numbers, do not represent numerical quantities. They are, rather, symbols of
two different states that cannot coexist at the same time (at least in a classical
system; matters are different in quantum systems).
In binary logic systems, variables are represented in base 2. The reason is almost
trivial, since the establishment of a direct relationship between the numerical values
and their corresponding logic states is immediate. Nevertheless, base 2 (or any
another number base) and binary logic are totally different concepts. This is one of
the reasons why, for the moment, we will use the following notation:

|state of something〉 = |false〉 → |0〉
|state of something〉 = |true〉 → |1〉

An important feature of logic values is that they allow logic operations. A logic
operation assigns a true or false value to a combination of conditions for one or
more factors. The factors in a classical logic operation can only be true or false, and
consequently the result of a logic operation can also only be true or false. Table 1.4
depicts some of these logic operations.
Let us now experiment with these logic operations. Let R be the result of some
logic operation and let x, y, z. . . be the variables involved in the logic operation.

Table 1.4 Some logic operations

Operation               Operator   Symbol   Comments
Tautology               T          T        Always true
Exclusion               NAND       ↑        Incompatibility
Simple implication      IMP        →        Condition
Negation                NOT        ¬        Changes the logic state
Double implication      XNOR       ↔        Equivalence
Disjunction             OR         ∨        One has to be true
Exclusive disjunction   XOR        ⊕        Exactly one has to be true
Conjunction             AND        ∧        All have to be true
Conjoint negation       NOR        ↓        All have to be false
Contradiction           F          F        Always false

Table 1.5 Truth table for binary equality

|x〉    |R〉 = |x〉
|1〉    |1〉
|0〉    |0〉

Table 1.6 Truth table for binary negation

|x〉    |R〉 = ¬|x〉
|1〉    |0〉
|0〉    |1〉

Binary Equality

The result for R after applying binary equality to a variable x is very simple:
If x is true, then R is true.
If x is false, then R is false.
If we use the particular notation introduced at the beginning of this chapter,
Table 1.5 is the truth table that illustrates binary equality.
To visualize how binary equality works, suppose that we have a car with an
automatic light detector: when it is dark the car lights turn on, and when it is not
dark the car lights turn off. The logic representation of this example is thus

|Darkness〉 = |1〉 ⇒ |Lights〉 = |1〉
|Darkness〉 = |0〉 ⇒ |Lights〉 = |0〉

Binary Negation

Binary negation is obtained by applying the NOT operator (symbol ¬) to the
variable x that we want to negate. The net result is a change in the logic state of
the variable, as shown in Table 1.6.
If x is true, then R is false.
If x is false, then R is true.

The car example can also be used to illustrate binary negation; we only need to
change ‘darkness’ to ‘brightness’. Thus

|Brightness〉 = |1〉 ⇒ |Lights〉 = |0〉
|Brightness〉 = |0〉 ⇒ |Lights〉 = |1〉

Binary Disjunction

Binary disjunction is obtained by applying the OR operator (symbol ∨) to the x and
y variables that we want to evaluate to obtain the result R.

If x is true, or y is true, or both x and y are true, then R is true, otherwise R is false.

Table 1.7 Truth table for binary disjunction

|x〉    |y〉    |R〉 = |x〉 ∨ |y〉
|0〉    |0〉    |0〉
|0〉    |1〉    |1〉
|1〉    |0〉    |1〉
|1〉    |1〉    |1〉

Table 1.8 Truth table for binary exclusive disjunction

|x〉    |y〉    |R〉 = |x〉 ⊕ |y〉
|0〉    |0〉    |0〉
|0〉    |1〉    |1〉
|1〉    |0〉    |1〉
|1〉    |1〉    |0〉

A logic operation involving two variables has four possible combinations.


Results for binary disjunction are shown in Table 1.7.
To illustrate using the car example, let us assume we are driving our car, when
we are stopped for a routine alcohol and drug test. The police officer makes us blow
through a breathalyzer, which sounds an alarm if we are under the influence of toxic
substances. In this case, the binary disjunction would work as follows:

|Drugs〉 = |1〉 and |Alcohol〉 = |1〉 ⇒ |Alarm〉 = |1〉
|Drugs〉 = |0〉 and |Alcohol〉 = |1〉 ⇒ |Alarm〉 = |1〉
|Drugs〉 = |1〉 and |Alcohol〉 = |0〉 ⇒ |Alarm〉 = |1〉
|Drugs〉 = |0〉 and |Alcohol〉 = |0〉 ⇒ |Alarm〉 = |0〉

Binary Exclusive Disjunction

Binary exclusive disjunction is obtained by applying the XOR operator (symbol ⊕)
to the variables x and y that we want to evaluate in order to obtain the result R.

If x is true, or y is true, as long as x and y are not true simultaneously, then R is


true, otherwise R is false.

Since the logic operation involves two variables there are four possible combi-
nations. Table 1.8 depicts the truth table for binary exclusive disjunction.
To illustrate again using the car example, we are on a long trip and decide to
listen to some music to make our trip less boring. The car has both a radio and a CD
player and we can choose between the radio or an Eric Clapton CD. We cannot
connect both simultaneously since we would only hear what would sound like a
swarm of crickets. Thus

|Radio〉 = |1〉 and |Eric Clapton CD〉 = |1〉 ⇒ |Music〉 = |0〉
|Radio〉 = |0〉 and |Eric Clapton CD〉 = |1〉 ⇒ |Music〉 = |1〉
|Radio〉 = |1〉 and |Eric Clapton CD〉 = |0〉 ⇒ |Music〉 = |1〉
|Radio〉 = |0〉 and |Eric Clapton CD〉 = |0〉 ⇒ |Music〉 = |0〉

Table 1.9 Truth table for binary conjoint negation

|x〉    |y〉    |R〉 = |x〉 ↓ |y〉
|0〉    |0〉    |1〉
|0〉    |1〉    |0〉
|1〉    |0〉    |0〉
|1〉    |1〉    |0〉

Table 1.10 Truth table for binary conjunction

|x〉    |y〉    |R〉 = |x〉 ∧ |y〉
|0〉    |0〉    |0〉
|0〉    |1〉    |0〉
|1〉    |0〉    |0〉
|1〉    |1〉    |1〉

Binary Conjoint Negation

Binary conjoint negation is the negation of binary disjunction, an operation
performed using the NOR operator (symbol ↓). The result R of binary conjoint
negation of two variables x and y is the following:

If x is false and y is false, then R is true, otherwise R is false.

The results for binary conjoint negation are shown in Table 1.9.
Again we use our car to illustrate. We are driving through the desert of Arizona.
It is three o’clock in the afternoon and a merciless sun is shining in a cloudless sky.
The temperature outside the car is 110 °F (43 °C). Disastrously, both a conventional
electric fan we have installed in our car and the air conditioning stop working. Thus

|Fan〉 = |1〉 and |Air conditioning〉 = |1〉 ⇒ |Heat〉 = |0〉
|Fan〉 = |0〉 and |Air conditioning〉 = |1〉 ⇒ |Heat〉 = |0〉
|Fan〉 = |1〉 and |Air conditioning〉 = |0〉 ⇒ |Heat〉 = |0〉
|Fan〉 = |0〉 and |Air conditioning〉 = |0〉 ⇒ |Heat〉 = |1〉

Binary Conjunction

Binary conjunction is performed using the AND operator (symbol ∧). The result
R of binary conjunction applied to two variables x and y is the following:

If x is true and y is true, then R is true, otherwise R is false.

Again we have two variables with four possible combinations. Table 1.10 shows
the truth table for binary conjunction.
Let us go back to the previously mentioned example of a toxic-substance test
while driving. After the breathalyzer test, the police officer can record the results as
follows:

Table 1.11 Truth table for binary exclusion

|x〉    |y〉    |R〉 = |x〉 ↑ |y〉
|0〉    |0〉    |1〉
|0〉    |1〉    |1〉
|1〉    |0〉    |1〉
|1〉    |1〉    |0〉

|No drugs〉 = |1〉 and |No alcohol〉 = |1〉 ⇒ |Negative test〉 = |1〉
|No drugs〉 = |0〉 and |No alcohol〉 = |1〉 ⇒ |Negative test〉 = |0〉
|No drugs〉 = |1〉 and |No alcohol〉 = |0〉 ⇒ |Negative test〉 = |0〉
|No drugs〉 = |0〉 and |No alcohol〉 = |0〉 ⇒ |Negative test〉 = |0〉

Binary Exclusion

Binary exclusion, the negation of binary conjunction, is performed using the NAND
operator (symbol ↑). The result R of binary exclusion applied to two variables x and
y is as follows:

If x is false, or if y is false, or if both x and y are false, then R is true, otherwise


R is false.

The two variables again mean four possible combinations. Table 1.11 depicts the
results for binary exclusion.
We have been driving for a long time and are tired, so we decide that now is a
good moment to stop at a gas station, where we may need to refuel or to add water
to the radiator.

|Petrol〉 = |0〉 and |Radiator〉 = |0〉 ⇒ |Fill〉 = |1〉
|Petrol〉 = |0〉 and |Radiator〉 = |1〉 ⇒ |Fill〉 = |1〉
|Petrol〉 = |1〉 and |Radiator〉 = |0〉 ⇒ |Fill〉 = |1〉
|Petrol〉 = |1〉 and |Radiator〉 = |1〉 ⇒ |Fill〉 = |0〉

Simple Binary Implication

Simple binary implication, performed by the operator IMP (symbol →), is a rather
different logic operation from those described above, as it relies greatly on its own
definition, as follows:

IMP: x → y ≡ NOT(x) OR (y)

Therefore, the result R of the implication (x → y) is true if x is false, and
otherwise equal to the state of y.
To understand simple binary implication we need to consider the logical expression
as a whole: what is true or false is the logical expression itself. The implication
reflects the possibility of a causal relation, as depicted in Table 1.12.
Another random document with
no related content on Scribd:
recalled to Bonn where his life was much saddened by the deaths of
his mother and his little sister.
At this time he made the acquaintance of the von Bruening family,
—mother, three boys and a girl, whose friendship was one of the
inspiring events of his boyhood. He gave lessons to Eleanore and to a
brother, and was a close friend to them all. Here he was introduced
to the marvels of literature, which proved to be a lifelong love and a
solace for the sad hours after he became deaf. He also accompanied
the von Bruenings on holidays in the country, and through them met
Count Waldstein, a young noble and amateur musician, who was
most enthusiastic over Beethoven’s budding talent. Through Count
Waldstein he was brought to the attention of the Elector of Bonn,
who gave the young musician a place as viola player in the orchestra
of his national theatre. Here he made several lifelong friends,—Franz
Ries, who probably taught him to play the violin and viola, the two
Rombergs, Simrock and Stumpff. His old teacher Neefe, was pianist
and stage manager in the theatre.
Now his home became most unhappy because of his father’s
drunkenness and bad habits. The Court, however, in 1799, looked
after Beethoven and saw that part of his father’s salary was paid to
him to help him care for the family. In addition to this the money he
earned by playing and by giving lessons enabled him to support his
brothers and sister.
He Meets Papa Haydn

When Papa Haydn passed through Bonn on his way to London,


Beethoven went to visit him, and brought with him, instead of candy
or flowers, a cantata which he had written for the occasion. Haydn
was delighted with him and offered to teach him if he would go to
Vienna. So, in 1792, on the advice of Count Waldstein, we see him
again in Vienna, studying counterpoint with Haydn. At first he
frankly imitated his master, and although he leaned more toward
Mozart’s colorfulness of style than Haydn’s, from the older composer
he learned how to treat and develop themes, and how to write for the
orchestra.
When Haydn left Vienna for his second visit to England,
Beethoven studied with Albrechtsberger, also with Schenck, Salieri
and Förster. Although he was an amazing student his teachers were
afraid of and for him, for his ideas were ahead of his day. They failed
to see in him the great pathfinder, and naturally thought he was a
dangerous radical or “red” as we would say.
Beethoven’s Friendships

The story of Beethoven’s life is a story of a few faithful friendships.


He was not befriended for his personal beauty, but for his inner
beauty. His head was too big for his body, he did not care what sort
of clothes he wore, nor did he have any regard for conventions,
fashions or great personages. He was a real democrat and cared
nothing for titles and the things smaller men respect. Once
Beethoven’s brother called on him and left his card upon which was
written, next his name, “Man of Property.” Beethoven in return sent
his card on which he wrote, “Man of Brains.”
Thinking that Napoleon was going to free mankind, he dedicated
the Eroica, the third symphony, to him. But when he heard that
Napoleon had set himself up as Emperor, in a violent rage, he
trampled on the dedication page.
One day he and Goethe were walking along the street when the
King passed by. Goethe stood aside with uncovered head but
Beethoven refused to alter his path for royalty and kept on his hat,
for he felt on an equality with every man and probably a little
superior. But he lost his friendship with Goethe because of his many
failures to conform to customs.
At twenty-seven Beethoven began to grow deaf. It made him very
morose and unhappy. In 1800 he wrote to his friend Wegeler, the
husband of Eleanore von Bruening, “My hearing during the last three
years has become gradually worse. I can say with truth that my life is
very wretched. For nearly two years past I have avoided all society
because I find it impossible to say to people ‘I am deaf.’ In any other
profession this might be tolerable but in mine, such a condition is
truly frightful.”
Beethoven was forceful and noble in spirit, quick tempered,
absent-minded, gruff, and cared little for manners and customs
except to be honest and good. But although he was absent-minded he
never neglected his work or his obligations to any man, and his
compositions show the greatest care and thought. He worked a piece
over and over before it was finished and not, like Mozart, did it
bubble from him whole and perfect.
He was too high-strung and impatient to teach much and
Ferdinand Ries, the son of Franz, and Czerny seem to be his only
well-known pupils. But he taught many amateurs among the
nobility, which probably accounts for many of his romances. In later
years, he withdrew unto himself and became irritable and suspicious
of everybody, both because of his deafness and the misery his family
caused him.
Yet this great man, tortured with suspicions and doubt, and
storming often against his handicap, always stood upright and
straight and never did anything dishonorable or mean. In fact, he
was a very moral man, who lived and composed according to the
dictates of his soul and never wrote to please or to win favor.
He made valuable friends among music lovers and patrons such as
Prince and Princess Lichnowsky, Prince Lobkowitz, Count
Rasomouwsky, Empress Maria Theresa and others, to whom he
dedicated many of his great works. This he did only as a mark of his
friendship rather than for gain.
He was clumsy and awkward and had bad manners and a quick
temper, and he had a heavy shock of black hair, that was always in
disorder, but the soul of the man shone out from his eyes and his
smile lit up his face. Although he is said to have been unkempt, he
was exceedingly clean, for when he was composing he would often
interrupt his work to wash.
When the Leonore overture was being rehearsed, one of the three
bassoon players was missing. Prince Lobkowitz, a friend of
Beethoven, jokingly tried to relieve his mind by saying, “It doesn’t
make any difference, the first and second bassoon are here, don’t
mind the third.” Beethoven nearly pranced with rage, and reaching
the street later, where the Prince lived, he crossed the square to the
gates of the Palace and stopped to shout at the entrance, “Donkey of
a Lobkowitz!” and then passed on, raving to himself. But there was a warm, sweet streak in his nature, for his friends loved him dearly, and
he was very good to his nephew Carl, who lied to him and deceived
him. Carl added to Beethoven’s unhappiness, for when he was lonely
and in need of him, Carl never would come to him unless for money.
Beethoven had a high regard for women and loved Countess
Guicciardi, who refused many times to marry him, but he dedicated
The Moonlight Sonata and some of his songs to her.
We see his great heart broken by his nephew, we see his sad letters
begging him to come and take pity on his loneliness, we see him
struggle to make money for him; and all Carl did was to accept all
and give nothing. Finally this ungrateful boy was expelled from
college because he failed in his examinations. This was such a
disgrace that he attempted to commit suicide. As this was also looked
upon as a crime he was given twenty-four hours to leave Vienna and
so enlisted in the army. Nevertheless Beethoven made Carl his sole
heir. Doesn’t this show him to be a really great person?
Beethoven the Pianist

While at Vienna he met the great pianists and played far better
than any of them. No one played with such expression, with such
power or seemed worthy even to compete with him. Mozart and
others had been charming players and composers, but Beethoven
was powerful and deep, even most humorous when he wanted to be.
He worked well during these years, and with his usual extreme
care changed and rechanged the themes he found in his little sketch
books into which, from boyhood he had put down his musical ideas.
Those marvelous sketch books! What an example they are! They
show infinite patience and “an infinite capacity for taking pains”
which has been given by George Eliot as a definition of genius.
The Three Periods

At his first appearance as a pianist in Vienna he played his own C major Concerto in 1795. From 1795 to 1803 he wrote all the works
from opus 1 to 50. In these were included symphonies 1 and 2, the
first three piano concertos, and many sonatas for piano, trios and
quartets, a septet and other less important works.
This is the first period of Beethoven’s life. His second period in
which his deafness grew worse and caused him real physical illness,
extended to 1815. In this period the trouble with his nephew and the deceit of his two brothers preyed on his mind to such an extent that he became irascible and unapproachable. His lodgings were the scene of
distressing upheavals and Beethoven was like a storm-beaten
mountain!
For consolation, he turned to his music, and in the storm and
stress he wrote the noble opera Fidelio, and the third symphony,
Eroica, concertos, sonatas and many other things.
Someone once asked him, “Why don’t you write opera?” He
replied, “Give me a libretto noble enough for my music.” Evidently
this is the reason why he wrote only one opera. We find another
example of his patience and self-criticism, as he wrote four overtures
for Fidelio. Three of them are called Leonore overtures and one
Fidelio. The third Leonore seems to be the favorite, and is often
played.
By 1822, the beginning of the third period, the great music maker
was stone deaf! Yet he wrote the magnificent Mass in D and his last
symphony, the Ninth, with the “Hymn of Joy,” two of the great
masterpieces of the world, although he was unable to hear one note
of what he had composed as he could not hear his beloved violin even
when he held it close to his ears.
Imagine Beethoven—stone deaf, attending a performance of the
Ninth Symphony in a great hall—not knowing that it had had a
triumphal success until one of the soloists turned him around to see
the enthusiastic faces and the hands clapping and arms waving, for
he could hear not a sound! He who had built such beautiful things for
us to hear, knew them only in his mind!
Beethoven was a great lover of nature. He used to stroll with his
head down and his hands behind his back, clasping his note book in
which he jotted down the new ideas as they came to him. He wrote to
a friend, “I wander about here with music paper among the hills and
dales and valleys and scribble a bit; no man on earth could love the
country as I do.”
Beethoven Makes Music Grow

If you have ever seen a sculptor modeling in clay you know that his
great problem is to keep it from drying, because only in the moist
state can it be moulded into shape. In the same way, we have seen in
following the growth of music, that no matter how beautiful a style of
composition is, as soon as it becomes set in form, or in other words
as soon as it hardens, it changes. Let us look back to the period of the
madrigal. You remember that the early madrigals were of rare beauty
but later the composers became complicated and mechanical in their
work and the beauty and freshness of their compositions were lost.
The people who felt this reached out for new forms of expression, and we see the opera with its arias and recitatives as a result. The great innovator Monteverde broke this spell of the old polyphonic form, which, like the sculptor’s clay, had stiffened and dried.
The same thing happened after Bach brought the suite and fugue
to their highest. The people again needed something new, and
another form grew out of the suite, the sonata of Philip Emanuel
Bach, Haydn and Mozart. The works of these men formed the Classic
Period which reached its greatest height with the colossus,
Beethoven. As we told you, he used the form inherited from Haydn
and Mozart, but added much of a peculiar power which expressed
himself. But again the clay hardened! Times and people changed,
poetry, science and philosophy led the way to more personal and
shorter forms of expression. Up to Bach’s time, music, outside of the
folk-song, had not been used to express personal feeling; the art was
too young and had grown up in the Church which taught the denial
of self-expression.
In the same way, the paintings up to the time of the 16th century
did not express personal feelings and happenings, but were only
allowed to be of religious subjects, for the decorating of churches and
cathedrals.
Beethoven, besides being the peak of the classic writers, pointed the way for the music of personal expression, not the merely graceful expression that was then the fashion. This new music came to be called the “Romantic School,” and he was able to open the way to it because he was big enough to combine the sonata form of classic mould with the delicacy, humor, pathos, nobility and singing beauty for which the people of his day yearned.
This led again to the crashing of the large and dried forms made perfect by Beethoven, and we see him as the bridge which leads to Mendelssohn, Chopin, Schubert and Schumann, who expressed in shorter form every possible human mood.
Beethoven was great enough to bring music to maturity so that it
expressed not only forms of life, but life itself.
How and what did he do? First, he became master of the piano and
could from childhood sit down and make marvelous improvisations.
He studied all forms of music, counterpoint, harmony, and
orchestration. At first he followed the old forms, as we see in the first
two symphonies. In the third symphony, the Eroica, he changed
from the minuet (a relic of the old dance suite) to the scherzo, an
enlarged form of the minuet with more chance for musical
expression,—the minuet grown up. In sonatas like The Pathetique,
he used an introduction and often enlarged the coda or ending, to
such an extent that it seems like an added movement, so rich was he
in power in working over a theme into beautiful musical speech.
Later we see him abandoning set forms and writing the Waldstein
Sonata in free and beautiful ways. Even the earlier sonatas, like The Moonlight (Opus 27, No. 2) and its sister, Opus 27, No. 1, are written so freely that they are called Fantasy Sonatas, so full of free, flowing melody has the sonata become under his hand.
His work becomes so lofty and so grand, whether in humorous or
in serious vein, that when we compare his compositions to those of
other men, he seems like one of the loftiest mountain peaks in the
world, reaching into the heavens, yet with its base firmly standing in
the midst of men.
A Composer of Instrumental Music

Beethoven was distinctly a composer of instrumental music, although he wrote the opera Fidelio, and also the Ninth Symphony, in which he made great innovations in symphonic form and introduced the chorus.
Up to this time, composers in the Classic School had paid more
attention to the voice and to the soloists in the concertos than to the
orchestra. Thus we see men like Mozart leaving a space toward the
end of the movement in a concerto for the soloist to make up his own
closing salute to his audience before the orchestra ended the piece.
These cadenzas became acrobatic feats in which the players wrote
the most difficult “show-off” music. Beethoven, with his love for the
orchestra and his feeling that the soloist and the orchestra should
make one complete unit, wrote the cadenza himself and thereby
made the composition one beautiful whole rather than a sandwich of
the composer, soloist and composer again.
Fancy all this from a man who, when he multiplied 14 × 26, had to add fourteen twenty-sixes in a column! We saw this column of
figures written on a manuscript of Beethoven’s in an interesting
collection, and the story goes that Beethoven tried to verify a bill that
was brought to him in the midst of a morning of hard work at his
composing.
Besides his symphonies, concertos and sonatas in which are light
moods, dark moods, gay and sad moods, spiritual heights and
depths, filling hearers with all beauty of emotion,—he wrote gay little
witty things, like the German Dances, The Fury over the Loss of a
Penny (which is really funny), four overtures, many English, Scotch,
Irish, Welsh and Italian folk-song settings. He also wrote one
oratorio called The Mount of Olives, two masses, one of which is the
magnificent Missa Solemnis, one concerto for the violin that is the
masterpiece of its kind, and the one grand opera Fidelio.
Thus we have told you about the bridge to the “Romantic
Movement” which will follow in the next chapter.
Beethoven could have said with Robert Browning’s “Abt Vogler”:
Ye know why the forms are fair, ye hear how the tale is told;
It is all triumphant art, but art in obedience to laws....

And I know not if, save in this, such gift be allowed to man,
That out of three sounds he frame, not a fourth sound, but a star.

[Illustration: Johann Sebastian Bach. George Frederick Handel. Franz Josef Haydn. Carl Maria von Weber. Three Classic Composers and an Early Romantic Composer.]

[Illustration: The Piano and Its Grand-parents. Courtesy of Morris Steinert & Sons, Company.]
CHAPTER XXII
The Pianoforte Grows Up

The Ancestry of the Pianoforte

We feel so familiar with the Pianoforte that we call it piano for short
and almost forget that it is dignified by the longer name. We forget
too, that Scarlatti, Rameau and Bach played not on the piano but on
its ancestors, and that Byrd, Bull and Gibbons did not write their
lovely dance suites for the instrument on which we play them today.
The Pianoforte’s family tree has three distinct branches,—strings,
sounding board and hammers. First we know the piano is a stringed
instrument, although it hides its chief characteristic, not under a
bushel, but behind a casing of wood.
Where Stringed Instruments Came From

We have seen the stringed instrument developed from the bow when primitive man winged his arrow in the hunt, and heard its
twang. Later, desiring fuller tone, early peoples sank bow-like instruments and reeds into a gourd, and so the sounding board grew; the gourd increased and reflected the sound as the metal reflector behind a light intensifies it.
Strings, to produce sound, must be rubbed, like the bow drawn across violin strings; plucked, as the mandolin or the harp is plucked; or struck with a hammer, as was the dulcimer.
In the ancient times there were two instruments much alike, the
psaltery and the dulcimer, both with a triangular or rectangular
sounding box across which are stretched strings of wire or gut
fastened to tuning pins. The difference between these two “relatives”
is that the psaltery is plucked with fingers or a plectrum, and the
dulcimer is struck with hammers. So the psaltery is the grandfather
of the virginal, spinet, clavecin, and harpsichord, while the dulcimer
is the remote ancestor of the pianoforte.
The first record we find of a dulcimer is a stone picture near
Nineveh, of an Assyrian king in 667 B.C., celebrating a triumphal
procession. This dulcimer, suspended from the neck of the player, is
being struck with a stick in his right hand, while his left palm on the
string checks the tone. Here we have the first stringed instrument
which was hammered and muffled, two important elements in the
piano.
In Persia the dulcimer was called the santir and is still used under
different names in the Orient and other places. In Greece and other
countries it was called the psalterion, and in Italy, the dolcimelo.
Later, the Germans had a sort of dulcimer called the Hackbrett,
probably because it was “hacked” as the butcher hacks meat! We see
the dulcimer in many shapes according to the fancy of the people
who use it. The word comes from dulce—the Latin for “sweet” and
melos—the Greek for “melody.”
As people grew wiser and more musical, they padded their hammers or mallets; this gave the idea for the padded hammer of the piano, and for checking the tone as our Ninevehan did with his left palm.
Should you ever listen to a gypsy band, you will hear the dulcimer, or cimbalom.
The Keyboard

The third element in the making of the piano is the keyboard. It is evident that the piano keyboard and the organ keyboard are
practically the same. The water organs of the Greeks and Romans
had keyboards, but as the Christian Church forbade the use of organs
as sacrilegious, keyboards were lost for almost a thousand years.
The keyboard seems to have developed from the Greek monochord
used in the Middle Ages to give the pitch in convent singing. It was
tuned with a movable bridge or fret pushed back and forth under the strings by the fingers. At first the string was stretched with weights hung at one end. It was a simple matter to add strings to produce more tones,
later tuning pins were added and finally a keyboard. This was the
whole principle of the clavichord. (We might say that the monochord
and dulcimer are the Adam and Eve of the pianoforte family.)
