

Chapter 6
What Is Boolean Logic and How It Works

If you want to understand the answer to this question down at the very core, the first
thing you need to understand is something called Boolean logic. Boolean logic,
originally developed by George Boole in the mid-1800s, allows quite a few unex-
pected things to be mapped into bits and bytes. The great thing about Boolean logic
is that, once you get the hang of things, Boolean logic (or at least the parts you need
in order to understand the operations of computers) is outrageously simple. In this
chapter, we will first discuss simple logic “gates” and then see how to combine them
into something useful. A contemporary of Charles Babbage, whom he briefly met,
Boole is, these days, credited as being the “forefather of the information age.” An
Englishman by birth, in 1849 he became the first professor of mathematics at Ireland's new Queen's College (now University College) Cork. He died at the age of
49 in 1864, and his work might never have had an impact on computer science without somebody like Claude Shannon, who 70 years later recognized the relevance
of Boole's symbolic logic for engineering. As a result, Boole's thinking has become the
practical foundation of digital circuit design and the theoretical grounding of the
digital age.

6.1 Introduction

Boolean logic, originally developed by George Boole in the mid-1800s, allows
quite a few unexpected things to be mapped into bits and bytes. What George Boole
did to be recognized as the father of modern information technology was to come
up with an idea that was at the same time revolutionary and simple. Boole’s work
certainly started modern logic off on the right road, but it certainly was not anything
to do with the laws of thought. The fact of the matter is that even today, we have no
clear idea what laws govern thought, and if we did, the whole subject of artificial
intelligence would be a closed one. How Boolean logic works, and what it is, can be
simply explained by asking yourself how a computer can do logical tasks such as balancing a checkbook, playing chess, or spell-checking a document. These are things that, just a few decades ago, only humans could do.
Now computers do them with apparent ease. How can a “chip” made up of sili-
con and wires do something that seems like it requires human thought?
Computers and logic have an inseparable relationship now, but at the start things were much hazier. The first computers were conceived as automatic
arithmetic engines, and while their creators were aware that logic had something to
do with it all, they were not 100% clear as to the how or why. Even today, we tend
to be overly simplistic about logic and its role in computation and in understanding the
world, and George Boole, the man who started it all off, was a bit over the top with
the title of his book on the subject, The Laws of Thought.
Boolean logic is very easy to explain and to understand:
• You start off with the idea that some statement P is either true or false; it can't be
anything in between (this is called the law of the excluded middle).
• Then, you can form other statements, which are true or false, by combining these
initial statements together using the fundamental operators AND, OR, and NOT.
Exactly what a fundamental operator is forms an interesting question in its own
right, something we will return to later when we ask how few logical operators we
actually need.
The way that all this works more or less fits in with the way that we use these
terms in English.
For example, if P is true, then NOT(P) is false, so if “today is Monday” is true,
then “Not (today is Monday)” is false. We often translate the logical expression into
English as “today is Not Monday,” and this makes it easier to see that it is false if
today is indeed Monday.
Well, this is the problem with this sort of discussion: in English it very quickly becomes
convoluted and difficult to follow, and this is where part of the power of Boolean logic lies. You
can write down arguments clearly in symbolic form.
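To see how compactly this can be written down, here is a tiny sketch in Python, used purely as an illustration (the variable names are our own); the operators NOT, AND, and OR appear directly as the keywords not, and, and or:

    P = True    # "today is Monday"
    Q = False   # some other statement

    print(not P)    # NOT(P)  -> False
    print(P and Q)  # P AND Q -> False
    print(P or Q)   # P OR Q  -> True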
The great thing about Boolean logic is that, once you get the hang of things,
Boolean logic (or at least the parts you need in order to understand the operations of
computers) is outrageously simple.
In this chapter, we will first discuss what bits and bytes are and what we mean by
these values, and then we talk about simple logic “gates” and then see how to com-
bine them into something useful.

6.2 How Bits and Bytes Work

If you have used a computer for more than 5 min, then you have heard the words bits
and bytes. Both RAM and hard disk capacities are measured in bytes, as are file
sizes when you examine them in a file viewer. Figure 6.1 shows the logic of a computer that works on the basis of binary 0s and 1s.

Fig. 6.1 Fundamentals of computer logic

You might hear an advertisement that says, "This computer has a 32-bit Pentium
processor with 64 MB of RAM and 2.1 GB of hard disk space"; this is the kind of
information we need to be a bit knowledgeable about. In this section, we will
discuss bits and bytes so that you have a complete understanding of them.
To get a better grasp of the computer logic of bits and bytes, as well as an
understanding of how they work, we refer to the work on Marshall Brain's
technical site, where he describes "How Bits and Bytes Work" [1].
The easiest way to understand bits is to compare them to something you know:
digits and decimal numbers.
A digit is a single place that can hold numerical values between zero and nine.
Digits are normally combined together in groups to create larger numbers. For
example, 6357 has four digits. It is understood that in the number 6357, the 7 is
filling the “1s place,” while the 5 is filling the 10s place, the 3 is filling the 100s
place, and the 6 is filling the 1000s place. Therefore, you could express things this
way if you wanted to be explicit:

(6 × 10³) + (3 × 10²) + (5 × 10¹) + (7 × 10⁰) = 6000 + 300 + 50 + 7 = 6357

What you can see from this expression is that each digit is a placeholder for the
next higher power of 10, starting in the first digit with 10 raised to the power of 0.
That should all feel comfortable—we work with decimal digits every day. The
neat thing about number systems is that there is nothing that forces you to have ten
different values in a digit. Our base-10 number system likely grew up because we
have ten fingers, but if we happened to evolve to have eight fingers instead, we
would probably have a base-8 number system. You can have base-anything number
systems. In fact, there are many good reasons to use different bases in different
situations.
Computers happen to operate using the base-2 number system, also known as the
binary number system (just as the base-10 number system is known as the
decimal number system). Find out why and how that works in the next subsection.

6.2.1 The Base-2 System and the 8-Bit Byte

The reason computers use the base-2 system is that it makes them a lot easier to
implement with current electronic technology. You could wire up and build
computers that operate in base-10, but they would be fiendishly expensive right
now. On the other hand, base-2 computers are relatively cheap.
Therefore, computers use binary numbers and thus use binary digits in place of
decimal digits. The word bit is a shortening of the words binary digit. Whereas deci-
mal digits have ten possible values ranging from 0 to 9, bits have only two possible
values: 0 and 1. Therefore, a binary number is composed of only 0s and 1s, like this:
1011. How do you figure out what the value of the binary number 1011 is? You do
it in the same way we did it above for 6357, but you use a base of 2 instead of a base
of 10. So:

(1 × 2³) + (0 × 2²) + (1 × 2¹) + (1 × 2⁰) = 8 + 0 + 2 + 1 = 11
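If you want to check this positional arithmetic yourself, a few lines of Python (purely an illustrative sketch) reproduce both the decimal and the binary examples:

    digits = [6, 3, 5, 7]    # the decimal example
    print(sum(d * 10 ** p for p, d in enumerate(reversed(digits))))  # 6357

    bits = [1, 0, 1, 1]      # the binary example
    print(sum(b * 2 ** p for p, b in enumerate(reversed(bits))))     # 11
    print(int("1011", 2))    # 11, using the built-in base-2 parser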

You can see that in binary numbers, each bit holds the value of increasing powers
of 2. That makes counting in binary pretty easy. Starting at 0 and going through 20,
counting in decimal and binary looks like the presentation in Table 6.1.
When you look at this sequence, 0 and 1 are the same for the decimal and binary
number systems. At the number 2, you see carrying first take place in the binary
system.
Table 6.1 Counting from 0 to 20 in decimal and binary
0 = 0
1 = 1
2 = 10
3 = 11
4 = 100
5 = 101
6 = 110
7 = 111
8 = 1000
9 = 1001
10 = 1010
11 = 1011
12 = 1100
13 = 1101
14 = 1110
15 = 1111
16 = 10000
17 = 10001
18 = 10010
19 = 10011
20 = 10100

Table 6.2 The 8 bits in a byte
0 = 00000000
1 = 00000001
2 = 00000010
...
254 = 11111110
255 = 11111111

If a bit is 1 and you add 1 to it, the bit becomes 0 and the next bit becomes
1. In the transition from 15 to 16, this effect rolls over through 4 bits, turning 1111
into 10000.
Bits are rarely seen alone in computers. They are usually bundled together into
8-bit collections, and these collections are called bytes. Why are there 8 bits in a
byte? A similar question is, “Why are there 12 eggs in a dozen?” The 8-bit byte is
something that people settled on through trial and error over the past 50 years. With
8 bits in a byte, one can represent 256 values ranging from 0 to 255, as shown
in Table 6.2.
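A short Python loop (again just a sketch) prints the same correspondence as Table 6.2 and confirms the count of 256 possible values:

    for n in (0, 1, 2, 254, 255):
        print(n, "=", format(n, "08b"))  # e.g. 255 = 11111111
    print(2 ** 8)                        # 256 values fit in 8 bits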
Next, we will look at one way that bytes are used.

6.2.2 The Standard ASCII Character Set

Bytes are frequently used to hold individual characters in a text document. In the
ASCII character set, each binary value between 0 and 127 is given a specific char-
acter. Most computers extend the ASCII character set to use the full range of 256
characters available in a byte. The upper 128 characters handle special things like
accented characters from common foreign languages.
You can see the 127 standard ASCII codes below. Computers store text docu-
ments, both on disk and in memory, using these codes. For example, if you use
Notepad in Windows 95/98 or a later version to create a text file containing the
words, “Four score and seven years ago,” Notepad would use 1 byte of memory per
character (including 1 byte for each space character between the words—ASCII
character 32). When Notepad stores the sentence in a file on disk, the file will also
contain 1 byte per character and per space.
Try this experiment: Open up a new file in Notepad and insert the sentence “Four
score and seven years ago” in it. Save the file to disk under the name getty.txt. Then
use the explorer and look at the size of the file. You will find that the file has a size
of 30 bytes on disk: 1 byte for each character. If you add another word to the end of
the sentence and re-save it, the file size will jump to the appropriate number of
bytes. Each character consumes a byte.
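The same experiment can be run in a few lines of Python instead of Notepad; the file name getty.txt is kept only to mirror the description above:

    text = "Four score and seven years ago"
    data = text.encode("ascii")    # one byte per character
    print(len(data))               # 30 bytes, matching the size on disk
    with open("getty.txt", "wb") as f:
        f.write(data)              # the saved file will also be 30 bytes
    print(list(data[:4]))          # [70, 111, 117, 114] -> "F", "o", "u", "r"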
If you were to look at the file as a computer looks at it, you would find that each
byte contains not a letter but a number—the number is the ASCII code correspond-
ing to the character (see Table 6.3). So on disk, the numbers for the file look like
this:

Table 6.3 ASCII codes corresponding to the characters

F   o   u   r       a   n   d       s   e   v   e   n
70  111 117 114 32  97  110 100 32  115 101 118 101 110

Table 6.4 Byte prefixes and binary math units

Kilo (K)   2¹⁰ = 1024
Mega (M)   2²⁰ = 1,048,576
Giga (G)   2³⁰ = 1,073,741,824
Tera (T)   2⁴⁰ = 1,099,511,627,776
Peta (P)   2⁵⁰ = 1,125,899,906,842,624
Exa (E)    2⁶⁰ = 1,152,921,504,606,846,976
Zetta (Z)  2⁷⁰ = 1,180,591,620,717,411,303,424
Yotta (Y)  2⁸⁰ = 1,208,925,819,614,629,174,706,176

By looking in the ASCII table, you can see a one-to-one correspondence between
each character and the ASCII code used. Note the use of 32 for a space—32 is the
ASCII code for a space. We could expand these decimal numbers out to binary
numbers (so 32 = 00100000) if we wanted to be technically correct—that is how the
computer really deals with things.
The first 32 values (0 through 31) are codes for things like carriage return and
line feed. The space character is the 33rd value, followed by punctuation, digits,
uppercase characters, and lowercase characters. To see all 127 values, check out
Unicode.org’s chart.
We will learn about byte prefixes and binary math next.

6.2.3 Byte Prefixes and Binary Math

When you start talking about lots of bytes, you get into prefixes like kilo, mega, and
giga, as in kilobyte, megabyte, and gigabyte (also shortened to K, M, and G, as in
Kbytes, Mbytes, and Gbytes, or KB, MB, and GB). The following table shows the
binary multipliers. See Table 6.4.
You can see in this chart that kilo is about a thousand, mega is about a million,
giga is about a billion, and so on. So when someone says, “This computer has a 2
gig hard drive,” what he or she means is that the hard drive stores 2 GB, or approxi-
mately 2 billion bytes, or exactly 2,147,483,648 bytes.
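The exact figure falls straight out of the powers of two in Table 6.4; a quick check (an illustrative sketch):

    kilo, mega, giga = 2 ** 10, 2 ** 20, 2 ** 30
    print(kilo, mega, giga)  # 1024 1048576 1073741824
    print(2 * giga)          # 2147483648 bytes in a "2 gig" hard drive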
Terabyte databases are fairly common these days, and there are probably a few
petabyte databases floating around the Pentagon, Homeland Security, and the
intelligence community by now. Our futuristic Business Resilience System (BRS) needs
to be structured and designed around data of these sizes for handling all types of
events, from e-commerce to banking, from power plants to renewable energy and
sustaining demand for energy on the grid, and so on.

Binary math works just like decimal math, except that the value of each bit can
be only 0 or 1. To get a feel for binary math, let us start with decimal addition and
see how it works. Assume that we want to add 452 and 751:

  452
+ 751
-----
 1203

To add these two numbers together, you start at the right: 2 + 1 = 3. No problem.
Next, 5 + 5 = 10, so you save the zero and carry the 1 over to the next place. Next,
4 + 7 + 1 (because of the carry) = 12, so you save the 2 and carry the 1. Finally,
0 + 0 + 1 = 1. So the answer is 1203.
Binary addition works exactly the same way:

  010
+ 111
-----
 1001

Starting at the right, 0 + 1 = 1 for the first digit. No carrying there. You have
1 + 1 = 10 for the second digit, so save the 0 and carry the 1. For the third digit,
0 + 1 + 1 = 10, so save the zero and carry the 1. For the last digit, 0 + 0 + 1 = 1. So
the answer is 1001. If you translate everything over to decimal, you can see it is
correct: 2 + 7 = 9.
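Python accepts binary literals with a 0b prefix, so the worked example can be verified directly (a one-off sanity check, nothing more):

    a, b = 0b010, 0b111  # 2 and 7
    print(a + b)         # 9
    print(bin(a + b))    # 0b1001, matching the hand calculation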
To sum up, here is what we’ve learned about bits and bytes:
• Bits are binary digits. A bit can hold the value 0 or 1.
• Bytes are made up of 8 bits each.
• Binary math works just like decimal math, but each bit can have a value of only
0 or 1.
There really is nothing more to it—bits and bytes are that simple.
For more information on bits, bytes, and related topics, check out Marshall Brain's
web site and the links it provides [1].

6.3 Logical Gates

Boolean logic affects how computers operate, and it is illustrated holistically here in
Fig. 6.2, which shows a collection of gate arrays.
Several kinds of gates exist in any computer, and you need to know about them and
learn their operational functions. The simplest possible one is called an "inverter,"
or a NOT gate. This gate takes 1 bit as input and produces its opposite as output. Its
logic table, along with those of the other gates, is given in Table 6.5.
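Each of the gates listed in Table 6.5 can also be modeled as a small function; the following Python sketch (the function names are our own, not a standard library) treats every input and output as a bit that is either 0 or 1:

    def NOT(a):     return 1 - a
    def AND(a, b):  return a & b
    def OR(a, b):   return a | b
    def NAND(a, b): return NOT(AND(a, b))  # AND followed by NOT
    def NOR(a, b):  return NOT(OR(a, b))   # OR followed by NOT
    def XOR(a, b):  return a ^ b           # 1 if exactly one input is 1
    def XNOR(a, b): return NOT(XOR(a, b))

    # Reproduce the NOT-gate rows of Table 6.5.
    for a in (0, 1):
        print(a, "->", NOT(a))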

Fig. 6.2 Computer logical gate arrays

6.4 Simple Adders

In the article on bits and bytes, you learned about binary addition. In this section,
you will learn how you can create a circuit capable of binary addition using the
gates described in the previous section.
Let us start with a single-bit adder. Let us say that you have a project where you
need to add single bits together and get the answer. The way you would start design-
ing a circuit for that is to first look at all of the logical combinations. You might do
that by looking at the following four sums:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10
That looks fine until you get to 1 + 1. In that case, you have that pesky carry bit
to worry about. If you don’t care about carrying (because this is, after all, a 1-bit
addition problem), then you can see that you can solve this problem with an XOR
gate. But if you do care, then you might rewrite your equations to always include 2
bits of output like this:
0 + 0 = 00
0 + 1 = 01
1 + 0 = 01
1 + 1 = 10
From these equations, you can form the logic table:
1-bit adder with carry-out

A B Q CO
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1

Table 6.5 The logic tables of the gates

NOT gate
The NOT gate has one input called A and one output called Q ("Q" is used for the output
because if you used "O," you would easily confuse it with zero). The table shows how the
gate behaves. When you apply a 0 to A, Q produces a 1. When you apply a 1 to A, Q
produces a 0. Simple.
A Q
0 1
1 0
AND gate
The AND gate performs a logical "and" operation on two inputs, A and B:
A B Q
0 0 0
0 1 0
1 0 0
1 1 1
The idea behind an AND gate is "If A AND B are both 1, then Q should be 1." You can see
the behavior in the logic table for the gate. You read this table row by row like this:
A B Q
0 0 0  If A is 0 AND B is 0, Q is 0
0 1 0  If A is 0 AND B is 1, Q is 0
1 0 0  If A is 1 AND B is 0, Q is 0
1 1 1  If A is 1 AND B is 1, Q is 1
OR gate
The next gate is an OR gate. Its basic idea is, "If A is 1 OR B is 1 (or both are 1), then Q is 1":
A B Q
0 0 0
0 1 1
1 0 1
1 1 1
NAND gate
Those are the three basic gates (that's one way to count them). It is quite common to
recognize two others as well: the NAND and the NOR gates. These two gates are simply
combinations of an AND or an OR gate with a NOT gate. If you include these two gates,
then the count rises to five. Here's the basic operation of NAND and NOR gates; you can
see they are simply inversions of AND and OR gates:
A B Q
0 0 1
0 1 1
1 0 1
1 1 0


NOR gate
A B Q
0 0 1
0 1 0
1 0 0
1 1 0
The final two gates that are sometimes added to the list are the XOR
and XNOR gates, also known as “exclusive or” and “exclusive nor”
gates, respectively. Here are their tables
XOR gate
A B Q
0 0 0
0 1 1
1 0 1
1 1 0
XNOR gate
A B Q
0 0 1
0 1 0
1 0 0
1 1 1
The idea behind an XOR gate is "If either A OR B is 1, but NOT both, Q is 1." The reason
why XOR might not be included in a list of gates is that you can implement it easily using
the original three gates listed:
XOR gate
A B Q
0 0 0
0 1 1
1 0 1
1 1 0
If you try all four different patterns for A and B and trace them through the circuit, you
will find that Q behaves like an XOR gate. Since there is a well-understood symbol for
XOR gates, it is generally easier to think of XOR as a "standard gate" and use it in the
same way as AND and OR in circuit diagrams:
XNOR gate
A B Q
0 0 1
0 1 0
1 0 0
1 1 1

By looking at this table, you can see that you can implement Q with an XOR gate
and CO (carry-out) with an AND gate. Simple.
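Written as code, this single-bit adder (often called a half adder) is exactly those two gates; here is a minimal Python sketch, with the function name chosen only for this example:

    def half_adder(a, b):
        q = a ^ b   # the sum bit comes from an XOR gate
        co = a & b  # the carry-out comes from an AND gate
        return q, co

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))  # reproduces the table above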
What if you want to add two 8-bit bytes together? This becomes slightly harder.
The easiest solution is to modularize the problem into reusable components and
then replicate components. In this case, we need to create only one component: a
full binary adder.
The difference between a full adder and the previous adder we looked at is that a
full adder accepts an A and a B input plus a carry-in (CI) input. Once we have a full
adder, then we can string eight of them together to create a byte-wide adder and
cascade the carry bit from one adder to the next.
In the next section, we will look at how a full adder is implemented in a circuit [1].

6.5 Full Adders

The logic table for a full adder is slightly more complicated than the tables we have
used before, because now we have 3 input bits. It looks like this:
1-bit full adder with carry-in and carry-out

CI A B Q CO
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

There are many different ways that you might implement this table. I am going
to present one method here that has the benefit of being easy to understand. If you
look at the Q bit, you can see that the top 4 bits are behaving like an XOR gate with
respect to A and B, while the bottom 4 bits are behaving like an XNOR gate with
respect to A and B. Similarly, the top 4 bits of CO are behaving like an AND gate
with respect to A and B, and the bottom 4 bits behave like an OR gate. Taking these
facts, the following circuit implements a full adder: see Fig. 6.3.
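The same behavior can be expressed in a few lines of code; the sketch below is logically equivalent to the circuit described here, although it is written as bit arithmetic rather than as drawn gates:

    def full_adder(a, b, ci):
        q = a ^ b ^ ci                 # XOR of A and B when CI is 0, XNOR when CI is 1
        co = (a & b) | (ci & (a ^ b))  # AND of A and B when CI is 0, OR when CI is 1
        return q, co

    # Reproduce the 1-bit full adder truth table above.
    for ci in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                print(ci, a, b, "->", full_adder(a, b, ci))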

Fig. 6.3 Full adders can be implemented in a wide variety of ways [1]

This definitely is not the most efficient way to implement a full adder, but it is
extremely easy to understand and trace through the logic using this method. If you
are so inclined, see what you can do to implement this logic with fewer gates.
Now we have a piece of functionality called a "full adder." What a computer
engineer then does is "black-box" it so that he or she can stop worrying about the
details of the component. A black box for a full adder simply has the A, B, and
carry-in lines going in and the Q and carry-out lines coming out.
With that black box, it is now easy to draw a 4-bit full adder.

In this diagram, the carry-out from each bit feeds directly into the carry-in of the
next bit over. A 0 is hardwired into the initial carry-in bit. If you input two 4-bit
numbers on the A and B lines, you will get the 4-bit sum out on the Q lines, plus 1
additional bit for the final carry-out. You can see that this chain can extend as far as
you like, through 8, 16, or 32 bits if desired.
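In code, chaining the stages can be sketched as follows (the full adder from the previous section is repeated so the example stands on its own, and the function names are again our own):

    def full_adder(a, b, ci):
        return a ^ b ^ ci, (a & b) | (ci & (a ^ b))

    def ripple_carry_add(a_bits, b_bits):
        """Add two equal-length bit lists, least significant bit first."""
        carry = 0                               # the initial carry-in is hardwired to 0
        total = []
        for a, b in zip(a_bits, b_bits):
            q, carry = full_adder(a, b, carry)  # the carry ripples into the next stage
            total.append(q)
        return total, carry                     # sum bits plus the final carry-out

    # 3 + 6 = 9: 0011 + 0110 = 1001, with bits listed least significant first.
    print(ripple_carry_add([1, 1, 0, 0], [0, 1, 1, 0]))  # ([1, 0, 0, 1], 0)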
The 4-bit adder we just created is called a ripple-carry adder. It gets that name
because the carry bits “ripple” from one adder to the next. This implementation has
the advantage of simplicity but the disadvantage of speed problems. In a real circuit,
gates take time to switch states (the time is on the order of nanoseconds, but in high-
speed computers, nanoseconds matter). Therefore, 32-bit or 64-bit ripple-carry
adders might take 100–200 ns to settle into their final sum, because of carry ripple.
For this reason, engineers have created more advanced adders called carry-
lookahead adders. The number of gates required to implement carry-lookahead is
large, but the settling time for the adder is much better.
There is more to logic gates than this, and further details can be found on Marshall
Brain's web site [1].

6.6 Truth Tables

The rules for combining expressions are usually written down as tables listing all of
the possible outcomes. These are called truth tables, and for the three fundamental
operators, these are:

P Q P AND Q
F F F
F T F
T F F
T T T

P Q P OR Q
F F F
F T T
T F T
T T T

P NOT P
F T
T F

Notice that while the Boolean AND is the same as the English use of the term, the
Boolean OR is a little different.
When you are asked whether you would like "coffee OR tea," you are not expected to say
yes to both!
In the Boolean case, however, “OR” most certainly includes both. When P is true
and Q is true, the combined expression (P OR Q) is also true.
There is a Boolean operator that corresponds to the English use of the term “OR,”
and it is called the “exclusive or” written as EOR or XOR. Its truth table is:

P Q P XOR Q
F F F
F T T
T F T
T T F

This one really would stop you from having both the tea and the coffee at the same time
(notice that the last line is true XOR true = false).

6.6.1 Practical Truth Tables

All this seems very easy, but what value does it have?
It most certainly isn't a model for everyday reasoning, except at the most trivial
"coffee or tea" level.
We do use Boolean logic in our thinking (well, politicians probably don't, but
that's another story), but only at the most trivially obvious level.
However, if you start to design machines that have to respond to the outside
world in even a reasonably complex way, then you quickly discover that Boolean
logic is a great help.
For example, suppose you want to build a security system which works only at
night and responds to a window being opened. If you have a light sensor, you can treat
this as giving off a signal that indicates the truth of the statement:

P = It is daytime.
Clearly NOT(P) is true when it is nighttime, and we have our first practical use
for Boolean logic!
What we really want is something that works out the truth of the statement:
R = Burglary in progress
from P and
Q = Window open
A little raw thought soon gives the solution that
R = NOT(P) AND Q
That is, the truth of “Burglary in progress” is given by the following truth table:

P Q NOT(P) NOT(P) AND Q
F F T F
F T T T
T F F F
T T F F

From this, you should be able to see that the alarm only goes off when it is night-
time and a window opens.
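As a final illustration, the alarm condition translates directly into a couple of lines of Python (a sketch only; the names are chosen for this example):

    def burglary_in_progress(p_daytime, q_window_open):
        return (not p_daytime) and q_window_open  # R = NOT(P) AND Q

    for p in (False, True):
        for q in (False, True):
            print(p, q, "->", burglary_in_progress(p, q))  # True only at night with the window open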

6.7 Summary

The term “Boolean,” often encountered when doing searches on the Web (and
sometimes spelled “boolean”), refers to a system of logical thought developed by
the English mathematician and computer pioneer George Boole (1815–1864). In
Boolean searching, an “AND” operator between two words or other values (e.g.,
“pear AND apple”) means one is searching for documents containing both of the
words or values, not just one of them. An “OR” operator between two words or
other values (e.g., “pear OR apple”) means one is searching for documents contain-
ing either of the words.
In computer operation with binary values, Boolean logic can be used to describe
electromagnetically charged memory locations or circuit states that are either charged
(1 or true) or not charged (0 or false). The computer can use an AND gate or an OR
gate operation to obtain a result that can be used for further processing. The following
table shows the results from applying AND and OR operations to two compared states:

0 AND 0 = 0   1 AND 0 = 0   1 AND 1 = 1
0 OR 0 = 0    0 OR 1 = 1    1 OR 1 = 1

For a summary of logic operations in computers, see the sections on logic gates
earlier in this chapter.

Reference

1. Brain, M. How bits and bytes work. Retrieved from http://computer.howstuffworks.com/bytes.htm
