
Logic gate

A logic gate performs a logical operation on one or more logic inputs and produces a
single logic output. The logic is called Boolean logic and is most commonly found in
digital circuits. Logic gates are primarily implemented electronically using diodes or
transistors, but can also be constructed using electromagnetic relays (relay logic), fluidic
logic, pneumatic logic, optics, molecules, or even mechanical elements.

Truth table
Main article: Truth table

A truth table is a table that describes the behaviour of a logic gate or any combination of
logic gates. It lists the value of the output for every possible combination of the inputs
and can be used to simplify the number of logic gates and level of nesting in an electronic
circuit. In general the truth table does not lead to an efficient implementation; a
minimization procedure, using Karnaugh maps, the Quine–McCluskey algorithm or a
heuristic algorithm, is required to reduce the circuit complexity.
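
As a small illustration of the idea, a truth table can be produced mechanically by
enumerating every combination of inputs and recording the output. The sketch below does
this in Python for a two-input AND gate; the helper name truth_table is ours, not from
any library.

    # Sketch: print the truth table of a gate given as an ordinary function.
    from itertools import product

    def truth_table(gate, n_inputs=2):
        # one row per possible combination of input values
        for inputs in product((0, 1), repeat=n_inputs):
            print(*inputs, '->', gate(*inputs))

    truth_table(lambda a, b: a & b)   # AND: only the row 1 1 yields 1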

Background
Main article: Logic family

The simplest form of electronic logic is diode logic. This allows AND and OR gates to be
built, but not inverters, and so is an incomplete form of logic. Further, without some kind
of amplification it is not possible to have such basic logic operations cascaded as required
for more complex logic functions. To build a functionally complete logic system, relays,
valves (vacuum tubes), or transistors can be used. The simplest family of logic gates
using bipolar transistors is called resistor-transistor logic (RTL). Unlike diode logic gates,
RTL gates can be cascaded indefinitely to produce more complex logic functions. These
gates were used in early integrated circuits. For higher speed, the resistors used in RTL
were replaced by diodes, leading to diode-transistor logic (DTL). Transistor-transistor
logic (TTL) then supplanted DTL with the observation that one transistor could do the
job of two diodes even more quickly, using only half the space. In virtually every type of
contemporary chip implementation of digital systems, the bipolar transistors have been
replaced by complementary field-effect transistors (MOSFETs) to reduce size and power
consumption still further, thereby resulting in complementary metal–oxide–
semiconductor (CMOS) logic.

For small-scale logic, designers now use prefabricated logic gates from families of
devices such as the TTL 7400 series by Texas Instruments and the CMOS 4000 series by
RCA, and their more recent descendants. Increasingly, these fixed-function logic gates
are being replaced by programmable logic devices, which allow designers to pack a large
number of mixed logic gates into a single integrated circuit. The field-programmable
nature of programmable logic devices such as FPGAs has removed the 'hard' property of
hardware; it is now possible to change the logic design of a hardware system by
reprogramming some of its components, thus allowing the features or function of a
hardware implementation of a logic system to be changed.

Electronic logic gates differ significantly from their relay-and-switch equivalents. They
are much faster, consume much less power, and are much smaller (all by a factor of a
million or more in most cases). Also, there is a fundamental structural difference. The
switch circuit creates a continuous metallic path for current to flow (in either direction)
between its input and its output. The semiconductor logic gate, on the other hand, acts as
a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-
impedance voltage at its output. It is not possible for current to flow between the output
and the input of a semiconductor logic gate.

Another important advantage of standardised integrated circuit logic families, such as the
7400 and 4000 families, is that they can be cascaded. This means that the output of one
gate can be wired to the inputs of one or several other gates, and so on. Systems with
varying degrees of complexity can be built without great concern of the designer for the
internal workings of the gates, provided the limitations of each integrated circuit are
considered.

The output of one gate can only drive a finite number of inputs to other gates, a number
called the 'fanout limit'. Also, there is always a delay, called the 'propagation delay', from
a change in input of a gate to the corresponding change in its output. When gates are
cascaded, the total propagation delay is approximately the sum of the individual delays,
an effect which can become a problem in high-speed circuits. Additional delay can be
caused when a large number of inputs are connected to an output, due to the distributed
capacitance of all the inputs and wiring and the finite amount of current that each output
can provide.
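
As a rough illustration of the delay arithmetic just described (the per-gate figures
below are invented for the example, not taken from any datasheet):

    # Invented per-gate propagation delays, in nanoseconds, for a cascade.
    gate_delays_ns = [10, 10, 15, 10]

    # The total propagation delay is approximately the sum of the stages.
    print(sum(gate_delays_ns), "ns")   # 45 ns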

Logic gates
All other types of Boolean logic gates (i.e., AND, OR, NOT, XOR, XNOR) can be
created from a suitable network of NAND gates. Similarly all gates can be created from a
network of NOR gates. Historically, NAND gates were easier to construct from MOS
technology and thus NAND gates served as the first pillar of Boolean logic in electronic
computation.
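
This universality is easy to check by brute force. The sketch below, in plain Python
with 0 and 1 as logic levels, derives NOT, AND, OR and XOR from a single NAND function
and verifies each derived gate against its defining truth table:

    # Sketch: building the other basic gates from NAND alone.
    def nand(a, b):
        return 1 - (a & b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))      # De Morgan: A OR B = (NOT A) NAND (NOT B)

    def xor_(a, b):
        return and_(or_(a, b), nand(a, b))

    for a in (0, 1):
        for b in (0, 1):
            assert and_(a, b) == (a & b)
            assert or_(a, b) == (a | b)
            assert xor_(a, b) == (a ^ b)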

For an input of 2 variables, there are 16 possible Boolean algebraic functions. These 16
functions are enumerated below, together with their outputs for each combination of
input variables.

Venn Diagrams for Logic Gates

INPUT      A  0 0 1 1
           B  0 1 0 1

OUTPUT                Meaning
FALSE      0 0 0 0    Whatever A and B, the output is false. Contradiction.
A AND B    0 0 0 1    Output is true if and only if (iff) both A and B are true.
A ↛ B      0 0 1 0    A doesn't imply B. True iff A but not B.
A          0 0 1 1    True whenever A is true.
A ↚ B      0 1 0 0    A is not implied by B. True iff not A but B.
B          0 1 0 1    True whenever B is true.
A XOR B    0 1 1 0    True iff A is not equal to B.
A OR B     0 1 1 1    True iff A is true, or B is true, or both.
A NOR B    1 0 0 0    True iff neither A nor B.
A XNOR B   1 0 0 1    True iff A is equal to B.
NOT B      1 0 1 0    True iff B is false.
A ← B      1 0 1 1    A is implied by B. False if not A but B, otherwise true.
NOT A      1 1 0 0    True iff A is false.
A → B      1 1 0 1    A implies B. False if A but not B, otherwise true.
A NAND B   1 1 1 0    A and B are not both true.
TRUE       1 1 1 1    Whatever A and B, the output is true. Tautology.

The four functions denoted by arrows are the logical implication functions. These
functions are generally less common, and are usually not implemented directly as logic
gates, but rather built out of gates like AND and OR.
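
The sixteen output columns can also be generated mechanically. In the sketch below, the
row number n of the table above, read as a four-bit binary number with the most
significant bit first, gives the outputs for (A,B) = (0,0), (0,1), (1,0), (1,1):

    # Sketch: regenerate the 16 output columns of the table above.
    for n in range(16):
        outputs = [(n >> (3 - i)) & 1 for i in range(4)]
        print(f"{n:2d}:", *outputs)   # 0: 0 0 0 0 (FALSE) ... 15: 1 1 1 1 (TRUE)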

Symbols

A synchronous 4-bit up/down decade counter symbol (74LS192) in accordance with
ANSI/IEEE Std. 91-1984 and IEC Publication 60617-12.

There are two sets of symbols in common use, both now defined by ANSI/IEEE Std 91-
1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on
traditional schematics, is used for simple drawings, and derives from MIL-STD-806 of
the 1950s and 1960s. It is sometimes unofficially described as "military", reflecting its
origin. The "rectangular shape" set, based on IEC 60617-12, has rectangular outlines for
all types of gate, and allows representation of a much wider range of devices than is
possible with the traditional symbols. The IEC's system has been adopted by other
standards, such as EN 60617-12:1999 in Europe and BS EN 60617-12:1999 in the United
Kingdom.

The goal of IEEE Std 91-1984 was to provide a uniform method of describing the
complex logic functions of digital circuits with schematic symbols. These functions were
more complex than simple AND and OR gates. They could range from medium-scale circuits
such as a 4-bit counter to large-scale circuits such as a microprocessor. IEC 617-12 and
its successor IEC 60617-12 do not include the "distinctive shape" symbols. [1] These are,
however, included in ANSI/IEEE 91 (and 91a) with this note: "The distinctive-shape
symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered
to be in contradiction to that standard." This compromise was reached between the
respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in
mutual compliance with one another.

In the 1980s, schematics were the predominant method to design both circuit boards and
custom ICs known as gate arrays. Today custom ICs and the field-programmable gate
array are typically designed with Hardware Description Languages (HDL) such as
Verilog or VHDL.
Type | Distinctive shape | Rectangular shape | Boolean algebra between A & B | Truth table
[The distinctive-shape and rectangular-shape symbol images are not reproduced here.]

AND (Boolean algebra: A·B)

   INPUT      OUTPUT
   A   B      A AND B
   0   0      0
   0   1      0
   1   0      0
   1   1      1

OR (Boolean algebra: A + B)

   INPUT      OUTPUT
   A   B      A OR B
   0   0      0
   0   1      1
   1   0      1
   1   1      1

NOT (Boolean algebra: Ā)

   INPUT      OUTPUT
   A          NOT A
   0          1
   1          0

In electronics a NOT gate is more commonly called an inverter. The circle on the symbol
is called a bubble, and is used in logic diagrams to indicate a logical inversion between the
external logic state and the internal logic state (1 to 0 or vice versa). On a circuit diagram
it must be accompanied by a statement asserting that the positive logic convention or
negative logic convention is being used (high voltage level = 1 or high voltage level = 0,
respectively). The wedge is used in circuit diagrams to directly indicate an active-low
(high voltage level = 0) input or output without requiring a uniform convention
throughout the circuit diagram. This is called Direct Polarity Indication. See IEEE Std
91/91A and IEC 60617-12. Both the bubble and the wedge can be used on distinctive-
shape and rectangular-shape symbols on circuit diagrams, depending on the logic
convention used. On pure logic diagrams, only the bubble is meaningful.

NAND (Boolean algebra: ¬(A·B))

   INPUT      OUTPUT
   A   B      A NAND B
   0   0      1
   0   1      1
   1   0      1
   1   1      0

NOR (Boolean algebra: ¬(A + B))

   INPUT      OUTPUT
   A   B      A NOR B
   0   0      1
   0   1      0
   1   0      0
   1   1      0

XOR (Boolean algebra: A ⊕ B)

   INPUT      OUTPUT
   A   B      A XOR B
   0   0      0
   0   1      1
   1   0      1
   1   1      0

XNOR (Boolean algebra: ¬(A ⊕ B))

   INPUT      OUTPUT
   A   B      A XNOR B
   0   0      1
   0   1      0
   1   0      0
   1   1      1

Charles Sanders Peirce (1880) showed that NAND gates alone (or alternatively NOR
gates alone) can be used to reproduce the functions of all the other logic gates, but his
work on it was unpublished until 1935. The first published proof was by Henry M.
Sheffer in 1913.

The 7400 chip, containing four NANDs. The two additional pins supply power (+5 V)
and connect the ground.
Two more gates are the exclusive-OR or XOR function and its inverse, exclusive-NOR or
XNOR. The two-input exclusive-OR is true only when the two input values are different,
and false when they are equal, regardless of which value that is. If there are more than
two inputs, the gate generates a true at its output if the number of trues at its input
is odd.[2] In practice, these gates are built from combinations of simpler logic gates.
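
A sketch of this odd-parity behaviour, modelling the gate as a plain Python function
over any number of inputs:

    # Sketch: a multi-input XOR (parity) gate is true when an odd number
    # of its inputs are true.
    def xor_gate(*inputs):
        return sum(inputs) % 2

    print(xor_gate(1, 0, 0))   # 1: one true input (odd)
    print(xor_gate(1, 1, 0))   # 0: two true inputs (even)
    print(xor_gate(1, 1, 1))   # 1: three true inputs (odd)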

De Morgan equivalent symbols


By use of De Morgan's theorem, an AND gate can be turned into an OR gate by inverting
the sense of the logic at its inputs and outputs. This leads to a separate set of symbols
with inverted inputs and the opposite core symbol. These symbols can make circuit
diagrams for circuits using active low signals much clearer and help to show accidental
connection of an active high output to an active low input or vice-versa.

Symbolically, a NAND gate can also be shown using the OR shape with bubbles on its
inputs, and a NOR gate can be shown as an AND gate with bubbles on its inputs. The
bubble signifies a logic inversion. This reflects the equivalence due to De Morgan's laws,
but it also allows a diagram to be read more easily, or a circuit to be mapped onto
available physical gates in packages more easily, since any circuit node that has bubbles
at both ends can be replaced by a simple bubble-less connection and a suitable change of
gate. If the NAND is drawn as an OR with input bubbles, and a NOR as an AND with input
bubbles, this gate substitution occurs automatically in the diagram (effectively, the
bubbles "cancel"). This is commonly seen in real logic diagrams; the reader must therefore
not get into the habit of associating the shapes exclusively with OR or AND functions,
but should also take into account the bubbles at both inputs and outputs in order to
determine the "true" logic function indicated.

All logic relations can be realized by using NAND gates (this can also be done using
NOR gates). De Morgan's theorem is most commonly used to transform all logic gates to
NAND gates or NOR gates. This is done mainly since it is easy to buy logic gates in bulk
and because many electronics labs stock only NAND and NOR gates.
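
These bubble-pushing equivalences can be confirmed exhaustively; a small sketch:

    # Sketch: NAND behaves as an OR gate with inverted (bubbled) inputs,
    # and NOR as an AND gate with inverted inputs, for all input pairs.
    from itertools import product

    for a, b in product((0, 1), repeat=2):
        assert 1 - (a & b) == (1 - a) | (1 - b)   # NAND = OR with input bubbles
        assert 1 - (a | b) == (1 - a) & (1 - b)   # NOR = AND with input bubbles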

Karnaugh map
An example Karnaugh map

The Karnaugh map (K-map for short), Maurice Karnaugh's 1953 refinement of Edward
Veitch's 1952 Veitch diagram, is a method to simplify Boolean algebra expressions. The
Karnaugh map reduces the need for extensive calculations by taking advantage of
humans' pattern-recognition capability, permitting the rapid identification and elimination
of potential race conditions.

In a Karnaugh map the Boolean variables are transferred (generally from a truth table)
and ordered according to the principles of Gray code, in which only one variable changes
between adjacent squares. Once the table is generated and the output possibilities are
transcribed, the data is arranged into the largest possible groups containing 2^n cells
(n = 0, 1, 2, 3, ...)[1] and the minterms are generated through the axiom laws of Boolean
algebra.

Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. The
following is an unsimplified Boolean algebra function with Boolean variables A, B, C,
D, and their inverses, given here by its minterms:

• f(A,B,C,D) = ∑(6,8,9,10,11,12,13,14) Note: The values inside ∑ are the
minterms to map (i.e. the rows whose output is 1 in the truth table).

Truth table

Using the defined minterms, the truth table can be created:

# A B C D f(A,B,C,D)
0 0 0 0 0 0
1 0 0 0 1 0
2 0 0 1 0 0
3 0 0 1 1 0
4 0 1 0 0 0
5 0 1 0 1 0
6 0 1 1 0 1
7 0 1 1 1 0
8 1 0 0 0 1
9 1 0 0 1 1
10 1 0 1 0 1
11 1 0 1 1 1
12 1 1 0 0 1
13 1 1 0 1 1
14 1 1 1 0 1
15 1 1 1 1 0
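
The same table can be rebuilt mechanically from the minterm list; a small sketch:

    # Sketch: rebuild the truth table above from the minterms of f.
    minterms = {6, 8, 9, 10, 11, 12, 13, 14}

    for row in range(16):
        # the row number, read as the 4-bit word ABCD (A most significant)
        a, b, c, d = (row >> 3) & 1, (row >> 2) & 1, (row >> 1) & 1, row & 1
        f = 1 if row in minterms else 0
        print(row, a, b, c, d, f)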

Karnaugh map

K-map showing minterms and boxes covering the desired minterms. The brown region is
an overlapping of the red (square) and green regions.

The input variables can be combined in 16 different ways, so the Karnaugh map has 16
positions, and therefore is arranged in a 4 × 4 grid.

The binary digits in the map represent the function's output for any given combination of
inputs. So 0 is written in the upper leftmost corner of the map because ƒ = 0 when A = 0,
B = 0, C = 0, D = 0. Similarly we mark the bottom right corner as 1 because A = 1, B = 0,
C = 1, D = 0 gives ƒ = 1. Note that the values are ordered in a Gray code, so that
precisely one variable changes between any pair of adjacent cells.

After the Karnaugh map has been constructed the next task is to find the minimal terms to
use in the final expression. These terms are found by encircling groups of 1s in the map.
The groups must be rectangular and must have an area that is a power of two
(i.e. 1, 2, 4, 8…). The rectangles should be as large as possible without containing any 0s.
The optimal groupings in this map are marked by the green, red and blue lines. Note that
groups may overlap. In this example, the red and green groups overlap. The red group is
a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in
brown.

The grid is toroidally connected, which means that the rectangular groups can wrap
around edges, so AD′ is a valid term, although not part of the minimal set; it covers
minterms 8, 10, 12, and 14.

Perhaps the hardest-to-visualize wrap-around term is B′D′, which covers the four corners;
it covers minterms 0, 2, 8, and 10.

Solution

Once the Karnaugh map has been constructed and the groups derived, the solution can be
found by eliminating extra variables within groups using the axiom laws of Boolean
algebra. Equivalently, rather than eliminating the variables that change within a
grouping, the minimal function can be derived by noting which variables stay the same.

For the Red grouping:

• The variable A maintains the same state (1) in the whole encircling, therefore it
should be included in the term for the red encircling.
• Variable B does not maintain the same state (it shifts from 1 to 0), and should
therefore be excluded.
• C does not change: it is always 0. Because C is 0, it has to be negated before it is
included (thus, C′).
• D changes, so it is excluded as well.

Thus the first term in the Boolean sum-of-products expression is AC′.

For the Green grouping we see that A and B maintain the same state, but C and D change.
B is 0 and has to be negated before it can be included. Thus the second term is AB′.

In the same way, the Blue grouping gives the term BCD′.

The solutions of each grouping are combined into:

f(A,B,C,D) = AC′ + AB′ + BCD′
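
A brute-force check that this expression reproduces exactly the minterms given at the
start of the example; a sketch:

    # Sketch: verify f = AC' + AB' + BCD' against the minterm list.
    minterms = {6, 8, 9, 10, 11, 12, 13, 14}

    for row in range(16):
        a, b, c, d = (row >> 3) & 1, (row >> 2) & 1, (row >> 1) & 1, row & 1
        f = (a and not c) or (a and not b) or (b and c and not d)
        assert bool(f) == (row in minterms)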

Inverse

The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored
borders:

• brown—A′B′
• gold—A′C′
• blue—BCD

This yields the inverse:

f(A,B,C,D)′ = A′B′ + A′C′ + BCD

Through the use of De Morgan's laws, the product of sums can be determined:

f(A,B,C,D) = (A + B)(A + C)(B′ + C′ + D′)

Don't cares

The minterm 15 is dropped and replaced as a don't care; this removes the green term
completely but restricts the blue inverse term.

Karnaugh maps also allow easy minimizations of functions whose truth tables include
"don't care" conditions (that is, sets of inputs for which the designer doesn't care what the
output is) because "don't care" conditions can be included in a ring to make it larger.
They are usually indicated on the map with a dash or X.

The example to the right is the same above example but with minterm 15 dropped and
replaced as a don't care. This allows the red term to expand all the way down and, thus,
removes the green term completely.

This yields the new minimum equation:

f(A,B,C,D) = A + BCD′

Note that the first term is just A, not AC′. In this case, the don't care has dropped a
term (the green AB′), simplified another (the red AC′ to A), and removed the race hazard
(the yellow term shown in the following section).

Also, since the inverse case no longer has to cover minterm 15, minterm 7 can be covered
with A′D rather than with BCD, with similar gains.

Race hazards
Elimination

Above K-map with the term AD′ added to avoid race hazards

Karnaugh maps are useful for detecting and eliminating race hazards. They are very easy
to spot using a Karnaugh map, because a race condition may exist when moving between
any pair of adjacent, but disjoint, regions circled on the map.

• In the above example, a potential race condition exists when C is 1 and D is 0, A
is 1, and B changes from 1 to 0 (moving from the blue state to the green state).
For this case, the output is defined to remain unchanged at 1, but because this
transition is not covered by a specific term in the equation, a potential for a glitch
(a momentary transition of the output to 0) exists.
• There is a second glitch in the same example that is more difficult to spot: when D
is 0 and A and B are both 1, with C changing from 1 to 0 (moving from the blue
state to the red state). In this case the glitch wraps around from the top of the map
to the bottom.

Whether these glitches do occur depends on the physical nature of the implementation,
and whether we need to worry about it depends on the application.
In this case, an additional term of AD′ would eliminate the potential race hazard,
bridging between the green and blue output states or the blue and red output states: this
is shown as the yellow region.

The term is redundant in terms of the static logic of the system, but such redundant, or
consensus terms, are often needed to assure race-free dynamic performance.

Similarly, an additional term of A′D must be added to the inverse to eliminate another
potential race hazard. Applying De Morgan's laws creates another product of sums
expression for f, but with a new factor of (A + D′).

2-variable map examples

The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are
the minterms as a function of ∑() and the race-hazard-free (see previous section)
minimum equation.

∑(0); K = 0
∑(1); K = A′B′
∑(2); K = AB′
∑(3); K = A′B
∑(4); K = AB
∑(1,2); K = B′
∑(1,3); K = A′
∑(1,4); K = A′B′ + AB
∑(2,3); K = AB′ + A′B
∑(2,4); K = A
∑(3,4); K = B
∑(1,2,3); K = A′ + B′
∑(1,2,4); K = A + B′
∑(1,3,4); K = A′ + B
∑(2,3,4); K = A + B
∑(1,2,3,4); K = 1

Boolean algebra (logic)


Boolean algebra (or Boolean logic) is a logical calculus of truth values, developed by
George Boole in the 1840s. It resembles the algebra of real numbers, but with the
numeric operations of multiplication xy, addition x + y, and negation −x replaced by the
respective logical operations of conjunction x∧y, disjunction x∨y, and negation ¬x. The
Boolean operations are these and all other operations that can be built from these, such as
x∧(y∨z). These turn out to coincide with the set of all operations on the set {0,1} that
take only finitely many arguments; there are 2^(2^n) such operations when there are n
arguments.

The laws of Boolean algebra can be defined axiomatically as certain equations called
axioms together with their logical consequences called theorems, or semantically as those
equations that are true for every possible assignment of 0 or 1 to their variables. The
axiomatic approach is sound and complete in the sense that it proves respectively neither
more nor fewer laws than the semantic approach.

Values
Boolean algebra is the algebra of two values. These are usually taken to be 0 and 1, as we
shall do here, although F and T, false and true, etc. are also in common use. For the
purpose of understanding Boolean algebra any Boolean domain of two values will do.

Regardless of nomenclature, the values are customarily thought of as essentially logical
in character and are therefore referred to as truth values, in contrast to the natural
numbers or the reals which are considered numerical values. On the other hand the
algebra of the integers modulo 2, while ostensibly just as numeric as the integers
themselves, was shown to constitute exactly Boolean algebra, originally by I.I. Zhegalkin
in 1927 and rediscovered independently in the west by Marshall Stone in 1936. So in fact
there is some ambiguity in the true nature of Boolean algebra: it can be viewed as either
logical or numeric in character.

More generally Boolean algebra is the algebra of values from any Boolean algebra as a
model of the laws of Boolean algebra. For example the bit vectors of a given length, as
with say 32-bit computer words, can be combined with Boolean operations in the same
way as individual bits, thereby forming a 2^32-element Boolean algebra under those
operations. Any such combination applies the same Boolean operation to all bits
simultaneously. This passage from the Boolean algebra of 0 and 1 to these more general
Boolean algebras is the Boolean counterpart of the passage from the algebra of the ring of
integers to the algebra of commutative rings in general. The two-element Boolean algebra
is the prototypical Boolean algebra in the same sense as the ring of integers is the
prototypical commutative ring. Boolean logic as the subject matter of this article is
independent of the choice of Boolean algebra (the same equations hold of every
nontrivial Boolean algebra); hence, there is no need here to consider any Boolean algebra
other than the two-element one. The article on Boolean algebra (structure) treats Boolean
algebras themselves.

Operations
Basic operations

After values, the next ingredient of any algebraic system is its operations. Whereas
elementary algebra is based on numeric operations multiplication xy, addition x + y, and
negation −x, Boolean algebra is customarily based on logical counterparts to those
operations, namely conjunction x∧y (AND), disjunction x∨y (OR), and complement or
negation ¬x (NOT). In electronics, the AND is represented as a multiplication, the OR is
represented as an addition, and the NOT is represented with an overbar: x ∧ y and x ∨ y,
therefore, become xy and x + y.

Conjunction is the closest of these three to its numerical counterpart, in fact on 0 and 1 it
is multiplication. As a logical operation the conjunction of two propositions is true when
both propositions are true, and otherwise is false. The first column of Figure 1 below
tabulates the values of x∧y for the four possible valuations for x and y; such a tabulation
is traditionally called a truth table.

Disjunction, in the second column of the figures, works almost like addition, with one
exception: the disjunction of 1 and 1 is neither 2 nor 0 but 1. Thus the disjunction of two
propositions is false when both propositions are false, and otherwise is true. This is just
the definition of conjunction with true and false interchanged everywhere; because of this
we say that disjunction is the dual of conjunction.

Logical negation however does not work like numerical negation at all. Instead it
corresponds to incrementation: ¬x = x+1 mod 2. Yet it shares in common with numerical
negation the property that applying it twice returns the original value: ¬¬x = x, just as −
(−x) = x. An operation with this property is called an involution. The set {0,1} has two
permutations, both involutory, namely the identity, no movement, corresponding to
numerical negation mod 2 (since +1 = −1 mod 2), and SWAP, corresponding to logical
negation. Using negation we can formalize the notion that conjunction is dual to
disjunction via De Morgan's laws, ¬(x∧y) = ¬x ∨ ¬y and ¬(x∨y) = ¬x ∧ ¬y. These can
also be construed as definitions of conjunction in terms of disjunction and vice versa:
x∧y = ¬(¬x ∨ ¬y) and x∨y = ¬(¬x ∧ ¬y).
Various representations of Boolean operations

Figure 2 shows the symbols used in digital electronics for conjunction and disjunction;
the input ports are on the left and the signals flow through to the output port on the right.
Inverters negating the input signals on the way in, or the output signals on the way out,
are represented as circles on the port to be inverted.

Derived operations

Other Boolean operations are derivable from these by composition. For example
implication x→y (IMP), in the third column of the figures, is a binary operation which is
false when x is true and y is false, and true otherwise. It can be expressed as x→y = ¬x∨y
(the OR-gate of Figure 2 with the x input inverted), or equivalently ¬(x∧¬y) (its De
Morgan equivalent in Figure 3). In logic this operation is called material implication, to
distinguish it from related but non-Boolean logical concepts such as entailment and
relevant implication. The idea is that an implication x→y is by default true (the weaker
truth value in the sense that false implies true but not vice versa) unless its premise or
antecedent x holds, in which case the truth of the implication is that of its conclusion or
consequent y.

Although disjunction is not the exact counterpart of numerical addition, Boolean algebra
nonetheless does have an exact counterpart, called exclusive-or (XOR) or parity, x⊕y. As
shown in the fourth column of the figures, the exclusive-or of two propositions is true just
when exactly one of the propositions is true; equivalently when an odd number of the
propositions is true, whence the name "parity". Exclusive-or is the operation of addition
mod 2. The exclusive-or of any value with itself vanishes, x⊕x = 0, since the arguments
have an even number of whatever value x has. Its digital electronics symbol is shown in
Figure 2, being a hybrid of the disjunction symbol and the equality symbol. The latter
reflects the fact that the negation (which is also the dual) of XOR, ¬(x⊕y), is logical
equivalence, EQV, being true just when x and y are equal, either both true or both false.
XOR and EQV are the only binary Boolean operations that are commutative and whose
truth tables have equally many 0s and 1s. Exclusive-or together with conjunction and the
constant 1 constitute yet another complete basis for Boolean algebra, with the Boolean
operations reformulated as Zhegalkin polynomials.

Another example is Sheffer stroke, x|y, the NAND gate in digital electronics, which is
false when both arguments are true, and true otherwise. NAND is definable by
composition of negation with conjunction as x |y = ¬(x∧y). It does not have its own
schematic symbol as it is easily represented as an AND gate with an inverted output.
Unlike conjunction and disjunction, NAND is a binary operation that can be used to
obtain negation, via the definition ¬x = x|x. With negation in hand one can then in turn
define conjunction in terms of NAND via x∧y = ¬(x|y), from which all other Boolean
operations of nonzero arity can then be obtained. NOR, ¬(x∨y), as the evident dual of
NAND serves this purpose equally well. This universal character of NAND and NOR
makes them a popular choice for gate arrays, integrated circuits with multiple general-
purpose gates.

The above-mentioned duality of conjunction and disjunction is exposed further by De
Morgan's laws, ¬(x∧y) = ¬x∨¬y and ¬(x∨y) = ¬x∧¬y. Figure 3 illustrates De Morgan's
laws by giving for each gate its De Morgan dual, converted back to the original operation
with inverters on both inputs and the outputs. In the case of implication, taking the form
of an OR-gate with one inverter on disjunction, that inverter is cancelled by the second
inverter that would have gone there. The De Morgan dual of XOR is just XOR with an
inverter on the output (there is no separate symbol); as with implication, putting inverters
on all three ports cancels the dual's output inverter. More generally, changing an odd
number of inverters on an XOR gate produces the dual gate, an even number leaves the
gate's functionality unchanged.

As with all the other laws in this section, De Morgan's laws may be verified case by case
for each of the 2^n possible valuations of the n variables occurring in the law, here two
variables and hence 2^2 = 4 valuations. De Morgan's laws play a role in putting Boolean
terms in certain normal forms, one of which we will encounter later in the section on
soundness and completeness.

Figure 4 illustrates the corresponding Venn diagrams for each of the four operations
presented in Figures 1-3. The interior (respectively exterior) of each circle represents the
value true (respectively false) for the corresponding input, x or y. The convention
followed here is to represent the true or 1 outputs as dark regions and false as light, but
the reverse convention is also sometimes used.
All Boolean operations

There are infinitely many expressions that can be built from two variables using the
above operations, suggesting great expressiveness. Yet a straightforward counting
argument shows that only 16 distinct binary operations on two values are possible. Any
given binary operation is determined by its output values for each possible combination
of input values. The two arguments have 2 × 2 = 4 possible combinations of values
between them, and there are 2^4 = 16 ways of assigning an output value to each of these
four input values. The choice of one of these 16 assignments then determines the
operation; so all together there are only 16 distinct binary operations.

The 16 binary Boolean operations can be organized as follows:

Two constant operations, 0 and 1.

Four operations dependent on one variable, namely x, ¬x, y, and ¬y, whose truth tables
amount to two juxtaposed rectangles, one containing two 1s and the other two 0s.

Two operations with a "checkerboard" truth table, namely XOR and EQV.

Four operations are obtained from disjunction with some subset of its inputs negated,
namely x∨y, x→y, y→x, and x|y; their truth tables contain a single 0.

The final four come from the same treatment applied to conjunction, having a single 1 in
their truth tables.

10 of the 16 operations depend on both variables; all are representable schematically as
an AND-gate, an OR-gate, or an XOR-gate, with one port optionally inverted. For the
AND and OR gates the location of each inverter matters, for the XOR gate it does not,
only whether there is an even or odd number of inverters.

Operations of other arities are possible. For example the ternary counterpart of
disjunction can be obtained as (x∨y)∨z. In general an n-ary operation, one having n
inputs, has 2^n possible valuations of those inputs. An operation has two possibilities
for each of these, whence there exist 2^(2^n) n-ary Boolean operations. For example,
there are 2^32 = 4,294,967,296 operations with 5 inputs.
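
Python's arbitrary-precision integers make these counts easy to reproduce; a one-loop
sketch of the 2^(2^n) formula:

    # Sketch: the number of n-ary Boolean operations is 2**(2**n).
    for n in range(6):
        print(n, 2 ** (2 ** n))
    # n = 2 gives 16, and n = 5 gives 4294967296 = 2**32, as stated above.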

Although Boolean algebra confines attention to operations that return a single bit, the
concept generalizes to operations that take n bits in and return m bits instead of one bit.
Digital circuit designers draw such operations as suitably shaped boxes with n wires
entering on the left and m wires exiting on the right. Such multi-output operations can be
understood simply as m n-ary operations. The operation count must then be raised to the
m-th power, or, in the case of n inputs, (2^(2^n))^m = 2^(m·2^n) operations. The number
of Boolean operations of this generalized kind with say 5 inputs and 5 outputs is
2^(5·2^5) = 2^160, about 1.46 × 10^48. A logic gate or computer module mapping 32 bits to
32 bits could implement any of about 5.47 × 10^41,373,247,567 operations, more than is
obtained by squaring a googol 28 times.
Laws
Axioms

With values and operations in hand, the next aspect of Boolean algebra is that of laws or
properties. As with many kinds of algebra, the principal laws take the form of equations
between terms built up from variables using the operations of the algebra. Such an
equation is deemed a law or identity just when both sides have the same value for all
values of the variables, equivalently when the two terms denote the same operation.

Numeric algebra has laws such as commutativity of addition and multiplication, x + y =
y + x and xy = yx. Similarly, Boolean algebra has commutativity in that x ∨ y = y ∨ x for
disjunction and x ∧ y = y ∧ x for conjunction. Not all binary operations are commutative;
Boolean implication is not commutative, like subtraction and division.

Another equally fundamental law is associativity, which in the case of numeric
multiplication is expressed as x(yz) = (xy)z, justifying abbreviating both sides to xyz and
thinking of multiplication as a single ternary operation. All four of numeric addition and
multiplication and logical disjunction and conjunction are associative, giving for the latter
two the Boolean laws x ∨ (y ∨ z) = (x ∨ y) ∨ z and x ∧ (y ∧ z) = (x ∧ y) ∧ z.

Again numeric subtraction and logical implication serve as examples, this time of binary
operations that are not associative. On the other hand exclusive-or, being just addition
mod 2, is both commutative and associative.

Boolean algebra does not completely mirror numeric algebra however, as both
conjunction and disjunction satisfy idempotence, expressed respectively as x ∧ x = x and
x ∨ x = x. These laws are easily verified by considering the two valuations 0 and 1 for x.
But since 2 + 2 = 2 × 2 = 4 in arithmetic, clearly numeric addition and multiplication are
not idempotent. With arithmetic mod 2 on the other hand, multiplication is idempotent,
though not addition since 1 + 1 = 0 mod 2, reflected logically in the idempotence of
conjunction but not of exclusive-or.

A more subtle difference between number and logic is with x(x + y) and x + xy, neither of
which equal x numerically. In Boolean algebra however, both x ∧ (x ∨ y) and x ∨ (x ∧ y)
are equal to x, as can be verified for each of the four possible valuations for x and y.
These two Boolean laws are called the laws of absorption. These laws (both are needed)
together with the associativity, commutativity, and idempotence of conjunction and
disjunction constitute the defining laws or axioms of lattice theory. (Actually
idempotence can be derived from the other axioms.)

Another law common to numbers and truth values is distributivity of multiplication over
addition, when paired with distributivity of conjunction over disjunction. Numerically we
have x(y + z) = xy + xz, whose Boolean algebra counterpart is x ∧ (y ∨ z) =
(x ∧ y) ∨ (x ∧ z). On the other hand Boolean algebra also has distributivity of disjunction
over conjunction, x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z), for which there is no numeric
counterpart, consider 1 + 2 × 3 = 7 whereas (1 + 2) × (1 + 3) = 12. Like associativity,
distributivity has three variables and so requires checking 2^3 = 8 cases.

Either distributivity law for Boolean algebra entails the other. Adding either to the
axioms for lattices axiomatizes the theory of distributive lattices. That theory does not
need the idempotence axioms because they follow from the six absorption, distributivity,
and associativity laws.

Two Boolean laws having no numeric counterpart are the laws characterizing logical
negation, namely x ∧ ¬x = 0 and x ∨ ¬x = 1. These are the only laws thus far that have
required constants. It then follows that x ∧ 0 = x ∧ (x ∧ ¬x) = (x ∧ x) ∧ ¬x = x ∧ ¬x = 0,
showing that 0 works with conjunction in logic just as it does with multiplication of
numbers. Also x ∨ 0 = x ∨ (x ∧ ¬x) = x by absorption. Dualizing this reasoning, we
obtain x ∨ 1 = 1 and x ∧ 1 = x. Alternatively we can justify these laws more directly
simply by checking them for each of the two valuations of x.

The six laws of lattice theory along with these first two laws for negation axiomatize the
theory of complemented lattices. Including either distributivity law then axiomatizes the
theory of complemented distributive lattices. For convenience we collect these nine laws
in one place as follows.

associativity: x ∨ (y ∨ z) = (x ∨ y) ∨ z and x ∧ (y ∧ z) = (x ∧ y) ∧ z
commutativity: x ∨ y = y ∨ x and x ∧ y = y ∧ x
absorption: x ∨ (x ∧ y) = x and x ∧ (x ∨ y) = x
distributivity: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
complements: x ∨ ¬x = 1 and x ∧ ¬x = 0
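
Each of the nine laws just listed can be verified semantically by checking every
valuation of its variables over {0, 1}, as discussed in the next section; a sketch:

    # Sketch: brute-force semantic verification of the nine axioms.
    from itertools import product

    laws = [
        lambda x, y, z: (x or (y or z)) == ((x or y) or z),            # associativity
        lambda x, y, z: (x and (y and z)) == ((x and y) and z),
        lambda x, y, z: (x or y) == (y or x),                          # commutativity
        lambda x, y, z: (x and y) == (y and x),
        lambda x, y, z: (x or (x and y)) == x,                         # absorption
        lambda x, y, z: (x and (x or y)) == x,
        lambda x, y, z: (x and (y or z)) == ((x and y) or (x and z)),  # distributivity
        lambda x, y, z: (x or (1 - x)) == 1,                           # complements
        lambda x, y, z: (x and (1 - x)) == 0,
    ]

    for law in laws:
        assert all(law(x, y, z) for x, y, z in product((0, 1), repeat=3))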

The next two sections show that this theory is sufficient to axiomatize all the valid laws
or identities of two-valued logic, that is, Boolean algebra. It follows that Boolean algebra
as commonly defined in terms of these axioms coincides with the intuitive semantic
notion of the valid identities of two-valued logic.

Derivations

While the Boolean laws enumerated in the previous section are certainly highlights of
Boolean algebra, they by no means exhaust the laws, of which there are infinitely many,
nor do they even exhaust the highlights. As it is out of the question to proceed in the ad
hoc way of the preceding section for ever, the question arises as to how best to present
the remaining laws.
One way of establishing an equation as being a law is to verify its truth for all valuations
of its variables, sometimes called the method of truth tables. This is the method we
depended on in the previous section to justify each law as we introduced it, constituting
the semantic approach to establishing laws. From a practical standpoint the method lends
itself to computer implementation for 20-30 variables because the enumeration of
valuations is straightforward to program and boring to carry out, making it ideal work for
a computer. Because there are 2^n valuations to check, the method starts to become
impractical as 40 variables is approached. Beyond that the approach becomes of value
mainly as the in-principle semantic definition of what constitutes an identically true or
valid equation.

In contrast the syntactic approach is to derive new laws by symbolic manipulation from
already established laws such as those listed in the previous section. (This is not to imply
that derivations of a law shorter than the length of a semantic verification of that law need
exist, although some thousand-variable laws impossible to verify by enumeration of
valuations can have quite short derivations.) Here is an example showing the derivation
of (w∨x)∨(y∨z) = (w∨y)∨(x∨z) from just the commutativity and associativity of
disjunction.

(w∨x)∨(y∨z)
= ((w∨x)∨y)∨z
= (w∨(x∨y))∨z
= (w∨(y∨x))∨z
= ((w∨y)∨x)∨z
= (w∨y)∨(x∨z)

The first two and last two steps appealed to associativity while the middle step used
commutativity.

The rules of derivation for forming new laws from old can be assumed to be those
permissible in high school algebra. For definiteness however it is worthwhile formulating
a well-defined set of rules showing exactly what is needed. These are the domain-
independent rules of equational logic, as sound for logic as they are for numerical
domains or any other kind.

Reflexivity: t = t. That is, any equation whose two sides are the same term t is a law.
(While arguably an axiom rather than a rule since it has no premises, we classify it as a
rule because like the other three rules it is domain-independent, making no mention of
specific logical, numeric, or other operations.)

Symmetry: From s = t infer t = s. That is, the two sides of a law may be interchanged.
Intuitively one attaches no importance to which side of an equation a term comes from.

Transitivity: A chain s = t = u of two laws yields the law s = u. (This law of "cutting out
the middleman" is applied four times in the above example to eliminate the intermediate
terms.)
Substitution: Given two laws and a variable, each occurrence of that variable in the first
law may be replaced by one or the other side of the second law. (Distinct occurrences can
be replaced by distinct sides, but every occurrence must be replaced by one or the other
side.)

While the first equation in the above example might seem simply a straightforward
application of the associativity law, when analyzed more carefully according to the above
rules it can be seen to require something more. We can justify it in terms of the
reflexivity and substitution rules. Beginning with the laws x∨(y∨z) = (x∨y)∨z and w∨x =
w∨x, we use substitution to replace both occurrences of x by w∨x to arrive at the first
equation. All five equations in the chain are accounted for along similar lines, with
commutativity in place of associativity in the middle equation.

Soundness and completeness

It can be shown that the two approaches, semantic and syntactic, to constructing all the
laws of Boolean algebra lead to the same set of laws. We say that the syntactic approach
is sound when it yields a subset of the semantically obtained laws, and complete when it
yields a superset thereof. We can then restate this coinciding of the semantic and
syntactic approaches as the soundness and completeness of the syntactic approach with
respect to (or as calibrated by) the semantic approach.

Soundness follows firstly from the fact that the initial laws or axioms we started from
were all identities, that is, semantically true laws. Secondly it depends on the easily
verified fact that the rules preserve identities.

Completeness can be proved by first deriving a few additional useful laws and then
showing how to use the axioms and rules to prove that a term with n variables, ordered
alphabetically say, is equal to its n-ary normal form, namely a unique term associated
with the n-ary Boolean operation realized by that term with the variables in that order. It
then follows that if two terms denote the same operation (the same thing as being
semantically equal), they are both provably equal to the normal form term denoting that
operation, and hence by transitivity provably equal to each other.

There is more than one suitable choice of normal form, but complete disjunctive normal
form will do. A literal is either a variable or a negated variable. A disjunctive normal
form (DNF) term is a disjunction of conjunctions of literals. (Associativity allows a term
such as x∨(y∨z) to be viewed as the ternary disjunction x∨y∨z, likewise for longer
disjunctions, and similarly for conjunction.) A DNF term is complete when every disjunct
(conjunction) contains exactly one occurrence of each variable, independently of whether
or not the variable is negated. Such a conjunction uniquely represents the operation it
denotes by virtue of serving as a coding of those valuations at which the operation returns
1. Each conjunction codes the valuation setting the positively occurring variables to 1 and
the negated ones to 0; the value of the conjunction at that valuation is 1, and hence so is
the whole term. At valuations corresponding to omitted conjunctions, all conjunctions
present in the term evaluate to 0 and hence so does the whole term.
In outline the general technique for converting any term to its normal form, or
normalizing it, is to use De Morgan's laws to push the negations down to the variables.
This yields monotone normal form, a term built from literals with conjunctions and
disjunctions. For example ¬(x ∨ (¬y∧z)) becomes ¬x ∧ ¬(¬y∧z) and then ¬x ∧
(¬¬y∨¬z). Applying ¬¬x = x then yields ¬x ∧ (y∨¬z).

Next use distributivity of conjunction over disjunction to push all conjunctions down
below all disjunctions, yielding a DNF term. This makes the above example (¬x∧y) ∨
(¬x∧¬z).

Then for each variable y, replace each conjunction x not containing y with the disjunction
of two copies of x, with y conjoined to one copy of x and ¬y conjoined to the other, in the
end yielding a complete DNF term. (This is one place where an auxiliary law helps, in
this case x = x∧1 = x∧(y∨¬y) = (x∧y) ∨ (x∧¬y).) In the above example the first
conjunction lacks z while the second lacks y; expanding appropriately yields the complete
DNF term (¬x∧y∧z) ∨ (¬x∧y∧¬z) ∨ (¬x∧¬z∧y) ∨ (¬x∧¬z∧¬y).

Next use commutativity to put the literals in each conjunction in alphabetical order. The
example becomes (¬x∧y∧z) ∨ (¬x∧y∧¬z) ∨ (¬x∧y∧¬z) ∨ (¬x∧¬y∧¬z). This brings any
repeated copies of literals next to each other; delete the redundant copies using
idempotence of conjunction, not needed in our example.

Lastly order the disjuncts according to a suitable uniformly applied criterion. The
criterion we use here is to read the positive and negative literals of a conjunction as
respectively 1 and 0 bits, and to read the bits in a conjunction as a binary number. In our
example the bits are 011, 010, 010, 000, or in decimal 3, 2, 2, 0. Ordering them
numerically as 0, 2, 2, 3 yields (¬x∧¬y∧¬z) ∨ (¬x∧y∧¬z) ∨ (¬x∧y∧¬z) ∨ (¬x∧y∧z).
Note that these bits are exactly those valuations for x, y, and z that satisfy our original
term ¬(x∨(¬y∧z)). Complete DNF amounts to a canonical way of representing the truth
table for the original term as another term.

Repeated conjunctions can then be deleted using idempotence of disjunction, which
simplifies our example to (¬x∧¬y∧¬z) ∨ (¬x∧y∧¬z) ∨ (¬x∧y∧z).

In this way we have proved that the term we started with is equal to the normal form term
for the operation it denotes. Hence all terms denoting that operation are provably equal to
the same normal form term and hence by transitivity to each other.
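
The complete DNF can also be read straight off the truth table, one conjunction per
satisfying valuation. A sketch for the worked example f(x, y, z) = ¬(x ∨ (¬y ∧ z)):

    # Sketch: complete DNF of a term, computed from its truth table.
    from itertools import product

    def f(x, y, z):
        return int(not (x or ((not y) and z)))

    names = ('x', 'y', 'z')
    conjuncts = []
    for valuation in product((0, 1), repeat=3):
        if f(*valuation):
            literals = [n if v else '¬' + n for n, v in zip(names, valuation)]
            conjuncts.append('(' + '∧'.join(literals) + ')')
    print(' ∨ '.join(conjuncts))
    # (¬x∧¬y∧¬z) ∨ (¬x∧y∧¬z) ∨ (¬x∧y∧z), matching the result derived above
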
Boolean logic
Boolean logic is a complete system for logical operations, used in many systems. It was
named after George Boole, who first defined an algebraic system of logic in
the mid 19th century. Boolean logic has many applications in electronics, computer
hardware and software, and is the basis of all modern digital electronics. In 1938, Claude
Shannon showed how electric circuits with relays were a model for Boolean logic. This
fact soon proved enormously consequential with the emergence of the electronic
computer.

Using the algebra of sets, this article contains a basic introduction to sets, Boolean
operations, Venn diagrams, truth tables, and Boolean applications. The Boolean algebra
article discusses a type of algebraic structure that satisfies the axioms of Boolean logic.
The binary arithmetic article discusses the use of binary numbers in computer systems.

Set logic vs. Boolean logic

Sets can contain any elements. We will first start out by discussing general set logic, then
restrict ourselves to Boolean logic, where elements (or "bits") each contain only two
possible values, called various names, such as "true" and "false", "yes" and "no", "on"
and "off", or "1" and "0".

Terms

Venn diagram showing the intersection of sets "A AND B" (in violet/dark shading), the
union of sets "A OR B" (all the colored regions), and the exclusive OR case "set A XOR
B" (all the colored regions except the violet). The "universe" is represented by all the area
within the rectangular frame.

Let X be a set:

• An element is one member of a set and is denoted by x ∈ X. If the element is not a
member of a set it is denoted by x ∉ X.

• The universe is the set X, sometimes denoted by 1. Note that this use of the word
universe means "all elements being considered", which are not necessarily the
same as "all elements there are".

• The empty set or null set is the set of no elements, denoted by ∅ and sometimes 0.

• A unary operator applies to a single set. There is one unary operator, called
logical NOT. It works by taking the complement with respect to the universe, i.e.
the set of all elements under consideration.

• A binary operator applies to two sets. The basic binary operators are logical OR
and logical AND. They perform the union and intersection of sets. There are also
other derived binary operators, such as XOR (exclusive OR).

• A subset is denoted by A ⊆ B and means every element in set A is also in set B.

• A superset is denoted by A ⊇ B and means every element in set B is also in set A.

• The identity or equivalence of two sets is denoted by A = B and means that
every element in set A is also in set B and every element in set B is also in set A.

• A proper subset is denoted by A ⊂ B and means every element in set A is also
in set B and the two sets are not identical.

• A proper superset is denoted by A ⊃ B and means every element in set B is
also in set A and the two sets are not identical.

Example
Imagine that set A contains all even numbers (multiples of two) in "the universe"
(defined in the example below as all integers between 0 and 30 inclusive) and set B
contains all multiples of three in "the universe". Then the intersection of the two sets (all
elements in sets A AND B) would be all multiples of six in "the universe". The
complement of set A (all elements NOT in set A) would be all odd numbers in "the
universe".
Chaining operations together

While at most two sets are joined in any Boolean operation, the new set formed by that
operation can then be joined with other sets utilizing additional Boolean operations.
Using the previous example, we can define a new set C as the set of all multiples of five
in "the universe". Thus "sets A AND B AND C" would be all multiples of 30 in "the
universe". If more convenient, we may consider set AB to be the intersection of sets A
and B, or the set of all multiples of six in "the universe". Then we can say "sets AB AND
C" are the set of all multiples of 30 in "the universe". We could then take it a step further,
and call this result set ABC.
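
The running example translates directly into Python's built-in set type (a sketch; the
universe is the one defined above, the integers 0 through 30 inclusive):

    # Sketch of the running example with Python sets.
    universe = set(range(31))
    A = {n for n in universe if n % 2 == 0}   # multiples of two
    B = {n for n in universe if n % 3 == 0}   # multiples of three
    C = {n for n in universe if n % 5 == 0}   # multiples of five

    print(sorted(A & B))          # A AND B: the multiples of six
    print(sorted(universe - A))   # NOT A: the odd numbers
    print(sorted(A & B & C))      # A AND B AND C: the multiples of thirty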

Use of parentheses

While any number of logical ANDs (or any number of logical ORs) may be chained
together without ambiguity, the combination of ANDs and ORs and NOTs can lead to
ambiguous cases. In such cases, parentheses may be used to clarify the order of
operations. As always, the operations within the innermost pair are performed first,
followed by the next pair out, etc., until all operations within parentheses have been
completed. Then any operations outside the parentheses are performed.

Application to binary values

In this example we have used natural numbers, while in Boolean logic binary numbers
are used. The universe, for example, could contain just two elements, "0" and "1" (or
"true" and "false", "yes" and "no", "on" or "off", etc.). We could also combine binary
values together to get binary words, such as, in the case of two digits, "00", "01", "10",
and "11". Applying set logic to those values, we could have a set of all values where the
first digit is "0" ("00" and "01") and the set of all values where the first and second digits
are different ("01" and "10"). The intersection of the two sets would then be the single
element, "01". This could be shown by the following Boolean expression, where "1st" is
the first digit and "2nd" is the second digit:

(NOT 1st) AND (1st XOR 2nd)
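
A quick sketch confirming that this expression selects exactly the word "01":

    # Sketch: evaluate (NOT 1st) AND (1st XOR 2nd) over all two-digit words.
    for word in ("00", "01", "10", "11"):
        first, second = int(word[0]), int(word[1])
        if (not first) and (first ^ second):
            print(word)   # prints only: 01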

Properties
We define symbols for the two primary binary operations as ∧ (logical AND/set
intersection) and ∨ (logical OR/set union), and for the single unary operation ¬ / ~
(logical NOT/set complement). We will also use the values 0 (logical FALSE/the empty
set) and 1 (logical TRUE/the universe). The following properties apply to both Boolean
logic and set logic (although only the notation for Boolean logic is displayed here):

associativity: x ∨ (y ∨ z) = (x ∨ y) ∨ z and x ∧ (y ∧ z) = (x ∧ y) ∧ z
commutativity: x ∨ y = y ∨ x and x ∧ y = y ∧ x
absorption: x ∨ (x ∧ y) = x and x ∧ (x ∨ y) = x
distributivity: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) and x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
complements: x ∨ ¬x = 1 and x ∧ ¬x = 0
idempotency: x ∨ x = x and x ∧ x = x
boundedness: x ∨ 0 = x, x ∧ 1 = x, x ∨ 1 = 1 and x ∧ 0 = 0
0 and 1 are complements: ¬0 = 1 and ¬1 = 0
de Morgan's laws: ¬(x ∨ y) = ¬x ∧ ¬y and ¬(x ∧ y) = ¬x ∨ ¬y
involution: ¬¬x = x

The first three properties define a lattice; the first five define a Boolean algebra. The
remaining five are a consequence of the first five.

Other notations
Mathematicians and engineers often use plus (+) for OR and a product sign (·) for AND.
OR and AND are somewhat analogous to addition and multiplication in other algebraic
structures, and this notation makes it very easy to get sum-of-products form for normal
algebra. NOT may be represented by a line drawn above the expression being negated
(e.g. x̄). It also commonly leads to giving · a higher precedence than +, removing the
need for parentheses in some cases.

Programmers will often use a pipe symbol (|) for OR, an ampersand (&) for AND, and a
tilde (~) for NOT. In many programming languages, these symbols stand for bitwise
operations. "||", "&&", and "!" are used for variants of these operations.

Another notation uses "meet" for AND and "join" for OR. However, this can
lead to confusion, as the term "join" is also commonly used for any Boolean operation
which combines sets together, which includes both AND and OR.

Basic mathematics use of Boolean terms


• In the case of simultaneous equations, they are connected with an implied logical
AND:

x+y=2
AND
x-y=2

• The same applies to simultaneous inequalities:

x+y<2
AND
x-y<2

• The greater-than-or-equal sign (≥) and less-than-or-equal sign (≤) may be
assumed to contain a logical OR:

X<2
OR
X=2

• The plus/minus sign (±), as in the case of the solution to a square root problem,
may be taken as logical OR:

WIDTH = 3
OR
WIDTH = -3

English language use of Boolean terms


Care should be taken when converting an English sentence into a formal boolean
statement. Many English sentences have imprecise meanings.
In certain cases, AND and OR can be used interchangeably in English:

• I always carry an umbrella for when it rains and snows.

• I always carry an umbrella for when it rains or snows.

• I never walk in the rain or snow.

Sometimes the English words "and" and "or" have a meaning that is apparently opposite
to their meaning in Boolean logic:

• "Give me all the red and blue berries," usually means, "Give me all berries that
are red or blue". (The former might have been interpreted as a request for berries
that are each both red and blue.) An alternative phrasing for this request would be,
"Give me all berries that are red and all berries that are blue."

Depending on the context, the word "or" may correspond with either logical OR or
logical XOR:

• The waitress asked, "Would you like cream or sugar with your coffee?" (Logical
OR.)

• The waitress asked, "Would you like soup or salad with your meal?" (Logical
XOR.)

Logical XOR can be translated as "one, or the other, but not both". In most cases, this
concept is most effectively communicated in English using "either/or".

The word combination "and/or" is sometimes used in English to specify a logical OR,
when just using the word "or" alone might have been mistaken as meaning logical XOR:

• "I'm having chicken and/or beef for dinner." (Logical OR.) An alternative
phrasing for standard written English would be, "For dinner, I'm having chicken
or beef (or both)."

• The use of "and/or" is generally disfavored in formal writing.[1] Its usage may
introduce critical imprecision in legal agreements, research findings, and
specifications for computer programs or electronic circuits.

This can be a significant challenge when providing precise specifications for a computer
program or electronic circuit in English. The description of such functionality may be
ambiguous. Take for example the statement, "The program should verify that the
applicant has checked the male or female box." This should be interpreted as an XOR
and a verification performed to ensure that one, and only one, box is selected. In other
cases the proper interpretation of English may be less obvious; the author of the
specification should be consulted to determine the original intent.
De Morgan's laws
In formal logic, De Morgan's laws are rules relating the logical operators "and" and "or"
in terms of each other via negation, namely:

NOT (P OR Q) = (NOT P) AND (NOT Q)


NOT (P AND Q) = (NOT P) OR (NOT Q)

Formal definition
In propositional calculus form:

¬(P ∨ Q) ⇔ (¬P) ∧ (¬Q)
¬(P ∧ Q) ⇔ (¬P) ∨ (¬Q)

where:

• ¬ is the negation operator (NOT)
• ∧ is the conjunction operator (AND)
• ∨ is the disjunction operator (OR)
• ⇔ means logically equivalent (if and only if)

In set theory and Boolean algebra, it is often stated as "Union and intersection
interchange under complementation":[1]

(A ∪ B)′ = A′ ∩ B′
(A ∩ B)′ = A′ ∪ B′

where:

• A′ is the complement of A (often written instead with an overline drawn above the
terms to be negated)
• ∩ is the intersection operator (AND)
• ∪ is the union operator (OR)

The generalized form is:

(⋃_{i∈I} A_i)′ = ⋂_{i∈I} (A_i)′
(⋂_{i∈I} A_i)′ = ⋃_{i∈I} (A_i)′

where I is some, possibly uncountable, indexing set.

In set notation, De Morgan's law can be remembered using the mnemonic "break the line,
change the sign".[2]
History
The law is named after Augustus De Morgan (1806–1871)[3] who introduced a formal
version of the laws to classical propositional logic. De Morgan's formulation was
influenced by algebraization of logic undertaken by George Boole, which later cemented
De Morgan's claim to the find. Although a similar observation was made by Aristotle and
was known to Greek and Medieval logicians [4] (in the 14th century William of Ockham
wrote down the words that would result by reading the laws out[5]), De Morgan is given
credit for stating the laws formally and incorporating them into the language of logic. De
Morgan's laws can be proved easily, and may even seem trivial.[6] Nonetheless, these
laws are helpful in making valid inferences in proofs and deductive arguments.

Informal proof
De Morgan's theorem may be applied to the negation of a disjunction or the negation of a
conjunction in all or part of a formula.

Negation of a disjunction

In the case of its application to a disjunction, consider the following claim: it is false that
either A is true or B is true, which is written as:

¬(A ∨ B)

In that it has been established that neither A nor B is true, then it must follow that A is
not true and B is not true. If either A or B were true, then the disjunction of A and B
would be true, making its negation false.

Working in the opposite direction with the same type of problem, consider this claim:

¬A ∧ ¬B

This claim asserts that A is false and B is false (or "not A" and "not B" are true).
Knowing this, a disjunction of A and B must be false also. However, the negation of said
disjunction yields a true result that is logically equivalent to the original claim. Presented
in English, this follows the logic that "Since two things are false, it is also false that either
of them is true."

Negation of a conjunction

The application of De Morgan's theorem to a conjunction is very similar to its application
to a disjunction both in form and rationale. Consider the following claim: It is false that A
and B are both true, which is written as:

¬(A ∧ B)

In order for this claim to be true, either or both of A or B must be false, for if they both
were true, then the conjunction of A and B would be true, making its negation false. So,
the original claim may be translated as "Either A is false or B is false", or "Either 'not A'
is true or 'not B' is true".

Presented in English, this would follow the logic that "Since it is false that two things are
both true, at least one of them must be false."

Proof
By truth table

The laws may be proven directly using truth tables; "1" represents true, "0" represents
false.

First we prove: ¬(p ∨ q) ⇔ (¬p) ∧ (¬q).

p  q  p ∨ q  ¬(p ∨ q)  ¬p  ¬q  (¬p) ∧ (¬q)
0  0    0        1      1   1       1
0  1    1        0      1   0       0
1  0    1        0      0   1       0
1  1    1        0      0   0       0

Since the values in the 4th and last columns are the same for all rows (which cover all
possible truth value assignments to the variables), we can conclude that the two
expressions are logically equivalent.

Now we prove ¬(p ∧ q) ⇔ (¬p) ∨ (¬q) by the same method:

p  q  p ∧ q  ¬(p ∧ q)  ¬p  ¬q  (¬p) ∨ (¬q)
0  0    0        1      1   1       1
0  1    0        1      1   0       1
1  0    0        1      0   1       1
1  1    1        0      0   0       0
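
The same two proofs can be replayed by machine, enumerating all four valuations; a
sketch:

    # Sketch: truth-table verification of both De Morgan laws.
    for p in (0, 1):
        for q in (0, 1):
            assert (not (p or q)) == ((not p) and (not q))
            assert (not (p and q)) == ((not p) or (not q))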

Using sets

Two sets are equal if and only if they have the same elements. For an arbitrary x, the
following are equivalent:

• x ∈ (A ∩ B)′
• x ∉ A ∩ B
• x ∉ A or x ∉ B
• x ∈ A′ or x ∈ B′
• x ∈ A′ ∪ B′

Therefore (A ∩ B)′ = A′ ∪ B′.

(A ∪ B)′ = A′ ∩ B′ can be shown using a similar method.

Extensions
In extensions of classical propositional logic, the duality still holds (that is, to any logical
operator we can always find its dual), since in the presence of the identities governing
negation, one may always introduce an operator that is the De Morgan dual of another.
This leads to an important property of logics based on classical logic, namely the
existence of negation normal forms: any formula is equivalent to another formula where
negations only occur applied to the non-logical atoms of the formula. The existence of
negation normal forms drives many applications, for example in digital circuit design,
where it is used to manipulate the types of logic gates, and in formal logic, where it is a
prerequisite for finding the conjunctive normal form and disjunctive normal form of a
formula. Computer programmers use them to change a complicated statement like IF ...
AND (... OR ...) THEN ... into its opposite. They are also often useful in computations in
elementary probability theory.

Let us define the dual of any propositional operator P(p, q, ...) depending on elementary
propositions p, q, ... to be the operator Pd defined by

Pd(p, q, ...) = ¬P(¬p, ¬q, ...)

This idea can be generalised to quantifiers, so for example the universal quantifier and
existential quantifier are duals:

∀x P(x) ⇔ ¬∃x ¬P(x)
∃x P(x) ⇔ ¬∀x ¬P(x)

To relate these quantifier dualities to the De Morgan laws, set up a model with some
small number of elements in its domain D, such as

D = {a, b, c}.

Then

∀x P(x) ⇔ P(a) ∧ P(b) ∧ P(c)

and

∃x P(x) ⇔ P(a) ∨ P(b) ∨ P(c).

But, using De Morgan's laws,

P(a) ∧ P(b) ∧ P(c) ⇔ ¬(¬P(a) ∨ ¬P(b) ∨ ¬P(c))

and

P(a) ∨ P(b) ∨ P(c) ⇔ ¬(¬P(a) ∧ ¬P(b) ∧ ¬P(c)),

verifying the quantifier dualities in the model.
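
This finite-model verification is also easy to run mechanically. The sketch below tries
every possible predicate P over the domain D = {a, b, c} (the dictionary representation
of P is just an illustrative choice):

    # Sketch: check the quantifier dualities over a three-element domain.
    from itertools import product

    D = ('a', 'b', 'c')
    for values in product((False, True), repeat=len(D)):
        P = dict(zip(D, values))
        forall = all(P[x] for x in D)
        exists = any(P[x] for x in D)
        assert forall == (not any(not P[x] for x in D))   # ∀x P(x) = ¬∃x ¬P(x)
        assert exists == (not all(not P[x] for x in D))   # ∃x P(x) = ¬∀x ¬P(x)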

Then, the quantifier dualities can be extended further to modal logic, relating the box
("necessarily") and diamond ("possibly") operators:

□p ⇔ ¬◇¬p
◇p ⇔ ¬□¬p

In its application to the alethic modalities of possibility and necessity, Aristotle observed
this case, and in the case of normal modal logic, the relationship of these modal operators
to the quantification can be understood by setting up models using Kripke semantics.
