Detailed Notes
Unit-I
Part A: Number System
Number Systems
Convenient as the decimal number system generally is, its usefulness in machine computation is
limited because of the nature of practical electronic devices. In most present digital machines, the
numbers are represented, and the arithmetic operations performed, in a different number system called
the binary number system. This section is concerned with the representation of numbers in various
systems and with methods of conversion from one system to another.
Number Representation
An ordinary decimal number actually represents a polynomial in powers of 10. For example, the
number 123.45 represents the polynomial

123.45 = 1 × 10^2 + 2 × 10^1 + 3 × 10^0 + 4 × 10^−1 + 5 × 10^−2.
This method of representing decimal numbers is known as the decimal number system, and the number
10 is referred to as the base (or radix) of the system. In a system whose base is b, a positive number N
represents the polynomial

N = a_{q−1} × b^{q−1} + a_{q−2} × b^{q−2} + · · · + a_0 × b^0 + a_{−1} × b^{−1} + · · · + a_{−p} × b^{−p},

where the base b is an integer greater than 1 and the a's are integers in the range 0 ≤ a_i ≤ b − 1. The
sequence of digits a_{q−1} a_{q−2} · · · a_0 constitutes the integer part of N, while the sequence a_{−1} a_{−2} · · · a_{−p}
constitutes the fractional part of N. Thus, p and q designate the number of digits in the fractional and
integer parts, respectively. The integer and fractional parts are usually separated by a radix point. The
digit a_{−p} is referred to as the least significant digit while a_{q−1} is called the most significant digit.
When the base b equals 2, the number representation is referred to as the binary number system. For
example, the binary number 1101.01 represents the polynomial

1101.01 = 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 + 0 × 2^−1 + 1 × 2^−2.
The complement of a digit a, denoted a', in base b is defined as

a' = (b − 1) − a.

That is, the complement a' is the difference between the largest digit in base b and the digit a. In the
binary number system, since b = 2, 0' = 1 and 1' = 0. In the decimal number system, the largest digit is
9. Thus, for example, the complement of 3 is 9 − 3 = 6.
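As a small sketch, the digit-complement definition can be expressed directly in Python (the function name digit_complement is mine, for illustration):

```python
def digit_complement(a, b):
    """Return the (b - 1)'s complement of a single digit a in base b,
    i.e., the difference between the largest digit of the base and a."""
    assert 0 <= a <= b - 1
    return (b - 1) - a

print(digit_complement(0, 2))   # 1  (binary: 0' = 1)
print(digit_complement(1, 2))   # 0  (binary: 1' = 0)
print(digit_complement(3, 10))  # 6  (decimal: 9 - 3 = 6)
```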
Base Conversion Methods
Suppose that some number N, which we wish to express in base b2, is presently expressed in base b1.
In converting a number from base b1 to base b2, it is convenient to distinguish between two cases. In
the first case b1 < b2, and consequently base-b2 arithmetic can be used in the conversion process. The
conversion technique involves expressing the number (N)_b1 as a polynomial in powers of b1 and
evaluating the polynomial using base-b2 arithmetic.
When b1 > b2 it is more convenient to use base-b1 arithmetic. The conversion procedure will be
obtained by considering separately the integer and fractional parts of N. Let (N)_b1 be an integer whose
value in base b2 is given by

(N)_b1 = a_{q−1} × b2^{q−1} + a_{q−2} × b2^{q−2} + · · · + a_1 × b2 + a_0.

To find the values of the a's, let us divide the above polynomial by b2:

(N)_b1 / b2 = (a_{q−1} × b2^{q−2} + · · · + a_2 × b2 + a_1) + a_0/b2 = Q_0 + a_0/b2.

Thus, the least significant digit of (N)_b2, i.e., a_0, is equal to the first remainder. The next most
significant digit, a_1, is obtained by dividing the quotient Q_0 by b2:

Q_0 / b2 = (a_{q−1} × b2^{q−3} + · · · + a_2) + a_1/b2 = Q_1 + a_1/b2.
The remaining a's are evaluated by repeated divisions of the quotients until Q_{q−1} is equal to zero. If N
is finite, the process must terminate.
Example We wish to express the numbers (432.2)_8 and (1101.01)_2 in base 10. Thus

(432.2)_8 = 4 × 8^2 + 3 × 8^1 + 2 × 8^0 + 2 × 8^−1 = (282.25)_10,
(1101.01)_2 = 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 + 0 × 2^−1 + 1 × 2^−2 = (13.25)_10.

In both cases, the arithmetic operations are done in base 10.
Example The above conversion procedure is now applied to convert (548)_10 to base 8. The r_i below
denote the remainders. The first entries are 68 and 4, corresponding, respectively, to the quotient Q_0
and the first remainder from the division (548/8)_10. The remaining entries are found by successive
division:

548 ÷ 8 = 68, remainder r_0 = 4
 68 ÷ 8 =  8, remainder r_1 = 4
  8 ÷ 8 =  1, remainder r_2 = 0
  1 ÷ 8 =  0, remainder r_3 = 1

Thus, (548)_10 = (1044)_8. In a similar manner we can obtain the conversion of (345)_10 to (1333)_6.
Example To convert (432.354)_10 to binary, we first convert the integer part and then the fractional part.
For the integer part, successive divisions by 2 yield (432)_10 = (110110000)_2. The fractional part is
converted by repeated multiplication by 2, where at each step the integer part of the product gives the
next binary digit. Consequently (0.354)_10 = (0.0101101 · · ·)_2. The conversion is usually carried up to
the desired accuracy. In our example, reconversion to base 10 shows that

(110110000.0101101)_2 = (432.3515 · · ·)_10.
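The division and multiplication procedures above can be sketched in Python; int_to_base and frac_to_base are illustrative names, not from the text:

```python
def int_to_base(n, b):
    """Convert a non-negative integer to base b by repeated division;
    each remainder is the next least significant digit."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, b)
        digits.append(r)          # remainders a0, a1, ... in order
    return digits[::-1]           # most significant digit first

def frac_to_base(f, b, places):
    """Convert a fraction 0 <= f < 1 to base b by repeated multiplication;
    each integer part carried out is the next digit after the radix point."""
    digits = []
    for _ in range(places):
        f *= b
        d = int(f)
        digits.append(d)
        f -= d
    return digits

print(int_to_base(548, 8))        # [1, 0, 4, 4]        -> (1044)8
print(int_to_base(432, 2))        # (110110000)2
print(frac_to_base(0.354, 2, 7))  # [0, 1, 0, 1, 1, 0, 1]
```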
A considerably simpler conversion procedure may be employed in converting octal numbers (i.e.,
numbers in base 8) to binary and vice versa. Since 8 = 23, each octal digit can be expressed by three
binary digits. For example, (6)_8 can be expressed as (110)_2, etc. The procedure for converting a binary
number into an octal number consists of partitioning the binary number into groups of three digits,
starting from the binary point, and determining the octal digit corresponding to each group.
Example
(123.4)8 = (001 010 011.100)2,
(1010110.0101)2 = (001 010 110.010 100) = (126.24)8.
A similar procedure may be employed in conversions from binary to hexadecimal (base 16), except
that four binary digits are needed to represent a single hexadecimal digit. In fact, whenever a number is
converted from base b1 to base b2, where b2 = b1^k, every group of k digits of that number may be
represented by a single digit from base b2.
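As a sketch of the grouping rule for integer bit strings (the fractional part would be grouped rightward from the binary point in the same way; function names are mine):

```python
def binary_to_octal(bits):
    """Group a binary integer string in threes from the right (the binary
    point) and map each group to one octal digit, since 8 = 2**3."""
    bits = bits.zfill(-(-len(bits) // 3) * 3)   # pad to a multiple of 3
    return ''.join(str(int(bits[i:i+3], 2)) for i in range(0, len(bits), 3))

def octal_to_binary(octs):
    """Each octal digit expands to exactly three binary digits."""
    return ''.join(format(int(d), '03b') for d in octs)

print(binary_to_octal('1010110'))   # '126'
print(octal_to_binary('123'))       # '001010011'
```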
Binary Arithmetic
The binary number system is widely used in digital systems. Although a detailed study of digital
arithmetic is beyond the scope of this book, we shall present the elementary techniques of binary
arithmetic. The basic arithmetic operations are summarized in Table 1.2, where the sum and carry,
difference and borrow, and product are computed for every combination of binary digits (abbreviated
bits) 0 and 1.
Binary addition is performed in a manner similar to that of decimal addition. Corresponding bits are
added and if a carry 1 is produced then it is added to the binary digits at the left.
In subtraction, if a borrow of 1 occurs and the next left digit of the minuend (the number from which a
subtraction is being made) is 1 then the latter is changed to 0 and subtraction is continued in the usual
manner. If, however, the next left digit of the minuend is 0 then it is changed to 1, as is each successive
minuend digit to the left which is equal to 0. The first minuend digit to the left, which is equal to 1, is
changed to 0, and subtraction is continued.
Example The subtraction of (12.50)_10 from (18.75)_10 in binary proceeds as follows: (18.75)_10 =
(10010.11)_2 and (12.50)_10 = (01100.10)_2, and 10010.11 − 01100.10 = 00110.01 = (6.25)_10.
Just as with decimal numbers, the multiplication of binary numbers is performed by successive
addition, while division is performed by successive subtraction.
Example Multiply the binary numbers below:
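The carry-propagating addition and successive-addition multiplication described above can be sketched in Python (helper names like binary_add are illustrative, not from the text):

```python
def binary_add(a, b):
    """Add two bit strings with ripple carry, as in hand addition."""
    a, b = a.zfill(max(len(a), len(b))), b.zfill(max(len(a), len(b)))
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        result.append(str(s % 2))   # sum bit
        carry = s // 2              # carry into the next position left
    if carry:
        result.append('1')
    return ''.join(reversed(result))

def binary_multiply(a, b):
    """Multiply by successive addition of shifted partial products."""
    product = '0'
    for i, bit in enumerate(reversed(b)):
        if bit == '1':
            product = binary_add(product, a + '0' * i)  # shift left i places
    return product

print(binary_add('1101', '1011'))     # '11000'  (13 + 11 = 24)
print(binary_multiply('101', '11'))   # '1111'   (5 x 3 = 15)
```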
Complements of Numbers
In digital computers, complements are used to simplify the subtraction operation and for logical
manipulation. There are two types of complements for each radix system: the radix complement and
the diminished radix complement. The first is referred to as the r's complement and the second as the
(r − 1)'s complement. For example, in the binary system we substitute the base value 2 for r and refer
to the complements as the 2's complement and 1's complement. In the decimal number system, we
substitute the base value 10 for r and refer to the complements as the 10's complement and 9's
complement.
The advantage of performing subtraction by the complement method is a reduction in hardware:
instead of separate addition and subtraction circuits, only adding circuits are needed, i.e., subtraction is
also performed by adders. Instead of subtracting one number from another, the complement of the
subtrahend is added to the minuend. In sign-magnitude form, an additional bit called the sign bit is
placed in front of the number. If the sign bit is 0, the number is positive; if it is 1, the number is negative.
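A minimal sketch of subtraction by the complement method, assuming unsigned bit strings with minuend ≥ subtrahend (function names are mine):

```python
def twos_complement(bits):
    """2's complement = 1's complement (flip every bit) plus one."""
    ones = ''.join('1' if b == '0' else '0' for b in bits)
    return format((int(ones, 2) + 1) % (1 << len(bits)), f'0{len(bits)}b')

def subtract_via_complement(minuend, subtrahend):
    """Subtract by adding the 2's complement of the subtrahend and
    discarding the end carry (valid when minuend >= subtrahend)."""
    n = max(len(minuend), len(subtrahend))
    total = int(minuend, 2) + int(twos_complement(subtrahend.zfill(n)), 2)
    return format(total % (1 << n), f'0{n}b')   # drop the carry out

print(twos_complement('0101'))                  # '1011'
print(subtract_via_complement('1100', '0011'))  # '1001'  (12 - 3 = 9)
```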
Codes
Binary Codes
Although the binary number system has many practical advantages and is widely used in digital
computers, in many cases it is convenient to work with the decimal number system, especially
when the communication between human beings and machines is extensive, since most numerical
data generated by humans is in terms of decimal numbers. To simplify the problem of
communication between human and machine, several codes have been devised in which decimal
digits are represented by sequences of binary digits.
Weighted codes
In order to represent the 10 decimal digits 0, 1, . . . , 9, it is necessary to use at least four binary
digits. Since there are 16 combinations of four binary digits, of which 10 combinations are used, it
is possible to form a very large number of distinct codes. Of particular importance is the class of
weighted codes, whose main characteristic is that each binary digit is assigned a decimal “weight,”
and, for each group of four bits, the sum of the weights of those binary digits whose value is 1 is
equal to the decimal digit which they represent. If w1, w2, w3, and w4 are the given weights of the
binary digits and x1, x2, x3, x4 the corresponding digit values then the decimal digit N = w4x4 + w3x3
+ w2x2 + w1x1 can be represented by the binary sequence x4x3x2x1. The sequence of binary digits
that represents a decimal digit is called a code word. Thus, the sequence x4x3x2x1 is the code word
for N . Three weighted four-digit binary codes are shown in Table 1.3.
The binary digits in the first code in Table 1.3 are assigned weights 8, 4, 2, 1. As a result of this
weight assignment, the code word that corresponds to each decimal digit is the binary equivalent of
that digit; e.g., 5 is represented by 0101, and so on. This code is known as the binary-coded-
decimal (BCD) code. For each code in Table 1.3, the decimal digit that corresponds to a given code
word is equal to the sum of the weights in those binary positions that are 1’s rather than 0’s. Thus,
in the second code, where the weights are 2, 4, 2, 1, decimal 5 is represented by 1011,
corresponding to the sum 2 × 1 + 4 × 0 + 2 × 1 + 1 × 1 = 5. The weights assigned to the binary
digits may also be negative, as in the code (6, 4, 2, −3). In this code, decimal 5 is represented by
1011, since 6 × 1 + 4 × 0 + 2 × 1 − 3 × 1 = 5.
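Decoding a weighted code word is just the weighted sum described above; a small sketch (the function name is illustrative):

```python
def decode_weighted(code_word, weights):
    """Sum the weights of the bit positions that hold a 1."""
    return sum(w for w, bit in zip(weights, code_word) if bit == '1')

print(decode_weighted('0101', (8, 4, 2, 1)))    # 5  (BCD)
print(decode_weighted('1011', (2, 4, 2, 1)))    # 5  (2 + 2 + 1)
print(decode_weighted('1011', (6, 4, 2, -3)))   # 5  (6 + 2 - 3)
```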
It is apparent that the representations of some decimal numbers in the (2, 4, 2, 1) and (6, 4, 2, −3)
codes are not unique. For example, in the (2, 4, 2, 1) code, decimal 7 may be represented by 1101
as well as 0111. Adopting the representations shown in Table 1.3 causes the codes to become self-
complementing. A code is said to be self-complementing if the code word of the “9’s complement
of N ”, i.e., 9 − N , can be obtained from the code word of N by interchanging all the 1’s and 0’s.
For example, in the (6, 4, 2, −3) code, decimal 3 is represented by 1001 while decimal 6 is
represented by 0110. In the (2, 4, 2, 1) code, decimal 2 is represented by 0010 while decimal 7 is
represented by 1101. Note that the BCD code (8, 4, 2, 1) is not self-complementing. It can be
shown that a necessary condition for a weighted code to be self-complementing is that the sum of
the weights must equal 9. There exist only four positively weighted self-complementing codes,
namely, (2, 4, 2, 1), (3, 3, 2, 1), (4, 3, 1, 1), and (5, 2, 1, 1). In addition, there exist 13 self-
complementing codes with positive and negative weights.
Nonweighted codes
There are many nonweighted binary codes, two of which are shown in Table 1.4. The Excess-3
code is formed by adding 0011 to each BCD code word.
Thus, for example, the representation of decimal 7 in Excess-3 is given by 0111 + 0011 = 1010.
The Excess-3 code is self-complementing and possesses a number of properties that made it
practical in early decimal computers.
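The Excess-3 construction and its self-complementing property can be checked exhaustively in a few lines (assuming the usual 4-bit encoding of n + 3):

```python
def excess3(n):
    """Excess-3 code word for a decimal digit: the 4-bit form of n + 3."""
    return format(n + 3, '04b')

# Self-complementing: the code word of 9 - n is the bitwise
# complement of the code word of n, for every decimal digit n.
for n in range(10):
    flipped = excess3(n).translate(str.maketrans('01', '10'))
    assert flipped == excess3(9 - n)

print(excess3(7))   # '1010'  (0111 + 0011)
```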
In many practical applications, e.g., analog-to-digital conversion, it is desirable to use codes in
which the code words for successive decimal integers differ in only one digit. Codes that have
such a property are referred to as cyclic codes. The second code in Table 1.4 is an example of such
a code. (Note that in this, as in all cyclic codes, the code word representing the decimal digits 0
and 9 differ in only one digit.) A particularly important cyclic code is the Gray code. A four-bit
Gray code is shown in Table 1.5.
The feature that makes this cyclic code useful is the simplicity of the procedure for converting from the
binary number system into the Gray code, as follows.
Let g_n · · · g_2 g_1 g_0 denote a code word in the (n + 1)-bit Gray code, and let b_n · · · b_2 b_1 b_0 designate the
corresponding binary number, where the subscripts 0 and n denote the least significant and most significant
digits, respectively. Then, the ith digit g_i can be obtained from the corresponding binary number as follows:

g_i = b_i ⊕ b_{i+1},  0 ≤ i ≤ n − 1,
g_n = b_n,
where the symbol ⊕ denotes the modulo-2 sum, defined by 0 ⊕ 0 = 1 ⊕ 1 = 0 and 0 ⊕ 1 = 1 ⊕ 0 = 1.
For example, the Gray code word that corresponds to the binary number 101101 is found to be 111011,
by taking the modulo-2 sum of each pair of adjacent binary digits.
Thus, to convert from Gray code to binary, start with the leftmost digit and proceed to the least significant
digit, setting b_i = g_i if the number of 1's preceding g_i is even and setting b_i = g_i' (the complement of
g_i) if the number of 1's preceding g_i is odd. (Note that zero 1's counts as an even number of 1's.) For
example, the Gray code word 1001011 represents the binary number 1110010. The proof that the preceding
conversion procedure does indeed work is left to the reader as an exercise.
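Both conversion rules can be sketched in Python (function names are mine):

```python
def binary_to_gray(bits):
    """g_n = b_n; every other g_i is the XOR of adjacent binary digits."""
    return bits[0] + ''.join(
        str(int(bits[i]) ^ int(bits[i - 1])) for i in range(1, len(bits)))

def gray_to_binary(bits):
    """b_i = g_i if the number of 1's before g_i is even, else its complement."""
    result, ones = [], 0
    for g in bits:
        result.append(g if ones % 2 == 0 else str(1 - int(g)))
        ones += g == '1'            # count 1's seen so far in the Gray word
    return ''.join(result)

print(binary_to_gray('101101'))     # '111011'
print(gray_to_binary('1001011'))    # '1110010'
```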
The n-bit Gray code is a member of a class called reflected codes. The term "reflected" is used to designate
codes which have the property that the n-bit code can be generated by reflecting the (n − 1)-bit code, as
illustrated in Fig. 1.1. The two-bit Gray code is shown in Fig. 1.1a. The three-bit Gray code (Fig. 1.1b) can
be obtained by reflecting the two-bit code about an axis at the end of the code and assigning a most
significant bit of 0 above the axis and 1 below the axis. The four-bit Gray code is obtained in the same
manner from the three-bit code, as shown in Fig. 1.1c.
Fig. 1.1 Reflection of the Gray code
The advantage of the Binary Coded Decimal system is that each decimal digit is represented by a
group of 4 binary digits or bits in much the same way as Hexadecimal. So for the 10 decimal digits
(0-to-9) we need a 4-bit binary code. Binary coded decimal is not the same as hexadecimal.
Whereas a 4-bit hexadecimal digit is valid up to F_16, representing binary 1111_2 (decimal 15),
binary coded decimal numbers stop at 9, binary 1001_2. This means that although 16 numbers (2^4)
can be represented using four binary digits, in the BCD numbering system the six binary code
combinations 1010 (decimal 10), 1011 (decimal 11), 1100 (decimal 12), 1101 (decimal 13),
1110 (decimal 14), and 1111 (decimal 15) are classed as forbidden numbers and cannot be used.
The main advantage of binary coded decimal is that it allows easy conversion between decimal
(base-10) and binary (base-2) form. However, the disadvantage is that BCD code is wasteful, as the
states between 1010 (decimal 10) and 1111 (decimal 15) are not used. Nevertheless, binary coded
decimal has many important applications, especially in digital displays.
In the BCD numbering system, a decimal number is separated into four bits for each decimal digit
within the number. Each decimal digit is represented by its weighted binary value performing a
direct translation of the number. So a 4-bit group represents each displayed decimal digit
from 0000 for a zero to 1001 for a nine.
So, for example, (357)_10 (three hundred and fifty-seven) in decimal would be represented in binary
coded decimal as

3 → 0011,  5 → 0101,  7 → 0111,

i.e., (357)_10 = 0011 0101 0111 in BCD. The weights used in the binary coded decimal code are
8, 4, 2, 1; it is commonly called the 8421 code, as each four-bit group forms the binary
representation of the relevant decimal digit. The relationship between decimal (denary) numbers
and weighted binary coded decimal digits follows directly from this 8421 weighting.
Errors may be introduced into binary-coded information because of equipment failure or noise in the
transmission channel. In any practical system there is always a finite probability of occurrence of a
single error. The probability that two or more errors will occur simultaneously, although nonzero, is
substantially smaller. We, therefore, restrict our discussion mainly to the detection and correction of
single errors.
Error-detecting codes
In a four-bit binary code, the occurrence of a single error in one of the binary digits may result in
another, incorrect but valid, code word. For example, in the BCD code (see above), if an error
occurs in the least significant digit of 0110 then the code word 0111 results and, since it is a valid
code word, it is incorrectly interpreted by the receiver. If a code possesses the property that the
occurrence of any single error transforms a valid code word into an invalid code word, it is said to
be a (single-)error-detecting code. Two error-detecting codes are shown in Table 1.6.
Error detection in either code in Table 1.6 is accomplished by a parity check. The basic idea in a
parity check is to add an extra digit to each code word of a given code so as to make the number of
1’s in each code word either odd or even. In the codes of Table 1.6 we have used even parity. The
even-parity BCD code is obtained directly from the BCD code of Table 1.3. The added bit, denoted
p, is called the parity bit. The 2-out-of-5 code consists of all 10 possible combinations of two 1’s in
a five-bit code word. With the exception of the code word for decimal 0, the 2-out-of-5 code of
Table 1.6 is a weighted code and can be derived from the (1, 2, 4, 7) code.
In each of the codes in Table 1.6 the number of 1’s in a code word is even. Now, if a single error
occurs it transforms the valid code word into an invalid one, thus making the detection of the error
straightforward. Although parity check is intended only for the detection of single errors, it, in fact,
detects any odd number of errors and some even numbers of errors. For example, if the code word
10100 is received in an even-parity BCD message, it is clear that the message is erroneous, since
such a code word is not defined although the parity check is satisfied. We cannot determine,
however, the original transmitted word.
In general, to obtain an n-bit error-detecting code, no more than half the 2^n possible combinations
of digits can be used. The code words are chosen in such a manner that, in order to change one
valid code word into another valid code word, at least two digits must be complemented. In the case
of four-bit codes this constraint means that only eight valid code words can be formed of the 16
possible combinations. Thus, to obtain an error-detecting code for the 10 decimal digits, at least
five binary digits are needed. It is useful to define the distance between two code words as the
number of digits that must change in one word so that the other word results. For example, the
distance between 1010 and 0100 is three, since the two code words differ in three bit positions. The
minimum distance of a code is the smallest number of bits in which any two code words differ.
Thus, the minimum distance of the BCD or the Excess-3 codes is one, while that of the codes in
Table 1.6 is two. Clearly, a code is an error-detecting code if and only if its minimum distance is
two or more.
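The distance definitions can be checked mechanically; the sketch below builds an even-parity BCD code from the plain BCD code words (appending a parity bit that makes the count of 1's even, as in Table 1.6) and verifies the minimum distances quoted above:

```python
from itertools import combinations

def hamming_distance(u, v):
    """Number of bit positions in which two code words differ."""
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(code):
    """Smallest distance between any pair of distinct code words."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

bcd = [format(n, '04b') for n in range(10)]
even_parity_bcd = [w + str(w.count('1') % 2) for w in bcd]

print(hamming_distance('1010', '0100'))    # 3
print(minimum_distance(bcd))               # 1
print(minimum_distance(even_parity_bcd))   # 2
```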
Error-correcting codes
For a code to be error-correcting, its minimum distance must be further increased. For example,
consider the three-bit code which consists of only two valid code words, 000 and 111. If a single
error occurs in the first code word, it could become 001, 010, or 100. The second code word could
be changed by a single error to 110, 101, or 011. Note that in each case the invalid code words are
different. Clearly, this code is error-detecting since its minimum distance is three. Moreover, if we
assume that only a single error can occur then this error can be located and corrected, since every
error results in an invalid code word that can be associated with only one of the valid code words.
Thus, the two code words 000 and 111 constitute an error-correcting code whose minimum distance
is three. In general, a code is said to be error-correcting if the correct code word can always be
deduced from the erroneous word. In this section, we shall discuss a type of single-error-correcting
codes known as Hamming codes.
If the minimum distance of a code is three, then any single error changes a valid code word into an
invalid one, which is distance one away from the original code word and distance two from any
other valid code word. Therefore, in a code with minimum distance three, any single error is
correctable or any double error detectable. Similarly, a code whose minimum distance is four may
be used for either single-error correction and double-error detection or triple-error detection. The
key to error correction is that it must be possible to detect and locate erroneous digits. If the
location of an error has been determined then, by complementing the erroneous digit, the message
is corrected.
The basic principles in constructing a Hamming error-correcting code are as follows. To each group
of m information or message digits, k parity-checking digits, denoted p1, p2, . . . , pk , are added to
form an (m + k)-digit code. The location of each of the m + k digits within a code word is assigned
a decimal value; one starts by assigning a 1 to the most significant digit and m + k to the least
significant digit. Then k parity checks are performed on selected digits of each code word. The
result of each parity check is recorded as 1 or 0, depending, respectively, on whether an error has or
has not been detected. These parity checks make possible the development of a binary number,
c1c2 · · · ck , whose value is equal to the decimal value assigned to the location of the erroneous digit
when an error occurs and is equal to zero if no error occurs. This number is called the position (or
location) number.
The number k of digits in the position number must be large enough to describe the location of any
of the m + k possible single errors, and must in addition take on the value zero to describe the “no
error" condition. Consequently, k must satisfy the inequality 2^k ≥ m + k + 1. Thus, for example, if
the original message is in BCD, where m = 4, then k = 3 and at least three parity-checking digits
must be added to the BCD code. The resultant error-correcting code thus consists of seven digits. In
this case, if the position number is equal to 101, it means that an error has occurred in position 5. If,
however, the position number is equal to 000, the message is correct.
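The inequality 2^k ≥ m + k + 1 can be solved by a simple search (the function name is illustrative):

```python
def parity_bits_needed(m):
    """Smallest k with 2**k >= m + k + 1, so the position number can
    name any of the m + k single-error locations plus 'no error'."""
    k = 1
    while 2 ** k < m + k + 1:
        k += 1
    return k

print(parity_bits_needed(4))    # 3  -> a seven-digit code for BCD messages
print(parity_bits_needed(11))   # 4  -> the (15, 11) Hamming code
```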
In order to be able to specify the checking digits by means of message digits only, and
independently of each other, they are placed in positions 1, 2, 4, . . . , 2^{k−1}. Thus, if m = 4 and k = 3
then the checking digits are placed in positions 1, 2, and 4 while the remaining positions contain the
original (BCD) message bits. For example, in the code word 1100110, the checking digits (in
boldface) are p1 = 1, p2 = 1, p3 = 0, while the message digits are 0, 1, 1, 0, which correspond to
decimal 6.
We shall now show how the Hamming code is constructed, by constructing the code for m = 4 and
k = 3. As discussed above, the parity-checking digits must be specified in such a way that, when an
error occurs, the position number will take on the value assigned to the location of the erroneous
digit. Table 1.7 lists the seven error positions and the corresponding values of the position number.
It is evident that if an error occurs in position 1, or 3, or 5, or 7, the least significant digit, i.e., c3, of
the position number must be equal to 1. If the code is constructed so that in every code word the
digits in positions 1, 3, 5, and 7 have even parity, then the occurrence of a single error in any of
these positions will cause an odd parity. In such a case, the least significant digit of the position
number is recorded as 1. If no error occurs among these digits, a parity check will show an even
parity and the least significant digit of the position number is recorded as 0.
From Table 1.7, we observe that an error in positions 2, 3, 6, or 7 should result in the recording of a
1 in the center of the position number. Hence, the code must be designed so that the digits in
positions 2, 3, 6, and 7 have even parity. Again, if the parity check of these digits shows an odd
parity then the corresponding position-number digit, i.e., c2, is set to 1; otherwise it is set to 0.
Finally, if an error occurs in positions 4, 5, 6, or 7 then the most significant digit of the position
number, i.e., c1, should be a 1. Therefore, if digits 4, 5, 6, and 7 are designed to have even parity, an
error in any of these digits will be recorded as a 1 in the most significant digit of the position
number. To summarize the situation regarding the checking digits pi :
p1 is selected so as to establish even parity in positions 1, 3, 5, 7;
p2 is selected so as to establish even parity in positions 2, 3, 6, 7;
p3 is selected so as to establish even parity in positions 4, 5, 6, 7.
The code can now be constructed by adding the appropriate checking digits to the message digits.
Consider, for example, the message 0100 (i.e., decimal 4), as shown in the table below.
Thus checking digit p1 is set equal to 1 so as to establish even parity in positions 1, 3, 5, and 7.
Similarly, it is evident that p2 must be 0 and p3 must be 1, so that even parity is established,
respectively, in positions 2, 3, 6, and 7 and 4, 5, 6, and 7. The Hamming code for the decimal digits
coded in BCD is shown in Table 1.8.
Error location and correction are performed for the Hamming code in the following manner.
Suppose, for example, that the sequence 1101001 is transmitted but, owing to an error in the fifth
position, the sequence 1101101 is received. The location of the error can be determined by
performing three parity checks as follows:
Thus, the position number formed as c1c2c3 is 101, which means that the location of the error is in
position 5. To correct the error, the digit in position 5 is complemented and the correct message
1101001 is obtained.
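The (m = 4, k = 3) construction and correction described above, with parity groups {1, 3, 5, 7}, {2, 3, 6, 7}, and {4, 5, 6, 7}, can be sketched in Python (function names are mine):

```python
def hamming_encode(m):
    """Place message bits in positions 3, 5, 6, 7 and choose p1, p2, p3
    (positions 1, 2, 4) for even parity over {1,3,5,7}, {2,3,6,7},
    {4,5,6,7}.  m is a 4-character bit string."""
    w = [None, 0, 0, int(m[0]), 0, int(m[1]), int(m[2]), int(m[3])]
    w[1] = w[3] ^ w[5] ^ w[7]
    w[2] = w[3] ^ w[6] ^ w[7]
    w[4] = w[5] ^ w[6] ^ w[7]
    return ''.join(map(str, w[1:]))

def hamming_correct(word):
    """Recompute the three parity checks; c1 c2 c3 is the position number,
    and a nonzero value names the erroneous digit to complement."""
    w = [None] + [int(b) for b in word]
    c3 = w[1] ^ w[3] ^ w[5] ^ w[7]
    c2 = w[2] ^ w[3] ^ w[6] ^ w[7]
    c1 = w[4] ^ w[5] ^ w[6] ^ w[7]
    pos = 4 * c1 + 2 * c2 + c3
    if pos:
        w[pos] ^= 1                 # complement the erroneous digit
    return ''.join(map(str, w[1:]))

print(hamming_encode('0100'))       # '1001100'  (decimal 4, as in the text)
print(hamming_correct('1101101'))   # '1101001'  (error in position 5 corrected)
```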
It is easy to prove that the Hamming code constructed as shown above is a code whose distance is
three. Consider, for example, the case where the two original four-bit (code) words differ in only
one position, e.g., 1001 and 0001. Since each message digit appears in at least two parity checks,
the parity checks that involve the digit in which the two code words differ will result in different
parities and hence different checking digits will be added to the two words, making the distance
between them equal to three. For example, consider the two words below.
The two words differ in only m1 (i.e., position 3). Parity checks 1-3-5-7 and 2-3-6-7 for these two
words will give different results. Therefore, the parity-checking digits p1 and p2 must be different
for these words. Clearly, the foregoing argument is valid in the case where the original code words
differ in two of the four positions. Thus, the Hamming code has a distance of three.
If the distance is increased to four, by adding a parity bit to the code in Table 1.8 in such a way that
all eight digits have even parity, the code may be used for single-error correction and double-error
detection in the following manner. Suppose that two errors occur; then the overall parity check is
satisfied but the position number (determined as before from the first seven digits) will indicate an
error. Clearly, such a situation indicates the existence of a double error. The error positions,
however, cannot be located. If only a single error occurs, the overall parity check will detect it.
Now, if the position number is 0 then the error is in the last parity bit; otherwise, it is in the position
given by the position number. If all four parity checks indicate even parities then the message is
correct.
Unit I
Part B: Boolean Algebra and Switching Functions
Switching algebra
The basic concepts of switching algebra will be introduced by means of a set of postulates, from which
we shall derive useful theorems and develop necessary tools that will enable us to manipulate and
simplify algebraic expressions.
Fundamental postulates
The basic postulate of switching algebra is the existence of a two-valued switching variable that can
take either of two distinct values, 0 and 1. Precisely stated, if x is a switching variable then

x ≠ 0 if and only if x = 1,
x ≠ 1 if and only if x = 0.
A switching algebra is an algebraic system consisting of the set {0, 1}, two binary operations called
OR and AND, denoted by the symbols + and · respectively, and one unary operation called NOT,
denoted by a prime.
Thus the OR combination of two switching variables x + y is equal to 1 if the value of either x or y is 1
or if the values of both x and y are 1. The AND combination of these variables x · y is equal to 1 if and
only if the values of x and y are both equal to 1. The result of the OR operation is very often called the
(logical) sum or union and may be denoted by ∪ or ∨. The result of the AND operation is referred to
as the (logical) product or intersection, and is denoted by ∩ or ∧. We shall generally omit the dot · and
write xy to mean x · y.
The preceding postulates and definitions of switching operations enable us to derive many useful
theorems and develop an entire algebraic structure that may be advantageously applied to switching
circuits.
Basic Theorems and Properties
The first property, which drastically differs from the algebra of real numbers and accounts for the special characteristics of switching algebra, is the idempotent law for a switching variable x:
x + x = x, (3.1)
x · x = x. (3.2)
To prove this property, we shall employ perfect induction. Perfect induction is a method of proof whereby a theorem is verified for every possible combination of values that the variables may assume. Since x is a two-valued variable, x + x = x may assume the values 1 + 1 = 1 and 0 + 0 = 0. These equations, being identities, clearly verify the validity of Eq. (3.1), and similarly for Eq. (3.2) we have 1 · 1 = 1 and 0 · 0 = 0.
The following two pairs of relations establish the commutativity and associativity of the switching operations. The convention adopted for parenthesizing is that of ordinary algebra, where x + y · z means x + (y · z) and not (x + y) · z. Let x, y, and z be switching variables. Then
x + y = y + x and x · y = y · x (commutativity),
(x + y) + z = x + (y + z) and (x · y) · z = x · (y · z) (associativity).
The properties established by Eqs. (3.2) through (3.12) can be proved by the method of perfect
induction. The actual proofs are left to the reader as exercises. It is the associative law which enables us
to extend the definitions of the AND and OR operations to more than two variables, i.e., we write T = x
+ y + z to mean that T equals 1 if any of x, y, or z, or any combination thereof, equals 1.
In switching algebra, multiplication distributes over addition and addition distributes over multiplication, a property known as the distributive law:
x(y + z) = xy + xz, (3.13)
x + yz = (x + y)(x + z). (3.14)
To verify Eq. (3.13) for every possible combination of values of x, y, and z, it is convenient to tabulate
these combinations in a table called a truth table or table of combinations. Since every variable may
assume one of two values, 0 or 1, the truth table for the three variables contains 2³ = 8 combinations.
These combinations are tabulated in the leftmost column of Table 3.1.
The value of x(y + z) is computed for every possible combination of x and y + z. The value of xy + xz is
computed independently by adding the entries in columns xy and xz. Since the two different methods of
computation yield identical results, as shown in the two rightmost columns, Eq. (3.13) is verified.
We observe that all the preceding properties are grouped in pairs. Within each pair, one statement can
be obtained from the other by interchanging the OR and AND operations and replacing the constants 0
and 1 by 1 and 0, respectively. Any two statements or theorems that have this property are called dual,
and this quality of duality that characterizes switching algebra is known as the principle of duality. It
stems from the symmetry of the postulates and definitions of switching algebra with respect to the two
operations and two constants. The implication of the concept of duality is that it is necessary to prove only one statement of each pair, because its dual is thereby also proved.
The method of proof by perfect induction is efficient as long as the number of combinations for which the statement is to be verified is small. In other cases, algebraic procedures are more appropriate, such as those demonstrated in the following proof of Eq. (3.15).
Another property of switching expressions, important in their simplification, is the following:
x + xy = x, (3.15)
x(x + y) = x, (3.16)
x + x'y = x + y, (3.17)
x(x' + y) = xy. (3.18)
The consensus theorem is noteworthy in that it is used frequently in the simplification of switching expressions. It is stated in the following two equations:
xy + x'z + yz = xy + x'z, (3.19)
(x + y)(x' + z)(y + z) = (x + y)(x' + z). (3.20)
The preceding properties permit a variety of manipulations on switching expressions. In particular, they
enable us (whenever possible) to convert an expression into an equivalent one with fewer literals,
where by a literal we mean an appearance of a variable or its complement. For example, while the left-hand side of Eq. (3.19) consists of six literal appearances, its right-hand side consists of only four appearances. If the value of a switching expression is independent of the value of some literal xi, then xi is said to be redundant. Equations (3.1) through (3.20) provide, among other things, the tools for manipulating expressions so as to eliminate redundant literals.
It is important to observe that no inverse operations are defined in switching algebra and, consequently,
no cancellations are allowed. For example, if A + B = A + C, the equality of B and C is not implied; in fact, if A = B = 1 and C = 0 then 1 + 1 = 1 + 0, but B ≠ C. Similarly, B is not necessarily equal to C if AB = AC.
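The cancellation counterexample can be restated directly in code; this is simply the text's own example executed, with nothing added:

```python
# A + B = A + C does not imply B = C: with A = B = 1 and C = 0
# both logical sums equal 1, yet B and C differ.
A, B, C = 1, 1, 0
assert (A | B) == (A | C)        # 1 + 1 and 1 + 0 are both 1
assert B != C                    # yet B and C differ
# Likewise AB = AC does not imply B = C (take A = 0):
assert (0 & B) == (0 & C) and B != C
```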
Hence, T (x, y, z) is actually independent of the values of x and y and depends only on z.
De Morgan’s theorems
The rules governing complementation operations are summarized by three theorems. The first is the involution theorem:
(x')' = x. (3.21)
The second and third are De Morgan's theorems:
(x + y)' = x'y', (3.22)
(xy)' = x' + y'. (3.23)
Proof The proof of Eq. (3.22) follows by perfect induction, using the truth table of Table 3.2;
(x + y)' and x'y' are computed independently and are shown to be identical for all possible combinations of values of x and y. The proof of Eq. (3.23) then follows by the principle of duality.
For n variables, Eqs. (3.22) and (3.23) can be generalized as follows: the complement of any expression can be obtained by replacing each variable and element with its complement and, at the same time, interchanging the OR and AND operations, that is,
[f(x1, x2, . . . , xn, +, ·)]' = f(x1', x2', . . . , xn', ·, +). (3.24)
Equation (3.24) is known as the general De Morgan’s theorem and its proof follows immediately from
Eq. (3.22) and mathematical induction on the number of operations.
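The general theorem is easy to check mechanically for a concrete case. In the following sketch the expression f = (x + y)z is our own illustrative choice; its De Morgan complement is obtained, as Eq. (3.24) prescribes, by complementing every variable and interchanging OR and AND, giving x'y' + z':

```python
from itertools import product

def f(x, y, z):
    # An illustrative expression of our own: f = (x + y)z
    return (x | y) & z

def f_complement(x, y, z):
    # General De Morgan: complement every variable and
    # interchange OR and AND, giving x'y' + z'
    return ((x ^ 1) & (y ^ 1)) | (z ^ 1)

def demorgan_holds():
    """Perfect induction: f' and the De Morganized form agree everywhere."""
    return all((f(x, y, z) ^ 1) == f_complement(x, y, z)
               for x, y, z in product((0, 1), repeat=3))
```

`demorgan_holds()` returns True, confirming the transformation for all eight input combinations.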
It is necessary first to apply De Morgan's theorem and then to multiply out the expressions in parentheses:
Example Prove the following identity:
From the application of Eq. (3.19) to x'y + yz, it follows that the term x'z may be added to the left-hand side of the equation; i.e., the equation becomes
Another application of Eq. (3.19) to the first, third, and fourth terms in the augmented left-hand side
of the equation shows that yz is redundant. After elimination of yz, the left-hand side of the equation is
identical to its right-hand side (i.e., both consist of identical terms), and thus the proof is complete.
Switching Functions
Definitions
Let T (x1, x2, . . . , xn) be a switching expression. Since each of the variables x1, x2, . . . , xn can
independently assume either of the two values 0 or 1, there are 2ⁿ combinations of values to be
considered in determining the values of T . In order to determine the value of an expression for a given
combination, it is only necessary to substitute the values for the variables in the expression. For
example, if
then, for the combination x = 0, y = 0, z = 1, the value of the expression is 1 because T (0, 0, 1) = 0’1 +
01’ + 0’0’ = 1. In a similar manner, the value of T may be computed for every combination, as shown
in the right-hand column of Table 3.3.
If we now repeat the above procedure and construct the truth table for the expression
we find that it is identical to that of Table 3.3. Hence, for every possible combination of the variables, the value of the second expression is identical to the value of the first. Thus different
switching expressions may represent the same assignment of values specified by the right-hand column
of a truth table. The values assumed by an expression for all the combinations of variables x1, x2, . . . ,
xn define a switching function. In other words, a switching function f (x1, x2, . . . , xn) is a
correspondence that associates an element of the algebra with each of the 2ⁿ combinations of variables
x1, x2, . . . , xn. This correspondence is best specified by means of a truth table. Note that each truth
table defines only one switching function, although this function may be expressed in a number of
ways.
The complement f'(x1, x2, . . . , xn) is a function whose value is 1 whenever the value of f (x1, x2, . . . , xn) is 0, and 0 whenever the value of f is 1. The sum of two functions f (x1, x2, . . . , xn) and g(x1, x2, . . . , xn) is 1 for every combination in which either f or g or both equal 1, while their product is equal to 1 if and only if both f and g equal 1. If a function f (x1, x2, . . . , xn) is specified by means of a truth table, its complement is obtained by complementing each entry in the column headed f. New functions that are equal to the sum f + g and the product f·g are obtained by adding or multiplying the corresponding entries in the f and g columns.
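These column-wise operations can be shown concretely. In the sketch below the two functions f = xy + z and g = x + yz are our own illustrative choices; the rows are ordered 000, 001, …, 111 as usual:

```python
from itertools import product

# Truth-table columns for two illustrative functions,
# f = xy + z and g = x + yz.
rows = list(product((0, 1), repeat=3))
f = [(x & y) | z for x, y, z in rows]
g = [x | (y & z) for x, y, z in rows]

f_comp    = [v ^ 1 for v in f]               # complement each entry of f
f_plus_g  = [a | b for a, b in zip(f, g)]    # add corresponding entries
f_times_g = [a & b for a, b in zip(f, g)]    # multiply corresponding entries
```

The derived columns f', f + g, and f·g are produced entry by entry from the f and g columns, exactly as the text describes.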
Example Two functions f (x, y, z) and g(x, y, z) are specified in columns f and g of Table 3.4. The complement
f', the sum f + g, and the product f·g are specified in the corresponding columns.
Simplification of Expressions
The truth table assigns to each combination of variable values a specific switching element.
Consequently, all the properties of switching elements (Eqs. (3.1) through (3.24)) are valid when the
elements are replaced by expressions. For example, xy + xyz = xy by virtue of the property established
in Eq. (3.15).
First, apply the consensus theorem, Eq. (3.19), to the first three terms of T, letting x, y, and z replace
A’, C’ , and BD, respectively. As a result the third term, BC’D, is redundant. Next, apply the
distributive law, Eq. (3.13), to the fourth and fifth terms. This gives the expression AD’ (B’ + BC).
Letting x and y replace B’ and C’, respectively, and applying Eq. (3.17) yields AD’ (B’ + C). No other
literal is redundant; thus the simplest expression for T is
First apply Eq. (3.17) to the first two terms and to the last two terms. This yields
The next step in the simplification is not as obvious; in order to simplify T , it is first necessary to
expand it. Since
we have
The application of Eq. (3.15) to the first and last terms results in the elimi-nation of the last term.
Now apply Eq. (3.19) to the second, third, and fourth terms, letting x, y, and z replace D, B, and AC,
respectively. This step eliminates ABC and yields
Canonical and Standard Form
Canonical Forms
Truth tables have been shown to be the means for describing switching functions. An expression
representing a switching function is derived from the table by finding the sum of all the terms that
correspond to those combinations (i.e., rows) for which the function assumes the value 1. Each term is
a product of the variables on which the function depends. Variable xi appears in uncomplemented form
in the product if it has value 1 in the corresponding combination, and it appears in complemented form
if it has value 0. For example, the product term that corresponds to row 3 of Table 3.5, where the values
of x, y, and z are 0, 1, and 1, is x’ yz.
The sum of all product terms for the function defined by Table 3.5 is
A product term that, as for each term in the above expression, contains each of the n variables as
factors in either complemented or uncomplemented form is called a minterm. Its characteristic property
is that it assumes the value 1 for exactly one combination of variables. If we assign to each of the n variables a fixed arbitrary value, either 0 or 1, then, of the 2ⁿ minterms, one and only one minterm will have value 1 while all the remaining 2ⁿ − 1 minterms will have value 0, because they differ by at least
one literal, whose value is 0, from the minterm whose value is 1. The sum of all minterms derived from
those rows for which the value of the function is 1 takes on the value 1 or 0 according to the value
assumed by f.
Therefore, this sum is in fact an algebraic representation of f. An expression of this type is called a
canonical sum of products or disjunctive normal expression.
Switching functions are usually expressed in a compact form, obtained by listing the decimal codes
associated with the minterms for which f = 1. The decimal codes are derived from the truth tables by
regarding each row as a binary number; e.g., the minterm x’ yz’ is associated with row 010, which,
when interpreted as a binary number, is equal to 2. The function defined by Table 3.5 can thus be
expressed as
where Σ( ) means that f (x, y, z) is the sum of all the minterms whose decimal code is one of the
numbers given within the parentheses.
A switching function can also be expressed as a product of sums. This is accomplished by considering
those combinations for which the function is required to have the value 0. For example, the sum term x
+ y + z’ has the value 1 for all combinations of x, y, and z, except for x = 0, y = 0, and z = 1, when it has
the value 0. Any similar term assumes the value 0 for only one combination. Consequently, a product
of such sum terms will assume the value 0 for precisely those combinations for which the individual
terms are 0. For all other combinations, the product-of-sum terms will have the value 1. A sum term
that contains each of the n variables in either a complemented or an uncomplemented form is called a
maxterm. An expression formed of the product of all maxterms for which the function takes on the
value 0 is called a canonical product of sums or conjunctive normal expression.
In each maxterm, a variable xi appears in uncomplemented form if it has the value 0 in the
corresponding row in the truth table, and it appears in complemented form if it has the value 1. For
example, the maxterm that corresponds to the row whose decimal code is 1 in Table 3.5 is x + y + z’ .
The canonical product-of-sums expression for the function defined by Table 3.5 is given by
This function can also be expressed in a compact form by listing the combinations for which f is to
have value 0, i.e.,
where Π( ) means the product of all maxterms whose decimal code is given within the parentheses.
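Both compact forms can be computed directly from a truth table by numbering each row as a binary number. The sketch below is our own; the example function f = x'yz' + xz is an illustrative choice:

```python
from itertools import product

def minterms_and_maxterms(f, n):
    """Return the minterm (Σ) and maxterm (Π) code lists of an n-variable
    function f, numbering each truth-table row as a binary number."""
    ones, zeros = [], []
    for code, values in enumerate(product((0, 1), repeat=n)):
        (ones if f(*values) else zeros).append(code)
    return ones, zeros

# Illustrative function: f = x'yz' + xz
f = lambda x, y, z: ((1 - x) & y & (1 - z)) | (x & z)
```

For this f the routine yields Σ(2, 5, 7) and Π(0, 1, 3, 4, 6): the minterm list collects the rows where f = 1 and the maxterm list the rows where f = 0.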
One way of obtaining the canonical forms of any switching function is by means of Shannon’s
expansion theorem (also called Shannon’s decomposition theorem), which states that any switching
function f (x1, x2, . . . , xn) can be expressed as either
f (x1, x2, . . . , xn) = x1 · f (1, x2, . . . , xn) + x1' · f (0, x2, . . . , xn) (3.25)
or
f (x1, x2, . . . , xn) = [x1 + f (0, x2, . . . , xn)] · [x1' + f (1, x2, . . . , xn)]. (3.26)
Proof The proof proceeds by perfect induction. Let x1 be equal to 1; then x1' equals 0 and Eq. (3.25) becomes an identity, i.e.,
f (1, x2, . . . , xn) = f (1, x2, . . . , xn).
Similarly, substituting x1 = 0 and x1' = 1 also reduces Eq. (3.25) to an identity, and thus the theorem is proved.
If we now apply the expansion theorem with respect to variable x2 to each of the two terms in Eq.
(3.25), we obtain
The expansion of the function about the remaining variables yields the dis-junctive normal form. In a
similar manner, repeated applications of the dual expansion theorem, Eq. (3.26), to f (x1, x2, . . . , xn)
about its variables x1, x2, . . . , xn yield the conjunctive normal form.
A simpler and faster procedure for obtaining the canonical sum-of-products form of a switching
function is summarized as follows.
1. Examine each term; if it is a minterm, retain it, and continue to the next term.
2. In each product that is not a minterm, check the variables that do not occur; for each xi that does not occur, multiply the product by (xi + xi').
3. Multiply out all products and eliminate redundant terms.
The canonical product-of-sums form is obtained in a dual manner by expressing the function as a product of factors and adding the product xi·xi' to each factor in which the variable xi is missing. The expansion into canonical form is obtained by repeated applications of Eq. (3.14).
In some instances, it is desirable to transform a function from one form to another. This transformation
can be accomplished by writing down the truth table and using the previously described techniques. An
alternative method, which is based on the involution theorem (x')' = x, is illustrated by the following
example.
Example Determine the canonical sum-of-products form for T (x, y, z) = x y + z + xyz. Applying rules
1–3, we obtain
The complement T’ consists of those minterms that are not contained in the expression for T , i.e.,
Algebraic Simplification of Digital Logic Gates, Properties of XOR Gates,
The EXCLUSIVE OR, denoted ⊕, is a binary operation on the set of switching elements. It assigns the value 1 to a pair of arguments if and only if they have complementary values; that is, A ⊕ B = 1 if either A or B is 1, but not if both are. It is evident that the EXCLUSIVE-OR operation assigns to
each pair of elements its modulo-2 sum; consequently, it is often called the modulo-2 addition
operation. The following properties of the EXCLUSIVE OR are direct consequences of its definition:
In general, the modulo-2 addition of an even number of elements whose value is 1 gives 0 and the
modulo-2 addition of an odd number of elements whose value is 1 gives 1. The usefulness of the
modulo-2-addition operation will become evident in subsequent chapters, and especially in the analysis
and design of linear sequential machines.
Logic Gates
Logic gates are the fundamental building blocks of digital systems. A logic gate produces one output level when certain combinations of input levels are present, and a different output level when other combinations are present. There are three basic types of gates: AND, OR, and NOT.
The interconnection of gates to perform a variety of logical operations is called logic design. The inputs and outputs of logic gates can occur at only two levels: 1 and 0, high and low, true and false, or on and off. A table which lists all the possible combinations of input variables and the corresponding outputs is called a truth table. It shows how the logic circuit's output responds to the various combinations of logic levels at the inputs. Level logic is a logic in which the voltage levels represent logic 1 and logic 0; it may be positive logic or negative logic. In positive logic, the higher of the two voltage levels represents logic 1 and the lower represents logic 0. In negative logic, the lower of the two voltage levels represents logic 1 and the higher represents logic 0.
In the TTL (transistor-transistor logic) family the voltage levels are +5 V and 0 V: logic 1 is represented by +5 V and logic 0 by 0 V.
AND Gate:
It is represented by “.” (dot). It has two or more inputs but only one output. The output assumes the logic 1 state only when every one of its inputs is at the logic 1 state; it assumes the logic 0 state if even one of its inputs is at the logic 0 state. The AND gate is therefore also called an All or Nothing gate.
Symbol: Truth Table: Boolean Expression:
Y= A.B
OR Gate:
It is represented by “+” (plus). It has two or more inputs but only one output. The output assumes the logic 1 state when at least one of its inputs is at the logic 1 state; it assumes the logic 0 state only when every one of its inputs is at the logic 0 state. The OR gate is also called an Any or All gate. It is also called an inclusive-OR gate because it includes the condition that both inputs may be present.
Symbol Truth Table Boolean Expression
Y=A+B
NOT Gate:
It is represented by “―” (bar). It is also called an Inverter. It has only one input and one output, and its output is always the complement of its input: the output assumes logic 1 when the input is logic 0, and logic 0 when the input is logic 1.
Symbol Truth Table Boolean Expression
A | Y
0 | 1      Y = A'
1 | 0
Logic circuits of any complexity can be realized using only AND, OR, and NOT gates. Circuits built from these three gate types are called AND-OR-INVERT (AOI) logic circuits.
Universal Gates
The universal gates are NAND and NOR, each of which can single-handedly realize any logic circuit; for this reason NAND and NOR are called universal building blocks. Both NAND and NOR can perform all three basic logic functions. AOI logic can be converted to NAND logic or NOR logic.
NAND Gate:
The NAND output assumes logic 0 only when every one of its inputs is at logic 1.
NAND means NOT-AND, i.e., the AND output is NOTed.
NAND→AND & NOT gates
Symbol Truth Table
Boolean Expression:
Y =(A .B)’
Bubbled OR gate: a bubbled OR gate is an OR gate with inverted inputs. Its output is the same as that of a NAND gate.
Y = A' + B' = (AB)'
Bubbled NAND Gate: a bubbled NAND gate is a NAND gate with inverted inputs. Its output is the same as that of an OR gate.
Y = (A'.B')' = A + B
NOR Gate:
The NOR output assumes logic 1 only when every one of its inputs is at logic 0.
The NOR gate is an OR gate followed by a NOT gate, i.e., the OR output is NOTed; NOR means NOT-OR.
NOR→OR & NOT gates
Symbol Truth Table Boolean Expression
A B | Y
0 0 | 1
0 1 | 0      Y = (A+B)'
1 0 | 0
1 1 | 0
Bubbled AND gate: an AND gate with inverted inputs is called a bubbled AND gate. A NOR gate is equivalent to a bubbled AND gate, which is also called a negative-AND gate. Since its output assumes the HIGH state only when all its inputs are in the LOW state, a NOR gate is also called an active-LOW AND gate. The output Y is 1 only when both A and B are equal to 0, i.e., only when both A' and B' are equal to 1.
Bubbled NOR Gate: a bubbled NOR gate is a NOR gate with inverted inputs. Its output is the same as that of an AND gate.
Y = (A' + B')' = A.B
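All four bubbled-gate equivalences above are instances of De Morgan's theorems, and can be confirmed by exhaustive comparison. This is a sketch of our own, modeling each 2-input gate as a small Python function:

```python
from itertools import product

NOT  = lambda a: a ^ 1
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: NOT(a & b)
NOR  = lambda a, b: NOT(a | b)

def equivalent(g1, g2):
    """Compare two 2-input gates over all four input combinations."""
    return all(g1(a, b) == g2(a, b) for a, b in product((0, 1), repeat=2))

bubbled_or   = lambda a, b: OR(NOT(a), NOT(b))    # behaves as NAND
bubbled_nand = lambda a, b: NAND(NOT(a), NOT(b))  # behaves as OR
bubbled_and  = lambda a, b: AND(NOT(a), NOT(b))   # behaves as NOR
bubbled_nor  = lambda a, b: NOR(NOT(a), NOT(b))   # behaves as AND
```

Each `equivalent(...)` check (bubbled OR vs. NAND, bubbled NAND vs. OR, bubbled AND vs. NOR, bubbled NOR vs. AND) returns True.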
A high output is generated only when an odd number of the inputs is high; this is why the XOR function is also known as an odd function.
Multilevel NAND/NOR realizations.
Two level implementation:
Case (I): The implementation of a logic expression such that each one of the inputs has to pass through
only two gates to reach the output is called Two-level implementation.
Both SOP and POS forms result in two-level logic.
A two-level implementation can use AND and OR gates, only NAND gates, or only NOR gates.
Implementing a Boolean expression with only NAND gates requires that the function be in SOP form.
Example: Implement the function F = AB + CD using AND-OR logic and NAND-NAND logic.
Example: Implement the function F = XY' + X'Y + Z using AND-OR logic and NAND-NAND logic.
Case (II): The implementation of a Boolean expression with only NOR gates requires that the function be in POS form.
Example: Implement the function (A+B)(C'+D') using OR-AND logic and NOR-NOR logic.
(a) OR – AND Logic (b) NOR – NOR Logic
Summary:
Unit II
Minimization of Switching Functions
Introduction:
The aim in simplifying a switching function f (x1, x2, . . . , xn) is to find an expression g(x1, x2, . . . , xn)
which is equivalent to f and which minimizes some cost criteria. There are various criteria to determine
minimal cost. The most common are:
1. the minimum number of appearances of literals (recall that a literal is a variable in complemented
or uncomplemented form);
2. the minimum number of literals in a sum-of-products (or product-of-sums) expression;
3. the minimum number of terms in a sum-of-products expression, provided that there is no other such
expression with the same number of terms and fewer literals.
In subsequent discussions, we shall adopt the third criterion and restrict our attention to the sum-of-
products form. Of course, dual results can be obtained by employing the product-of-sums form instead.
Note that the expression xy + xz + x'y' is minimal according to criterion 3, although it may be written as x(y + z) + x'y', which requires fewer literals.
Consider the minimization of the function f (x, y, z) given below. A combination of the first and second product terms yields x'z'(y + y') = x'z'. Similarly, combinations of the second and third, fourth and fifth, and fifth and sixth terms yield a reduced expression for f:
This expression is said to be in an irredundant form, since any attempt to reduce it, either by deleting
any of the four terms or by removing a literal, will yield an expression that is not equivalent to f . In
general, a sum-of-products expression, from which no term or literal can be deleted without altering its
logic value, is called an irredundant, or irreducible, expression.
The above reduction procedure is not unique, and a different combination of terms may yield different
reduced expressions. In fact, if we combine the first and second terms of f , the third and sixth, and the
fourth and fifth, we obtain the expression
In a similar manner, by combining the first and fourth terms, the second and third, and the fifth and
sixth, we obtain a third irredundant expression,
While all three expressions are irredundant, only the latter two are minimal. Consequently, an
irredundant expression is not necessarily minimal, nor is the minimal expression always unique. It is,
therefore, desirable to develop procedures for generating the set of all minimal expressions, so that the
appropriate one may be selected according to other criteria (e.g., the distribution of gate loads, etc.).
d. Commutative law:
A + B = B + A and A.B = B.A
Truth Table:
A B A+B B+A
0 0 0 0
0 1 1 1
1 0 1 1
1 1 1 1
Truth Table:
A B A.B B.A
0 0 0 0
0 1 0 0
1 0 0 0
1 1 1 1
Commutative law can be extended to any number of variables.
e. Associative law:
(A + B) + C = A + ( B + C)
A OR B, ORed with C, is the same as A ORed with (B OR C).
Truth Table:
A B C | A+B | (A+B)+C | B+C | A+(B+C)
0 0 0 |  0  |    0    |  0  |    0
0 0 1 |  0  |    1    |  1  |    1
0 1 0 |  1  |    1    |  1  |    1
0 1 1 |  1  |    1    |  1  |    1
1 0 0 |  1  |    1    |  0  |    1
1 0 1 |  1  |    1    |  1  |    1
1 1 0 |  1  |    1    |  1  |    1
1 1 1 |  1  |    1    |  1  |    1
Similarly, (A.B).C = A. (B.C)
This law can be extended to any number of variables.
f. Distributive law:
a. A ( B + C ) = A.B + A.C
This law states that ORing of several variables and ANDing the result with a single variable is
equivalent to ANDing that single variable with each of the several variables and then ORing the
products.
Truth Table:
A B C B+C A(B+C) AB AC AB+AC
0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0
0 1 0 1 0 0 0 0
0 1 1 1 0 0 0 0
1 0 0 0 0 0 0 0
1 0 1 1 1 0 1 1
1 1 0 1 1 1 0 1
1 1 1 1 1 1 1 1
b. A + BC = (A+B).(A+C)
This law states that ANDing of several variables and ORing the result with a single variable is
equivalent to ORing that single variable with each of the several variables and then ANDing the
same.
Proof:
RHS : (A+B)(A+C) = A.A + A.C+ A.B + B.C = A + AC + AB + BC = A ( 1 + C + B) + BC
= A + BC = LHS
Hence Proved.
g. Redundant Literal Rule:
A + A'B = A + B
Proof: A + A'B = (A + A')(A + B) by the distributive law
= 1.(A + B) = A + B
Similarly, A(A' + B) = AB
Proof: A(A' + B) = A.A' + A.B = 0 + AB = AB

h. Idempotence Law:
A.A = A
If A = 0, then 0.0 = 0 = A; if A = 1, then 1.1 = 1 = A. This law states that ANDing a variable with itself is equal to that variable.
Similarly, A + A = A
If A = 0, then A + A = 0 + 0 = 0 = A; if A = 1, then A + A = 1 + 1 = 1 = A. This law states that ORing a variable with itself is equal to that variable.
i. Absorption law:
a. A + A.B = A
This law states that ORing of a variable (A) with ANDing of that variable AND another
variable B is equal to that variable itself (A).
Truth table:
A B | A.B | A+A.B
0 0 |  0  |  0
0 1 |  0  |  0
1 0 |  0  |  1
1 1 |  1  |  1
Algebraically, A + A.B = A(1 + B) = A.1 = A; i.e., A ORed with A ANDed with any expression gives A.
b. A(A+B) = A
This law states that ANDing of a variable (A) with the one of that variable (A) ORed with
another variable (B) is equal to that variable itself (A).
Truth table:
A B A+B A(A+B)
0 0 0 0
0 1 1 0
1 0 1 1
1 1 1 1
j. Consensus Theorem:
i) AB + A'C + BC = AB + A'C
Proof:
LHS = AB + A'C + BC(A + A')
= AB + A'C + ABC + A'BC
= AB(1 + C) + A'C(1 + B)
= AB + A'C = RHS
Similarly, AB + A'C + BCD = AB + A'C
ii) (A+B)(A'+C)(B+C) = (A+B)(A'+C)
Proof:
LHS = (A+B)(A'+C)(B+C)
= (A.A' + AC + A'B + BC)(B+C)
= (AC + A'B + BC)(B+C)
= ABC + AC + A'B + A'BC + BC + BC
= AC + A'B + BC
RHS = (A+B)(A'+C)
= A.A' + AC + A'B + BC
= AC + A'B + BC
LHS = RHS
k. Transposition Theorem:
AB + A’C = (A+C)(A’+B)
Proof:
RHS = (A+C)(A’+B)
= A.A’ + A.B + A’.C + BC
= 0 + A’C+AB+BC
= A’C + AB+ BC (A + A’)
= AB + ABC +A’C + A’BC
= AB + A’C = LHS
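Both the consensus and transposition theorems can also be checked by perfect induction over all eight input combinations. The following sketch (with helper names of our own) does exactly that:

```python
from itertools import product

n = lambda v: v ^ 1   # complement of a bit

def holds(identity):
    """Verify a 3-variable identity by perfect induction."""
    return all(identity(a, b, c) for a, b, c in product((0, 1), repeat=3))

# Consensus:     AB + A'C + BC == AB + A'C
consensus_ok = holds(lambda A, B, C:
    ((A & B) | (n(A) & C) | (B & C)) == ((A & B) | (n(A) & C)))

# Transposition: AB + A'C == (A+C)(A'+B)
transposition_ok = holds(lambda A, B, C:
    ((A & B) | (n(A) & C)) == ((A | C) & (n(A) | B)))
```

Both checks return True, agreeing with the algebraic proofs above.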
l. DeMorgan’s Theorem:
(X+Y)’ = X’.Y’
This law states that the complement of a sum of variables is equal to product of their individual
complements. This law can be represented using logic gates as
Truth Table:
X Y X+Y (X+Y)’ X’ Y’ X’.Y’
0 0 0 1 1 1 1
0 1 1 0 1 0 0
1 0 1 0 0 1 0
1 1 1 0 0 0 0
Truth Table:
X Y (X.Y)’ X’ Y’ X’ + Y’
0 0 1 1 1 1
0 1 1 1 0 1
1 0 1 0 1 1
1 1 0 0 0 0
From the above truth table, column 3 is equal to column 6. Hence, (AB)’ = A’ + B’
Steps to be followed to DeMorganize a given function:
1. Identify the function as SOP or POS.
2. Complement the individual terms and change the SOP to POS or vice-versa.
3. Check whether the individual terms require further DeMorganization; if so, repeat the process until there is no such requirement.
Examples: Apply De Morgan's theorem to get the complement of the following Boolean functions:
i. F = (A + B').(C + D')
To get F', change the operations and take complements:
F' = [(A + B').(C + D')]'
= (A + B')' + (C + D')'
= A'.B + C'.D

ii. F = (AB)'.(CD + E'F).{(AB)' + (CD)'}
F' = [(AB)']' + (CD + E'F)' + [{(AB)' + (CD)'}]'
= AB + {(CD)'.(E'F)'} + (AB.CD)
= AB + (C' + D').(E + F') + ABCD
= AB(1 + CD) + (C' + D').(E + F')
= AB + (C' + D').(E + F')

iii. F = {(X.Y')'}.Y' + Z
F' = [{(X.Y')'}.Y' + Z]'
= [{(X.Y')'}.Y']' . Z'
= [{(X.Y')'}' + Y] . Z'
= [X.Y' + Y] . Z'
= (X + Y).Z'

iv. Simplify F = [(X' + Z).(XY)']'
F = [(X' + Z)]' + [(XY)']'
= X.Z' + XY
= X(Y + Z')
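As a sanity check, example (i) can be verified exhaustively: the DeMorganized result must equal the complement of F for all sixteen input combinations. This sketch is our own:

```python
from itertools import product

n = lambda v: v ^ 1   # complement of a bit

# Example (i): F = (A + B')(C + D'), with claimed complement F' = A'B + C'D
F       = lambda A, B, C, D: (A | n(B)) & (C | n(D))
F_prime = lambda A, B, C, D: (n(A) & B) | (n(C) & D)

demorganized_ok = all(n(F(*v)) == F_prime(*v)
                      for v in product((0, 1), repeat=4))
```

`demorganized_ok` evaluates to True, confirming the result of example (i).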
Duality:
An algebraic expression in Boolean algebra which is obtained from any valid expression by interchanging OR &
AND operation and replacing ‘1’ by ‘0’ and ‘0’ by ‘1’ is also valid. This property is called Duality principle.
For example,
1) x + 1 = 1; its dual is x.0 = 0
2) x + x = x; its dual is x.x = x
3) x + x' = 1; its dual is x.x' = 0
4) x + xy = x; its dual is x.(x+y) = x
6) (A+C+D)(A+C+D’)(A+C’+D)(A+B’)
= (A.A + A.C + A.D’ +A.C+ C.C+CD’+A.D+C.D+D.D’).(A+C’+D).(A+B’)
=(A +A.C+A.D’+A.C+ C + CD’ +C.D +0)(A+C’+D).(A+B’) as A.A =A & D.D’= 0
={A(1+C+D’+C) + C (1 +D’+D)}.(A+C’+D).(A+B’)
=(A + C)(A+C’+D).(A+B’) as 1 + any function = 1
= (A.A + AC’ + AD + AC + C.C’ + CD).(A+B’)
= (A + AC’ + AD + AC + 0+CD)(A+B’)
={A(1+C’+D+C)+CD}.(A+B’)
=(A+CD).(A+B’)
=A+AB’ + ACD + B’CD
= A(1+B’+CD) + B’CD
=A + B’CD
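The multi-step simplification above can be confirmed by perfect induction over all sixteen combinations of A, B, C, D; this check is our own sketch:

```python
from itertools import product

n = lambda v: v ^ 1   # complement of a bit

def lhs(A, B, C, D):
    # (A+C+D)(A+C+D')(A+C'+D)(A+B')
    return (A | C | D) & (A | C | n(D)) & (A | n(C) | D) & (A | n(B))

def rhs(A, B, C, D):
    # A + B'CD
    return A | (n(B) & C & D)

simplification_ok = all(lhs(*v) == rhs(*v)
                        for v in product((0, 1), repeat=4))
```

`simplification_ok` evaluates to True, so the simplified form A + B'CD is indeed equivalent to the original product of sums.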
A decimal numerical value is assigned to each cell, and the cells are labeled in such a manner that only one variable changes at a time. A '0' denotes a complemented variable and a '1' an uncomplemented variable. K-maps can be created for 3 variables, 4 variables, 5 variables, and so on. A k-variable K-map has 2ᵏ cells. The diagram below is of a 3-variable K-map:
At a time only one variable is changing from complemented to un-complemented & vice-versa as we
move from one cell to next.
Looping adjacent 1’s for simplification
The expression for output Y can be simplified by properly combining those squares in the K-Map
which contain 1s. The process of combining those 1s is called looping.
Pairs – Looping groups of Two 1s
Any adjacent pair of cells marked by a 1 in a K-map can be combined into one term, eliminating the one variable that changes between the two cells (i.e., from A to A' or from B' to B, etc.). Any single logical 1 on the map represents an AND (product) term. The total expression corresponding to the logical 1s of a map is the OR function (sum) of the various variable terms that together cover all the logical 1s in the map.
F = Σ(2,3) = AB’ + AB = A (B’ + B) = A
An example of a 2-variable K-Map. A 2-variable K-Map will have 2² = 4 cells.
Prime Implicant
An implicant of Y such that if any variable is removed from the implicant, the resulting term does not imply Y.
Example: Y = AB + ABC + BC
Ans:
Prime implicants: AB, BC. Not a prime implicant: ABC.
ABC is not a prime implicant because the literal A can be removed to give BC, and BC still implies Y. Conversely, AB is a prime implicant because you can't remove either A or B and have the remaining term still imply Y.
In Karnaugh maps the prime implicants are represented by the largest rectangular groups of 1s that can be circled. If a smaller subgroup is circled, the smaller group is an implicant, but not a prime implicant.
PI Theorem
A minimal sum is a sum of prime implicants.
Distinguished 1-Cell
An input combination that is covered by only one prime implicant. In terms of Karnaugh maps, distinguished 1-cells are 1s that are circled by only one prime implicant.
Essential Prime Implicant
A prime implicant that includes one or more distinguished 1-cells. Essential prime implicants are
important because a minimal sum contains all essential prime implicants.
3 Variable K-Map:
A 3-variable K-map has 2^3 = 8 cells. The number of variables required is decided by the largest minterm in the given function: a function F whose largest minterm is 7 can be defined and simplified by a 3-variable Karnaugh map.
3-Variable K-Map
The first cell is denoted by 0 and the second by 1, but the third is denoted by 3, not 2. This is because A'BC (the ANDing of the first-row label A' and the third-column label BC) corresponds to decimal 3 in the truth table. Similarly, the cell in the second row, third column is ABC, denoted by 7 and not 6.
Example of 3-Variable K-Map
Given function, F = Σ (1, 2, 3, 4, 5, 6)
Since the largest minterm in this function is 6, it can be defined by 3 variables.
Draw the K-map for this function by writing 1 in the cells that are present in the function and 0 in the rest of the cells. Then apply the rules for simplifying a K-map: first look for an octet, i.e., 8 adjacent 1s; there is none. Then look for a quad, i.e., 4 adjacent 1s; again there is none. Hence look for pairs. There are 3 pairs, circled in red.
(1, 3) – A'C (Since B is the changing variable between these two cells, it is eliminated)
(2, 6) – BC' (Since A is the changing variable, it is eliminated)
(4, 5) – AB' (Since C is the changing variable, it is eliminated)
Thus, F = A'C + BC' + AB'.
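A looping result like this can be verified programmatically by expanding each product term back into the minterms it covers; a small sketch (the term strings such as "A'C" are parsed by the helper below, which is my own convention, not a standard library):

```python
# Verify that the cover A'C + BC' + AB' reproduces exactly F = Σ(1, 2, 3, 4, 5, 6).
from itertools import product

def term_value(term, assignment):
    """Evaluate a product term such as "BC'" for a dict like {'A': 0, 'B': 1, 'C': 0}."""
    i, val = 0, 1
    while i < len(term):
        var = term[i]
        neg = i + 1 < len(term) and term[i + 1] == "'"
        val &= (1 - assignment[var]) if neg else assignment[var]
        i += 2 if neg else 1
    return val

def covered_minterms(terms, variables):
    """Return the set of minterm indices covered by the OR of the given terms."""
    minterms = set()
    for bits in product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if any(term_value(t, assignment) for t in terms):
            # Minterm index: the first variable is the most significant bit.
            minterms.add(int(''.join(map(str, bits)), 2))
    return minterms

print(covered_minterms(["A'C", "BC'", "AB'"], "ABC"))  # {1, 2, 3, 4, 5, 6}
```

The same checker works for the 4-variable examples that follow, by passing "ABCD" as the variable string.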
4-Variable K-Map
A 4-variable K-map has 2^4 = 16 cells. A function F whose largest minterm is 15 can be defined and simplified by a 4-variable Karnaugh map.
Applying the rules of simplifying a K-map, there are no octets or quads. There are 3 pairs, circled in red.
(0, 4) – A'C'D' (Since B is the changing variable between these two cells, it is eliminated)
(4, 6) – A'BD' (Since C is the changing variable, it is eliminated)
(8, 10) – AB'D' (Since C is the changing variable, it is eliminated)
There is a 1 in cell 15, which cannot be looped with any adjacent cell; hence it cannot be simplified further and is left as it is.
15 = ABCD
Thus, F = A'C'D' + A'BD' + AB'D' + ABCD
Applying the rules of simplifying a K-map, there is no octet. There are 2 quads and 3 pairs.
(1, 5, 13, 9) – C'D (Since A and B are changing variables, they are eliminated)
(9, 11, 13, 15) – AD (Since B and C are changing variables, they are eliminated)
(0, 1) – A'B'C' (Since D is the changing variable, it is eliminated)
(1, 3) – A'B'D (Since C is the changing variable, it is eliminated)
(12, 13) – ABC' (Since D is the changing variable, it is eliminated)
There is a 1 in cell 6, which cannot be looped with any adjacent cell; hence it cannot be simplified further and is left as it is.
6 = A'BCD'
Thus, F = C'D + AD + A'B'C' + A'B'D + ABC' + A'BCD'
Applying the rules of simplifying a K-map, there is no octet. There is 1 quad and there are 3 pairs.
(5, 7, 13, 15) – BD (Since A and C are changing variables, they are eliminated)
(0, 4) – A'C'D' (Since B is the changing variable, it is eliminated)
(2, 3) – A'B'C (Since D is the changing variable, it is eliminated)
(8, 9) – AB'C' (Since D is the changing variable, it is eliminated)
Thus, F = BD + A'C'D' + A'B'C + AB'C'
Applying the rules of simplifying a K-map, there is no octet. There are 2 quads and 2 pairs.
(4, 6, 12, 14) – BD' (Since A and C are changing variables, they are eliminated)
(6, 7, 14, 15) – BC (Since A and D are changing variables, they are eliminated)
(0, 4) – A'C'D' (Since B is the changing variable, it is eliminated)
(3, 7) – A'CD (Since B is the changing variable, it is eliminated)
There is a 1 in cell 9, which cannot be looped with any adjacent cell; hence it cannot be simplified further and is left as it is.
9 = AB'C'D
Thus, F = BD' + BC + A'C'D' + A'CD + AB'C'D
Karnaugh Map Examples to find Prime Implicants, Distinguished 1-Cells, Essential Prime
Implicants, Minimal Sums
In the following examples the distinguished 1-cells are marked in the upper left corner of the cell with
an asterisk (*). The essential prime implicants are circled in blue, the prime implicants are circled
in black, and the non-essential prime implicants included in the minimal sum are shown in red.
Example 1
Prime Implicants: 5
Distinguished 1-Cells: 2
Essential Prime Implicants: 2
Minimal Sums: 1
Example 2
Prime Implicants: 7
Distinguished 1-Cells: 2
Essential Prime Implicants: 2
Minimal Sums: 1
5-Variable K-Map
A 5-variable K-Map will have 25 = 32 cells. A function F which has maximum decimal value of 31,
can be defined and simplified by a 5-variable Karnaugh Map.
Boolean Table For 5 Variables
5-Variable K-Map
In the above Boolean table, A is 0 for minterms 0 to 15 and A is 1 for minterms 16 to 31. A 5-variable K-map is drawn as below.
Example 1 of 5-Variable K-Map
Given function, F = Σ (1, 3, 4, 5, 11, 12, 14, 16, 20, 21, 30)
Since the largest minterm is 30, 5 variables are required to define this function.
Draw the K-map for this function by writing 1 in the cells that are present in the function and 0 in the rest of the cells.
Applying the rules of simplifying a K-map, there are 2 octets. The first one is in square 2, circled in red. The other octet spans the 2 squares and is highlighted by blue connecting lines. There are also 2 quads, one in each of the squares.
(16, 17, 20, 21, 28, 29, 24, 25) – AD' (Since B, C and E are changing variables, they are eliminated)
(0, 1, 8, 9, 16, 17, 24, 25) – C'D' (Since A, B and E are changing variables, they are eliminated)
(0, 1, 2, 3) – A'B'C' (Since D and E are changing variables, they are eliminated)
(28, 29, 30, 31) – ABC (Since D and E are changing variables, they are eliminated)
Thus, F = AD' + C'D' + A'B'C' + ABC
6-Variable K-Map
A 6-variable K-map has 2^6 = 64 cells, drawn as four 4-variable squares. Visualize these squares one on another to figure out adjacent cells.
Example 1 of 6-Variable K-Map
Given function, F = Σ (0, 2, 4, 8, 10, 13, 15, 16, 18, 20, 23, 24, 26, 32, 34, 40, 41, 42, 45, 47, 48, 50, 56, 57, 58, 60, 61)
Since the largest minterm is 61, 6 variables are required to define this function.
Draw the K-map for this function by writing 1 in the cells that are present in the function and 0 in the rest of the cells.
Applying the rules of simplifying a K-map, there is one loop of 16 1s, containing the 1s at all the corners of all 4 squares. We obtain it by visualizing all 4 squares over one another, but only in the horizontal or vertical direction (not diagonal), and figuring out adjacent cells. All the corner 1s are circled in green.
There are 4 quads: one in the fourth square at the bottom-right, and the other 3 between the squares, highlighted by blue connecting lines.
(0, 2, 8, 10, 16, 18, 24, 26, 32, 34, 40, 42, 48, 50, 56, 58) – D'F' (A, B, C and E are changing variables, so they are eliminated)
(41, 45, 57, 61) – ACE'F (B and D are changing variables, so they are eliminated)
(13, 15, 45, 47) – B'CDF (A and E are changing variables, so they are eliminated)
(0, 4, 16, 20) – A'C'E'F' (B and D are changing variables, so they are eliminated)
(56, 57, 60, 61) – ABCE' (D and F are changing variables, so they are eliminated)
There is a 1 in cell 23, which cannot be looped with any adjacent cell; hence it cannot be simplified further and is left as it is.
23 = A'BC'DEF
Thus, F = D'F' + ACE'F + B'CDF + A'C'E'F' + ABCE' + A'BC'DEF
DON’T CARE CONDITIONS
Functions that have unspecified outputs for some input combinations are called incompletely specified functions. The unspecified minterms of a function are called 'don't care' conditions: we simply don't care whether the value 0 or 1 is assigned to F for those minterms. Don't care conditions are represented by X in the K-map. They play a central role in the specification and optimization of logic circuits, as they represent the degrees of freedom available for transforming a network into a functionally equivalent one.
Example of the Use of a Karnaugh Map with Don’t Care Conditions
Design a circuit with four inputs D, C, B, A that are natural 8421-binary encoded, with D as the most significant bit. The output F is true if the month represented by the input (0000 = January, 1011 = December) is a vacation month. Vacation is at Christmas (December), Easter, July, a birthday (September), and a friend's birthday (May). Since Easter can occur in either March or April, we have to include both months.
Step 1: The truth table
Construct a truth table. Note that the binary inputs 1100 to 1111 (12 to 15) do not represent valid months and cannot occur. Although the output F would naturally be 0 for these combinations, since they do not represent vacation months, it does not matter whether we choose F as 0 or 1. So we put an X in the output column to represent don't care, and we can later choose each X as 0 or 1 to simplify the logic.
D C B A MONTH F
0 0 0 0 JAN 0
0 0 0 1 FEB 0
0 0 1 0 MAR 1
0 0 1 1 APR 1
0 1 0 0 MAY 1
0 1 0 1 JUN 0
0 1 1 0 JULY 1
0 1 1 1 AUG 0
1 0 0 0 SEP 1
1 0 0 1 OCT 0
1 0 1 0 NOV 0
1 0 1 1 DEC 1
1 1 0 0 - X
1 1 0 1 - X
1 1 1 0 - X
1 1 1 1 - X
Step 2: The Karnaugh map
Put 1s in the squares where F = 1, an X in the don't care squares, and leave the remaining squares (which contain a 0) empty.
This solution is not unique because one could have created other groupings
of 1s (but none simpler than this).
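Since the grouped expression itself is in the figure rather than the text, here is one valid cover, derived independently from the problem statement (so not necessarily the exact grouping in the figure), checked against the specification. The don't cares are exploited by letting the product terms spill into invalid codes 12 to 15:

```python
# Check a candidate cover for the vacation-month function against the
# specification: F = 1 for Mar, Apr, May, Jul, Sep, Dec
# (minterms 2, 3, 4, 6, 8, 11); inputs 12-15 are don't cares.
from itertools import product

cares_1 = {2, 3, 4, 6, 8, 11}
dont_care = {12, 13, 14, 15}

for D, C, B, A in product([0, 1], repeat=4):
    m = 8 * D + 4 * C + 2 * B + A
    if m in dont_care:
        continue  # output unconstrained for invalid month codes
    # Candidate cover (hypothetical grouping): F = CA' + D'C'B + DB'A' + DBA
    f = (C & (1 - A)) | ((1 - D) & (1 - C) & B) | (D & (1 - B) & (1 - A)) | (D & B & A)
    assert f == (1 if m in cares_1 else 0)

print("cover matches the specification on all valid months")
```

Note how the term CA' alone would be wrong without don't cares: it covers cells 12 and 14, which are acceptable only because those input codes never occur.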
Quine McCluskey Tabulation Method:
The Quine-McCluskey tabulation method is a very useful and convenient tool for the simplification of Boolean functions with large numbers of variables. The Karnaugh map method works well as long as the number of variables does not exceed four or five; for larger numbers of variables, the visualization and selection of patterns of adjacent cells in the map becomes too complicated. In those cases the Quine-McCluskey tabulation method plays a vital role in simplifying the Boolean expression. It is a specific step-by-step procedure that is guaranteed to produce a simplified standard-form expression for a function.
Consider the Boolean expression F = Σ(0, 1, 2, 3, 5, 7, 8, 10, 14, 15), to be minimized by the Quine-McCluskey tabulation method.
To start with, we make a table and keep in the same group all the minterms whose binary representations contain an equal number of 1s. For example, 1, 2 and 8 (0001, 0010, 1000) are in the same group because each has a single 1 in its binary representation. See the table below.
Add another column to the right of that table, naming it the 1st column. Now, between two adjacent groups, we have to find pairs of numbers that differ in only one bit position, that bit changing from 0 to 1. Take the binary numbers of 1 (0001) from the first group and 3 (0011) from the second group: they are identical except for the second bit from the LSB, which changes from 0 to 1. So in the new column we write (1, 3) 00-1 (in place of the changed bit we put a "-"). In this way we check the entire table and fill the new column accordingly.
Again add another column to the right of the table, naming it the 2nd column. Now, between two groups from the 1st column, we have to find pairs of terms that differ in only one bit position, that bit changing from 0 to 1. Take (0, 1) (000-) and (2, 3) (001-): they are identical except for one bit position, which changes from 0 to 1. So in the new column we write (0, 1, 2, 3) 00-- (again putting "-" in place of the changed bit). In this way we check the entire table and fill the new column accordingly.
Mark those combinations that are used in the 2nd column. For example, for the first entry (0, 1, 2, 3), the used combinations are (0, 1) and (2, 3) from the 1st column.
The final table is drawn for obtaining the simplified Boolean expression. It is drawn with all the combinations of the 2nd column and the unused entries of the 1st column.
Strike off the rows whose minterms are all available in other rows: the first row is struck off since its minterms 0 and 2 are also available in row 2, and similarly 1 and 3 are available in the 3rd row. But do not strike off a row that is the only one with a cross in some column.
Now we can get the simplified Boolean expression from the above table and its corresponding 2nd-column values. We take each remaining (uncut) row with its 2nd-column value and convert it into the variables A, B, C, D: where a 0 is found, take the complemented variable; for a 1, the uncomplemented variable; and for a "-", no variable.
(0, 2, 8, 10) (- 0 - 0) = B'D'
(1, 3, 5, 7) (0 - - 1) = A'D
(14, 15) (1 1 1 -) = ABC
Hence F = B'D' + A'D + ABC
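The tabulation steps above can be sketched in code. The following minimal implementation repeatedly combines terms that differ in exactly one bit position (writing "-" for the eliminated position) and collects the terms that combine no further, i.e., the prime implicants:

```python
# Minimal sketch of the Quine-McCluskey combining step for 4-variable
# terms written as strings such as '00-1' ('-' marks an eliminated position).
from itertools import combinations

def combine(a, b):
    """Merge two terms that differ in exactly one position; otherwise return None."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Repeatedly combine terms; terms that never combine are prime implicants."""
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        used, nxt = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                nxt.add(c)
                used.update((a, b))
        primes |= terms - used  # terms that combined no further
        terms = nxt
    return primes

minterms = [0, 1, 2, 3, 5, 7, 8, 10, 14, 15]
print(sorted(prime_implicants(minterms, 4)))
# ['-0-0', '-111', '0--1', '00--', '1-10', '111-']
```

For F = Σ(0, 1, 2, 3, 5, 7, 8, 10, 14, 15) this yields six prime implicants, among them -0-0 (B'D'), 0--1 (A'D) and 111- (ABC), the three selected by the chart above; the chart step that picks a minimum cover is not shown in this sketch.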
Partially Specified Expressions: These are expressions which are not completely specified, i.e., the output is not specified for all combinations of the input. Such expressions involve don't care conditions and are simplified like expressions with don't care conditions.
Multi-output minimization:
Multi-output minimization means obtaining multiple outputs from one set of given inputs while sharing the intermediate terms of the expressions wherever possible. Example: a BCD to 7-segment driver, where the input is a 4-bit number and the outputs are 7 signals driving the individual LEDs of the 7-segment display.
Unit III
PART A: Design of Combinational Circuits
Comparators
An n-bit comparator is a circuit that compares the magnitudes of two numbers X and Y. It has three outputs f1, f2, and f3, such that: f1 = 1 if and only if X > Y; f2 = 1 if and only if X = Y; f3 = 1 if and only if X < Y. As an example, consider an elementary 2-bit comparator, as in Fig. 5.4a.
The circuit has four inputs x1, x2, y1 and y2, where x1 and y1 denote the most significant digits of X and Y, respectively. The logic equations may be determined with the aid of the map in Fig. 5.4b, where the values 1, 2, and 3 are entered in the appropriate cells to denote, respectively, f1 = 1, f2 = 1, and f3 = 1.
The circuit for f1 is shown in Fig. 5.4c. Similar circuits are obtained for f2 and f3.
The reader can verify that X >Y, i.e. f1 = 1, when the most significant bit of
X is larger than that of Y , i.e., x1 > y1, or when the most significant bits are
equal but the least significant bit of X is larger than that of Y , namely, x1 =
y1 and x2 > y2. In a similar way, we can determine the conditions for f2 = 1
and f3 = 1.
This line of reasoning can be further generalized to yield the logic
equations for a 4-bit comparator.
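The verbal conditions above translate directly into equations; a sketch that brute-force checks them against integer comparison (the equation forms below are my own reading of the stated conditions, not copied from Fig. 5.4):

```python
# 2-bit comparator: X = x1x2, Y = y1y2 (x1 and y1 are the most significant bits).
from itertools import product

for x1, x2, y1, y2 in product([0, 1], repeat=4):
    eq_msb = 1 if x1 == y1 else 0
    f1 = (x1 & (1 - y1)) | (eq_msb & x2 & (1 - y2))  # X > Y
    f2 = eq_msb & (1 if x2 == y2 else 0)             # X = Y
    f3 = ((1 - x1) & y1) | (eq_msb & (1 - x2) & y2)  # X < Y
    X, Y = 2 * x1 + x2, 2 * y1 + y2
    assert (f1, f2, f3) == (int(X > Y), int(X == Y), int(X < Y))

print("f1, f2, f3 agree with >, =, < on all 2-bit inputs")
```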
Data selectors
A multiplexer is essentially an electronic switch that can connect one out of n
inputs to the output. The most important application of the multiplexer is as
a data selector. In general, a data selector has n data input lines D0, D1, . . . ,
Dn−1, m select digit inputs s0, s1, . . . , sm−1, and one output. The m select
digits form a binary select number ranging from 0 to 2m − 1, and when this
number has the value k then Dk is connected to the output. Thus this circuit
selects one of n data input lines, according to the value of the select number,
and in effect connects it to the output. Clearly, the number of select digits
must equal m = log2 n, so that it can identify all the data inputs.
Data selectors have numerous applications. For example, they may be used
to connect one out of n input sources of a device to its output. As we shall
subsequently show, data selectors may also be used to implement all
Boolean functions.
A block diagram for a data selector with eight data input lines is shown in
Fig. 5.6a. The select number consists of the three digits s2s1s0. Thus, for
example, when s2s1s0 = 101 then D5 is to be connected to the output, and so
on. The Enable (or Strobe) input “enables” or turns the circuit on. A logic
diagram for this data selector is shown in Fig. 5.6b. Such a unit provides the
complement z of the output as well as the output z itself. The Enable input
turns the circuit on when it assumes the value 0.
Implementing switching functions with data selectors
An important application of data selectors is the implementation of arbitrary
switching functions. As an example, we shall show how functions of two
variables can be implemented by means of the data selector of Fig. 5.7.
Clearly, in this circuit, if s = 0 then z assumes the value of D0 and if s = 1 then z assumes the value of D1. Thus, z = sD1 + s'D0. Now, suppose that we want to implement the EXCLUSIVE-OR operation A ⊕ B. This can be accomplished by connecting variable A to the input s and variables B and B' to D0 and D1, respectively. In this case z = AB' + A'B = A ⊕ B. Similarly, if we want to implement the NAND operation z = (AB)' = A' + B', then we connect variable A to s and variable B' to D1; D0 is connected to a constant 1. Clearly, z = AB' + A'·1 = A' + B'.
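The data-selector identity z = sD1 + s'D0 and the XOR and NAND constructions can be checked exhaustively; a small sketch:

```python
# A 2-to-1 data selector and its use to realize XOR and NAND.
from itertools import product

def mux2(s, d1, d0):
    """z = s*D1 + s'*D0."""
    return (s & d1) | ((1 - s) & d0)

for A, B in product([0, 1], repeat=2):
    assert mux2(A, 1 - B, B) == A ^ B        # XOR: s = A, D1 = B', D0 = B
    assert mux2(A, 1 - B, 1) == 1 - (A & B)  # NAND: s = A, D1 = B', D0 = 1

print("mux realizes XOR and NAND as described")
```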
Priority Encoder
A priority encoder is a device with n input lines and log2 n output lines.
The input lines represent units which may request service. When two lines pi
and pj , such that i > j , request service simultaneously, line pi has priority
over line pj . The encoder produces a binary output code indicating which of
the input lines requesting service has the highest priority. An input line pi
indicates a request for service by assuming the value 1. A block diagram for
an eight-input three-output priority encoder is shown in Fig. 5.8a.
The truth table for this encoder is shown in Fig. 5.8b. In the first row, only
p0 requests service and, consequently, the output code should be the binary
number zero to indicate that p0 has priority. This is accomplished by setting
z4z2z1 = 000. The fourth row, for example, describes the situation where p3
requests service while p0, p1, and p2 each may or may not request service
simultaneously. This is indicated by an entry 1 in column p3 and don’t-cares
in columns p0, p1, and p2. No request of a higher priority than p3 is present at
this time. Since in this situation p3 has the highest priority, the output code
must be the binary number three. Therefore, we set z1 and z2 to 1 while z4 is
set to 0. (Note that the binary number is given by N = 4z4 + 2z2 + z1.) In a
similar manner the entire table is completed.
From the truth table, we can derive the logic equations for z1, z2, and z4. Starting with z4, we find that
z4 = p4p5'p6'p7' + p5p6'p7' + p6p7' + p7.
This equation can be simplified to
z4 = p4 + p5 + p6 + p7.
For z2 and z1, we find
z2 = p2p3'p4'p5'p6'p7' + p3p4'p5'p6'p7' + p6p7' + p7 = p2p4'p5' + p3p4'p5' + p6 + p7,
z1 = p1p2'p3'p4'p5'p6'p7' + p3p4'p5'p6'p7' + p5p6'p7' + p7 = p1p2'p4'p6' + p3p4'p6' + p5p6' + p7.
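These simplified outputs can be checked exhaustively against the intended behaviour: z4z2z1 must equal the binary index of the highest-priority requesting line (all complemented literals are written out explicitly below as 1 − x):

```python
# Verify the 8-input priority encoder outputs against the highest set line.
from itertools import product

for p in product([0, 1], repeat=8):
    p0, p1, p2, p3, p4, p5, p6, p7 = p
    if not any(p):
        continue  # no request pending: the output code is unspecified
    n = lambda x: 1 - x  # complement
    z4 = p4 | p5 | p6 | p7
    z2 = (p2 & n(p4) & n(p5)) | (p3 & n(p4) & n(p5)) | p6 | p7
    z1 = (p1 & n(p2) & n(p4) & n(p6)) | (p3 & n(p4) & n(p6)) | (p5 & n(p6)) | p7
    highest = max(i for i in range(8) if p[i])
    assert 4 * z4 + 2 * z2 + z1 == highest  # N = 4*z4 + 2*z2 + z1

print("encoder outputs equal the index of the highest requesting line")
```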
Decoders
A decoder is a combinational circuit with n inputs and at most 2^n outputs. Its characteristic property is that, for every combination of input values, only one output will be equal to 1 at any given time. Decoders have a wide
output value will be equal to 1 at any given time. Decoders have a wide
variety of applications in digital technology. They may be used to route input
data to a specified output line, as, for example, is done in memory
addressing, where input data are to be stored in (or read from) a specified
memory location. They can be used for some code conversions. Or they may
be used for data distribution, i.e., demultiplexing, as will be shown later.
Finally, decoders are also used as basic building blocks for implementing
arbitrary switching functions.
Figure 5.9a illustrates a basic 2-to-4 decoder. Clearly, if w and x are the
input variables then each output corresponds to a different minterm of two
variables. Two such 2-to-4 decoders plus a gate-switching matrix can be
connected, as shown in Fig. 5.9b, to form a 4-to-16 decoder. Switching
matrices are very widely used in the design of digital circuits.
Not all decoders have exactly 2^n outputs. Figure 5.10 describes a decimal decoder that converts information from BCD to decimal. It has four inputs w, x, y, and z, where w is the most significant and z the least significant digit, and 10 outputs, f0 through f9, corresponding to the decimal numbers. In designing this decoder, we have taken advantage of the don't-care input combinations 10 through 15, as can be verified by means of the map in Fig. 5.10b. Another implementation of decimal decoders is by means of a partial-gate matrix, as shown in Fig. 5.11.
A decoder with exactly n inputs and 2^n outputs can also be used to implement any switching function. Each output of such a decoder realizes one distinct minterm. Thus, by connecting the appropriate outputs to an OR gate, the required function can be realized. Figure 5.12 illustrates the implementation of the function f(A, B, C, D) = Σ(1, 5, 9, 15) by means of a complete decoder, i.e., one with n inputs and 2^n outputs.
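A decoder-plus-OR-gate realization can be modelled directly: the decoder asserts exactly one minterm line, and the OR gate sums the chosen lines. A sketch for f(A, B, C, D) = Σ(1, 5, 9, 15):

```python
# Model a 4-to-16 decoder feeding an OR gate to realize f = Σ(1, 5, 9, 15).
from itertools import product

def decoder(bits):
    """Return the 2^n minterm output lines: exactly one line is 1."""
    index = int(''.join(map(str, bits)), 2)
    return [1 if i == index else 0 for i in range(2 ** len(bits))]

chosen = (1, 5, 9, 15)
for bits in product([0, 1], repeat=4):
    lines = decoder(bits)
    f = int(any(lines[i] for i in chosen))  # the external OR gate
    m = int(''.join(map(str, bits)), 2)
    assert f == (1 if m in chosen else 0)

print("decoder + OR gate realizes f(A, B, C, D) = Σ(1, 5, 9, 15)")
```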
A decoder with one data input and n address inputs is called a demultiplexer. It directs the input data to any one of the 2^n outputs, as specified by the n-bit
input address. A block diagram for a demultiplexer is shown in Fig. 5.13. A
demultiplexer with four outputs is shown in Fig. 5.3. When larger-size decoders are needed, they can usually be formed by interconnecting several smaller decoders with some additional logic.
Adders and Subtractors
The simplified sum-of-products expressions are S = x'y + xy', C = xy. The logic diagram of the half adder implemented in sum-of-products form is shown in Fig. 4.5(a). It can also be implemented with an exclusive-OR gate and an AND gate, as shown in Fig. 4.5(b). This form is used to show that two half adders can be used to construct a full adder.
The logic diagram for the full adder implemented in sum-of-products form is shown in Fig. 4.7. It can also be implemented with two half adders and one OR gate, as shown in Fig. 4.8. The S output from the second half adder is the exclusive-OR of z and the output of the first half adder, giving S = z ⊕ (x ⊕ y).
Binary Adder
A binary adder is a digital circuit that produces the arithmetic sum of two binary numbers. It can be constructed with full adders connected in cascade, with the output carry from each full adder connected to the input carry of the next full adder in the chain. Figure 4.9 shows the interconnection of four full-adder (FA) circuits to provide a four-bit binary ripple-carry adder. The augend bits of A and the addend bits of B are designated by subscript numbers from right to left, with subscript 0 denoting the least significant bit. The carries are connected in a chain through the full adders. The input carry to the adder is C0, and it ripples through the full adders to the output carry C4. The S outputs generate the required sum bits. An n-bit adder requires n full adders, with each output carry connected to the input carry of the next-higher-order full adder.
To demonstrate with a specific example, consider the two binary numbers A = 1011 and B = 0011. Their sum S = 1110 is formed with the four-bit adder as follows:
The bits are added with full adders, starting from the least significant position (subscript 0), to form the sum bit and carry bit. The input carry C0 in the least significant position must be 0. The value of Ci+1 in a given significant position is the output carry of the full adder. This value is transferred into the input carry of the full adder that adds the bits one higher significant position to the left. The sum bits are thus generated starting from the rightmost position and are available as soon as the corresponding previous carry bit is generated. All the carries must be generated for the correct sum bits to appear at the outputs.
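The rippling described above can be sketched as a chain of full adders; the example A = 1011, B = 0011 reproduces S = 1110:

```python
# Four-bit ripple-carry adder built from cascaded full adders.
def full_adder(x, y, z):
    """One stage: S = x xor y xor z, C = xy + xz + yz."""
    s = x ^ y ^ z
    c = (x & y) | (x & z) | (y & z)
    return s, c

def ripple_add(a_bits, b_bits, c0=0):
    """a_bits and b_bits are lists with index 0 = least significant bit."""
    carry, sums = c0, []
    for ai, bi in zip(a_bits, b_bits):
        s, carry = full_adder(ai, bi, carry)  # carry ripples to the next stage
        sums.append(s)
    return sums, carry

# A = 1011, B = 0011 (least significant bit first in the lists below)
s, c4 = ripple_add([1, 1, 0, 1], [1, 1, 0, 0])
print(s[::-1], c4)  # [1, 1, 1, 0] 0  -> S = 1110, output carry 0
```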
The four-bit adder is a typical example of a standard component. It can be used in many applications involving arithmetic operations. Observe that the design of this circuit by the classical method would require a truth table with 2^9 = 512 entries, since there are nine inputs to the circuit. By using an iterative method of cascading a standard function, it is possible to obtain a simple and straightforward implementation.
Carry Propagation
The addition of two binary numbers in parallel implies that all the bits of the augend and addend are available for computation at the same time. As in any combinational circuit, the signal must propagate through the gates before the correct output sum is available at the output terminals. The total propagation time is equal to the propagation delay of a typical gate times the number of gate levels in the circuit. The longest propagation delay time in an adder is the time it takes the carry to propagate through the full adders. Since each bit of the sum output depends on the value of the input carry, the value of Si at any given stage in the adder will be at its steady-state final value only after the input carry to that stage has been propagated. In this regard, consider output S3 in Fig. 4.9. Inputs A3 and B3 are available as soon as input signals are applied to the adder. However, input carry C3 does not settle to its final value until C2 is available from the previous stage.
Similarly, C2 has to wait for C1, and so on down to C0. Thus, only after the carry propagates and ripples through all stages will the last output S3 and carry C4 settle to their final correct values. The number of gate levels for the carry propagation can be found from the circuit of the full adder. The circuit is redrawn with different labels in Fig. 4.10 for convenience. The input and output variables use the subscript i to denote a typical stage of the adder. The signals at Pi and Gi settle to their steady-state values after they propagate through their respective gates. These two signals are common to all full adders and depend only on the input augend and addend bits.
The signal from the input carry Ci to the output carry Ci+1 propagates through an AND gate and an OR gate, which contribute two gate levels. If there are four full adders in the adder, the output carry C4 would have 2 × 4 = 8 gate levels from C0 to C4. For an n-bit adder, there are 2n gate levels for the carry to propagate from input to output.
The carry propagation time is an important attribute of the adder because it limits the speed
with which two numbers are added. Although the adder (or, for that matter, any combinational circuit) will always have some value at its output terminals, the outputs will not be correct unless the signals are given enough time to propagate through the gates connected from the inputs to the outputs. Since all other arithmetic operations are implemented by successive additions, the time consumed during the addition process is critical.
An obvious solution for reducing the carry propagation delay time is to employ faster gates with reduced delays. However, physical circuits have a limit to their capability. Another solution is to increase the complexity of the equipment in such a way that the carry delay time is reduced. There are several techniques for reducing the carry propagation time in a parallel adder. The most widely used technique employs the principle of carry lookahead logic. Consider the circuit of the full adder shown in Fig. 4.10. If we define two new binary variables
Pi = Ai ⊕ Bi
Gi = AiBi
then Gi is called a carry generate, and it produces a carry of 1 when both Ai and Bi are 1, regardless of the input carry Ci. Pi is called a carry propagate, because it determines whether a carry into stage i will propagate into stage i + 1 (i.e., whether an assertion of Ci will propagate to an assertion of Ci+1). We now write the Boolean functions for the carry outputs of each stage and substitute the value of each Ci from the previous equations:
Si = Pi ⊕ Ci
Ci+1 = Gi + PiCi
C1 = G0 + P0C0
C2 = G1 + P1C1 = G1 + P1G0 + P1P0C0
C3 = G2 + P2C2 = G2 + P2G1 + P2P1G0 + P2P1P0C0
Since the Boolean function for each output carry is expressed in sum-of-products form, each function can be implemented with one level of AND gates followed by an OR gate (or by two-level NAND). The three Boolean functions for C1, C2, and C3 are implemented in the carry-lookahead generator shown in Fig. 4.11. Note that this circuit can add in less time because C3 does not have to wait for C2 and C1 to propagate; in fact, C3 is propagated at the same time as C1 and C2. This gain in speed of operation is achieved at the expense of additional complexity (hardware).
The construction of a four-bit adder with a carry-lookahead scheme is shown in Fig. 4.12. Each sum output requires two exclusive-OR gates. The output of the first exclusive-OR gate generates the Pi variable, and the AND gate generates the Gi variable. The carries are propagated through the carry-lookahead generator (similar to that in Fig. 4.11) and applied as inputs to the second exclusive-OR gate. All output carries are generated after a delay through two levels of gates. Thus, outputs S1 through S3 have equal propagation delay times. The two-level circuit for the output carry C4 is not shown. This circuit can easily be derived by the equation-substitution method.
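The lookahead forms can be checked against the ripple behaviour: computing C1, C2, C3 directly from the Gi and Pi terms must give the same carries as rippling stage by stage. A sketch:

```python
# Compare carry-lookahead carries with ripple carries for a 3-stage slice.
from itertools import product

for a0, a1, a2, b0, b1, b2, c0 in product([0, 1], repeat=7):
    G = [a0 & b0, a1 & b1, a2 & b2]  # carry generate: Gi = Ai*Bi
    P = [a0 ^ b0, a1 ^ b1, a2 ^ b2]  # carry propagate: Pi = Ai xor Bi
    # Two-level lookahead forms:
    c1 = G[0] | (P[0] & c0)
    c2 = G[1] | (P[1] & G[0]) | (P[1] & P[0] & c0)
    c3 = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & c0)
    # Ripple reference: Ci+1 = Gi + Pi*Ci
    r1 = G[0] | (P[0] & c0)
    r2 = G[1] | (P[1] & r1)
    r3 = G[2] | (P[2] & r2)
    assert (c1, c2, c3) == (r1, r2, r3)

print("lookahead carries match ripple carries on all inputs")
```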
DECODERS
The input variables represent a binary number, and the outputs represent the eight digits of a number in the octal number system. However, a three-to-eight-line decoder can be used for decoding any three-bit code to provide eight outputs, one for each element of the code. The operation of the decoder may be clarified by the truth table listed in Table 4.6. For each possible input combination, there are seven outputs that are equal to 0 and only one that is equal to 1. The output whose value is equal to 1 represents the minterm equivalent of the binary number currently available on the input lines. Some decoders are constructed with NAND gates. Since a NAND gate produces the AND operation with an inverted output, it becomes more economical to generate the decoder minterms in their complemented form. Furthermore, decoders include one or more enable inputs to control the circuit operation. A two-to-four-line decoder with an enable input constructed with NAND gates is shown in Fig. 4.19. The circuit operates with complemented outputs and a complemented enable input.
The decoder is enabled when E is equal to 0 (i.e., active-low enable). As indicated by the truth table, only one output can be equal to 0 at any given time; all other outputs are equal to 1. The output whose value is equal to 0 represents the minterm selected by inputs A and B. The circuit is disabled when E is equal to 1, regardless of the values of the other two inputs. When the circuit is disabled, none of the outputs are equal to 0 and none of the minterms are selected.
In general, a decoder may operate with complemented or uncomplemented outputs. The enable input may be activated with a 0 or with a 1 signal. Some decoders have two or more enable inputs that must satisfy a given logic condition in order to enable the circuit.
A decoder with enable input can function as a demultiplexer, a circuit that receives information from a
single line and directs it to one of 2n possible output lines. The selection of a specific output is
controlled by the bit combination of n selection lines. The decoder of Fig. 4.19 can function as a one-
to-four-line demultiplexer when E is taken as a data input line and A and B are taken as the selection
inputs. The single input variable E has a path to all four outputs, but the input information is directed
to only one of the output lines, as specified by the binary combination of the two selection lines A and
B. This feature can be verified from the truth table of the circuit. For example, if the selection lines AB
= 10, output D2 will be the same as the input value E, while all other outputs are maintained at 1. Because
decoder and demultiplexer operations are obtained from the same circuit, a decoder with an enable input
is referred to as a decoder-demultiplexer. Decoders with enable inputs can be connected together to form
a larger decoder circuit. Figure 4.20 shows two 3-to-8-line decoders with enable inputs connected to
form a 4-to-16-line decoder. When w = 0, the top decoder is enabled and the other is disabled. The
bottom decoder outputs are all 0's, and the top eight outputs generate minterms 0000 to 0111. When w
= 1, the enable conditions are reversed: the bottom decoder outputs generate minterms 1000 to 1111,
while the outputs of the top decoder are all 0's. This example demonstrates the usefulness of enable
inputs in decoders and other combinational logic components. In general, enable inputs are a convenient
feature for interconnecting two or more standard components for the purpose of combining them into
a similar function with more inputs and outputs.
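Under the simplifying assumption of active-high outputs (the NAND-based decoder of Fig. 4.19 is active-low), the construction of Fig. 4.20 can be sketched in Python:

```python
def decoder_3to8(a2, a1, a0, enable=1):
    """3-to-8-line decoder with active-high outputs; all outputs 0 when disabled."""
    if not enable:
        return [0] * 8
    index = a2 * 4 + a1 * 2 + a0
    return [1 if i == index else 0 for i in range(8)]

def decoder_4to16(w, a2, a1, a0):
    """4-to-16-line decoder built from two 3-to-8 decoders; w selects the half."""
    top = decoder_3to8(a2, a1, a0, enable=(w == 0))     # minterms 0-7
    bottom = decoder_3to8(a2, a1, a0, enable=(w == 1))  # minterms 8-15
    return top + bottom
```

Taking E as a data input instead of a constant enable turns the same decoder into a demultiplexer: the data value appears only on the output selected by the address bits.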
Combinational logic Implementation
A decoder provides the 2n minterms of n input variables. Each asserted output of the decoder is
associated with a unique pattern of input bits. Since any Boolean function can be expressed in
sum-of-minterms form, a decoder that generates the minterms of the function, together with an
external OR gate that forms their logical sum, provides a hardware implementation of the
function. In this way, any combinational circuit with n inputs and m outputs can be implemented
with an n-to-2n-line decoder and m OR gates. The procedure for implementing a combinational
circuit by means of a decoder and OR gates requires that the Boolean function for the circuit be
expressed as a sum of minterms. A decoder is then chosen that generates all the minterms of the
input variables. The inputs to each OR gate are selected from the decoder outputs according to the
list of minterms of each function. This procedure will be illustrated by an example that implements a
full-adder circuit. From the truth table of the full adder (see Table 4.4), we obtain the functions
for the combinational circuit in sum-of-minterms form:
S(x, y, z) = ∑(1, 2, 4, 7)
C(x, y, z) = ∑(3, 5, 6, 7)
Since there are three inputs and a total of eight minterms, we need
a three-to-eight-line decoder. The implementation is shown in Fig. 4.21. The decoder generates
the eight minterms for x, y, and z. The OR gate for output S forms the logical sum of minterms 1,
2, 4, and 7. The OR gate for output C forms the logical sum of minterms 3, 5, 6, and 7. A function
with a long list of minterms requires an OR gate with a large number of inputs. A function
having a list of k minterms can be expressed in its complemented form F' with 2n - k minterms.
If the number of minterms in the function is greater than 2n/2, then F' can be expressed with
fewer minterms. In such a case, it is advantageous to use a NOR gate to sum the minterms of F'.
The output of the NOR gate complements this sum and generates the normal output F. If NAND
gates are used for the decoder, as in Fig. 4.19, then the external gates must be NAND gates
instead of OR gates. This is because a two-level NAND gate circuit implements a sum-of-minterms
function equivalent to a two-level AND-OR circuit.
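A minimal sketch of this decoder-plus-OR-gate construction, modeling the gates in Python rather than hardware:

```python
def full_adder_via_decoder(x, y, z):
    """Full adder built from a 3-to-8 decoder and two OR gates."""
    # The decoder asserts exactly one of the eight minterm lines.
    minterms = [1 if (x, y, z) == ((i >> 2) & 1, (i >> 1) & 1, i & 1) else 0
                for i in range(8)]
    s = minterms[1] | minterms[2] | minterms[4] | minterms[7]  # S = sum(1,2,4,7)
    c = minterms[3] | minterms[5] | minterms[6] | minterms[7]  # C = sum(3,5,6,7)
    return s, c
```

Enumerating all eight input combinations confirms that S and C agree with the arithmetic sum x + y + z.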
4.10 ENCODERS
An encoder is a digital circuit that performs the inverse operation of a decoder. An encoder has 2n
(or fewer) input lines and n output lines. The output lines, as an aggregate, generate the binary
code corresponding to the input value. An example of an encoder is the octal-to-binary encoder
whose truth table is given in Table 4.7. It has eight inputs (one for each of the octal digits) and
three outputs that generate the corresponding binary number. It is assumed that only one input has
a value of 1 at any given time. The encoder can be implemented with OR gates whose inputs
are determined directly from the truth table. Output z is equal to 1 when the input octal digit is 1,
3, 5, or 7. Output y is 1 for octal digits 2, 3, 6, or 7, and output x is 1 for digits 4, 5, 6, or 7. These
conditions can be expressed by the following Boolean output functions:
z = D1 + D3 + D5 + D7
y = D2 + D3 + D6 + D7
x = D4 + D5 + D6 + D7
The encoder can be implemented with three OR gates. The encoder defined in Table 4.7 has the
limitation that only one input can be active at any given time. If two inputs are active
simultaneously, the output produces an undefined combination.
For example, if D3 and D6 are 1 simultaneously, the output of the encoder will be 111 because all
three outputs are equal to 1. The output 111 does not represent either binary 3 or binary 6. To
resolve this ambiguity, encoder circuits must establish an input priority to ensure that only one
input is encoded. If we establish a higher priority for inputs with higher subscript numbers, and if
both D3 and D6 are 1 at the same time, the output will be 110 because D6 has higher priority than
D3. Another ambiguity in the octal-to-binary encoder is that an output with all 0's is generated
when all the inputs are 0; but this output is the same as when D0 is equal to 1. The discrepancy
can be resolved by providing one more output to indicate whether at least one input is equal to 1.
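A sketch of the three OR-gate equations, assuming, as the text does, that exactly one input line is 1:

```python
def octal_to_binary_encoder(d):
    """d: list of eight input lines D0..D7; exactly one is assumed to be 1."""
    z = d[1] | d[3] | d[5] | d[7]
    y = d[2] | d[3] | d[6] | d[7]
    x = d[4] | d[5] | d[6] | d[7]
    return x, y, z
```

Feeding two active inputs (say D3 and D6) produces the undefined code 111 described above, which is why a priority scheme is needed.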
Priority Encoder
A priority encoder is an encoder circuit that includes the priority function. The operation of the priority
encoder is such that if two or more inputs are equal to 1 at the same time, the input having the highest
priority will take precedence. The truth table of a four-input priority encoder is given in Table 4.8. In
addition to the two outputs x and y, the circuit has a third output designated by V; this is a valid-bit
indicator that is set to 1 when one or more inputs are equal to 1. If all inputs are 0, there is no valid
input and V is equal to 0. The other two outputs are not inspected when V equals 0 and are specified as
don't-care conditions. Note that whereas X's in the output columns represent don't-care conditions, the
X's in the input columns are useful for representing a truth table in condensed form. Instead of listing all
16 minterms of the four variables, the truth table uses an X to represent either 1 or 0. For example, X100
represents the two minterms 0100 and 1100. According to Table 4.8, the higher the subscript number,
the higher the priority of the input. Input D3 has the highest priority; regardless of the values of the
other inputs, when D3 is 1 the output for xy is 11.
For example, the fourth row in the table, with inputs XX10, represents the four minterms 0010, 0110,
1010, and 1110. The simplified Boolean expressions for the priority encoder are obtained from the
maps. The condition for output V is an OR function of all the input variables. The priority encoder is
implemented in Fig. 4.23 according to the following Boolean
functions:
x = D2 + D3
y = D3 + D1D2'
V = D0 + D1 + D2 + D3
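The expressions x = D2 + D3, y = D3 + D1D2', and V = D0 + D1 + D2 + D3 are the standard simplified forms for this encoder (the notes cite Fig. 4.23 without reproducing them); taking them as given, the behavior can be checked in Python:

```python
def priority_encoder_4(d0, d1, d2, d3):
    """4-input priority encoder; D3 has the highest priority."""
    x = d2 | d3
    y = d3 | (d1 & (1 - d2))
    v = d0 | d1 | d2 | d3   # valid bit: at least one input is active
    return x, y, v
```

With several inputs active, xy encodes the highest-numbered one; with none active, V = 0 and xy are don't-cares (returned as 0 here).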
Multiplexers:
A multiplexer is a combinational circuit that selects binary information from one of many input lines
and directs it to a single output line. The selection of a particular input line is controlled by a set of
selection lines. Normally, there are 2n input lines and n selection lines whose bit combinations
determine which input is selected.
A two-to-one-line multiplexer connects one of two 1-bit sources to a common destination, as shown in
Fig. 4.24. The circuit has two data input lines, one output line, and one selection line S. When S = 0, the
upper AND gate is enabled and I0 has a path to the output. When S = 1, the lower AND gate is enabled
and I1 has a path to the output. The multiplexer acts like
an electronic switch that selects one of two sources. The block diagram of a multiplexer is sometimes
depicted by a wedge-shaped symbol, as shown in Fig. 4.24(b). It suggests visually how a selected one
of multiple data sources is directed into a single destination. The multiplexer is often labeled "MUX" in
block diagrams. A four-to-one-line multiplexer is shown in Fig. 4.25. Each of the four inputs I0 through
I3 is applied to one input of an AND gate. Selection lines S1 and S0 are decoded to select a
particular AND gate. The outputs of the AND gates are applied to a single OR gate that provides the
one-line output. The function table lists the input that is passed to the output for each combination of
the binary selection values. To demonstrate the operation of the circuit, consider the case when S1S0 =
10. The AND gate associated with input I2 has two of its inputs equal to 1 and the third input
connected to I2. The other three AND gates have at least one input equal to 0, which makes their
outputs equal to 0. The output of the OR gate is now equal to the value of I2, providing a path from the
selected input to the output. A multiplexer is also called a data selector, since it selects one of many
inputs and steers the binary information to the output line. The AND gates and inverters in the
multiplexer resemble a decoder circuit, and indeed they decode the selection input lines. In general, a
2n-to-1-line multiplexer is constructed from an n-to-2n-line decoder by adding 2n input lines to it, one to
each AND gate. The outputs of the AND gates are applied to a single OR gate. The size of a
multiplexer is specified by the number 2n of its data input lines and the single output line; the n
selection lines are implied from the 2n data lines. As in decoders, multiplexers may have an enable
input to control the operation of the unit. When the enable input is in the inactive mode, the outputs are
disabled, and when it is in the active mode, the circuit functions as a normal multiplexer. Multiplexer
circuits can be combined with common selection inputs to provide multiple-bit selection logic.
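A sketch of the AND-OR structure just described, with each AND gate enabled by one decoded combination of the selection lines:

```python
def mux_4to1(i0, i1, i2, i3, s1, s0):
    """4-to-1-line multiplexer: decoded select lines enable one AND gate,
    and a single OR gate merges the four gate outputs."""
    return ((i0 & (1 - s1) & (1 - s0)) |
            (i1 & (1 - s1) & s0) |
            (i2 & s1 & (1 - s0)) |
            (i3 & s1 & s0))
```

For S1S0 = 10, only the AND gate fed by I2 can be 1, so the output tracks I2 exactly as the function table states.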
The circuit has four multiplexers, each capable of selecting one of two input lines. Output Y0 can be
selected to come from either input A0 or input B0. Similarly, output Y1 may have the value of A1 or B1,
and so on. Input selection line S selects one of the lines in each of the four multiplexers. The enable
input E must be active (i.e., asserted) for normal operation. Although the circuit contains four two-to-one-
line multiplexers, we are more likely to view it as a circuit that selects one of two 4-bit sets of data
lines. As shown in the function table, the unit is enabled when E = 0. Then, if S = 0, the four A inputs
have a path to the four outputs. If, by contrast, S = 1, the four B inputs are applied to the outputs. The
outputs have all 0's when E = 1, regardless of the value of S.
The types of gates most often found in integrated circuits are NAND and NOR gates. For this reason,
NAND and NOR logic implementations are the most important from a practical point of view. Some
(but not all) NAND or NOR gates allow the possibility of a wire connection between the outputs of
two gates to provide a specific logic function. This type of logic is called wired logic. For example,
open-collector TTL NAND gates, when tied together, perform wired-AND logic. (The open-collector
TTL gate is shown in Chapter 10, Fig. 10.1.) The wired-AND logic performed with two NAND gates is
depicted in Fig. 3.28(a). The AND gate is drawn with the lines going through the center of the gate to
distinguish it from a conventional gate. The wired-AND gate is not a physical gate, but only a symbol to
designate the function obtained from the indicated wired connection. The logic function implemented
by the circuit of Fig. 3.28(a) is
F = (AB)'(CD)' = (AB + CD)'
A wired-logic gate does not produce a physical second-level gate, since it is just a wire connection.
Nevertheless, for discussion purposes, we will consider the circuits of Fig. 3.28 as two-level
implementations. The first level consists of NAND (or NOR) gates and the second level has a single
AND (or OR) gate. The wired connection in the graphic symbol will be omitted in subsequent
discussions.
Nondegenerate Forms
It will be instructive from a theoretical point of view to find out how many two-level combinations
of gates are possible. We consider four types of gates: AND, OR, NAND, and NOR. If we assign one
type of gate for the first level and one type for the second level, we find that there are 16 possible
combinations of two-level forms. (The same type of gate can be in the first and second levels, as in
a NAND-NAND implementation.) Eight of these combinations are said to be degenerate forms
because they degenerate to a single operation. This can be seen from a circuit with AND gates in the
first level and an AND gate in the second level: the output of the circuit is merely the AND function
of all input variables. The remaining eight nondegenerate forms produce an implementation in sum-
of-products form or product-of-sums form. The eight nondegenerate forms are as follows:
AND-OR      OR-AND
NAND-NAND   NOR-NOR
NOR-OR      NAND-AND
OR-NAND     AND-NOR
The first gate listed in each of the forms constitutes the first level in the implementation. The second
gate listed is a single gate placed in the second level. Note that any two forms listed on the same
line are duals of each other. The AND-OR and OR-AND forms are the basic two-level forms
discussed in Section 3.4. The NAND-NAND and NOR-NOR forms were presented in
Section 3.6. The remaining four forms are investigated in this section.
AND-OR-INVERT Implementation
The two forms NAND-AND and AND-NOR are equivalent and can be treated together. Both
perform the AND-OR-INVERT function, as shown in Fig. 3.29. The AND-NOR form resembles the
AND-OR form, but with an inversion done by the bubble in the output of the NOR gate. It
implements the function
F = (AB + CD + E)'
By using the alternative graphic symbol for the NOR gate, we obtain the diagram of
Fig. 3.29(b). Note that the single variable E is not complemented, because the only change
made is in the graphic symbol of the NOR gate. Now we move the bubble from the input terminal of
the second-level gate to the output terminals of the first-level gates. An inverter is needed for the single
variable in order to compensate for the bubble. Alternatively, the inverter can be removed, provided
that input E is complemented. The circuit of Fig. 3.29(c) is a NAND-AND form and was shown in Fig.
3.28 to implement the AND-OR-INVERT function. An AND-OR implementation requires an
expression in sum-of-products form. The AND-OR-INVERT implementation is similar, except for the
inversion. Therefore, if the complement of the function is simplified into sum-of-products form (by
combining the 0's in the map), it will be possible to implement F' with the AND-OR part of the
function. When F' passes through the always-present output inversion (the INVERT part), it will
generate the output F of the function. An example of the AND-OR-INVERT implementation will be
shown subsequently.
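The claimed equivalence of the AND-NOR and NAND-AND forms for F = (AB + CD + E)' can be checked exhaustively:

```python
from itertools import product

def f_and_nor(a, b, c, d, e):
    """AND-NOR form: F = (AB + CD + E)'."""
    return 1 - ((a & b) | (c & d) | e)

def f_nand_and(a, b, c, d, e):
    """NAND-AND form: first-level NAND gates, a single second-level AND gate,
    with the lone variable E complemented by an inverter."""
    return (1 - (a & b)) & (1 - (c & d)) & (1 - e)

# Both circuits compute the same function on all 32 input combinations.
assert all(f_and_nor(*v) == f_nand_and(*v) for v in product((0, 1), repeat=5))
```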
OR-AND-INVERT Implementation
The OR-NAND and NOR-OR forms perform the OR-AND-INVERT function, as shown in Fig.
3.30. The OR-NAND form resembles the OR-AND form, except for the inversion done by the
bubble in the NAND gate. It implements the function
F = [(A + B)(C + D)E]'
By using the alternative graphic symbol for the NAND gate, we obtain the diagram of
Fig. 3.30(b). The circuit in (c) is obtained by moving the small circles from the inputs of the second-
level gate to the outputs of the first-level gates. The circuit of Fig. 3.30(c) is a NOR-OR form and was
shown in Fig. 3.28 to implement the OR-AND-INVERT function. The OR-AND-INVERT
implementation requires an expression in product-of-sums form. If the complement of the function is
simplified into that form, we can implement F' with the OR-AND part of the function. When F' passes
through the INVERT part, we obtain the complement of F', or F, in the output.
HAZARDS
In designing asynchronous sequential circuits, care must be taken to conform with certain restrictions
and precautions to ensure that the circuits operate properly. The circuit must be operated in
fundamental mode with only one input changing at any time and must be free of critical races. In
addition, there is one more phenomenon, called a hazard, that may cause the circuit to malfunction.
Hazards are unwanted switching transients that may appear at the output of a circuit because different
paths exhibit different propagation delays. Hazards occur in combinational circuits, where they may
cause a temporary false output value. When they occur in asynchronous sequential circuits, hazards
may result in a transition to a wrong stable state. It is therefore necessary to check for possible hazards
and determine whether they can cause improper operations. If so, then steps must be taken to
eliminate their effect.
Minterm 111 is covered by the product term implemented in gate 1 of Fig. 9.33, and minterm 101 is
covered by the product term implemented in gate 2. Whenever the circuit must move from one product
term to another, there is a possibility of a momentary interval when neither term is equal to 1, giving
rise to an undesirable 0 output. The remedy for eliminating a hazard is to enclose the two minterms in
question with another product term that overlaps both groupings. This situation is shown in the map of
Fig. 9.35(b), where the two minterms that cause the hazard are combined into one product term. The
hazard-free circuit obtained by such a configuration is shown in Fig. 9.36. The extra gate in the circuit
generates the product term x1x3. In general, hazards in combinational circuits can be removed by
covering any two minterms that may produce a hazard with a product term common to both. The
removal of hazards requires the addition of redundant gates to the circuit.
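Assuming, as in the usual textbook example, that the circuit of Fig. 9.33 implements Y = x1x2' + x2x3 (consistent with minterms 111 and 101 and with the added term x1x3), the redundant consensus gate leaves the steady-state function unchanged:

```python
from itertools import product

def y_two_terms(x1, x2, x3):
    return (x1 & (1 - x2)) | (x2 & x3)          # Y = x1x2' + x2x3

def y_hazard_free(x1, x2, x3):
    # Same cover plus the redundant consensus term x1x3, which bridges
    # the two groupings and keeps the output at 1 during the transition.
    return (x1 & (1 - x2)) | (x2 & x3) | (x1 & x3)

# The extra gate never changes the static (steady-state) function:
assert all(y_two_terms(*v) == y_hazard_free(*v) for v in product((0, 1), repeat=3))
```

The hazard itself is a timing phenomenon (a momentary glitch while x2 changes), so only the structural redundancy, not the transient, is visible in this static model.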
Hazards In Sequential Circuits
In normal combinational-circuit design associated with synchronous sequential circuits, hazards are of
no concern, since momentary erroneous signals are not generally troublesome. However, if a
momentary incorrect signal is fed back in an asynchronous sequential circuit, it may cause the circuit to
go to the wrong stable state. This situation is illustrated in Fig. 9.37. If the circuit is in total stable
state yx1x2 = 111 and input x2 changes from 1 to 0, the next total stable state should be 110. However,
because of the hazard, output Y may go to 0 momentarily. If this false signal feeds back into gate 2
before the output of the inverter goes to 1, the output of gate 2 will remain at 0 and the circuit will
switch to the incorrect total stable state 010. This malfunction can be eliminated by adding an extra
gate, as is done in Fig. 9.36.
This implementation is shown in Fig. 9.38(a). S is generated with two NAND gates and one AND gate.
The Boolean function for output Q is
Q = (Q'S)' = [Q'(AB)'(CD)']'
This function is generated in Fig. 9.38(b) with two levels of NAND gates. If output Q is equal to 1, then
Q' is equal to 0. If two of the three inputs go momentarily to 1, the NAND gate associated with output Q
will remain at 1 because Q' is maintained at 0. Figure 9.38(b) shows a typical circuit that can be used
to construct asynchronous sequential circuits. The two NAND gates forming the latch normally have
two inputs. However, if the S or R functions contain two or more product terms when expressed as a
sum of products, then the corresponding NAND gate of the SR latch will have three or more inputs.
Thus, the two terms in the original sum-of-products expression for S are AB and CD, and each is
implemented with a separate NAND gate.
Thus far, we have considered what are known as static and dynamic hazards. Another type of hazard
that may occur in asynchronous sequential circuits is called an essential hazard. This type of hazard is
caused by unequal delays along two or more paths that originate from the same input. An excessive
delay through an inverter circuit in comparison to the delay associated with the feedback path may
cause such a hazard. Essential hazards cannot be corrected by adding redundant gates as in static
hazards. The problem that they impose can be corrected by adjusting the amount of delay in the
affected path. To avoid essential hazards, each feedback loop must be handled with individual care to
ensure that the delay in the feedback path is long enough compared with delays of other signals that
originate from the input terminals. This problem tends to be specialized, as it depends on the particular
circuit used and the size of the delays that are encountered in its various paths.
Relay operation:
A relay is an electromechanical device which contains a coil and one or more contacts. When the coil
is excited by applying the rated voltage, the coil becomes an electromagnet and changes the
position of the contact by attracting it. If the excitation is removed, the contact goes back to its normal
position due to a spring action. The different types of contacts are NO (normally open), NC (normally
closed), and transfer contact (changeover). The symbols are as given below.
Symmetric Networks:
Definitions: A switching function of n variables f(x1, x2, ..., xn) is called symmetric (or totally
symmetric) if and only if it is invariant under any permutation of its variables.
It is called partially symmetric in the variables xi, xj, where {xi, xj} is a subset of {x1, x2, ..., xn}, if
and only if the interchange of the variables xi and xj leaves the function unchanged.
Example: f(a,b,c) = a'b'c + ab'c' + a'bc' is symmetric, because any interchange of variables leaves
the function unchanged, whereas
f(a,b,c) = a'b'c + ab'c' is partially symmetric in the variables a and c.
The variables in which a function is symmetric are called the variables of symmetry.
A symmetric function is denoted S a1,a2,...,ak(x1, x2, ..., xn), where S designates the property of
symmetry, the superscripts a1, ..., ak designate the a-numbers, and (x1, x2, ..., xn) designates the
variables of symmetry.
Example-1: The function f(a,b,c) = a'b'c + a'bc' + ab'c' assumes the value '1' when and only when
exactly one of its three variables is '1'.
This function is denoted S1(a,b,c); similarly, the symmetric function S 1,3(a,b,c) is
f(a,b,c) = abc + a'b'c + ab'c' + a'bc'
Definition-2: Let f1(a,b,c,d) = S 0,2,4 (a,b,c,d) and f2(a,b,c,d) = S 3,4 (a,b,c,d)
then f3(a,b,c,d) = f1+f2 = S 0,2,3,4 (a,b,c,d) and f4(a,b,c,d) = f1*f2 = S 4 (a,b,c,d).
The complement of a symmetric function is also a symmetric function, whose a-numbers are the
numbers of the set {0, 1, ..., n} not included among the a-numbers of the original function.
For example, S' 0,2,4(a,b,c,d) = S 1,3(a,b,c,d), the full set for n = 4 being {0,1,2,3,4}.
The mirror-image function of any symmetric function is itself symmetric, with the variables of
symmetry negated.
Identification of Symmetric Functions:
The procedure for identifying symmetric function is as follows:
1. Obtain column sums
a. if more than two different sums occur (case 1), the function is not symmetric.
b. If two different sums occur (case 2), compare the total of these two sums with the number of rows in
the table. If they are not equal (case 2a), the function is not symmetric. If they are equal (case 2b),
complement the columns corresponding to either one of the column sums (preferably the one of fewer
occurrences) and continue to step 2.
c. If all column sums are identical (case 3), compare their sum with one-half the number of rows in the
table. If they are not equal (case 3a), continue to step 2. If they are equal, continue to step 3.
2. Obtain row sums and check each for sufficient occurrence; that is, if a is a row sum and n is the
number of variables, then that row sum must occur n!/((n-a)!·a!) times.
a. If any row sum does not occur the required number of times, the function is not symmetric.
b. If all row sums occur the required number of times, the function is symmetric, its a-numbers are
given by the different row sums in column a. and its variables of symmetry are given at the top of the
table.
3. Obtain row sums and check each for sufficient occurrence.
a. If all row sums occur the required number of times, the function is symmetric.
b. If any row sum does not occur the required number of times, expand the function about any of its
variables; that is, find functions g and h such that f = x'g + xh. Write g and h in tabular form and find
their column sums. Determine all variable complementations required for the identifications of
symmetries in g and h. Test f under the same variable complementations. If all row sums occur the
required number of times, f is symmetric; if any row sum does not occur the required number of times,
f is not symmetric.
1. f(x,y,z) = ∑ (1,2,4,7).
x y z Row sum
0 0 1 1
0 1 0 1
1 0 0 1
1 1 1 3
Column sums: 2 2 2
All column sums are equal (case 3). Their total is 6, which is not equal to one-half the number of
rows (case 3a), so go to step 2 and check the row sums for sufficient occurrence, i.e., n!/((n-a)!·a!):
3!/(2!·1!) = 3 for row sum 1, and 3!/(0!·3!) = 1 for row sum 3. Both row sums occur the required
number of times, so the function is symmetric and can be expressed as S 1,3(x,y,z).
2. f(w,x,y,z) = ∑(0,1,3,5,8,10,11,12,13,15).
In the truth table, two different column sums occur: 6 (columns w and z) and 4 (columns x and y).
Their total, 6 + 4 = 10, equals the number of rows (case 2b), so we complement the columns
corresponding to one of the two sums. If columns w and z are complemented, every row sum
becomes 1 or 2, each occurring the required number of times, and the function can be expressed as
f(w,x,y,z) = S 1,2(w',x,y,z').
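The identification idea can also be checked by brute force: a function is totally symmetric iff it is invariant under every permutation of its variables, and its a-numbers are the input weights on which it is 1. A sketch using Example 1:

```python
from itertools import permutations, product

def is_totally_symmetric(f, n):
    """True iff f is invariant under every permutation of its n variables."""
    return all(f(tuple(v[p[i]] for i in range(n))) == f(v)
               for v in product((0, 1), repeat=n)
               for p in permutations(range(n)))

def a_numbers(f, n):
    """For a symmetric f, the a-numbers: counts of 1-inputs for which f = 1."""
    return sorted({sum(v) for v in product((0, 1), repeat=n) if f(v)})

# f(x,y,z) = sum(1, 2, 4, 7): true on 001, 010, 100, 111
f = lambda v: 1 if v in {(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)} else 0
```

This confirms the tabular result: the function is symmetric with a-numbers {1, 3}.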
Introduction:
Threshold elements are another type of switching element. Logic circuits constructed with threshold
elements usually consist of fewer elements and simpler interconnections than circuits built from
conventional gates. Conventional gate logic is specified by Boolean algebra, whereas the logic of
threshold gates is specified by arithmetic equations. All basic logic gates and universal gates can be
implemented using a threshold gate (the XOR gate cannot be implemented using a single threshold
gate). Also, some simpler Boolean functions can be implemented using a single threshold gate.
Threshold Elements:
A threshold element has n two-valued inputs x1, x2, ..., xn and a single two-valued output y. Its internal
parameters are a threshold T and weights w1, w2, ..., wn, where each weight wi is associated with a
particular input variable xi. The values of T and the wi may be any real, finite, positive or negative
numbers. The input-output relation of a threshold element is defined as follows:
y = 1 if and only if w1x1 + w2x2 + ... + wnxn ≥ T, and y = 0 otherwise,
where the sum and product operations are conventional arithmetic ones. The sum w1x1 + w2x2 + ... +
wnxn is called the weighted sum of the element.
Threshold element Symbol:
Example:
Write the input-output relation of the threshold gate given below and obtain the switching function for
the same.
Ans: The inputs are x1, x2 and x3 with weights -1, 2 and 1, respectively. The threshold value T is
½. The output y is calculated as in the following table.
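A sketch of a generic threshold element; with the example's weights (-1, 2, 1) and T = ½, enumerating all eight input combinations reproduces the table and shows that the gate realizes f(x1, x2, x3) = ∑(1, 2, 3, 6, 7), i.e., x2 + x1'x3:

```python
def threshold_element(weights, T):
    """Returns a gate computing y = 1 iff w1*x1 + ... + wn*xn >= T."""
    def gate(*x):
        return 1 if sum(w * xi for w, xi in zip(weights, x)) >= T else 0
    return gate

# The element from the example: weights (-1, 2, 1), threshold T = 1/2
g = threshold_element((-1, 2, 1), 0.5)
minterms = [i for i in range(8) if g((i >> 2) & 1, (i >> 1) & 1, i & 1)]
```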
Because of the wide range of weights and threshold values, a large class of switching functions can be
realized by a single threshold element. However, not every switching function can be realized by a
single threshold element.
The above two requirements are conflicting, and no threshold element can be realized for the above
function.
Hence, if a switching function f is given, it has to be checked first whether it is realizable with a single
threshold element and, if it is realizable, the appropriate weights and threshold value are to be
calculated. This is done by setting up 2n linear simultaneous inequalities from the truth table and solving
them. For the input combinations for which f = 1, the weighted sums must equal or exceed T, and for
f = 0, the weighted sums must be less than T. If a solution (not necessarily unique) to these
inequalities exists, it provides values for the weights and the threshold value. If no solution exists, then
f is not a threshold function.
Example: For the function f(x1, x2, x3) = ∑(0, 1, 3), check whether the function is realizable by a
single threshold element and, if so, find the weights.
Ans: The truth table for the above function is
As per combinations 2 and 4, T must be negative and w1 and w2 must also be negative. From
combinations 3 and 5, w2 has to be greater than w1. From combination 1, w3 is greater than or equal to T.
Hence, the relations are
where w3 may be positive. If the weights are restricted to integer values with the smallest magnitudes,
then w2 = -1, w1 = -2, T = -1/2, and w3 = 1.
With the above weights and threshold value, all the combinations are satisfied, and hence f is a
threshold function.
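The solution can be checked by enumerating the truth table with the weights derived above:

```python
def f_threshold(x1, x2, x3, w=(-2, -1, 1), T=-0.5):
    """Threshold gate with the weights found above: w1=-2, w2=-1, w3=1, T=-1/2."""
    return 1 if w[0] * x1 + w[1] * x2 + w[2] * x3 >= T else 0

# Collect the minterms on which the gate outputs 1.
true_minterms = [i for i in range(8)
                 if f_threshold((i >> 2) & 1, (i >> 1) & 1, i & 1)]
```

The gate outputs 1 exactly on minterms 0, 1, and 3, as required.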
A switching function that can be realized by a single threshold element is called a threshold function.
Limitations of Threshold logic: The limitation is its sensitivity to variations in the circuit parameters.
Therefore, the maximum number of inputs and the Threshold value T are to be restricted and care is to be
taken to increase the difference between the values of the weighted sums.
Elementary Properties
Property 1: For a given threshold function, if one of the inputs is complemented, the same function can
be realized by a single threshold element by negating the weight of the inverted input and subtracting
the value of that weight from the threshold value T.
Consider a function f(x1, x2, .., xj, .. , xn) which is realized by V1 = {w1, w2, .., wj, .., wn; T}and if xj input is
complemented, then, f(x1, x2, .., xj', .., xn) can be realized by V2 = {w1, w2, …, -wj, .., wn; T-wj}.
The above property gives various other conclusions like the following:
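Property 1 can be verified numerically on the element just derived (f = ∑(0, 1, 3) with V1 = {-2, -1, 1; -1/2}); complementing x3 yields V2 = {-2, -1, -1; -1/2 - 1} = {-2, -1, -1; -3/2}:

```python
from itertools import product

def gate(weights, T):
    return lambda x: 1 if sum(w * b for w, b in zip(weights, x)) >= T else 0

# f = sum(0, 1, 3), realized by V1 = {-2, -1, 1; -1/2}
f = gate((-2, -1, 1), -0.5)

# Property 1 applied to x3: V2 = {-2, -1, -1; -3/2} realizes f(x1, x2, x3')
f_x3_complemented = gate((-2, -1, -1), -1.5)

for v in product((0, 1), repeat=3):
    x1, x2, x3 = v
    assert f_x3_complemented(v) == f((x1, x2, 1 - x3))
```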
Unate function
A function f(x1, x2, ..., xn) is said to be positive in a variable xi if there exists a disjunctive or conjunctive
expression for the function in which xi appears only in uncomplemented form. A function f(x1, x2, ..., xn)
is said to be negative in xi if there exists a disjunctive or conjunctive expression for f in which xi appears
only in complemented form. If f is either positive or negative in xi, then it is said to be unate in xi: in
such an expression, xi does not appear in both its complemented and uncomplemented forms.
Ex 1: The function f = x1x2' + x2x3' is positive in x1 and negative in x3, but is not unate in x2.
If a function f(x1, x2, ..., xn) is unate in each of its variables, then it is called a unate function.
Ex 2: The function f = x1'x2 + x1x2x3' is unate because, by simplification, f = x1'x2 + x2x3', which
has no variable in both its complemented and uncomplemented forms.
Ex 3: The function f = x1. x2' + x1'.x2 is clearly not unate.
If f(x1, x2, ..., xn) is positive in xi, then it can be expressed as f = xi·g1 + h1, and if it is negative in xi,
as f = xi'·g2 + h2, where g1 and h1 (g2 and h2) are functions independent of xi.
Hence, the existence of two such functions, g1 and h1 (g2 and h2), is a necessary and sufficient condition for
f to be positive (negative) in xi.
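The cofactor test implied by this definition (f is positive in xi iff f with xi = 0 never exceeds f with xi = 1 at any point, and negative if the reverse holds) can be written directly; Ex 1 serves as the test case:

```python
from itertools import product

def is_unate_in(f, n, i):
    """f is unate in x_i iff one cofactor dominates the other everywhere:
    positive if f|x_i=0 <= f|x_i=1 at every point, negative if >=."""
    pos = neg = True
    for v in product((0, 1), repeat=n):
        lo = f(v[:i] + (0,) + v[i + 1:])
        hi = f(v[:i] + (1,) + v[i + 1:])
        pos = pos and lo <= hi
        neg = neg and hi <= lo
    return pos or neg

# Ex 1: f = x1x2' + x2x3' -- positive in x1, negative in x3, not unate in x2
f = lambda v: (v[0] & (1 - v[1])) | (v[1] & (1 - v[2]))
```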
Linear separability
A threshold function is linearly separable: in the n-cube representation, its true vertices can be separated
from its false vertices by the hyperplane defined by the linear equation w1x1 + w2x2 + ... + wnxn = T.
b. Convert the given function into another function which has all variables in non-complement form only.
ϕ = x1.x2.x3 + x2.x3.x4
c. Find all minimal true and maximal false vertices of ϕ.
There are two minimal true vertices, (1, 1, 1, 0) and (0, 1, 1, 1).
The maximal false vertices are (1, 1, 0, 1), (1, 0, 1, 1), and (0, 1, 1, 0).
d. Check whether the function ϕ is linearly separable and if it so, find an appropriate set of weights and
threshold, which is necessary to determine the coefficients of the separating hyper plane.
In the above example, p = 2 and q = 3, there are six inequalities and these are:
From the above, the following are the constraints that must be observed.
Letting w1 = w4 = 1 and w2 = w3 = 2, then T must be smaller than 5 but larger than 4. Selecting T=9/2, then
the weight-threshold vector for ϕ V = { 1, 2, 2, 1; 9/2}.
e. Convert this weight-threshold vector to one that corresponds to the original function f. Since ϕ was
obtained from f by complementing x3 and x4, Property 1 applied to those two inputs gives the
weight-threshold vector V = {1, 2, -2, -1; 9/2 - 2 - 1} = {1, 2, -2, -1; 3/2}.
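Steps (d) and (e) can be verified by exhaustive enumeration, taking the original f to be ϕ with x3 and x4 complemented (which is what the converted vector implies):

```python
from itertools import product

def threshold(weights, T, v):
    return 1 if sum(w * b for w, b in zip(weights, v)) >= T else 0

# phi = x1x2x3 + x2x3x4, with the vector found above: {1, 2, 2, 1; 9/2}
phi = lambda v: (v[0] & v[1] & v[2]) | (v[1] & v[2] & v[3])
assert all(threshold((1, 2, 2, 1), 4.5, v) == phi(v)
           for v in product((0, 1), repeat=4))

# Original f (phi with x3, x4 complemented) and the converted vector {1, 2, -2, -1; 3/2}
f = lambda v: phi((v[0], v[1], 1 - v[2], 1 - v[3]))
assert all(threshold((1, 2, -2, -1), 1.5, v) == f(v)
           for v in product((0, 1), repeat=4))
```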
If the 1-cells of the given function follow any of the above patterns, then that function can be realized using
a Threshold element. Otherwise, the 1-cells pattern is divided into two admissible patterns.
Ex: Let f(x1, x2, x3, x4) = ∑(2, 3, 6, 7, 10, 12, 14, 15)
The map for this function exhibiting two admissible patterns is
The threshold elements for realizing each of the admissible patterns are as below, as per the realization of
the Threshold function described above.
The weight of g in the second element is determined by computing the minimal weighted sum that can
occur in the second element when g has the value 1. Since f must have the value 1 whenever g does, this
minimal weighted sum must be larger than the threshold of the second element. In this case, the minimal
weighted sum is wg, and it occurs when x1 = x2 = 0 and x3 = x4 = 1. Clearly, wg must be larger than 5/2 and,
therefore, the value wg = 3 has been selected.
State table:
In Sequential circuits, the effect of all previous inputs on the outputs is represented by a state of the
circuit. Thus, the output of the circuit at any time depends upon its current state and input. These also
determine the next state of the circuit .The relationship that exists among the inputs, outputs, present
states and next states can be specified by either a State Table or the State Diagram.
The state Table representation of a sequential circuit consists of three sections labeled present state,
next state and output. The present state designates the state of Flip-flops before the occurrence of a
clock pulse. The next state shows the states of Flip-flops after the clock pulse, and the output section
lists the value of the output variables during the present state.
An example of a state table is as follows:
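The state table referred to above appears as a figure in the notes. Purely as an illustration, a small hypothetical two-state machine (the states and transitions below are invented for the example, not taken from the notes) can be written out as a table and traced:

```python
# A hypothetical state table with one input x, represented as a dict:
# (present_state, input) -> (next_state, output).
state_table = {
    ('A', 0): ('A', 0),
    ('A', 1): ('B', 0),
    ('B', 0): ('A', 0),
    ('B', 1): ('B', 1),
}

def run(table, start, inputs):
    """Trace the circuit: return the output sequence for an input sequence."""
    state, outputs = start, []
    for x in inputs:
        state, z = table[(state, x)]
        outputs.append(z)
    return outputs

print(run(state_table, 'A', [1, 1, 0, 1]))  # -> [0, 1, 0, 0]
```

The dict lookup mirrors reading a row of the state table: find the present-state/input pair, then read off the next state and the output.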
State Assignment:
State Assignment procedures are concerned with methods for assigning binary values to the states of a
sequential circuit in such a way as to reduce the cost of the combinational logic required. This matters
when a sequential circuit is viewed from its external input-output terminals: such a circuit may follow a
sequence of internal states. The sequence of operations is defined by a state table or state diagram. An example of a state
diagram for different types of flip-flops is shown below:
SR Flip-flop: T Flip-flop:
JK Flip-flop:
D Flip-flop:
Symbol
Timing Diagram
Truth table:
Operation:
It has one input, D. While the clock is active, the output is loaded with whatever value is present at
D.
Excitation Table:
Qn Qn+1 D Remarks
0 0 0 Data loaded as 0
0 1 1 Data loaded as 1
1 0 0 Data loaded as 0
1 1 1 Data loaded as 1
3. JK Flip-flop:
Timing Diagram
Symbol
Operation:
Excitation Table:
Characteristic Equation
Qn Qn+1 J (set) K(Reset) Remarks
0 0 0 X No change / Reset
0 1 1 X Toggle / set
1 0 X 1 Toggle / Reset
1 1 X 0 No change / set
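The Characteristic Equation heading above refers, in its standard form, to Q(n+1) = J·Q′ + K′·Q. A short sketch can confirm that this equation is consistent with every row of the excitation table (the X entries are don't-cares, so either value of the unconstrained input must work):

```python
def jk_next(q, j, k):
    """JK characteristic equation: Q(n+1) = J*Q' + K'*Q."""
    return (j & (1 - q)) | ((1 - k) & q)

# Check the equation against each row of the excitation table above.
assert jk_next(0, 0, 0) == 0 and jk_next(0, 0, 1) == 0   # 0 -> 0: J=0, K=X
assert jk_next(0, 1, 0) == 1 and jk_next(0, 1, 1) == 1   # 0 -> 1: J=1, K=X
assert jk_next(1, 0, 1) == 0 and jk_next(1, 1, 1) == 0   # 1 -> 0: J=X, K=1
assert jk_next(1, 0, 0) == 1 and jk_next(1, 1, 0) == 1   # 1 -> 1: J=X, K=0
print("excitation table consistent")
```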
Timing Diagram
Symbol
Operation:
It has one input, T. On every clock pulse, the output toggles if T = 1; otherwise, there is no change
in the output.
Excitation Table:
Qn Qn+1 T Remarks
0 0 0 No change
0 1 1 Output toggles
1 0 1 Output toggles
1 1 0 No change
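The toggle behaviour follows the standard characteristic equation Q(n+1) = T ⊕ Q; a minimal sketch:

```python
def t_next(q, t):
    """T flip-flop: the output toggles when T = 1, else holds (Q(n+1) = T XOR Q)."""
    return t ^ q

q = 0
history = []
for t in [1, 1, 0, 1]:          # T input applied at four successive clock pulses
    q = t_next(q, t)
    history.append(q)
print(history)  # -> [1, 0, 0, 1]
```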
Clock Timing
Setup time: the setup time is the amount of time that an input signal (to the device) must be stable
(unchanging) before the clock edge in order to guarantee that the data is captured reliably and thus avoid
possible metastability.
Hold time: the hold time is the amount of time that an input signal must be stable (unchanging) after
the clock edge in order to guarantee that the data is captured reliably and thus avoid possible metastability.
Block Diagram:
Timing Diagram:
A. Sequence detector:
Design of the 11011 Sequence Detector
A sequence detector accepts as input a string of bits: either 0 or 1.
Its output goes to 1 when a target sequence has been detected.
There are two basic types: overlap and non-overlap.
In a sequence detector that allows overlap, the final bits of one sequence can be the start of another
sequence.
11011 detector with overlap:    X = 11011011011
                                Z = 00001001001
11011 detector with no overlap: Z = 00001000001
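The two behaviours can be checked with a short Python sketch (a string-matching model of the detector, not the flip-flop circuit designed in the steps that follow):

```python
# Bit-by-bit model of a 11011 detector. With overlap, the matched
# suffix is kept so it can start the next detection; without overlap,
# the detector restarts from scratch after each match.
def detect(bits, pattern="11011", overlap=True):
    z, buf = "", ""
    for b in bits:
        buf = (buf + b)[-len(pattern):]       # keep only the last 5 bits seen
        if buf == pattern:
            z += "1"
            buf = buf[1:] if overlap else ""  # overlap keeps the matched suffix
        else:
            z += "0"
    return z

x = "11011011011"
print(detect(x, overlap=True))   # -> 00001001001
print(detect(x, overlap=False))  # -> 00001000001
```

Both outputs match the Z strings listed above.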
Step 1 – Derive the State Diagram and State Table for the Problem
Step 1a – Determine the Number of States
We are designing a sequence detector for a 5-bit sequence, so we need 5 states. We label these states A,
B, C, D, and E. State A is the initial state.
Step 1b – Characterize Each State by What has been Input and What is Expected
Note the labeling of the transitions: X / Z. Thus the expected transition from A to B has an input of 1
and an output of 0.
The transition from E to C has an output of 1 denoting that the desired sequence has been detected.
The sequence is 1 1 0 1 1.
Step 1d – Insert the Inputs That Break the Sequence
The sequence is 1 1 0 1 1.
Each state has two lines out of it – one line for a 1 and another line for a 0.
The notes below explain how to handle the bits that break the sequence.
Step 3 – Assign a unique 3-bit binary number (State Assignment) to each state. Straightforward
assignment:
A = 000
B = 001
C = 010
D = 011
E = 100
Non-sequential assignment:
States A and D are given even numbers. States B, C, and E are given odd numbers. The assignment is
as follows.
A = 000
B = 001
C = 011 States 010, 110, and 111 are not used.
D = 100
E = 101
Step 4 – Generate the Transition Table With Output
The output equation can be obtained by inspection. As is the case with most sequence detectors, the
output Z is 1 for only one combination of present state and input. Thus we get Z = X·Y2·Y1′·Y0.
This can be simplified by noting that the state 111 does not occur, so the answer is Z = X·Y2·Y0.
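A brute-force sketch can confirm that the two expressions agree everywhere except at the unused state 111, which justifies the simplification:

```python
from itertools import product

# Exhaustively compare Z = X*Y2*Y1'*Y0 against the simplified Z = X*Y2*Y0
# over every input/state combination except the unused state Y2Y1Y0 = 111.
for x, y2, y1, y0 in product((0, 1), repeat=4):
    full = x & y2 & (1 - y1) & y0
    simplified = x & y2 & y0
    if (y2, y1, y0) != (1, 1, 1):
        assert full == simplified
print("equations agree on all reachable states")
```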
Step 5 – Separate the Transition Table into 3 Tables, One for Each Flip-Flop. We shall generate a
present state / next state table for each of the three flip-flops, labeled Y2, Y1, and Y0. It is important to
note that each of the tables must include the complete present state, labeled by the three-bit vector
Y2Y1Y0.
Step 6- Decide on the type of flip-flops to be used. The problem stipulates JK flip-flops, so we use
them.
Timing Diagram:
Also, the directional movement of the data through a shift register can be to the left (left
shifting), to the right (right shifting), left-in but right-out (rotation), or both left and right within
the same register, thereby making it bidirectional.
A: Serial-in to Parallel-out (SIPO) Shift Register
4-bit Serial-in to Parallel-out Shift Register:
The operation is as follows. Let's assume that all the flip-flops (FFA to FFD) have just been RESET
(CLEAR input) and that all the outputs QA to QD are at logic level “0”, i.e., no parallel data output.
If a logic “1” is connected to the DATA input pin of FFA then on the first clock pulse the output of
FFA and therefore the resulting QA will be set HIGH to logic “1” with all the other outputs still
remaining LOW at logic “0”. Assume now that the DATA input pin of FFA has returned LOW again to
logic “0” giving us one data pulse or 0-1-0.
The second clock pulse will change the output of FFA to logic “0” and the output of FFB and QB
HIGH to logic “1”, as its input D has the logic “1” level on it from QA. The logic “1” has now moved,
or been “shifted”, one place along the register to the right, as it is now at QB.
When the third clock pulse arrives this logic “1” value moves to the output of FFC (QC) and so on until
the arrival of the fifth clock pulse which sets all the outputs QA to QD back again to logic level “0”
because the input to FFA has remained constant at logic level “0”.
The effect of each clock pulse is to shift the data contents of each stage one place to the right, and this
is shown in the following table until the complete data value of 0-0-0-1 is stored in the register. This
data value can now be read directly from the outputs QA to QD.
Then the data has been converted from a serial data input signal to a parallel data output. The truth
table and following waveforms show the propagation of the logic “1” through the register from left to
right as follows.
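The shifting action described above can be sketched as a behavioural model in Python (a model of the data movement, not the gate-level circuit):

```python
# Shifting the serial pattern 1,0,0,0 into a 4-bit SIPO register,
# as in the table described above: QA..QD start at 0 after RESET.
def shift(register, data_in):
    """One clock pulse: data enters at QA, every stage moves one place right."""
    return [data_in] + register[:-1]

reg = [0, 0, 0, 0]                     # QA, QB, QC, QD after CLEAR
for bit in [1, 0, 0, 0]:               # serial input: a single 0-1-0 data pulse
    reg = shift(reg, bit)
    print(reg)
# clock 1: [1, 0, 0, 0]
# clock 2: [0, 1, 0, 0]
# clock 3: [0, 0, 1, 0]
# clock 4: [0, 0, 0, 1]  <- the 1 has reached QD; read QA..QD in parallel
```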
This type of Shift Register acts as a temporary storage device or it can act as a time delay device for the
data, with the amount of time delay being controlled by the number of stages in the register, 4, 8, 16 etc
or by varying the application of the clock pulses. Commonly available ICs include the 74HC595 8-bit
Serial-in to Parallel-out Shift Register with latched 3-state outputs.
Ring Counter:
A ring counter is a counter composed of a circular shift register: the output of the last
shift register stage is fed to the input of the first.
There are two types of ring counters:
1) A straight ring counter connects the output of the last shift register to the first shift register input
and circulates a single one (or zero) bit around the ring. For example, in a 4-register ring counter
with initial register values of 1000, the repeating pattern is: 1000, 0100, 0010, 0001, 1000, ... . Note
that one of the registers must be pre-loaded with a 1 (or 0) in order to operate properly.
2) A twisted ring counter (also called a Johnson counter) connects the complemented output of the
last shift register to the first shift register input, circulating a stream of ones followed by zeros
around the ring.
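The straight ring counter's repeating pattern can be reproduced with a short sketch:

```python
# A 4-stage straight ring counter pre-loaded with 1000: each clock pulse
# rotates the single 1 one position, repeating every 4 pulses.
def ring_step(state):
    """Feed the last stage's output back into the first stage."""
    return [state[-1]] + state[:-1]

state = [1, 0, 0, 0]
sequence = []
for _ in range(5):
    sequence.append("".join(map(str, state)))
    state = ring_step(state)
print(sequence)  # -> ['1000', '0100', '0010', '0001', '1000']
```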
Circuit Diagram:
Timing Diagram: