

Detailed Notes
Unit-I
Part A: Number System
Number Systems
Convenient as the decimal number system generally is, its usefulness in machine computation is
limited because of the nature of practical electronic devices. In most present digital machines, the
numbers are represented, and the arithmetic operations performed, in a different number system called
the binary number system. This section is concerned with the representation of numbers in various
systems and with methods of conversion from one system to another.
Number Representation
An ordinary decimal number actually represents a polynomial in powers of 10. For example, the
number 123.45 represents the polynomial
123.45 = 1 × 10^2 + 2 × 10^1 + 3 × 10^0 + 4 × 10^(−1) + 5 × 10^(−2).
This method of representing decimal numbers is known as the decimal number system, and the number
10 is referred to as the base (or radix) of the system. In a system whose base is b, a positive number N
represents the polynomial

N = aq−1 × b^(q−1) + aq−2 × b^(q−2) + · · · + a1 × b^1 + a0 × b^0 + a−1 × b^(−1) + · · · + a−p × b^(−p),
where the base b is an integer greater than 1 and the a’s are integers in the range 0 ≤ ai ≤ b − 1. The
sequence of digits aq−1aq−2 · · · a0 constitutes the integer part of N, while the sequence a−1a−2 · · · a−p
constitutes the fractional part of N . Thus, p and q designate the number of digits in the fractional and
integer parts, respectively. The integer and fractional parts are usually separated by a radix point. The
digit a−p is referred to as the least significant digit while aq−1 is called the most significant digit.

When the base b equals 2, the number representation is referred to as the binary number system. For
example, the binary number 1101.01 represents the polynomial

1101.01 = 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 + 0 × 2^(−1) + 1 × 2^(−2),


that is,
1101.01 = ∑ ai × 2^i (i = −2 to 3),
where a−2 = a0 = a2 = a3 = 1 and a−1 = a1 = 0.
A number N in base b is usually denoted (N)b. Whenever the base is not specified, base 10 is implicit.
The table below shows the representations of the integers 0 through 15 in several number systems.

Decimal  Binary  Octal  Hexadecimal
0        0000    00     0
1        0001    01     1
2        0010    02     2
3        0011    03     3
4        0100    04     4
5        0101    05     5
6        0110    06     6
7        0111    07     7
8        1000    10     8
9        1001    11     9
10       1010    12     A
11       1011    13     B
12       1100    14     C
13       1101    15     D
14       1110    16     E
15       1111    17     F
The complement of a digit a, denoted a′, in base b is defined as
a′ = (b − 1) − a.
That is, the complement a′ is the difference between the largest digit in base b and the digit a. In the
binary number system, since b = 2, 0′ = 1 and 1′ = 0.
In the decimal number system, the largest digit is 9. Thus, for example, the complement of 3 is 9 − 3 =
6.
Base Conversion Methods

Suppose that some number N , which we wish to express in base b2, is presently expressed in base b1.
In converting a number from base b1 to base b2, it is convenient to distinguish between two cases. In
the first case b1 < b2, and consequently base-b2 arithmetic can be used in the conversion process. The
conversion technique involves expressing number (N )b1 as a polynomial in powers of b1 and
evaluating the polynomial using base-b2 arithmetic.
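The polynomial-evaluation procedure just described can be sketched in code; the function name and the digit-list representation are illustrative, not from the text.

```python
# Evaluate the polynomial in powers of b1 using the host's (base-10)
# arithmetic, converting integer and fractional digit lists into a value.
def to_decimal(int_digits, frac_digits, b1):
    value = 0
    for d in int_digits:                 # integer part: a(q-1) ... a0
        value = value * b1 + d
    for i, d in enumerate(frac_digits, start=1):   # fractional part
        value += d * b1 ** -i
    return value

print(to_decimal([4, 3, 2], [2], 8))        # (432.2)8   -> 282.25
print(to_decimal([1, 1, 0, 1], [0, 1], 2))  # (1101.01)2 -> 13.25
```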

When b1 > b2 it is more convenient to use base-b1 arithmetic. The conversion procedure will be
obtained by considering separately the integer and fractional parts of N. Let (N)b1 be an integer whose
value in base b2 is given by

(N)b1 = aq−1 × b2^(q−1) + aq−2 × b2^(q−2) + · · · + a1 × b2^1 + a0 × b2^0.
To find the values of the a’s, let us divide the above polynomial by b2:

(N)b1 / b2 = (aq−1 × b2^(q−2) + · · · + a1) + a0/b2 = Q0 + a0/b2.
Thus, the least significant digit of (N)b2, i.e., a0, is equal to the first remainder. The next most
significant digit, a1, is obtained by dividing the quotient Q0 by b2, i.e.,

Q0 / b2 = (aq−1 × b2^(q−3) + · · · + a2) + a1/b2 = Q1 + a1/b2.
The remaining a’s are evaluated by repeated divisions of the quotients until Qq−1 is equal to zero. If N
is finite, the process must terminate.
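The repeated-division procedure for the integer part can be sketched as follows (the function name and the list-of-digits output are illustrative):

```python
# Convert a nonnegative integer to base b2 by repeated division: the
# remainders are the digits a0, a1, ..., least significant first.
def int_to_base(n, b2):
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, b2)   # quotient Q_i and remainder a_i
        digits.append(r)
    return digits[::-1]        # reverse: most significant digit first

print(int_to_base(548, 8))  # [1, 0, 4, 4], i.e. (1044)8
print(int_to_base(345, 6))  # [1, 3, 3, 3], i.e. (1333)6
```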

If (N )b1 is a fraction, a dual procedure is employed. It can be expressed in base b2 as follows:


(N)b1 = a−1 × b2^(−1) + a−2 × b2^(−2) + · · · + a−p × b2^(−p).
The most significant digit, a−1, can be obtained by multiplying the polynomial by b2:
b2 · (N)b1 = a−1 + a−2 × b2^(−1) + · · · + a−p × b2^(−p+1).
If the above product is less than 1 then a−1 equals 0; if the product is greater than or equal to 1 then a−1
is equal to the integer part of the product. The next most significant digit, a−2, is found by multiplying
the fractional part of the above product by b2 and determining its integer part; and so on. This
process does not necessarily terminate since it may not be possible to represent the fraction in base b2
with a finite number of digits.
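The dual procedure for fractions can be sketched as follows; since the process need not terminate, the sketch cuts off after a fixed number of digits (the function name and the max_digits parameter are illustrative):

```python
# Convert a fraction 0 <= f < 1 to base b2 by repeated multiplication:
# the integer part of each product is the next digit a(-1), a(-2), ...
def frac_to_base(f, b2, max_digits=8):
    digits = []
    for _ in range(max_digits):
        f *= b2
        d = int(f)          # integer part of the product is the digit
        digits.append(d)
        f -= d              # keep only the fractional part
        if f == 0:
            break
    return digits

print(frac_to_base(0.3125, 8))  # [2, 4]     i.e. (0.24)8
print(frac_to_base(0.375, 2))   # [0, 1, 1]  i.e. (0.011)2
```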

Example We wish to express the numbers (432.2)8 and (1101.01)2 in base 10. Thus
(432.2)8 = 4 × 8^2 + 3 × 8^1 + 2 × 8^0 + 2 × 8^(−1) = (282.25)10,
(1101.01)2 = 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 + 0 × 2^(−1) + 1 × 2^(−2) = (13.25)10.
In both cases, the arithmetic operations are done in base 10.

Example The above conversion procedure is now applied to convert (548)10 to base 8. The ri in the
table below denote the remainders. The first entries in the table are 68 and 4, corresponding,
respectively, to the quotient Q0 and the first remainder from the division (548/8)10. The remaining
entries are found by successive division.

548 ÷ 8 = 68, remainder r0 = 4
 68 ÷ 8 =  8, remainder r1 = 4
  8 ÷ 8 =  1, remainder r2 = 0
  1 ÷ 8 =  0, remainder r3 = 1
Thus, (548)10 = (1044)8. In a similar manner we can obtain the conversion of (345)10 to (1333)6, as
illustrated in the table below.

345 ÷ 6 = 57, remainder r0 = 3
 57 ÷ 6 =  9, remainder r1 = 3
  9 ÷ 6 =  1, remainder r2 = 3
  1 ÷ 6 =  0, remainder r3 = 1
Indeed, (1333)6 can be reconverted to base 10, i.e.,


(1333)6 = 1 × 6^3 + 3 × 6^2 + 3 × 6^1 + 3 × 6^0 = (345)10

Example To convert (0.3125)10 to base 8, find the digits as follows:

0.3125 × 8 = 2.5, so a−1 = 2
0.5 × 8 = 4.0, so a−2 = 4
Thus (0.3125)10 = (0.24)8.


Similarly, the computation below proves that (0.375)10 = (0.011)2:

0.375 × 2 = 0.75, so a−1 = 0
0.75 × 2 = 1.5, so a−2 = 1
0.5 × 2 = 1.0, so a−3 = 1
Example To convert (432.354)10 to binary, we first convert the integer part and then the fractional part.
For the integer part we have

432 ÷ 2 = 216, remainder 0
216 ÷ 2 = 108, remainder 0
108 ÷ 2 = 54, remainder 0
54 ÷ 2 = 27, remainder 0
27 ÷ 2 = 13, remainder 1
13 ÷ 2 = 6, remainder 1
6 ÷ 2 = 3, remainder 0
3 ÷ 2 = 1, remainder 1
1 ÷ 2 = 0, remainder 1
Hence (432)10 = (110110000)2. For the fractional part we have

0.354 × 2 = 0.708, so a−1 = 0
0.708 × 2 = 1.416, so a−2 = 1
0.416 × 2 = 0.832, so a−3 = 0
0.832 × 2 = 1.664, so a−4 = 1
0.664 × 2 = 1.328, so a−5 = 1
0.328 × 2 = 0.656, so a−6 = 0
0.656 × 2 = 1.312, so a−7 = 1
Consequently (0.354)10 = (0.0101101 · · ·)2. The conversion is usually carried up to the desired
accuracy. In our example, reconversion to base 10 shows that
(110110000.0101101)2 = (432.3515)10

A considerably simpler conversion procedure may be employed in converting octal numbers (i.e.,
numbers in base 8) to binary and vice versa. Since 8 = 2^3, each octal digit can be expressed by three
binary digits. For example, (6)8 can be expressed as (110)2, etc. The procedure for converting a binary
number into an octal number consists of partitioning the binary number into groups of three digits,
starting from the binary point, and determining the octal digit corresponding to each group.

Example
(123.4)8 = (001 010 011.100)2,
(1010110.0101)2 = (001 010 110.010 100) = (126.24)8.
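The grouping procedure can be sketched in code; since 8 = 2^3, each octal digit maps to exactly three bits. The sketch handles integer parts only (fractional groups would be padded on the right instead), and the helper names are illustrative:

```python
# Three-bit group for each octal digit, and the reverse grouping.
OCT_TO_BITS = {d: format(d, '03b') for d in range(8)}

def octal_to_binary(octal_digits):
    # Replace each octal digit with its three-bit group.
    return ''.join(OCT_TO_BITS[d] for d in octal_digits)

def binary_to_octal(bits):
    # Pad on the left to a multiple of 3, then read off 3-bit groups.
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    return [int(bits[i:i + 3], 2) for i in range(0, len(bits), 3)]

print(octal_to_binary([1, 2, 3]))   # 001010011
print(binary_to_octal('1010110'))   # [1, 2, 6], i.e. (126)8
```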

A similar procedure may be employed in conversions from binary to hexadecimal (base 16), except
that four binary digits are needed to represent a single hexadecimal digit. In fact, whenever a number is
converted from base b1 to base b2, where b2 = b1^k, k digits of that number, when grouped, may be
represented by a single digit from base b2.

Binary Arithmetic
The binary number system is widely used in digital systems. Although a detailed study of digital
arithmetic is beyond the scope of this book, we shall present the elementary techniques of binary
arithmetic. The basic arithmetic operations are summarized in Table 1.2, where the sum and carry,
difference and borrow, and product are computed for every combination of binary digits (abbreviated
bits) 0 and 1.

Table 1.2 Binary arithmetic:
a b | sum carry | difference borrow | product
0 0 |  0    0   |     0        0    |    0
0 1 |  1    0   |     1        1    |    0
1 0 |  1    0   |     1        0    |    0
1 1 |  0    1   |     0        0    |    1
Binary addition is performed in a manner similar to that of decimal addition. Corresponding bits are
added and if a carry 1 is produced then it is added to the binary digits at the left.

Example The addition of (15.25)10 and (7.50)10 in binary proceeds as follows:

  1111.01    (15.25)10
+ 0111.10     (7.50)10
---------
 10110.11    (22.75)10
In subtraction, if a borrow of 1 occurs and the next left digit of the minuend (the number from which a
subtraction is being made) is 1 then the latter is changed to 0 and subtraction is continued in the usual
manner. If, however, the next left digit of the minuend is 0 then it is changed to 1, as is each successive
minuend digit to the left which is equal to 0. The first minuend digit to the left, which is equal to 1, is
changed to 0, and subtraction is continued.
Example The subtraction of (12.50)10 from (18.75)10 in binary proceeds as follows:

  10010.11    (18.75)10
− 01100.10    (12.50)10
----------
  00110.01     (6.25)10
Just as with decimal numbers, the multiplication of binary numbers is performed by successive
addition while division is performed by successive subtraction.
Example Multiply the binary numbers below:

Example Divide the binary number 1000100110 by 11001. Successive subtraction gives the quotient
(10110)2 = (22)10 with remainder 0, since (1000100110)2 = (550)10 and (11001)2 = (25)10.

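Multiplication by successive addition and division by successive subtraction can be sketched for nonnegative integers as follows; the shift-and-add form adds one shifted copy of the multiplicand per 1 bit of the multiplier (function names are illustrative):

```python
def multiply(a, b):
    # Shift-and-add: add a copy of a for every 1 bit of b.
    product = 0
    while b:
        if b & 1:
            product += a
        a <<= 1          # shift the multiplicand left (double it)
        b >>= 1          # consume one bit of the multiplier
    return product

def divide(dividend, divisor):
    # Successive subtraction: subtract until the divisor no longer fits.
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend   # quotient and remainder

print(multiply(0b1101, 0b101))        # 65, i.e. 13 x 5
print(divide(0b1000100110, 0b11001))  # (22, 0): quotient (10110)2, remainder 0
```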
Complements of Numbers
In digital computers, complements are used to simplify the subtraction operation and for logical
manipulation. There are two types of complement for each radix system: the radix complement and the
diminished radix complement. The first is referred to as the r’s complement and the second as the
(r − 1)’s complement. For example, in the binary system we substitute the base value 2 for r, giving the
2’s complement and the 1’s complement. In the decimal number system, we substitute the base value
10 for r, giving the 10’s complement and the 9’s complement.

The advantage of performing subtraction by the complement method is a reduction in hardware:
instead of separate addition and subtraction circuits, only adding circuits are needed, i.e., subtraction is
also performed by adders. Instead of subtracting one number from another, the complement of the
subtrahend is added to the minuend. In sign-magnitude form, an additional bit called the sign bit is
placed in front of the number. If the sign bit is 0, the number is positive; if it is 1, the number is negative.

1’s Complement Representation:


The 1’s complement of a binary number is the number that results when we change all 1’s to 0’s and
all 0’s to 1’s.

2’s Complement Representation:


The 2’s complement is the binary number that results when we add 1 to the 1’s complement. It is given
as:
2’s Complement = 1’s Complement + 1
The 2’s complement is used to represent negative numbers.
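Both complements can be sketched for an n-bit word as follows; masking with (1 << n) − 1 keeps results inside the word, and the function names are illustrative:

```python
def ones_complement(x, n):
    # Flip every bit of an n-bit value.
    return x ^ ((1 << n) - 1)

def twos_complement(x, n):
    # 2's complement = 1's complement + 1, kept to n bits.
    return (ones_complement(x, n) + 1) & ((1 << n) - 1)

x = 0b0110                                    # +6 in a 4-bit word
print(format(ones_complement(x, 4), '04b'))   # 1001
print(format(twos_complement(x, 4), '04b'))   # 1010, representing -6
```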

Codes

 Binary Codes
Although the binary number system has many practical advantages and is widely used in digital
computers, in many cases it is convenient to work with the decimal number system, especially
when the communication between human beings and machines is extensive, since most numerical
data generated by humans is in terms of decimal numbers. To simplify the problem of
communication between human and machine, several codes have been devised in which decimal
digits are represented by sequences of binary digits.

Weighted codes
In order to represent the 10 decimal digits 0, 1, . . . , 9, it is necessary to use at least four binary
digits. Since there are 16 combinations of four binary digits, of which 10 combinations are used, it
is possible to form a very large number of distinct codes. Of particular importance is the class of
weighted codes, whose main characteristic is that each binary digit is assigned a decimal “weight,”
and, for each group of four bits, the sum of the weights of those binary digits whose value is 1 is
equal to the decimal digit which they represent. If w1, w2, w3, and w4 are the given weights of the
binary digits and x1, x2, x3, x4 the corresponding digit values then the decimal digit N = w4x4 + w3x3
+ w2x2 + w1x1 can be represented by the binary sequence x4x3x2x1. The sequence of binary digits
that represents a decimal digit is called a code word. Thus, the sequence x4x3x2x1 is the code word
for N . Three weighted four-digit binary codes are shown in Table 1.3.

The binary digits in the first code in Table 1.3 are assigned weights 8, 4, 2, 1. As a result of this
weight assignment, the code word that corresponds to each decimal digit is the binary equivalent of
that digit; e.g., 5 is represented by 0101, and so on. This code is known as the binary-coded-
decimal (BCD) code. For each code in Table 1.3, the decimal digit that corresponds to a given code
word is equal to the sum of the weights in those binary positions that are 1’s rather than 0’s. Thus,
in the second code, where the weights are 2, 4, 2, 1, decimal 5 is represented by 1011,
corresponding to the sum 2 × 1 + 4 × 0 + 2 × 1 + 1 × 1 = 5. The weights assigned to the binary
digits may also be negative, as in the code (6, 4, 2, −3). In this code, decimal 5 is represented by
1011, since 6 × 1 + 4 × 0 + 2 × 1 − 3 × 1 = 5.
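Decoding a weighted code word is just the weighted sum described above; a minimal sketch (the function name is illustrative):

```python
def decode_weighted(bits, weights):
    # Sum the weights of the positions that hold a 1.
    return sum(w for w, b in zip(weights, bits) if b == 1)

print(decode_weighted([0, 1, 0, 1], [8, 4, 2, 1]))    # 5 in BCD
print(decode_weighted([1, 0, 1, 1], [2, 4, 2, 1]))    # 5 in (2, 4, 2, 1)
print(decode_weighted([1, 0, 1, 1], [6, 4, 2, -3]))   # 5 in (6, 4, 2, -3)
```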

It is apparent that the representations of some decimal numbers in the (2, 4, 2, 1) and (6, 4, 2, −3)
codes are not unique. For example, in the (2, 4, 2, 1) code, decimal 7 may be represented by 1101
as well as 0111. Adopting the representations shown in Table 1.3 causes the codes to become self-
complementing. A code is said to be self-complementing if the code word of the “9’s complement
of N ”, i.e., 9 − N , can be obtained from the code word of N by interchanging all the 1’s and 0’s.
For example, in the (6, 4, 2, −3) code, decimal 3 is represented by 1001 while decimal 6 is
represented by 0110. In the (2, 4, 2, 1) code, decimal 2 is represented by 0010 while decimal 7 is
represented by 1101. Note that the BCD code (8, 4, 2, 1) is not self-complementing. It can be
shown that a necessary condition for a weighted code to be self-complementing is that the sum of
the weights must equal 9. There exist only four positively weighted self-complementing codes,
namely, (2, 4, 2, 1), (3, 3, 2, 1), (4, 3, 1, 1), and (5, 2, 1, 1). In addition, there exist 13 self-
complementing codes with positive and negative weights.

Nonweighted codes
There are many nonweighted binary codes, two of which are shown in Table 1.4. The Excess-3
code is formed by adding 0011 to each BCD code word.

Thus, for example, the representation of decimal 7 in Excess-3 is given by 0111 + 0011 = 1010.
The Excess-3 code is self-complementing and possesses a number of properties that made it
practical in early decimal computers.
In many practical applications, e.g., analog-to-digital conversion, it is desirable to use codes in
which the code words for successive decimal integers differ in only one digit. Codes that have
such a property are referred to as cyclic codes. The second code in Table 1.4 is an example of such
a code. (Note that in this, as in all cyclic codes, the code words representing the decimal digits 0
and 9 differ in only one digit.) A particularly important cyclic code is the Gray code. A four-bit
Gray code is shown in Table 1.5.

The feature that makes this cyclic code useful is the simplicity of the procedure for converting from the
binary number system into the Gray code, as follows.
Let gn · · · g2g1g0 denote a code word in the (n + 1)th-bit Gray code, and let bn · · · b2b1b0 designate the
corresponding binary number, where the subscripts 0 and n denote the least significant and most significant
digits, respectively. Then, the ith digit gi can be obtained from the corresponding binary number as follows:
gi = bi ⊕ bi+1, 0 ≤ i ≤ n − 1,
gn = bn,
where the symbol ⊕ denotes the modulo-2 sum, which is defined as follows:
0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, 1 ⊕ 1 = 0.
For example, the Gray code word that corresponds to the binary number 101101 is found to be 111011, since
g5 = b5 = 1, g4 = b4 ⊕ b5 = 0 ⊕ 1 = 1, g3 = b3 ⊕ b4 = 1 ⊕ 0 = 1, g2 = b2 ⊕ b3 = 1 ⊕ 1 = 0,
g1 = b1 ⊕ b2 = 0 ⊕ 1 = 1, and g0 = b0 ⊕ b1 = 1 ⊕ 0 = 1.

Thus, to convert from Gray code to binary, start with the leftmost digit and proceed to the least significant
digit, setting bi = gi if the number of 1’s preceding gi is even and bi = gi′ (the complement of gi) if the
number of 1’s preceding gi is odd. (Note that zero 1’s counts as an even number of 1’s.) For example, the
Gray code word 1001011 represents the binary number 1110010. The proof that the preceding conversion
procedure does indeed work is left to the reader as an exercise.
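Both conversion rules can be sketched in code, with numbers written as bit strings, most significant digit first (function names are illustrative):

```python
def binary_to_gray(b):
    # g_n = b_n; g_i = b_i XOR b_(i+1) for the remaining digits.
    return b[0] + ''.join(
        str(int(b[i]) ^ int(b[i - 1])) for i in range(1, len(b)))

def gray_to_binary(g):
    # b_i equals g_i when the number of 1's to its left is even, and its
    # complement when that count is odd; a running parity of the Gray
    # digits (including g_i itself) implements exactly this rule.
    bits, parity = [], 0
    for gi in g:
        parity ^= int(gi)
        bits.append(str(parity))
    return ''.join(bits)

print(binary_to_gray('101101'))    # 111011
print(gray_to_binary('1001011'))   # 1110010
```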
The n-bit Gray code is a member of a class called reflected codes. The term “reflected” is used to designate
codes which have the property that the n-bit code can be generated by reflecting the (n − 1)th-bit code, as
illustrated in Fig. 1.1. The two-bit Gray code is shown in Fig. 1.1a. The three-bit Gray code (Fig. 1.1b) can
be obtained by reflecting the two-bit code about an axis at the end of the code and assigning a most
significant bit of 0 above the axis and 1 below the axis. The four-bit Gray code is obtained in the same
manner from the three-bit code, as shown in Fig. 1.1c.

Fig. 1.1 Reflection of the Gray code

 Binary Coded Decimal Code and its Properties


Binary Coded Decimal or BCD as it is more commonly called, is another process for converting
decimal numbers into their binary equivalents.

The advantage of the Binary Coded Decimal system is that each decimal digit is represented by a
group of 4 binary digits or bits in much the same way as Hexadecimal. So for the 10 decimal digits
(0-to-9) we need a 4-bit binary code. Binary coded decimal is not the same as hexadecimal.
Whereas a 4-bit hexadecimal number is valid up to (F)16, representing binary (1111)2 (decimal 15),
binary coded decimal numbers stop at 9, binary (1001)2. This means that although 16 numbers (2^4)
can be represented using four binary digits, in the BCD numbering system the six binary code
combinations 1010 (decimal 10), 1011 (decimal 11), 1100 (decimal 12), 1101 (decimal 13),
1110 (decimal 14), and 1111 (decimal 15) are classed as forbidden numbers and cannot be
used.

The main advantage of binary coded decimal is that it allows easy conversion between decimal
(base-10) and binary (base-2) form. However, the disadvantage is that BCD code is wasteful as the
states between 1010 (decimal 10), and 1111 (decimal 15) are not used. Nevertheless, binary coded
decimal has many important applications especially using digital displays.

In the BCD numbering system, each decimal digit within a number is encoded separately as four
bits. Each decimal digit is represented by its weighted binary value, giving a direct translation of
the number. So a 4-bit group represents each displayed decimal digit, from 0000 for a zero
to 1001 for a nine.

So for example, (357)10 (three hundred and fifty-seven) in decimal would be presented in Binary
Coded Decimal as:

(357)10 = 0011 0101 0111 (BCD)


Then we can see that BCD is a weighted code, because each binary bit of a 4-bit group represents a
given weight of the final value. The weights used in binary coded decimal code are 8, 4, 2, 1, so it is
commonly called the 8421 code, as each group forms the 4-bit binary representation of the relevant
decimal digit.
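BCD encoding is a digit-by-digit translation, which can be sketched as follows (the function name is illustrative):

```python
def to_bcd(n):
    # Translate each decimal digit independently into its 4-bit 8421 group.
    return ' '.join(format(int(d), '04b') for d in str(n))

print(to_bcd(357))  # 0011 0101 0111
```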

Binary Coded Decimal Representation of a Decimal Number

Then the relationship between decimal (denary) numbers and weighted binary coded decimal digits
is given below.

Decimal:   0    1    2    3    4    5    6    7    8    9
BCD 8421: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001

 Unit Distance Codes


Gray code is a non-weighted code and is not suitable for arithmetic operations. It is not a BCD code.
It is a cyclic code because successive code words differ in one bit position only, i.e., it is a unit
distance code. It is also a reflective code, i.e., both reflective and unit distance.

 Alpha Numeric Codes


These codes are used to encode the characters of the alphabet in addition to the decimal digits. They
are used for transmitting data between a computer and its I/O devices such as printers, keyboards,
and video display terminals. Popular modern alphanumeric codes are the ASCII code and the
EBCDIC code.

 Error Detecting and Correcting Codes


In the codes presented so far, each code word consists of four binary digits, which is the minimum
number needed to represent the 10 decimal digits. Such codes, although adequate for the
representation of decimal digits, are very sensitive to the transmission errors that may occur
because of equipment failure or noise in the transmission channel. In any practical system there is
always a finite probability of occurrence of a single error. The probability that two or more errors
will occur simultaneously, although nonzero, is substantially smaller. We, therefore, restrict our
discussion mainly to the detection and correction of single errors.
Error-detecting codes
In a four-bit binary code, the occurrence of a single error in one of the binary digits may result in
another, incorrect but valid, code word. For example, in the BCD code (see above), if an error
occurs in the least significant digit of 0110 then the code word 0111 results and, since it is a valid
code word, it is incorrectly interpreted by the receiver. If a code possesses the property that the
occurrence of any single error transforms a valid code word into an invalid code word, it is said to
be a (single-)error-detecting code. Two error-detecting codes are shown in Table 1.6.

Error detection in either code in Table 1.6 is accomplished by a parity check. The basic idea in a
parity check is to add an extra digit to each code word of a given code so as to make the number of
1’s in each code word either odd or even. In the codes of Table 1.6 we have used even parity. The
even-parity BCD code is obtained directly from the BCD code of Table 1.3. The added bit, denoted
p, is called the parity bit. The 2-out-of-5 code consists of all 10 possible combinations of two 1’s in
a five-bit code word. With the exception of the code word for decimal 0, the 2-out-of-5 code of
Table 1.6 is a weighted code and can be derived from the (1, 2, 4, 7) code.

In each of the codes in Table 1.6 the number of 1’s in a code word is even. Now, if a single error
occurs it transforms the valid code word into an invalid one, thus making the detection of the error
straightforward. Although parity check is intended only for the detection of single errors, it, in fact,
detects any odd number of errors and some even numbers of errors. For example, if the code word
10100 is received in an even-parity BCD message, it is clear that the message is erroneous, since
such a code word is not defined although the parity check is satisfied. We cannot determine,
however, the original transmitted word.
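The even-parity scheme can be sketched as follows (the list-of-bits representation and the function names are illustrative):

```python
def add_even_parity(bits):
    # Append a parity bit p chosen so the total number of 1's is even.
    return bits + [sum(bits) % 2]

def parity_ok(word):
    # A received word passes the check when its 1-count is even.
    return sum(word) % 2 == 0

word = add_even_parity([0, 1, 1, 0])          # BCD 6 plus parity bit
corrupted = word.copy()
corrupted[2] ^= 1                             # inject a single error
print(word)                                   # [0, 1, 1, 0, 0]
print(parity_ok(word), parity_ok(corrupted))  # True False
```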

In general, to obtain an n-bit error-detecting code, no more than half the possible 2^n combinations
of digits can be used. The code words are chosen in such a manner that, in order to change one
valid code word into another valid code word, at least two digits must be complemented. In the case
of four-bit codes this constraint means that only eight valid code words can be formed of the 16
possible combinations. Thus, to obtain an error-detecting code for the 10 decimal digits, at least
five binary digits are needed. It is useful to define the distance between two code words as the
number of digits that must change in one word so that the other word results. For example, the
distance between 1010 and 0100 is three, since the two code words differ in three bit positions. The
minimum distance of a code is the smallest number of bits in which any two code words differ.
Thus, the minimum distance of the BCD or the Excess-3 codes is one, while that of the codes in
Table 1.6 is two. Clearly, a code is an error-detecting code if and only if its minimum distance is
two or more.
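The distance measure can be sketched directly from its definition (the function names are illustrative):

```python
from itertools import combinations

def distance(u, v):
    # Number of bit positions in which two equal-length words differ.
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(code):
    # Smallest distance over all pairs of code words.
    return min(distance(u, v) for u, v in combinations(code, 2))

print(distance('1010', '0100'))          # 3
print(minimum_distance(['000', '111']))  # 3
```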

Error-correcting codes
For a code to be error-correcting, its minimum distance must be further increased. For example,
consider the three-bit code which consists of only two valid code words, 000 and 111. If a single
error occurs in the first code word, it could become 001, 010, or 100. The second code word could
be changed by a single error to 110, 101, or 011. Note that in each case the invalid code words are
different. Clearly, this code is error-detecting since its minimum distance is three. Moreover, if we
assume that only a single error can occur then this error can be located and corrected, since every
error results in an invalid code word that can be associated with only one of the valid code words.
Thus, the two code words 000 and 111 constitute an error-correcting code whose minimum distance
is three. In general, a code is said to be error-correcting if the correct code word can always be
deduced from the erroneous word. In this section, we shall discuss a type of single-error-correcting
codes known as Hamming codes.

If the minimum distance of a code is three, then any single error changes a valid code word into an
invalid one, which is distance one away from the original code word and distance two from any
other valid code word. Therefore, in a code with minimum distance three, any single error is
correctable or any double error detectable. Similarly, a code whose minimum distance is four may
be used for either single-error correction and double-error detection or triple-error detection. The
key to error correction is that it must be possible to detect and locate erroneous digits. If the
location of an error has been determined then, by complementing the erroneous digit, the message
is corrected.

The basic principles in constructing a Hamming error-correcting code are as follows. To each group
of m information or message digits, k parity-checking digits, denoted p1, p2, . . . , pk , are added to
form an (m + k)-digit code. The location of each of the m + k digits within a code word is assigned
a decimal value; one starts by assigning a 1 to the most significant digit and m + k to the least
significant digit. Then k parity checks are performed on selected digits of each code word. The
result of each parity check is recorded as 1 or 0, depending, respectively, on whether an error has or
has not been detected. These parity checks make possible the development of a binary number,
c1c2 · · · ck , whose value is equal to the decimal value assigned to the location of the erroneous digit
when an error occurs and is equal to zero if no error occurs. This number is called the position (or
location) number.

The number k of digits in the position number must be large enough to describe the location of any
of the m + k possible single errors, and must in addition take on the value zero to describe the “no
error” condition. Consequently, k must satisfy the inequality 2^k ≥ m + k + 1. Thus, for example, if
the original message is in BCD where m = 4 then k = 3 and at least three parity-checking digits
must be added to the BCD code. The resultant error-correcting code thus consists of seven digits. In
this case, if the position number is equal to 101, it means that an error has occurred in position 5. If,
however, the position number is equal to 000, the message is correct.


In order to be able to specify the checking digits by means of only message digits and
independently of each other, they are placed in positions 1, 2, 4, . . . , 2^(k−1). Thus, if m = 4 and k = 3
then the checking digits are placed in positions 1, 2, and 4 while the remaining positions contain the
original (BCD) message bits. For example, in the code word 1100110, the checking digits (in
boldface) are p1 = 1, p2 = 1, p3 = 0, while the message digits are 0, 1, 1, 0, which correspond to
decimal 6.

We shall now show how the Hamming code is constructed, by constructing the code for m = 4 and
k = 3. As discussed above, the parity-checking digits must be specified in such a way that, when an
error occurs, the position number will take on the value assigned to the location of the erroneous
digit. Table 1.7 lists the seven error positions and the corresponding values of the position number.
It is evident that if an error occurs in position 1, or 3, or 5, or 7, the least significant digit, i.e., c3, of
the position number must be equal to 1. If the code is constructed so that in every code word the
digits in positions 1, 3, 5, and 7 have even parity, then the occurrence of a single error in any of
these positions will cause an odd parity. In such a case, the least significant digit of the position
number is recorded as 1. If no error occurs among these digits, a parity check will show an even
parity and the least significant digit of the position number is recorded as 0.

From Table 1.7, we observe that an error in positions 2, 3, 6, or 7 should result in the recording of a
1 in the center of the position number. Hence, the code must be designed so that the digits in
positions 2, 3, 6, and 7 have even parity. Again, if the parity check of these digits shows an odd
parity then the corresponding position-number digit, i.e., c2, is set to 1; otherwise it is set to 0.
Finally, if an error occurs in positions 4, 5, 6, or 7 then the most significant digit of the position
number, i.e., c1, should be a 1. Therefore, if digits 4, 5, 6, and 7 are designed to have even parity, an
error in any of these digits will be recorded as a 1 in the most significant digit of the position
number. To summarize the situation regarding the checking digits pi :
p1 is selected so as to establish even parity in positions 1, 3, 5, 7;
p2 is selected so as to establish even parity in positions 2, 3, 6, 7;
p3 is selected so as to establish even parity in positions 4, 5, 6, 7.
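These three parity rules can be sketched in code for m = 4, k = 3; the message bits m1–m4 occupy positions 3, 5, 6, 7 and the parity bits occupy positions 1, 2, 4 (the function name and argument order are illustrative):

```python
def hamming_encode(m1, m2, m3, m4):
    p1 = (m1 + m2 + m4) % 2   # even parity over positions 1, 3, 5, 7
    p2 = (m1 + m3 + m4) % 2   # even parity over positions 2, 3, 6, 7
    p3 = (m2 + m3 + m4) % 2   # even parity over positions 4, 5, 6, 7
    return [p1, p2, m1, p3, m2, m3, m4]   # digits in positions 1..7

print(hamming_encode(0, 1, 0, 0))  # [1, 0, 0, 1, 1, 0, 0] for decimal 4
print(hamming_encode(0, 1, 1, 0))  # [1, 1, 0, 0, 1, 1, 0] for decimal 6
```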

The code can now be constructed by adding the appropriate checking digits to the message digits.
Consider, for example, the message 0100 (i.e., decimal 4), as shown in the table below.

Position:  1   2   3   4   5   6   7
Digit:    p1  p2   0  p3   1   0   0
Thus checking digit p1 is set equal to 1 so as to establish even parity in positions 1, 3, 5, and 7.
Similarly, it is evident that p2 must be 0 and p3 must be 1, so that even parity is established,
respectively, in positions 2, 3, 6, and 7 and 4, 5, 6, and 7. The Hamming code for the decimal digits
coded in BCD is shown in Table 1.8.

Error location and correction are performed for the Hamming code in the following manner.
Suppose, for example, that the sequence 1101001 is transmitted but, owing to an error in the fifth
position, the sequence 1101101 is received. The location of the error can be determined by
performing three parity checks as follows:

Thus, the position number formed as c1c2c3 is 101, which means that the location of the error is in
position 5. To correct the error, the digit in position 5 is complemented and the correct message
1101001 is obtained.
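The three parity checks that locate and correct a single error can be sketched similarly (illustrative Python, not from the text):

```python
def hamming_correct(word):
    """Single-error correction for a 7-digit Hamming word (positions 1..7).
    Returns the corrected word and the error position (0 = no error)."""
    b = [0] + list(word)               # pad so that b[1] is position 1
    c3 = b[1] ^ b[3] ^ b[5] ^ b[7]     # parity check on positions 1, 3, 5, 7
    c2 = b[2] ^ b[3] ^ b[6] ^ b[7]     # parity check on positions 2, 3, 6, 7
    c1 = b[4] ^ b[5] ^ b[6] ^ b[7]     # parity check on positions 4, 5, 6, 7
    pos = 4 * c1 + 2 * c2 + c3         # position number c1 c2 c3
    if pos:
        b[pos] ^= 1                    # complement the erroneous digit
    return b[1:], pos

# 1101001 transmitted, 1101101 received: position number 101 = 5.
corrected, pos = hamming_correct([1, 1, 0, 1, 1, 0, 1])
print(pos, corrected)                  # 5, and 1101001 is recovered
```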

It is easy to prove that the Hamming code constructed as shown above is a code whose distance is
three. Consider, for example, the case where the two original four-bit (code) words differ in only
one position, e.g., 1001 and 0001. Since each message digit appears in at least two parity checks,
the parity checks that involve the digit in which the two code words differ will result in different
parities and hence different checking digits will be added to the two words, making the distance
between them equal to three. For example, consider the two words below.

The two words differ in only m1 (i.e., position 3). Parity checks 1-3-5-7 and 2-3-6-7 for these two
words will give different results. Therefore, the parity-checking digits p1 and p2 must be different
for these words. Clearly, the foregoing argument is valid in the case where the original code words
differ in two of the four positions. Thus, the Hamming code has a distance of three.

If the distance is increased to four, by adding a parity bit to the code in Table 1.8 in such a way that
all eight digits have even parity, the code may be used for single-error correction and double-error
detection in the following manner. Suppose that two errors occur; then the overall parity check is
satisfied but the position number (determined as before from the first seven digits) will indicate an
error. Clearly, such a situation indicates the existence of a double error. The error positions,
however, cannot be located. If only a single error occurs, the overall parity check will detect it.
Now, if the position number is 0 then the error is in the last parity bit; otherwise, it is in the position
given by the position number. If all four parity checks indicate even parities then the message is
correct.

Unit I
Part B: Boolean Algebra and Switching Functions
Switching algebra
The basic concepts of switching algebra will be introduced by means of a set of postulates, from which
we shall derive useful theorems and develop necessary tools that will enable us to manipulate and
simplify algebraic expressions.

Fundamental postulates
The basic postulate of switching algebra is the existence of a two-valued switching variable that can
take either of two distinct values, 0 and 1. Precisely stated,
if x is a switching variable then

These values are often referred to as the truth values of x.

A switching algebra is an algebraic system consisting of the set {0, 1}, two binary operations called
OR and AND, denoted by the symbols + and · respectively, and one unary operation called NOT,
denoted by a prime.

The definitions of the OR and AND operations are as follows:

Thus the OR combination of two switching variables x + y is equal to 1 if the value of either x or y is 1
or if the values of both x and y are 1. The AND combination of these variables x · y is equal to 1 if and
only if the values of x and y are both equal to 1. The result of the OR operation is very often called the
(logical) sum or union and may be denoted by ∪ or ∨. The result of the AND operation is referred to
as the (logical) product or intersection, and is denoted by ∩ or ∧. We shall generally omit the dot · and
write xy to mean x · y.

The NOT operation, which is also known as complementation, is defined as follows:

The preceding postulates and definitions of switching operations enable us to derive many useful
theorems and develop an entire algebraic structure that may be advantageously applied to switching
circuits.

Basic Theorems and Properties
The first property that drastically differs from the algebra of real numbers and accounts for the special
characteristics of switching algebra, is the idempotent law for a switching variable x:

To prove this property, we shall employ perfect induction. Perfect induction is a method of proof
whereby a theorem is verified for every possible combination of values that the variables may assume.
Since x is a two-valued variable, x + x = x may assume the values 1 + 1 = 1 and 0 + 0 = 0. These
equations, being identities, clearly verify the validity of Eq. (3.1), and similarly for Eq. (3.2) we have
1 · 1 = 1 and 0 · 0 = 0.

If x is a switching variable, then

The following two pairs of relations establish the commutativity and associativity of switching
operations. The convention adopted for parenthesizing is that of ordinary algebra, where x + y · z
means x + (y · z) and not (x + y) · z. Let x, y, and z be switching variables. Then

In addition, for every switching variable x,

The properties established by Eqs. (3.2) through (3.12) can be proved by the method of perfect
induction. The actual proofs are left to the reader as exercises. It is the associative law which enables us
to extend the definitions of the AND and OR operations to more than two variables, i.e., we write T = x
+ y + z to mean that T equals 1 if any of x, y, or z, or any combination thereof, equals 1.

In switching algebra, multiplication distributes over addition and addition distributes over
multiplication – a property known as the distributive law:

To verify Eq. (3.13) for every possible combination of values of x, y, and z, it is convenient to tabulate
these combinations in a table called a truth table or table of combinations. Since every variable may

assume one of two values, 0 or 1, the truth table for the three variables contains 2^3 = 8 combinations.
These combinations are tabulated in the leftmost column of Table 3.1.

The value of x(y + z) is computed for every possible combination of x and y + z. The value of xy + xz is
computed independently by adding the entries in columns xy and xz. Since the two different methods of
computation yield identical results, as shown in the two rightmost columns, Eq. (3.13) is verified.
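The same perfect-induction check can be mechanized; the sketch below (illustrative Python) evaluates both sides of Eq. (3.13) for all eight rows of the truth table:

```python
from itertools import product

# Verify x(y + z) = xy + xz for all 2^3 = 8 combinations, as in Table 3.1.
for x, y, z in product((0, 1), repeat=3):
    assert (x & (y | z)) == ((x & y) | (x & z))
print("Eq. (3.13) verified for all 8 combinations")
```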

We observe that all the preceding properties are grouped in pairs. Within each pair, one statement can
be obtained from the other by interchanging the OR and AND operations and replacing the constants 0
and 1 by 1 and 0, respectively. Any two statements or theorems that have this property are called dual,
and this quality of duality that characterizes switching algebra is known as the principle of duality. It
stems from the symmetry of the postulates and definitions of switching algebra with respect to the two
operations and two constants. The implication of the concept of duality is that it is necessary to prove
only one of each pair of statements because its dual is thereby also proved.

Switching expressions and their manipulation


By a switching expression we mean the combination of a finite number of switching variables (x, y,
etc.) and constants (0, 1) by means of switching operations (+, ·, and '). More precisely, any switching
constant or variable is a switching expression, and if T1 and T2 are switching expressions then so are
T1', T2', T1 + T2, and T1T2. No other combinations of variables and constants are switching expressions.
The properties to be presented below in Eqs. (3.15) through (3.20) provide the basic tools for the
simplification of switching expressions. They establish the notion of redundancy and, like all the
preceding properties, they appear in dual forms. Equation (3.15) and its dual (3.16) express the
absorption law of switching algebra.

The method of proof by perfect induction is efficient, as long as the number of combinations for which
the statement is to be verified is small. In other cases, algebraic procedures are more appropriate, such
as those demonstrated in the following proof of Eq. (3.15).

Another property of switching expressions, important in their simplification, is the following:

Equation (3.17) is proved as follows.

The consensus theorem is noteworthy in that it is used frequently in the simplification of switching
expressions. It is stated in the following two equations:

The extra term yz in Eq. (3.19) is known as the consensus.
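The consensus theorem itself can be confirmed by perfect induction (illustrative Python sketch):

```python
from itertools import product

# Eq. (3.19): xy + x'z + yz = xy + x'z; the consensus term yz is redundant.
for x, y, z in product((0, 1), repeat=3):
    lhs = (x & y) | ((1 - x) & z) | (y & z)
    rhs = (x & y) | ((1 - x) & z)
    assert lhs == rhs
print("consensus term yz is redundant in all 8 cases")
```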

The preceding properties permit a variety of manipulations on switching expressions. In particular, they
enable us (whenever possible) to convert an expression into an equivalent one with fewer literals, where
by a literal we mean an appearance of a variable or its complement. For example, while the left-hand
side of Eq. (3.19) consists of six literal appearances, its right-hand side consists of only four. If the
value of a switching expression is independent of the value of some literal xi, then xi is said to be
redundant. Equations (3.1) through (3.20) provide, among other things, the tools for manipulating
expressions so as to eliminate redundant literals.

It is important to observe that no inverse operations are defined in switching algebra and, consequently,
no cancellations are allowed. For example, if A + B = A + C, the equality of B and C is not implied; in
fact, if A = B = 1 and C = 0 then 1 + 1 = 1 + 0, but B ≠ C. Similarly, B is not necessarily equal to C if
AB = AC.

Example Simplify the expression

by eliminating redundant literals.

Hence, T (x, y, z) is actually independent of the values of x and y and depends only on z.
De Morgan’s theorems
The rules governing complementation operations are summarized by three theorems. The first is the
involution theorem:

Proof Equation (3.21) is obvious by perfect induction.

De Morgan’s theorems for two variables are

Proof The proof of Eq. (3.22) follows by perfect induction, using the truth table of Table 3.2;

(x + y)' and x'y' are computed independently and are shown to be identical for all possible combinations
of values of x and y. The proof of Eq. (3.23) then follows by the principle of duality.

For n variables, Eqs. (3.22) and (3.23) can be expressed as follows: the complement of any expression
can be obtained by replacing each variable and element with its complement and, at the same time,
interchanging the OR and AND operations, that is,

Equation (3.24) is known as the general De Morgan’s theorem and its proof follows immediately from
Eq. (3.22) and mathematical induction on the number of operations.
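The generalized theorem lends itself to an exhaustive check for any fixed n (illustrative sketch, here n = 4):

```python
from itertools import product

# General De Morgan, Eq. (3.24), for n = 4:
# (x1 + x2 + x3 + x4)' = x1'x2'x3'x4'  and  (x1x2x3x4)' = x1' + x2' + x3' + x4'
for bits in product((0, 1), repeat=4):
    complements = [1 - b for b in bits]
    assert 1 - max(bits) == min(complements)   # complement of the OR
    assert 1 - min(bits) == max(complements)   # complement of the AND
print("De Morgan verified for all 16 combinations")
```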

Example In order to simplify the expression

it is necessary first to apply De Morgan’s theorem and then to multiply out the expressions in
parentheses:

Hence, T = 1 independently of the values of the variables.

Example Prove the following identity:

From the application of Eq. (3.19) to x y + yz, it follows that the term x z may be added to the left-
hand side of the equation; i.e., the equation becomes

Another application of Eq. (3.19) to the first, third, and fourth terms in the augmented left-hand side
of the equation shows that yz is redundant. After elimination of yz, the left-hand side of the equation is
identical to its right-hand side (i.e., both consist of identical terms), and thus the proof is complete.

Switching Functions
Definitions
Let T (x1, x2, . . . , xn) be a switching expression. Since each of the variables x1, x2, . . . , xn can
independently assume either of the two values 0 or 1, there are 2n combinations of values to be
considered in determining the values of T . In order to determine the value of an expression for a given
combination, it is only necessary to substitute the values for the variables in the expression. For
example, if

then, for the combination x = 0, y = 0, z = 1, the value of the expression is 1 because T (0, 0, 1) = 0’1 +
01’ + 0’0’ = 1. In a similar manner, the value of T may be computed for every combination, as shown
in the right-hand column of Table 3.3.

If we now repeat the above procedure and construct the truth table for the expression

we find that it is identical to that of Table 3.3. Hence, for every possible combination of the variables,
the value of the first expression is identical to the value of the second. Thus different
switching expressions may represent the same assignment of values specified by the right-hand column
of a truth table. The values assumed by an expression for all the combinations of variables x1, x2, . . . ,
xn define a switching function. In other words, a switching function f (x1, x2, . . . , xn) is a
correspondence that associates an element of the algebra with each of the 2^n combinations of variables
x1, x2, . . . , xn. This correspondence is best specified by means of a truth table. Note that each truth
table defines only one switching function, although this function may be expressed in a number of
ways.
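Such a truth table is easy to generate mechanically. The expression used below is an assumption for illustration (the text's own expression is not reproduced here); it is chosen to agree with the substitution T(0, 0, 1) = 0'·1 + 0·1' + 0'·0' = 1 shown above:

```python
from itertools import product

# Illustrative expression (an assumption, not the text's own):
# T(x, y, z) = x'z + xz' + x'y'
def T(x, y, z):
    return ((1 - x) & z) | (x & (1 - z)) | ((1 - x) & (1 - y))

# Tabulating T for all 2^3 combinations defines its switching function.
for x, y, z in product((0, 1), repeat=3):
    print(x, y, z, T(x, y, z))
```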

The complement f'(x1, x2, . . . , xn) is a function whose value is 1 whenever the value of f (x1, x2, . . . ,
xn) is 0, and 0 whenever the value of f is 1. The sum of two functions f (x1, x2, . . . , xn) and g(x1,
x2, . . . , xn) is 1 for every combination in which either f or g or both equal 1, while their product is
equal to 1 if and only if both f and g equal 1. If a function f (x1, x2, . . . , xn) is specified by means of a
truth table, its complement is obtained by complementing each entry in the column headed f . New
functions that are equal to the sum f + g and the product f g are obtained by adding or multiplying the
corresponding entries in the f and g columns.

Example Two functions f (x, y, z) and g(x, y, z) are specified in columns f and g of Table 3.4. The complement
f’, the sum f + g, and the product f. g are specified in the corresponding columns.

Simplification of Expressions
The truth table assigns to each combination of variable values a specific switching element.
Consequently, all the properties of switching elements (Eqs. (3.1) through (3.24)) are valid when the
elements are replaced by expressions. For example, xy + xyz = xy by virtue of the property established
in Eq. (3.15).

Example Simplify the expression

First, apply the consensus theorem, Eq. (3.19), to the first three terms of T, letting x, y, and z replace
A’, C’ , and BD, respectively. As a result the third term, BC’D, is redundant. Next, apply the
distributive law, Eq. (3.13), to the fourth and fifth terms. This gives the expression AD’ (B’ + BC).
Letting x and y replace B' and C, respectively, and applying Eq. (3.17) yields AD' (B' + C). No other
literal is redundant; thus the simplest expression for T is

Example Simplify the expression

First apply Eq. (3.17) to the first two terms and to the last two terms. This yields

The next step in the simplification is not as obvious; in order to simplify T , it is first necessary to
expand it. Since

we have

The application of Eq. (3.15) to the first and last terms results in the elimination of the last term.
Now apply Eq. (3.19) to the second, third, and fourth terms, letting x, y, and z replace D, B, and AC,
respectively. This step eliminates ABC and yields

Canonical and Standard Form
Canonical Forms
Truth tables have been shown to be the means for describing switching functions. An expression
representing a switching function is derived from the table by finding the sum of all the terms that
correspond to those combinations (i.e., rows) for which the function assumes the value 1. Each term is
a product of the variables on which the function depends. Variable xi appears in uncomplemented form
in the product if it has value 1 in the corresponding combination, and it appears in complemented form
if it has value 0. For example, the product term that corresponds to row 3 of Table 3.5, where the values
of x, y, and z are 0, 1, and 1, is x’ yz.

The sum of all product terms for the function defined by Table 3.5 is

A product term that, as for each term in the above expression, contains each of the n variables as
factors in either complemented or uncomplemented form is called a minterm. Its characteristic property
is that it assumes the value 1 for exactly one combination of variables. If we assign to each of the n
variables a fixed arbitrary value, either 0 or 1, then, of the 2^n minterms, one and only one minterm will
have value 1 while all the remaining 2^n − 1 minterms will have value 0, because they differ by at least
one literal, whose value is 0, from the minterm whose value is 1. The sum of all minterms derived from
those rows for which the value of the function is 1 takes on the value 1 or 0 according to the value
assumed by f.

Therefore, this sum is in fact an algebraic representation of f. An expression of this type is called a
canonical sum of products or disjunctive normal expression.

Switching functions are usually expressed in a compact form, obtained by listing the decimal codes
associated with the minterms for which f = 1. The decimal codes are derived from the truth tables by
regarding each row as a binary number; e.g., the minterm x’ yz’ is associated with row 010, which,
when interpreted as a binary number, is equal to 2. The function defined by Table 3.5 can thus be
expressed as

where ∑( ) means that f (x, y, z) is the sum of all the minterms whose decimal code is one of the
numbers given within the parentheses.

A switching function can also be expressed as a product of sums. This is accomplished by considering
those combinations for which the function is required to have the value 0. For example, the sum term x
+ y + z’ has the value 1 for all combinations of x, y, and z, except for x = 0, y = 0, and z = 1, when it has
the value 0. Any similar term assumes the value 0 for only one combination. Consequently, a product
of such sum terms will assume the value 0 for precisely those combinations for which the individual
terms are 0. For all other combinations, the product-of-sum terms will have the value 1. A sum term
that contains each of the n variables in either a complemented or an uncomplemented form is called a
maxterm. An expression formed of the product of all maxterms for which the function takes on the
value 0 is called a canonical product of sums or conjunctive normal expression.

In each maxterm, a variable xi appears in uncomplemented form if it has the value 0 in the
corresponding row in the truth table, and it appears in complemented form if it has the value 1. For
example, the maxterm that corresponds to the row whose decimal code is 1 in Table 3.5 is x + y + z’ .
The canonical product-of-sums expression for the function defined by Table 3.5 is given by

This function can also be expressed in a compact form by listing the combinations for which f is to
have value 0, i.e.,

where ∏( ) means the product of all maxterms whose decimal code is given within the parentheses.
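Both compact forms can be read off a truth table mechanically; the function used below is an arbitrary illustration, not the one in Table 3.5:

```python
from itertools import product

def f(x, y, z):
    return ((1 - x) & y & z) | (x & (1 - y))   # illustrative: x'yz + xy'

minterms, maxterms = [], []
for x, y, z in product((0, 1), repeat=3):
    code = 4 * x + 2 * y + z                   # row read as a binary number
    (minterms if f(x, y, z) else maxterms).append(code)

# f = Sum(minterms) = Prod(maxterms)
print("f = Sum", tuple(minterms), "= Prod", tuple(maxterms))
```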

One way of obtaining the canonical forms of any switching function is by means of Shannon’s
expansion theorem (also called Shannon’s decomposition theorem), which states that any switching
function f (x1, x2, . . . , xn) can be expressed as either

Or

Proof This proceeds by perfect induction. Let x1 be equal to 1; then x1' equals 0 and Eq. (3.25) becomes
an identity, i.e.,

Similarly, substituting x1 = 0 and x1' = 1 also reduces Eq. (3.25) to an identity and thus the theorem is
proved.

If we now apply the expansion theorem with respect to variable x2 to each of the two terms in Eq.
(3.25), we obtain

The expansion of the function about the remaining variables yields the disjunctive normal form. In a
similar manner, repeated applications of the dual expansion theorem, Eq. (3.26), to f (x1, x2, . . . , xn)
about its variables x1, x2, . . . , xn yield the conjunctive normal form.
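Shannon's expansion about one variable can likewise be verified exhaustively for a sample function (illustrative sketch):

```python
from itertools import product

# Eq. (3.25): f(x1, x2, ..., xn) = x1·f(1, x2, ..., xn) + x1'·f(0, x2, ..., xn)
def f(x, y, z):
    return (x & y) | ((1 - y) & z)             # arbitrary example function

for x, y, z in product((0, 1), repeat=3):
    expanded = (x & f(1, y, z)) | ((1 - x) & f(0, y, z))
    assert expanded == f(x, y, z)
print("Shannon expansion about x verified for all 8 combinations")
```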

A simpler and faster procedure for obtaining the canonical sum-of-products form of a switching
function is summarized as follows.
1. Examine each term; if it is a minterm, retain it, and continue to the next term.
2. In each product that is not a minterm, check the variables that do not occur; for each xi that does
not occur, multiply the product by (xi + xi').
3. Multiply out all products and eliminate redundant terms.
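The three steps can be sketched as a small program (illustrative Python; a product term is given as a dict mapping a variable to 1 for uncomplemented or 0 for complemented):

```python
VARS = ("x", "y", "z")

def expand_to_minterms(terms):
    """Steps 1-3: expand each product term by (xi + xi') for every missing
    variable xi, then collect the results in a set so that redundant
    (duplicate) minterms are eliminated."""
    minterms = set()
    for term in terms:
        partial = [dict(term)]
        for v in VARS:
            if v not in term:                    # xi does not occur:
                partial = [dict(p, **{v: b})     # multiply by (xi + xi')
                           for p in partial for b in (0, 1)]
        for p in partial:
            minterms.add(tuple(p[v] for v in VARS))
    return sorted(minterms)

# Example: xy' + z expands to the minterms 001, 011, 100, 101, 111.
print(expand_to_minterms([{"x": 1, "y": 0}, {"z": 1}]))
```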

The canonical product-of-sums form is obtained in a dual manner by expressing the function as a
product of factors and adding the product xi xi' to each factor in which the variable xi is missing. The
expansion into canonical form is obtained by repeated applications of Eq. (3.14).

In some instances, it is desirable to transform a function from one form to another. This transformation
can be accomplished by writing down the truth table and using the previously described techniques. An
alternative method, which is based on the involution theorem (x')' = x, is illustrated by the following
example.

Example Determine the canonical sum-of-products form for T (x, y, z) = x y + z + xyz. Applying rules
1–3, we obtain

Example Let us determine the canonical product-of-sums form of

Using the above procedure,

Example Find the canonical product-of-sums form for the function

Using the involution theorem,

The complement T’ consists of those minterms that are not contained in the expression for T , i.e.,

Algebraic Simplification of Digital Logic Gates, Properties of XOR Gates,
The EXCLUSIVE OR, denoted ⊕, is a binary operation on the set of switching elements. It assigns
value 1 to two arguments if and only if they have complementary values; that is, A ⊕ B = 1 if either A
or B is 1, but not both. It is evident that the EXCLUSIVE-OR operation assigns to
each pair of elements its modulo-2 sum; consequently, it is often called the modulo-2 addition
operation. The following properties of the EXCLUSIVE OR are direct consequences of its definition:

In general, the modulo-2 addition of an even number of elements whose value is 1 gives 0 and the
modulo-2 addition of an odd number of elements whose value is 1 gives 1. The usefulness of the
modulo-2-addition operation will become evident in subsequent chapters, and especially in the analysis
and design of linear sequential machines.
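The parity property of modulo-2 addition is easy to demonstrate (illustrative sketch):

```python
from functools import reduce

# Modulo-2 sum of n elements: 1 exactly when an odd number of them are 1.
def mod2_sum(bits):
    return reduce(lambda a, b: a ^ b, bits, 0)

print(mod2_sum([1, 1, 0, 1]))   # three 1s (odd)  -> 1
print(mod2_sum([1, 1, 0, 0]))   # two 1s (even)   -> 0
```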

Logic Gates
Logic gates are the fundamental building blocks of digital systems. A logic gate produces one output
level when certain combinations of input levels are present, and a different output level when other
combinations of input levels are present. There are three basic types of gates: AND, OR, and NOT.

The interconnection of gates to perform a variety of logical operations is called logic design. The inputs
and outputs of logic gates can occur only at two levels: 1 and 0, High and Low, True and False, or On
and Off. A table that lists all the possible combinations of input variables and the corresponding
outputs is called a truth table. It shows how the output of a logic circuit responds to the various
combinations of logic levels at the inputs. Level logic is a logic in which voltage levels represent logic 1
and logic 0. Level logic may be positive logic or negative logic. In positive logic, the higher of the two
voltage levels represents logic 1 and the lower represents logic 0. In negative logic, the lower of the two
voltage levels represents logic 1 and the higher represents logic 0.

In the TTL (Transistor-Transistor Logic) family the voltage levels are +5 V and 0 V: logic 1 is
represented by +5 V and logic 0 by 0 V.

AND Gate:
It is represented by "." (dot). It has two or more inputs but only one output. The output assumes the
logic 1 state only when each one of its inputs is at logic 1; the output assumes the logic 0 state if even
one of its inputs is at logic 0. The AND gate is also called an All or Nothing gate.

Symbol: Truth Table: Boolean Expression:
Y= A.B

IC 7408 contains 4 two-input AND gates
IC 7411 contains 3 three-input AND gates
IC 7421 contains 2 four-input AND gates

OR Gate:
It is represented by "+" (plus). It has two or more inputs but only one output. The output assumes the
logic 1 state when at least one of its inputs is at logic 1; the output assumes the logic 0 state only when
each one of its inputs is at logic 0. The OR gate is also called an Any or All gate. It is also called an
inclusive OR gate because it includes the case in which both inputs are 1.
Symbol Truth Table Boolean Expression
Y=A+B

IC 7432 contains 4 two-input OR gates.

NOT Gate:
It is represented by "―" (bar). It is also called an inverter. It has only one input and one output, and its
output is always the complement of its input: the output assumes logic 1 when the input is logic 0, and
logic 0 when the input is logic 1.
Symbol Truth Table Boolean Expression
A Y
0 1
1 0
Y = A'
Logic circuits of any complexity can be realized using only AND, OR, and NOT gates. Circuits built
with these three gates are called AND-OR-INVERT (AOI) logic circuits.

Universal Gates
The universal gates are NAND and NOR, each of which can realize logic circuits single-handedly;
hence NAND and NOR are called universal building blocks. Both NAND and NOR can perform all
three basic logic functions, and AOI logic can be converted to NAND logic or NOR logic.

NAND Gate:
The NAND output assumes logic 0 only when each of its inputs assumes logic 1.
NAND means NOT AND, i.e., the AND output is NOTed.
NAND→AND & NOT gates
Symbol Truth Table

Boolean Expression:
Y = (A . B)'

Bubbled OR gate: Bubbled OR gate is OR gate with inverted inputs. The output of this is same as
NAND gate.
Y = A' + B' = (AB)'

Bubbled NAND Gate: Bubbled NAND gate is NAND gate with inverted inputs. The output of this is
same as OR gate.
Y = (A' . B')' = A + B

NOR Gate:
The NOR output assumes logic 1 only when each of its inputs assumes logic 0.
NOR means NOT OR, i.e., the OR output is NOTed.
NOR→OR & NOT gates
Symbol Truth Table Boolean Expression
A B Y
0 0 1
0 1 0
1 0 0
1 1 0
Y = (A+B)'

Bubbled AND gate: an AND gate with inverted inputs is called a bubbled AND gate, and a NOR gate
is equivalent to a bubbled AND gate. A bubbled AND gate is also called a negative AND gate. Since
its output assumes the HIGH state only when all its inputs are in the LOW state, a NOR gate is also
called an active-LOW AND gate. The output Y is 1 only when both A and B are equal to 0, i.e., only
when both A' and B' are equal to 1.

Y = A' . B' = (A + B)'

Bubbled NOR Gate: a bubbled NOR gate is a NOR gate with inverted inputs. Its output is the same as
that of an AND gate.
Y = (A' + B')' = A.B
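All four bubbled-gate equivalences are instances of De Morgan's theorems and can be checked exhaustively (illustrative sketch):

```python
from itertools import product

# bubbled OR = NAND, bubbled AND = NOR, bubbled NAND = OR, bubbled NOR = AND
for A, B in product((0, 1), repeat=2):
    nA, nB = 1 - A, 1 - B                  # inverted inputs
    assert (nA | nB) == 1 - (A & B)        # A' + B'   = (A.B)'
    assert (nA & nB) == 1 - (A | B)        # A' . B'   = (A+B)'
    assert 1 - (nA & nB) == (A | B)        # (A'.B')'  = A + B
    assert 1 - (nA | nB) == (A & B)        # (A'+B')'  = A . B
print("all four bubbled-gate identities hold")
```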

IC 7402 contains 4 two-input NOR gates
IC 7427 contains 3 three-input NOR gates
IC 7425 contains 2 four-input NOR gates

Exclusive OR (XOR) Gate:


It has two inputs and only one output. The output assumes 1 when the inputs are not equal; hence the
XOR gate is also called an anti-coincidence gate or inequality detector.
Symbol Truth Table
Boolean Expression

A high output is generated only when an odd number of high inputs is present. This is why the XOR
function is also known as an odd function.

TTL IC 7486 has 4 XOR gates.

CMOS IC 74C86 has 4 XOR gates.

The EX-NOR Gate:


It is an XOR gate followed by a NOT gate. It has two inputs and one output. The output assumes 0
when one input is 0 and the other is 1. It can be used as an equality detector because it outputs a 1 only
when its inputs are equal.
Symbol Truth Table Boolean Expression

TTL IC 74LS266 contains 4 XNOR gates.
CMOS IC 74C266 contains 4 XNOR gates.
High-speed CMOS IC 74HC266 contains 4 XNOR gates.

Multilevel NAND/NOR realizations.
Two level implementation:
Case (I): The implementation of a logic expression such that each one of the inputs has to pass through
only two gates to reach the output is called Two-level implementation.
 Both SOP and POS forms result in two-level logic
 Two-level implementation can use AND and OR gates, only NAND gates, or only NOR gates
 Implementing a Boolean expression with only NAND gates requires that the function be in SOP form.
Example: Implement the function F = AB + CD using AND-OR logic and NAND-NAND logic

(a) AND – OR Logic (b) NAND – NAND Logic

Example: Implement the function F = XY' + X'Y + Z using AND-OR logic and NAND-NAND
logic.

(a) AND – OR Logic (b) NAND – NAND Logic
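That the NAND-NAND network realizes the same function as the AND-OR network can be verified exhaustively (illustrative sketch for F = AB + CD):

```python
from itertools import product

def nand(*xs):
    return 1 - min(xs)                     # 0 only when every input is 1

# F = AB + CD: second-level NAND of the first-level NANDs.
for A, B, C, D in product((0, 1), repeat=4):
    and_or = (A & B) | (C & D)
    nand_nand = nand(nand(A, B), nand(C, D))
    assert nand_nand == and_or
print("NAND-NAND matches AND-OR for all 16 input combinations")
```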

Case (II): The implementation of Boolean expressions with only NOR gates requires that the function
be in POS form.

Example: Implementation of the function (A+B)(C'+D') using OR-AND logic and NOR-NOR logic.

(a) OR – AND Logic (b) NOR – NOR Logic

Example: Implementation of the function

using OR-AND logic and NOR-NOR logic.

(a) OR – AND Logic (b) NOR – NOR Logic
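Dually, the NOR-NOR network can be checked against the OR-AND network (illustrative sketch for F = (A + B)(C' + D')):

```python
from itertools import product

def nor(*xs):
    return 1 - max(xs)                     # 1 only when every input is 0

# F = (A + B)(C' + D'): second-level NOR of the first-level NORs.
for A, B, C, D in product((0, 1), repeat=4):
    or_and = (A | B) & ((1 - C) | (1 - D))
    nor_nor = nor(nor(A, B), nor(1 - C, 1 - D))
    assert nor_nor == or_and
print("NOR-NOR matches OR-AND for all 16 input combinations")
```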

Summary:

Unit II
Minimization of Switching Functions
Introduction:
The aim in simplifying a switching function f (x1, x2, . . . , xn) is to find an expression g(x1, x2, . . . , xn)
which is equivalent to f and which minimizes some cost criteria. There are various criteria to determine
minimal cost. The most common are:
1. the minimum number of appearances of literals (recall that a literal is a variable in complemented
or uncomplemented form);
2. the minimum number of literals in a sum-of-products (or product-of-sums) expression;
3. the minimum number of terms in a sum-of-products expression, provided that there is no other such
expression with the same number of terms and fewer literals.
In subsequent discussions, we shall adopt the third criterion and restrict our attention to the sum-of-
products form. Of course, dual results can be obtained by employing the product-of-sums form instead.
Note that the expression xy + xz + x'y' is minimal according to criterion 3, although it may be written
as x(y + z) + x'y', which requires fewer literals.

Consider the minimization of the function f (x, y, z) given below. A combination of the first and
second product terms yields x'z'(y + y') = x'z'. Similarly, combinations of the second and third,
fourth and fifth, and fifth and sixth terms yield a reduced expression for f :

This expression is said to be in an irredundant form, since any attempt to reduce it, either by deleting
any of the four terms or by removing a literal, will yield an expression that is not equivalent to f . In
general, a sum-of-products expression, from which no term or literal can be deleted without altering its
logic value, is called an irredundant, or irreducible, expression.
The above reduction procedure is not unique, and a different combination of terms may yield different
reduced expressions. In fact, if we combine the first and second terms of f , the third and sixth, and the
fourth and fifth, we obtain the expression

In a similar manner, by combining the first and fourth terms, the second and third, and the fifth and
sixth, we obtain a third irredundant expression,

While all three expressions are irredundant, only the latter two are minimal. Consequently, an
irredundant expression is not necessarily minimal, nor is the minimal expression always unique. It is,
therefore, desirable to develop procedures for generating the set of all minimal expressions, so that the
appropriate one may be selected according to other criteria (e.g., the distribution of gate loads, etc.).

Minimization with theorems:


Axioms and Laws of Boolean algebra:
Axioms and postulates of Boolean Algebra are a set of Logical expressions that are accepted without
proof and upon which a set of useful theorems are built.
Axioms for AND operation: Axioms for OR operation: Axioms for NOT operation:
4. 0 . 0 = 0 1. 0 + 0 = 0 1. 1’ = 0
5. 0 . 1 = 0 2. 0 + 1 = 1 2. 0’ = 0
6. 1 . 0 = 0 3. 1 + 0 = 1
7. 1. 1 = 1 4. 1 + 1 = 1
a. Laws – Complement operation:
0’ = 1
1’ = 0
If A = 0, then A’ = 1
If A = 1, then A’ = 0
(A’)’ = A
b. Laws – AND operation:
A . 0 = 0 (Null law)
A . 1 = A
A . A = A
A . A’ = 0
c. Laws – OR operation:
A + 0 = A
A + 1 = 1 (Null law)
A + A = A
A + A’ = 1
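The laws above can be checked exhaustively, since a Boolean variable takes only the values 0 and 1. The sketch below models AND as min, OR as max, and NOT A as 1 − A (this encoding is my own, not from the text):

```python
# Exhaustive check of the basic Boolean laws, modeling a Boolean value as
# the integer 0 or 1: AND is min, OR is max, NOT A is 1 - A.
def laws_hold():
    for A in (0, 1):
        checks = [
            min(A, 0) == 0,      # A . 0 = 0  (null law)
            min(A, 1) == A,      # A . 1 = A
            min(A, A) == A,      # A . A = A  (idempotence)
            min(A, 1 - A) == 0,  # A . A' = 0
            max(A, 0) == A,      # A + 0 = A
            max(A, 1) == 1,      # A + 1 = 1  (null law)
            max(A, A) == A,      # A + A = A
            max(A, 1 - A) == 1,  # A + A' = 1
            1 - (1 - A) == A,    # (A')' = A
        ]
        if not all(checks):
            return False
    return True

print(laws_hold())  # → True
```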
d. Commutative law
i. A+B=B+A
This law states that A OR B is same as B OR A.


Truth Table:
A B A+B B+A
0 0 0 0
0 1 1 1
1 0 1 1
1 1 1 1

ii. A.B = B.A


This law states that A . B is same as B . A.

Truth Table:
A B A.B B.A
0 0 0 0
0 1 0 0
1 0 0 0
1 1 1 1
Commutative law can be extended to any number of variables.

e. Associative law:
(A + B) + C = A + (B + C)
This law states that A ORed with B, then ORed with C, is the same as A ORed with (B OR C).
Truth Table:
A B C A+B (A+B)+C B+C A+(B+C)
0 0 0 0 0 0 0
0 0 1 0 1 1 1
0 1 0 1 1 1 1
0 1 1 1 1 1 1
1 0 0 1 1 0 1
1 0 1 1 1 1 1
1 1 0 1 1 1 1
1 1 1 1 1 1 1
Similarly, (A.B).C = A. (B.C)
This law can be extended to any number of variables.
f. Distributive law:
a. A ( B + C ) = A.B + A.C
This law states that ORing of several variables and ANDing the result with a single variable is
equivalent to ANDing that single variable with each of the several variables and then ORing the
products.
Truth Table:
A B C B+C A(B+C) AB AC AB+AC
0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0
0 1 0 1 0 0 0 0
0 1 1 1 0 0 0 0
1 0 0 0 0 0 0 0
1 0 1 1 1 0 1 1
1 1 0 1 1 1 0 1
1 1 1 1 1 1 1 1

b. A + BC = (A+B).(A+C)
This law states that ANDing of several variables and ORing the result with a single variable is
equivalent to ORing that single variable with each of the several variables and then ANDing the
same.
Proof:
RHS : (A+B)(A+C) = A.A + A.C+ A.B + B.C = A + AC + AB + BC = A ( 1 + C + B) + BC
= A + BC = LHS
Hence Proved.

g. Redundant Literal Rule:
A + A’B = A + B
Proof:
A + A’B = (A+A’)(A+B) by Distributive law
= 1.(A+B) = A + B
Similarly, A (A’+ B) = AB
Proof: A (A’ + B) = A.A’ + A.B = 0 + AB = AB
h. Idempotence Law:
A.A = A
If A = 0, then 0.0 = 0 = A
If A = 1, then 1.1 = 1 = A
This law states that ANDing of a variable with itself is equal to that variable only.
Similarly, A + A = A
If A = 0, then A + A = 0 + 0 = 0 = A
If A = 1, then A + A = 1 + 1 = 1 = A
This law states that ORing of a variable with itself is equal to that variable only.
i. Absorption law:
a. A + A.B = A
This law states that ORing of a variable (A) with ANDing of that variable AND another
variable B is equal to that variable itself (A).
Truth table:
A B A.B A+A.B
0 0 0 0
0 1 0 0
1 0 0 1
1 1 1 1
A + A.B = A(1+B) = A.1 = A
i.e. A ORed with (A AND any logic) = A
b. A(A+B) = A
This law states that ANDing of a variable (A) with the one of that variable (A) ORed with
another variable (B) is equal to that variable itself (A).

Truth table:
A B A+B A(A+B)
0 0 0 0
0 1 1 0
1 0 1 1
1 1 1 1

A(A+B) = A.A + A.B = A (1+B) = A

j. Consensus Theorem:
i) AB + A’C + BC = AB + A’C
Proof:
LHS = AB + A’C + BC (A + A’)
= AB + A’C + ABC + A’BC
= AB (1 + C) + A’C (1 + B)
= AB + A’C = RHS
Similarly, AB + A’C + BCD = AB + A’C
ii) (A+B)(A’+C)(B+C) = (A+B)(A’+C)
Proof:
LHS = (A+B)(A’+C)(B+C)
= (A.A’ + AC + A’B + BC)(B+C)
= (AC + A’B + BC)(B+C)
= ABC + AC + A’B + A’BC + BC
= AC + A’B + BC
RHS = (A+B)(A’+C)
= A.A’ + AC + A’B + BC
= AC + A’B + BC
LHS = RHS
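Because only three variables are involved, both forms of the consensus theorem can be verified by brute force over all eight input combinations. A small sketch (the 0/1 encoding is my own):

```python
from itertools import product

# Exhaustive check of the consensus theorem and its dual over all eight
# combinations of A, B, C (0 = false, 1 = true).
def consensus_holds():
    for A, B, C in product((0, 1), repeat=3):
        sop_lhs = (A and B) or ((not A) and C) or (B and C)   # AB + A'C + BC
        sop_rhs = (A and B) or ((not A) and C)                # AB + A'C
        pos_lhs = (A or B) and ((not A) or C) and (B or C)    # (A+B)(A'+C)(B+C)
        pos_rhs = (A or B) and ((not A) or C)                 # (A+B)(A'+C)
        if bool(sop_lhs) != bool(sop_rhs) or bool(pos_lhs) != bool(pos_rhs):
            return False
    return True

print(consensus_holds())  # → True
```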
k. Transposition Theorem:
AB + A’C = (A+C)(A’+B)
Proof:
RHS = (A+C)(A’+B)
= A.A’ + A.B + A’.C + BC
= 0 + A’C+AB+BC
= A’C + AB+ BC (A + A’)
= AB + ABC +A’C + A’BC
= AB + A’C = LHS

l. DeMorgan’s Theorem:
(X+Y)’ = X’.Y’
This law states that the complement of a sum of variables is equal to product of their individual
complements. This law can be represented using logic gates as
Truth Table:
X Y X+Y (X+Y)’ X’ Y’ X’.Y’
0 0 0 1 1 1 1
0 1 1 0 1 0 0
1 0 1 0 0 1 0
1 1 1 0 0 0 0

It shows that the NOR gate is equivalent to a bubbled AND gate.


Similarly, (X.Y)’ = X’ + Y’
This law states that complement of the product of variables is equal to the sum of their individual
complements. Schematically, (X.Y)’ = X’ + Y’ is represented as

Truth Table:
X Y (X.Y)’ X’ Y’ X’ + Y’
0 0 1 1 1 1
0 1 1 1 0 1
1 0 1 0 1 1
1 1 0 0 0 0

From the above truth table, column 3 is equal to column 6. Hence, (AB)’ = A’ + B’
Steps to be followed to DeMorganize a given function:
Identify the function as a SOP or POS.
Complement the individual terms and change the SOP to POS or vice-versa.
Check the individual terms whether they require DeMorganization.
If so, repeat it again till there is no such requirement.

Examples: Apply De Morgan’s theorem to get the complement of the following Boolean functions:
i. F = (A + B’).(C + D’)
To get F’, complement the whole expression:
F’ = [(A + B’).(C + D’)]’
= (A + B’)’ + (C + D’)’
= A’.B + C’.D
ii. F = (AB)’.(CD + E’F).{(AB)’ + (CD)’}
F’ = [(AB)’]’ + (CD + E’F)’ + {(AB)’ + (CD)’}’
= AB + {(CD)’.(E’F)’} + (AB.CD)
= AB + (C’ + D’).(E + F’) + ABCD
= AB(1 + CD) + (C’ + D’).(E + F’)
= AB + (C’ + D’).(E + F’)
iii. F = {(X.Y’)’}.Y’ + Z
F’ = [{(X.Y’)’}.Y’ + Z]’
= [{(X.Y’)’}.Y’]’ . Z’
= [{(X.Y’)’}’ + Y] . Z’
= [X.Y’ + Y].Z’
= (X + Y).Z’
iv. Simplify F = [(X’ + Z).(XY)’]’
Ans: F = [(X’ + Z).(XY)’]’
= (X’ + Z)’ + [(XY)’]’
= X.Z’ + XY
= X(Y + Z’)
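Both De Morgan laws, and a worked example such as (iv), can be confirmed by enumerating every input combination. A quick sanity-check sketch (names and 0/1 encoding are my own):

```python
from itertools import product

# Verify both De Morgan laws, plus example (iv): [(X'+Z).(XY)']' = X(Y + Z').
def demorgan_ok():
    for X, Y in product((0, 1), repeat=2):
        if (not (X or Y)) != ((not X) and (not Y)):   # (X+Y)' = X'.Y'
            return False
        if (not (X and Y)) != ((not X) or (not Y)):   # (X.Y)' = X' + Y'
            return False
    for X, Y, Z in product((0, 1), repeat=3):
        original = not (((not X) or Z) and (not (X and Y)))
        simplified = X and (Y or (not Z))
        if bool(original) != bool(simplified):
            return False
    return True

print(demorgan_ok())  # → True
```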
Duality:
An algebraic expression in Boolean algebra which is obtained from any valid expression by interchanging OR &
AND operation and replacing ‘1’ by ‘0’ and ‘0’ by ‘1’ is also valid. This property is called Duality principle.
For example,
1) x+1 = 1, then its duality is x.0 = 0 3) x + x’ = 1, then its duality is x.x’=0
2) x+x = x, then its duality is x.x = x 4) x + xy = x, then its duality is x.(x+y) = x

Simplify the following Boolean functions using theorems:


1) A + B + AC + AB
= A (1 + C + B) + B
= A + B (as 1 + any Boolean expression = 1)
2) A (AB + B)
= A { B ( A + 1)}
= AB
3) A’B + (A+B)
= A’B + A + B
= B (A’ + 1) + A
= B + A = A + B
4) AB’C + AB’ + C
= AB’ (C + 1) + C
= AB’ + C (as C + 1 = 1)
5) AB + C + (A+B+C)
= A + AB + B + C + C
= A (1+B) + B + C (as C + C = C)
=A+B+C (as 1 + B = 1)

6) (A+C+D)(A+C+D’)(A+C’+D)(A+B’)
= (A.A + A.C + A.D’ +A.C+ C.C+CD’+A.D+C.D+D.D’).(A+C’+D).(A+B’)
=(A +A.C+A.D’+A.C+ C + CD’ +C.D +0)(A+C’+D).(A+B’) as A.A =A & D.D’= 0
={A(1+C+D’+C) + C (1 +D’+D)}.(A+C’+D).(A+B’)
=(A + C)(A+C’+D).(A+B’) as 1 + any function = 1
= (A.A + AC’ + AD + AC + C.C’ + CD).(A+B’)
= (A + AC’ + AD + AC + 0+CD)(A+B’)
={A(1+C’+D+C)+CD}.(A+B’)
=(A+CD).(A+B’)
=A+AB’ + ACD + B’CD
= A(1+B’+CD) + B’CD
=A + B’CD
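The long derivation in example 6 can be double-checked mechanically by comparing both sides over all sixteen input combinations, a sketch under my own 0/1 encoding:

```python
from itertools import product

# Exhaustive check that (A+C+D)(A+C+D')(A+C'+D)(A+B') reduces to A + B'CD.
def example6_ok():
    for A, B, C, D in product((0, 1), repeat=4):
        lhs = (A or C or D) and (A or C or (not D)) \
              and (A or (not C) or D) and (A or (not B))
        rhs = A or ((not B) and C and D)
        if bool(lhs) != bool(rhs):
            return False
    return True

print(example6_ok())  # → True
```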

II: Find the complement of the following:


1) (BC’ + A’D)(AB’+CD’)
Ans: The complement is
{(BC’ + A’D)(AB’+CD’)}’
= (BC’ + A’D)’ + (AB’+CD’)’
= (BC’)’.(A’D)’ + (AB’)’.(CD’)’
= (B’+C)(A+D’) + (A’+B).(C’+D)
= (AB’ + AC + B’D’ + CD’) + (A’C’ + A’D + BC’ + BD)
= AB’ + AC + A’C’ + A’D + BD + B’D’ + CD’ + BC’
2) AB’ + C’D’
Ans: The complement is
(AB’ + C’D’)’
= (AB’)’ . (C’D’)’
= (A’+B).(C+D)
= A’C + A’D + BC + BD.
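A complement claim is easy to test: F’ must disagree with F on every input. The sketch below checks problem 2 this way (encoding mine):

```python
from itertools import product

# Check complement example 2: (AB' + C'D')' = A'C + A'D + BC + BD.
def complement_ok():
    for A, B, C, D in product((0, 1), repeat=4):
        f = (A and (not B)) or ((not C) and (not D))
        comp = ((not A) and C) or ((not A) and D) or (B and C) or (B and D)
        if (not f) != bool(comp):
            return False
    return True

print(complement_ok())  # → True
```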

The Karnaugh Map (K-MAP) Method:


For any complex digital logic circuit, the algebraic expression will also be very complex, and
implementing hardware from such an expression is difficult and costly. If we can minimize
the expression, it reduces the complexity and the number of gates used to build the
hardware. The expression can be minimized using Boolean algebra, or by
another method called the map method, which is a very simple and
straightforward procedure for minimizing Boolean functions. This map method is also known as
the Karnaugh map or K-map.
Karnaugh map can be drawn depending upon the number of variables present in that expression. It may
be
 Two variable Karnaugh map.
 Three variable Karnaugh map.
 Four variable Karnaugh map.
In the case of a minterm Karnaugh map, ‘1’ is placed in all those squares for which the output is ‘1’,
and ‘0’ is placed in all those squares for which the output is ‘0’. 0s are omitted for simplicity. An ‘X’ is
placed in squares corresponding to ‘don’t care’ conditions. In the case of a maxterm Karnaugh map, a
‘1’ is placed in all those squares for which the output is ‘0’, and a ‘0’ is placed for input entries
corresponding to a ‘1’ output. Again, 0s are omitted for simplicity, and an ‘X’ is placed in squares
corresponding to ‘don’t care’ conditions.
The choice of terms identifying different rows and columns of a Karnaugh map is not unique for a
given number of variables. The only condition to be satisfied is that the designation of adjacent rows
and adjacent columns should be the same except for one of the literals being complemented. Also, the
extreme rows and extreme columns are considered adjacent.
Important points to be remembered to group inside Karnaugh map.
 Biggest decimal number in the given function decides which K-Map is to be used. For instance, a
single variable can define only two decimal values, 0 and 1, with maximum value 1. Two
variables can define 2^2 = 4 values: 0, 1, 2 and 3, with maximum value 3. So if a given function
has 4 as the biggest decimal number, it cannot be defined by two variables. We need to use 3
variables, because with 3 variables we can have 2^3 = 8 decimal values, with 7 as the maximum
value.
 Try to cover all 1′s even if they become part of more than 1 loop.
 Look for the biggest loop at first. So if a K-Map has an Octet, it should be circled first, followed by
quads if any, followed by pairs if any.
 Pair eliminates 1 variable, Quad eliminates 2 variables and an Octet eliminates 3 variables.
 While looping, one can visualize folding the K-Map like a paper and loop 1′s present in the left-most
and right-most columns of the same row.
 Also visualize overlapping K-Map in case of 5 and 6 variable K-Maps.
 Fold and overlap the K-Map only in horizontal and vertical direction but not in diagonal.
 ‘Don’t care’ entries can be used in accounting for all of 1-squares to make optimum groups. They
are marked ‘X’ in the corresponding squares. It is, however, not necessary to account for all ‘don’t
care’ entries. Only such entries that can be used to advantage should be used.

A decimal numerical value is assigned to each cell and the labeling of the cells is done in such a
manner that only one variable changes at a time. A ‘0’ denotes a complemented variable and a ‘1’ an
uncomplemented variable. K-Map can be created for 3-variable, 4-variable, 5-variable and so on. A k-
variable K-Map has 2^k cells. Below diagram is of a 3-variable K-Map:

At a time only one variable is changing from complemented to un-complemented & vice-versa as we
move from one cell to next.
Looping adjacent 1’s for simplification
The expression for output Y can be simplified by properly combining those squares in the K-Map
which contain 1s. The process of combining those 1s is called looping.
Pairs – Looping groups of Two 1s
Any adjacent pair of cells marked by a 1 in a K-Map can be combined into one term, and the one
variable that changes (i.e. from A to A’ or B’ to B, etc.) is eliminated. Any single logical 1 on the map
represents an AND term. The total expression corresponding to the logical 1s of a map is the OR
(sum) of the various variable terms which cover all the logical 1s in the map.
An example of a 2-variable K-Map: a 2-variable K-Map will have 2^2 = 4 cells.

F = Σ(2,3) = AB’ + AB = A (B’ + B) = A

Boolean expression derived from K-Map = A.

Since variable B is changing from B’ to B, it is eliminated right away.
Quad – Looping groups of Four 1s
Four adjacent cells marked with a 1 can be combined into one term, and two variables are
eliminated. A group of four 1s that are horizontal or vertical or form a square in the K-Map is called a
Quad.
Octet – Looping groups of Eight 1s
A group of eight 1s that are adjacent to each other is called an octet. When an octet is looped in a four
variable map, 3 of 4 variables are eliminated because only one variable remains unchanged.
Some important definitions:
Minimal Sum
A sum of products (SOP) expression such that no SOP expression for Y has fewer product terms and
any SOP expression with the same number of product terms has at least as many literals. This is what
we are trying to produce through the use of Karnaugh maps.
Implicant
A normal product term that implies Y.
Example: For the function Y = AB + ABC + BC, the implicants are AB, ABC, and BC because if any
one of those terms is true, then Y is true.
Prime Implicant

An implicant of Y such that if any variable is removed from the implicant, the resulting term does not
imply Y.
Example: Y=AB+ABC+BC
Ans:
Prime Implicants: AB, BC. Not a prime implicant: ABC
ABC is not a prime implicant because the literal A can be removed to give BC and BC still implies Y.
Conversely, AB is a prime implicant because you can't remove either A or B and have the
remaining term still imply Y.
In Karnaugh maps the prime implicants are represented by the largest rectangular groups of ones that
can be circled. If a smaller subgroup is circled, the smaller group is an implicant, but not a prime
implicant.
PI Theorem
A minimal sum is a sum of prime implicants.
Distinguished 1-Cell
An input combination that is covered by only one prime implicant. In terms of Karnaugh maps,
distinguished 1-cells are 1's that are circled by only one prime implicant.
Essential Prime Implicant
A prime implicant that includes one or more distinguished 1-cells. Essential prime implicants are
important because a minimal sum contains all essential prime implicants.
3 Variable K-Map:
A 3-variable K-Map will have 2^3 = 8 cells. The number of variables is decided by the biggest decimal
number in a given function. A function F which has maximum decimal value of 7, can be defined and
simplified by a 3-variable Karnaugh Map.

Boolean Table for 3 Variables

3-Variable K-Map

First cell is denoted by 0, second by 1 and then third by 3 and not by 2. This is because, A’BC
(ANDing of first row A’ and third column BC) corresponds to decimal number 3 in the boolean table.
Similarly, second row, third column ABC is denoted by 7 and not by 6.
Example of 3-Variable K-Map
Given function, F = Σ (1, 2, 3, 4, 5, 6)
Since the biggest number in this function is 6, it can be defined by 3 variables.
Draw K-Map for this function by writing 1 in cells that are present in function and 0 in rest of the cells.

apply rules for simplifying K-Map. So, first look for an octet i.e. 8 adjacent 1′s. There is none, then
look for a quad i.e. 4 adjacent 1′s. Again, there is none, hence look for pairs. There are 3 pairs circled in
red.
(1,3) – A’C (Since B is the changing variable between these two cells, it is eliminated)
(2,6) – BC’ (Since A is the changing variable, it is eliminated)
(4, 5) – AB’ (Since C is the changing variable, it is eliminated)
Thus, F = A’C + BC’ + AB’.
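A looping result like this can always be cross-checked against the original minterm list: the simplified expression must be 1 exactly on the listed minterms. A sketch (index encoding A·4 + B·2 + C is my own):

```python
from itertools import product

# Confirm the looped result F = A'C + BC' + AB' reproduces Σ(1, 2, 3, 4, 5, 6).
def kmap3_ok():
    minterms = {1, 2, 3, 4, 5, 6}
    for A, B, C in product((0, 1), repeat=3):
        index = A * 4 + B * 2 + C                  # minterm number for (A, B, C)
        f = ((not A) and C) or (B and (not C)) or (A and (not B))
        if bool(f) != (index in minterms):
            return False
    return True

print(kmap3_ok())  # → True
```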

4-Variable K-Map
A 4-variable K-Map will have 2^4 = 16 cells. A function F which has maximum decimal value of 15,
can be defined and simplified by a 4-variable Karnaugh Map.

Boolean Table for 4 Variables 4-variable K-Map

Example 1 of 4-Variable K-Map


Given function, F = Σ (0, 4, 6, 8, 10, 15)
Since, the biggest number is 15, 4 variables are required to define this function.
Draw K-Map for this function by writing 1 in cells that are present in function and 0 in rest of the cells.

Applying rules of simplifying K-Map, there are no Octets and Quads. There are 3 pairs, circled in red.
(0, 4) – A’C'D’ (Since B is the changing variable between these two cells, it is eliminated)
(4, 6) – A’BD’ (Since C is the changing variable, it is eliminated)
(8, 10) – AB’D’ (Since C is the changing variable, it is eliminated)

There is a 1 in cell 15, which cannot be looped with any adjacent cell, hence it cannot be simplified
further and is left as it is.
15 = ABCD
Thus, F = A’C'D’ + A’BD’ + AB’D’ + ABCD
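The same cross-check works for four variables, here against Σ(0, 4, 6, 8, 10, 15) with index A·8 + B·4 + C·2 + D (encoding mine):

```python
from itertools import product

# Confirm F = A'C'D' + A'BD' + AB'D' + ABCD matches Σ(0, 4, 6, 8, 10, 15).
def kmap4_ok():
    minterms = {0, 4, 6, 8, 10, 15}
    for A, B, C, D in product((0, 1), repeat=4):
        index = A * 8 + B * 4 + C * 2 + D
        f = ((not A) and (not C) and (not D)) \
            or ((not A) and B and (not D)) \
            or (A and (not B) and (not D)) \
            or (A and B and C and D)
        if bool(f) != (index in minterms):
            return False
    return True

print(kmap4_ok())  # → True
```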

Example 2 of 4-Variable K-Map


Given function, F = Σ (0, 1, 3, 5, 6, 9, 11, 12, 13, 15)
Since, the biggest number is 15, 4 variables are required to define this function.
Draw K-Map for this function by writing 1 in cells that are present in function and 0 in rest of the cells.

Applying rules of K-Map, there is no octet. There are 2 quads and there are 3 pairs.
(1, 5, 13, 9) – C’D (Since A and B are changing variables, they are eliminated)
(9, 11, 13, 15) – AD (Since B and C are changing variables, they are eliminated)
(0, 1) – A’B'C’ (Since D is the changing variable, it is eliminated)
(1, 3) – A’B'D (Since C is the changing variable, it is eliminated)
(12, 13) – ABC’ (Since D is the changing variable, it is eliminated)
There is a 1 in cell 6, which cannot be looped with any adjacent cell, hence it cannot be simplified
further and is left as it is.
6 = A’BCD’
Thus, F = C’D + AD + A’B'C’ + A’B'D + ABC’ + A’BCD’

Example 3 of 4-Variable K-Map


Given function, F = Σ (0, 2, 3, 4, 5, 7, 8, 9, 13, 15)
Since, the biggest number is 15, 4 variables are required to define this function.
Draw K-Map for this function by writing 1 in cells that are present in function and 0 in rest of the cells.

Applying rules of simplifying K-Map, there is no octet. There are 1 quad and 3 pairs.
(5, 7, 13, 15) – BD (Since A and C are changing variables, they are eliminated)
(0, 4) – A’C'D’ (Since B is the changing variable, it is eliminated)
(2, 3) – A’B'C (Since D is the changing variable, it is eliminated)
(8, 9) – AB’C’ (Since D is the changing variable, it is eliminated)
Thus, F = BD + A’C'D’ + A’B'C + AB’C’

Example 4 of 4-Variable K-Map


Given function, F = Σ (0, 3, 4, 6, 7, 9, 12, 14, 15)
Since, the biggest number is 15, 4 variables are required to define this function.Draw K-Map for this
function by writing 1 in cells that are present in function and 0 in rest of the cells.

Applying rules of simplifying K-Map, there is no octet. There are two quads and two pairs.
(4, 6, 12, 14) – BD’ (Since A and C are changing variables, they are eliminated)
(6, 7, 14, 15) – BC (Since A and D are changing variables, they are eliminated)
(0, 4) – A’C'D’ (Since B is the changing variable, it is eliminated)
(3, 7) – A’CD (Since B is the changing variable, it is eliminated)
There is a 1 in cell 9, which cannot be looped with any adjacent cell, hence it cannot be simplified
further and is left as it is.
9 – AB’C'D
Thus, F = BD’ + BC + A’C'D’ + A’CD + AB’C'D

Karnaugh Map Examples to find Prime Implicants, Distinguished 1-Cells, Essential Prime
Implicants, Minimal Sums
In the following examples the distinguished 1-cells are marked in the upper left corner of the cell with
an asterisk (*). The essential prime implicants are circled in blue, the prime implicants are circled
in black, and the non-essential prime implicants included in the minimal sum are shown in red.

Example 1
Prime Implicants: 5
Distinguished 1-Cells: 2
Essential Prime Implicants: 2
Minimal Sums: 1

Y = A'CD' + AC'D + BCD

Example 2
Prime Implicants: 7
Distinguished 1-Cells: 2
Essential Prime Implicants: 2
Minimal Sums: 1

Y = B'D' + AD' + A'C'D + BCD

5-Variable K-Map

A 5-variable K-Map will have 2^5 = 32 cells. A function F which has maximum decimal value of 31,
can be defined and simplified by a 5-variable Karnaugh Map.
Boolean Table For 5 Variables

5-Variable K-Map
In above Boolean table, from 0 to 15, A is 0
and from 16 to 31, A is 1. A 5-variable K-Map
is drawn as below.

Loop octets, quads and pairs between these two


squares. Visualize second square on first square
and figure out adjacent cells.

Example 1 of 5-Variable K-Map
Given function, F = Σ (1, 3, 4, 5, 11, 12, 14, 16, 20, 21, 30)
Since, the biggest number is 30, 5 variables are required to define this
function.
Draw K-Map for this function by writing 1 in cells that are present in
function and 0 in rest of the cells.

Applying rules of simplifying K-Map, there is no octet. There is one quad
that is obtained by visualizing the second square on the first: there are 4 adjacent
cells – 4, 5, 20 and 21. The quad is highlighted by a blue connecting line.
There are 5 pairs. Similar to the quad, there is one pair between the two squares
which is highlighted by a blue connecting line.
(4, 5, 20, 21) – B’CD’ (Since A & E are the changing variables, they are
eliminated)
(12, 14) – A’BCE’ (Since D is the changing variable, it is eliminated)
(14, 30) – BCDE’ (Since A is the changing variable, it is eliminated)
(3, 11) – A’C'DE (Since B is the changing variable, it is eliminated)
(16, 20) – AB’D'E’ (Since C is the changing variable, it is eliminated)
(1, 3) – A’B'C’E (Since D is the changing variable, it is eliminated)
Thus, F = B’CD’ + A’BCE’ + BCDE’ + A’C'DE + AB’D'E’ + A’B'C’E

Example 2 of 5-Variable K-Map


Given function, F = Σ (0, 1, 2, 3, 8, 9, 16, 17, 20, 21, 24, 25, 28, 29, 30, 31)
Since, the biggest number is 30, 5 variables are required to define this
function.Draw K-Map for this function by writing 1 in cells that are present
in function and 0 in rest of the cells.

Applying rules of simplifying K-Map, there are 2 octets. First one is in
square 2 circled in red. Another octet is between 2 squares highlighted by
blue connecting lines. There are 2 quads between each of the squares.
(16, 17, 20, 21, 28, 29, 24, 25) – AD’ (Since B, C and E are changing
variables, they are eliminated)
(0, 1, 8, 9, 16, 17, 24, 25) – C’D’ (Since A, B and E are changing variables,
they are eliminated)
(0, 1, 2, 3) – A’B'C’ (Since D and E are changing variables, they are
eliminated)
(28, 29, 30, 31) – ABC (Since D and E are changing variables, they are
eliminated)
Thus, F = AD’ + C’D’ + A’B'C’ + ABC
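Even for five variables the check remains cheap (32 combinations). The sketch below verifies this result with index A·16 + B·8 + C·4 + D·2 + E (encoding mine):

```python
from itertools import product

# Confirm F = AD' + C'D' + A'B'C' + ABC matches the 5-variable minterm list.
def kmap5_ok():
    minterms = {0, 1, 2, 3, 8, 9, 16, 17, 20, 21, 24, 25, 28, 29, 30, 31}
    for A, B, C, D, E in product((0, 1), repeat=5):
        index = A * 16 + B * 8 + C * 4 + D * 2 + E
        f = (A and (not D)) or ((not C) and (not D)) \
            or ((not A) and (not B) and (not C)) or (A and B and C)
        if bool(f) != (index in minterms):
            return False
    return True

print(kmap5_ok())  # → True
```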

6-Variable K-Map

A 6-variable K-Map will have 2^6 = 64 cells. A function F which has


maximum decimal value of 63, can be defined and simplified by a 6-
variable Karnaugh Map.
For 6 variables
 A = 0 for decimal values 0 to 31 and A = 1 for 32 to 63.
 B = 0 for decimal values 0 to 15 and 32 to 47. B = 1 for decimal
values 16 to 31 and 48 to 63.
A 6-variable K-Map is drawn as below:

Visualize each of these squares one on another and figure out adjacent cells.
Example 1 of 6-Variable K-Map
Given function, F = Σ (0, 2, 4, 8, 10, 13, 15, 16, 18, 20, 23, 24, 26, 32, 34,
40, 41, 42, 45, 47, 48, 50, 56, 57, 58, 60, 61)
Since, the biggest number is 61, 6 variables are required to define this
function.

Draw K-Map for this function by writing 1 in cells that are present in
function and 0 in rest of the cells.

Applying rules of simplifying K-Map, there is one loop which has 16 1′s –
containing 1′s at all the corners of all 4 squares. We obtain it by visualizing
all the 4 squares over one another but only in horizontal or vertical direction
(not diagonal) and figuring out adjacent cells. All the 1′s in corners are
circled in green.
There are 4 quads: one in the fourth square at bottom-right and the other 3 are
between the squares, highlighted by blue connecting lines.
(0, 2, 8, 10, 16, 18, 24, 26, 32, 34, 40, 42, 48, 50, 56, 58) – D’F’ (A, B, C
and E are changing variables, so they are eliminated)
(41, 45, 57, 61) – ACE’F (B & D are changing variables, so they are
eliminated)
(13, 15, 45, 47) – B’CDF (A & E are changing variables, so they are
eliminated)
(0, 4, 16, 20) – A’C'E’F’ (B & D are changing variables, so they are
eliminated)
(56, 57, 60, 61) – ABCE’ (D and F are changing variables, so they are
eliminated)
There is a 1 in cell 23, which cannot be looped with any adjacent cell, hence it
cannot be simplified further and is left as it is.
23 = A’BC’DEF
Thus, F = D’F’ + ACE’F + B’CDF + A’C'E’F’ + ABCE’ + A’BC’DEF

DON’T CARE CONDITIONS
Functions that have unspecified output for some input combinations are
called incompletely specified functions. Unspecified minterms of functions
are called ‘don’t care’ conditions. We simply don’t care whether the value of
0 or 1 is assigned to F for a particular minterm. Don’t care conditions are
represented by X in the K-Map table. Don’t care conditions play a central
role in the specification and optimization of logic circuits as they represent
the degrees of freedom of transforming a network into a functionally
equivalent one.
Example of the Use of a Karnaugh Map with Don’t Care Conditions
Design a circuit with four inputs D, C, B, A that are natural 8421-binary
encoded with D as the most-significant bit. The output F is true if the month
represented by the input (0,0,0,0 = January, 1011 = December) is a
vacation month . Vacation is on Christmas, Easter, July, birthday
(September), or friend’s birthday (May). Since Easter can occur in either
March or April, we have to include both months.
Step 1: The truth table
Construct a truth table. Note that binary inputs 1100 to 1111 (12 to 15) do
not represent valid months and cannot occur. Since these combinations never
occur, it does not matter whether the output F is 0 or 1 for them. So, we put
x in the output column to represent don’t care, and we can later choose each
x as 0 or 1 to simplify the logic.
D C B A MONTH F
0 0 0 0 JAN 0
0 0 0 1 FEB 0
0 0 1 0 MAR 1
0 0 1 1 APR 1
0 1 0 0 MAY 1
0 1 0 1 JUN 0
0 1 1 0 JULY 1
0 1 1 1 AUG 0
1 0 0 0 SEP 1
1 0 0 1 OCT 0
1 0 1 0 NOV 0
1 0 1 1 DEC 1

1 1 0 0 – X
1 1 0 1 – X
1 1 1 0 – X
1 1 1 1 – X
Step 2: The Karnaugh map
Put 1s in the squares where F = 1, an x in the don’t care squares, and leave the
remaining squares (that contain a zero) empty.

Step 3 The simplified Karnaugh map


The next step is to simplify the Karnaugh map by creating the smallest
number of the largest groups of 1s. All 1s must be included in at least one
group. Any 1 can be in more than one group. An x can be treated as a 1 if we
can use it to make a larger group.

This solution is not unique because one could have created other groupings
of 1s (but none simpler than this).

Step 4 Read the sum-of-product terms from the Karnaugh map


F = DB’A’ + CA’ + DBA + D’C’B
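One valid minimal grouping for this problem (the specific terms below are my reading of the map, so treat them as an assumption) is DB’A’ + CA’ + DBA + D’C’B. The sketch checks it against the truth table for all valid month codes, leaving codes 12–15 unconstrained:

```python
from itertools import product

# Vacation months (0000 = Jan): Mar, Apr, May, Jul, Sep, Dec -> minterms 2,3,4,6,8,11.
# Assumed minimal grouping: F = DB'A' + CA' + DBA + D'C'B.
def vacation_ok():
    vacation = {2, 3, 4, 6, 8, 11}
    dont_care = {12, 13, 14, 15}        # invalid month codes, output unconstrained
    for D, C, B, A in product((0, 1), repeat=4):
        index = D * 8 + C * 4 + B * 2 + A
        f = (D and (not B) and (not A)) or (C and (not A)) \
            or (D and B and A) or ((not D) and (not C) and B)
        if index not in dont_care and bool(f) != (index in vacation):
            return False
    return True

print(vacation_ok())  # → True
```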

Quine McCluskey Tabulation Method:
The Quine McCluskey tabulation method is a very useful and convenient
tool for the simplification of Boolean functions with large numbers of variables.
The Karnaugh map method is convenient as long as the number of variables does
not exceed four. For a larger number of variables, the visualization
and selection of patterns of adjacent cells in the Karnaugh map becomes
complicated and difficult, and in those cases the Quine McCluskey
tabulation method plays a vital role in simplifying the Boolean expression. The
Quine McCluskey tabulation method is a specific step-by-step procedure that
achieves a guaranteed, simplified standard form of expression for a function.
Consider the Boolean expression F = Σ(0, 1, 2, 3, 5, 7, 8, 10, 14, 15), to be
minimized by the Quine McCluskey tabulation method.
 To start with, we make a table and keep in the same group all minterms
whose binary representations contain the same number of 1s. For example 1, 2, 8
(0001, 0010, 1000) are in the same group because each has a single 1 in its
binary representation. See the table below.

 Add another column to the right side of that table, naming it the 1st column. Now,
between two adjacent groups (by number of 1s), we find pairs of numbers
whose binary representations differ in exactly one bit position. See the binary number
of 1 (0001) from the first group and 3 (0011) from the second group. The two
numbers differ only in the second bit position from the LSB, changing 0 to 1. So
in the new column we write (1, 3) 00-1 (in place of the changed bit
we put “–”). In this way we check the entire table and fill the
new column accordingly.

 Again add another column to the right side of the table, naming it the 2nd column. Now,
between two adjacent groups of the 1st column, we find entries that
differ in exactly one bit position. See (0, 1)
(000-) and (2, 3) (001-). The two entries differ only in the second bit
position from the LSB, so in the new column we write (0,
1, 2, 3) 00-- (in place of the changed bit we put “–”). In this way
we check the entire table and fill the new column accordingly.

 Mark those combinations of the 1st column that are used in the 2nd column. For
the first one (0, 1, 2, 3), the used combinations are (0, 1) and (2, 3) from the 1st column.

 The final table is drawn for getting the simplified Boolean expression.
The table is drawn with all combinations of the 2nd column and the unused
entries of the 1st column.

 Strike off the rows whose crosses all appear in other rows as well,
i.e. the first row is struck since 0, 2 are available in row 2 and, similarly, 1, 3 are
available in the 3rd row. But don’t strike the rows which contain the only cross
in a column.

 Now we can get the simplified Boolean expression from the above table and
its corresponding 2nd-column values. We take each unstruck row with its
2nd-column value and convert it to the variables ABCD: if a 0 is found,
take the complemented variable; for a 1, the uncomplemented variable; and “–” means no
variable.
0, 2, 8, 10 (- 0 – 0) = B’D’
1, 3, 5, 7 (0 – – 1) = A’D
14, 15 (1 1 1 -) = ABC
Hence F = B’D’ + A’D + ABC
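The combining step of the procedure can be sketched compactly in code. Below, terms are strings over '0', '1', '–' (written as '-'), two terms merge when they differ in exactly one non-dash position, and terms that never merge at some level are prime implicants. The function names are my own; this shows only prime implicant generation, not the covering chart:

```python
from itertools import combinations

def combine(a, b):
    """Return the merged term if a and b differ in exactly one bit, else None."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits=4):
    # Start from the binary forms of the minterms and merge repeatedly.
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            m = combine(a, b)
            if m is not None:
                merged.add(m)
                used.update((a, b))
        primes |= terms - used      # terms that combined no further are prime
        terms = merged
    return primes

# Example from the text: F = Σ(0,1,2,3,5,7,8,10,14,15).
pis = prime_implicants([0, 1, 2, 3, 5, 7, 8, 10, 14, 15])
print(sorted(pis))
```

Among the returned patterns are -0-0 (B’D’), 0--1 (A’D) and 111- (ABC), the three terms of the minimal sum above; the remaining patterns are the non-essential prime implicants.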
Partially Specified Expressions: These are expressions which are
not completely specified, i.e., the output is not specified for all
combinations of the input. These expressions involve don’t care
conditions and are simplified like expressions with don’t care conditions.

Multi-output minimizations:

Multi-output minimization derives multiple outputs from a set of given
inputs, sharing the intermediate terms of the expressions
wherever possible. Example: a BCD to 7-segment driver, where the input is a
4-bit number and the outputs are 7 signals driving the individual LEDs of the
7-segment display.

Unit III
PART A: Design of Combinational Circuits

Design using Conventional Logic gates, Data Selectors, Encoders, Priority


Encoder, Decoders, comparators, Adders, multiplexers, De-multiplexers,
realization of switching functions using MUX, Parity generators and code
converters. Static Hazards and Hazard Free Realizations.

Memory Elements and Programmable Logic Devices:


Types of Memory Elements (RAM and ROM). Basic PLDs - ROM, PROM, PLA
and PAL. Realization of Switching functions using PLDs.

Comparators
An n-bit comparator is a circuit that compares the magnitudes of two numbers X
and Y. It has three outputs f1, f2, and f3, such that: f1 = 1 if and only if X > Y; f2
= 1 if and only if X = Y; f3 = 1 if and only if X < Y. As an example, consider an elementary 2-bit
comparator, as in Fig. 5.4a.
The circuit has four inputs x1, x2, y1 and y2, where x1 and y1 denote the most
significant digit of X and Y, respectively. The logic equations may be determined
with the aid of the map in Fig. 5.4b, where the values 1, 2, and 3 are entered in
appropriate cells to denote, respectively, f1 = 1, f2 = 1, and

The circuit for f1 is shown in Fig. 5.4c. Similar circuits are obtained for f2
and f3.
The reader can verify that X >Y, i.e. f1 = 1, when the most significant bit of
X is larger than that of Y , i.e., x1 > y1, or when the most significant bits are
equal but the least significant bit of X is larger than that of Y , namely, x1 =
y1 and x2 > y2. In a similar way, we can determine the conditions for f2 = 1
and f3 = 1.
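The conditions just described can be exercised behaviorally before committing to gates. The sketch below (function and variable names are illustrative, not from the text) encodes f1 and f3 from the bit-comparison rules and cross-checks all sixteen input combinations against ordinary integer comparison:

```python
from itertools import product

# Behavioral 2-bit comparator: x1, y1 are the most significant bits.
def compare2(x1, x2, y1, y2):
    f1 = (x1 and (not y1)) or ((x1 == y1) and x2 and (not y2))   # X > Y
    f2 = (x1 == y1) and (x2 == y2)                               # X = Y
    f3 = (y1 and (not x1)) or ((x1 == y1) and y2 and (not x2))   # X < Y
    return bool(f1), bool(f2), bool(f3)

# Cross-check against integer comparison for all 16 input combinations.
all_ok = all(
    compare2(x1, x2, y1, y2) == (2 * x1 + x2 > 2 * y1 + y2,
                                 2 * x1 + x2 == 2 * y1 + y2,
                                 2 * x1 + x2 < 2 * y1 + y2)
    for x1, x2, y1, y2 in product((0, 1), repeat=4)
)
print(all_ok)  # → True
```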

This line of reasoning can be further generalized to yield the logic
equations for a 4-bit comparator.

A 4-bit comparator is shown in Fig. 5.5a. It has 11 inputs, four representing


X, four representing Y , and three connected to the outputs f1, f2, and f3 of the
preceding 4-bit stage. Three such stages can be connected in cascade, as
shown in Fig. 5.5b, to obtain a 12-bit comparator. Initial conditions are
inserted at the inputs of the comparator corresponding to the least significant
bits in such a way that the outputs of this comparator will depend only on the
values of its own x’s and y’s.

Data selectors
A multiplexer is essentially an electronic switch that can connect one out of n
inputs to the output. The most important application of the multiplexer is as
a data selector. In general, a data selector has n data input lines D0, D1, . . . ,
Dn−1, m select digit inputs s0, s1, . . . , sm−1, and one output. The m select
digits form a binary select number ranging from 0 to 2m − 1, and when this
number has the value k then Dk is connected to the output. Thus this circuit
selects one of n data input lines, according to the value of the select number,
and in effect connects it to the output. Clearly, the number of select digits
must equal m = log2 n, so that it can identify all the data inputs.
Data selectors have numerous applications. For example, they may be used
to connect one out of n input sources of a device to its output. As we shall
subsequently show, data selectors may also be used to implement all
Boolean functions.

A block diagram for a data selector with eight data input lines is shown in
Fig. 5.6a. The select number consists of the three digits s2s1s0. Thus, for
example, when s2s1s0 = 101 then D5 is connected to the output, and so
on. The Enable (or Strobe) input “enables” or turns the circuit on. A logic
diagram for this data selector is shown in Fig. 5.6b. Such a unit provides the
complement z′ of the output as well as the output z itself. The Enable input
turns the circuit on when it assumes the value 0.
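The behavior of such an eight-input data selector can be sketched in Python. The function name is ours, and the value forced on the output while the unit is disabled is an assumption (the logic diagram, not reproduced here, fixes the actual disabled value).

```python
def data_selector(d, s2, s1, s0, enable_n=0):
    """Eight-input data selector sketch.

    d        -- list of the eight data inputs D0..D7
    s2,s1,s0 -- select digits forming the binary select number
    enable_n -- active-low Enable: the circuit is on when it is 0
    Returns (z, z_bar): the output and its complement.
    """
    if enable_n:                 # disabled: output held at 0 (an assumption)
        return 0, 1
    k = 4 * s2 + 2 * s1 + s0     # select number k connects Dk to the output
    z = d[k]
    return z, 1 - z
```

With s2s1s0 = 101 the function returns the value of D5, matching the example in the text.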
Implementing switching functions with data selectors
An important application of data selectors is the implementation of arbitrary
switching functions. As an example, we shall show how functions of two
variables can be implemented by means of the data selector of Fig. 5.7.
Clearly, in this circuit, if s = 0 then z assumes the value of D0 and if s = 1
then z assumes the value of D1. Thus, z = sD1 + s′D0. Now, suppose that we
want to implement the EXCLUSIVE-OR operation A ⊕ B. This can be
accomplished by connecting variable A to the input s and variables B and B′
to D0 and D1, respectively. In this case z = AB′ + A′B = A ⊕ B. Similarly, if
we want to implement the NAND operation z = (AB)′ = A′ + B′ then we connect
variable A to s and variable B′ to D1; D0 is connected to a constant 1. Clearly,
z = AB′ + A′ · 1 = A′ + B′.
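These selector-based realizations can be verified with a short Python sketch; the function names are ours, chosen only for illustration.

```python
def sel2(s, d0, d1):
    """Two-input data selector: z = s'*D0 + s*D1."""
    return d1 if s else d0

def xor(a, b):
    # A to s, B to D0, B' to D1  ->  z = A'B + AB' = A XOR B
    return sel2(a, b, 1 - b)

def nand(a, b):
    # A to s, constant 1 to D0, B' to D1  ->  z = A' + AB' = (AB)'
    return sel2(a, 1, 1 - b)
```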

In a similar manner, a judicious choice of inputs will implement any of the 16
different two-variable functions (see Table 3.6). In general, to implement an n-
variable function we require a data selector with n − 1 select inputs and 2^(n−1) data
inputs. Hence, for example, to implement all three-variable functions we require
a data selector with two select inputs, s1 and s2, and 2^(3−1) = 4 data inputs, D0, D1,
D2, and D3. The output of such a data selector is
z = s1′s2′D0 + s1′s2D1 + s1s2′D2 + s1s2D3.
The reader can verify that, if we connect variables A and B to s1 and s2,
respectively, and variables C′ and C to D0 and D3, respectively, and assign
constants 1 to D1 and 0 to D2, then the circuit will realize the function
z = A′B′C′ + A′B + ABC = A′C′ + BC.
In general, then, to implement an n-variable function we assign n − 1
variables to the select inputs, one to each such input. The last variable and
the constants 0 and 1 are assigned to the data inputs in such a way that
together with the select input variables they will yield the required function.
Such an implementation is usually possible when at least one variable is
available in both its complemented as well as its uncomplemented form;
otherwise, a larger data selector may be required. Implementations of
functions of five or more variables are usually accomplished by means of a
multi-level arrangement of several smaller standard data selectors.
Priority Encoders:

A priority encoder is a device with n input lines and log2 n output lines.
The input lines represent units which may request service. When two lines pi
and pj , such that i > j , request service simultaneously, line pi has priority
over line pj . The encoder produces a binary output code indicating which of
the input lines requesting service has the highest priority. An input line pi
indicates a request for service by assuming the value 1. A block diagram for
an eight-input three-output priority encoder is shown in Fig. 5.8a.

The truth table for this encoder is shown in Fig. 5.8b. In the first row, only
p0 requests service and, consequently, the output code should be the binary
number zero to indicate that p0 has priority. This is accomplished by setting
z4z2z1 = 000. The fourth row, for example, describes the situation where p3
requests service while p0, p1, and p2 each may or may not request service
simultaneously. This is indicated by an entry 1 in column p3 and don’t-cares
in columns p0, p1, and p2. No request of a higher priority than p3 is present at
this time. Since in this situation p3 has the highest priority, the output code
must be the binary number three. Therefore, we set z1 and z2 to 1 while z4 is
set to 0. (Note that the binary number is given by N = 4z4 + 2z2 + z1.) In a
similar manner the entire table is completed.
From the truth table, we can derive the logic equations for z1, z2, and z4.
Starting with z4, we find that
z4 = p4p5′p6′p7′ + p5p6′p7′ + p6p7′ + p7.
This equation can be simplified to
z4 = p4 + p5 + p6 + p7.
For z2 and z1, we find
z2 = p2p3′p4′p5′p6′p7′ + p3p4′p5′p6′p7′ + p6p7′ + p7
   = p2p4′p5′ + p3p4′p5′ + p6 + p7,
z1 = p1p2′p3′p4′p5′p6′p7′ + p3p4′p5′p6′p7′ + p5p6′p7′ + p7
   = p1p2′p4′p6′ + p3p4′p6′ + p5p6′ + p7.
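The simplified encoder equations derived above can be exercised directly in Python. The function name and the inclusion of the "any request" output z0 (introduced with Fig. 5.8c below) are our choices for the sketch.

```python
def priority_encode(p):
    """Eight-input priority encoder sketch; p = [p0..p7], pi = 1 means request.

    Returns (z4, z2, z1, z0), where z4z2z1 is the binary number of the
    highest-priority request and z0 = 1 if any request is present.
    """
    p0, p1, p2, p3, p4, p5, p6, p7 = p
    n = lambda x: 1 - x                    # complement of a 0/1 value
    z4 = p4 | p5 | p6 | p7
    z2 = (p2 & n(p4) & n(p5)) | (p3 & n(p4) & n(p5)) | p6 | p7
    z1 = (p1 & n(p2) & n(p4) & n(p6)) | (p3 & n(p4) & n(p6)) | (p5 & n(p6)) | p7
    z0 = int(any(p))                       # 1 if one or more requests present
    return z4, z2, z1, z0
```

For instance, with p1 and p3 requesting simultaneously, the output is 011, since p3 has priority over p1.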

An implementation of such an encoder is given in Fig. 5.8c. In this encoder,
the inputs are given in complemented form. The circuit also has an Enable
signal and contains an output z0 that indicates whether any requests are
present. Specifically, z0 = 0 if there is no request and z0 = 1 if there are one
or more requests present. It is possible to combine several such encoders, by
means of external gating, to handle more than eight inputs.

Decoders
A decoder is a combinational circuit with n inputs and at most 2^n outputs. Its
characteristic property is that for every combination of input values, only one
output value will be equal to 1 at any given time. Decoders have a wide
variety of applications in digital technology. They may be used to route input
data to a specified output line, as, for example, is done in memory
addressing, where input data are to be stored in (or read from) a specified
memory location. They can be used for some code conversions. Or they may
be used for data distribution, i.e., demultiplexing, as will be shown later.
Finally, decoders are also used as basic building blocks for implementing
arbitrary switching functions.

Figure 5.9a illustrates a basic 2-to-4 decoder. Clearly, if w and x are the
input variables then each output corresponds to a different minterm of two
variables. Two such 2-to-4 decoders plus a gate-switching matrix can be
connected, as shown in Fig. 5.9b, to form a 4-to-16 decoder. Switching
matrices are very widely used in the design of digital circuits.

Not all decoders have exactly 2^n outputs. Figure 5.10 describes a decimal
decoder that converts information from BCD to decimal. It has four inputs
w, x, y, and z, where w is the most significant and z the least significant digit,
and 10 outputs, f0 through f9, corresponding to the decimal numbers. In
designing this decoder, we have taken advantage of the don’t-care
combinations, f10 through f15, as can be verified by means of the map in Fig.
5.10b. Another implementation of decimal decoders is by means of a partial-
gate matrix, as shown in Fig. 5.11.
A decoder with exactly n inputs and 2^n outputs can also be used to
implement any switching function. Each output of such a decoder realizes
one distinct minterm. Thus, by connecting the appropriate outputs to an OR
gate, the required function can be realized. Figure 5.12 illustrates the
implementation of the function f(A, B, C, D) = Σ(1, 5, 9, 15) by means of a
complete decoder, i.e., one with n inputs and 2^n outputs.
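This decoder-plus-OR-gate construction can be sketched in Python; the helper names below are ours, and the sketch simply models the behavior of the circuit in Fig. 5.12.

```python
def decoder(inputs):
    """Complete n-to-2^n decoder: output k is 1 iff the inputs spell k in binary.

    inputs -- list of bits, most significant first.
    """
    k = 0
    for bit in inputs:
        k = 2 * k + bit
    return [int(i == k) for i in range(2 ** len(inputs))]

def f(a, b, c, d):
    """f(A,B,C,D) = sum of minterms (1, 5, 9, 15), built by ORing decoder outputs."""
    m = decoder([a, b, c, d])
    return m[1] | m[5] | m[9] | m[15]
```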
A decoder with one data input and n address inputs is called a
demultiplexer. It directs the input data to any one of the 2^n outputs, as
specified by the n-bit input address. A block diagram for a demultiplexer is
shown in Fig. 5.13. A demultiplexer with four outputs is shown in Fig. 5.3.
When larger-size decoders are needed, they can usually be formed by
interconnecting several smaller decoders with some additional logic.

Adders and subtractors:

Digital computers perform a variety of information-processing tasks. Among


the functions encountered are the various arithmetic operations. The most
basic arithmetic operation is the addition of two binary digits. This simple
addition consists of four possible elementary operations:
0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10. The first three operations
produce a sum of one digit, but when both augend and addend bits are equal
to 1 the binary sum consists of two digits. The higher significant bit of this
result is called a carry. When the augend and addend numbers contain more
significant digits, the carry obtained from the addition of two bits is added to
the next higher order pair of significant bits. A combinational circuit that
performs the addition of two bits is called a half adder; one that performs
the addition of three bits (two significant bits and a previous carry) is a full
adder. The names of the circuits stem from the fact that two half adders can
be employed to implement a full adder. A binary adder-subtractor is a
combinational circuit that performs the arithmetic operations of addition and
subtraction with binary numbers. We will develop this circuit by means of a
hierarchical design. The half adder design is carried out first, from which
we develop the full adder. Connecting n full adders in cascade produces a
binary adder for two n-bit numbers. The subtraction circuit is included by
means of a complementing circuit.
Half Adder
From the verbal explanation of a half adder, we find that this circuit needs
two binary inputs and two binary outputs. The input variables designate the
augend and addend bits; the output variables produce the sum and carry. We
assign symbols x and y to the two inputs and S (for sum) and C (for carry) to
the outputs. The truth table for the half adder is listed in Table 4.3. The C
output is 1 only when both inputs are 1. The S output represents the least
significant bit of the sum. The simplified Boolean functions for the two
outputs can be obtained directly from the truth table. The simplified sum-of-
products expressions are S = x′y + xy′ and C = xy. The logic diagram of the half
adder implemented in sum of products is shown in Fig. 4.5(a). It can also be
implemented with an exclusive-OR and an AND gate, as shown in Fig.
4.5(b). This form is used to show that two half adders can be used to construct
a full adder.
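The half adder equations above amount to one XOR and one AND, which the following sketch models (the function name is ours):

```python
def half_adder(x, y):
    """Half adder: S = x'y + xy' (exclusive-OR), C = xy (AND)."""
    s = x ^ y      # sum bit
    c = x & y      # carry bit
    return s, c
```

For example, half_adder(1, 1) produces sum 0 with carry 1, matching 1 + 1 = 10.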

A full adder is a combinational circuit that forms the arithmetic sum


of three bits. It consists of three inputs and two outputs. Two of the input
variables, denoted by x and y, represent the two significant bits to be added.
The third input, z, represents the carry from the previous lower significant
position. Two outputs are necessary because the arithmetic sum of three
binary digits ranges in value from 0 to 3, and binary 2 or 3 needs two digits.
The two outputs are designated by the symbols S for sum and C for carry.
The binary variable S gives the value of the least significant bit of the sum.
The binary variable C gives the output carry. The truth table of the
full adder is listed in Table 4.4.
The eight rows under the input variables designate all possible combinations
of the three variables. The output variables are determined from the
arithmetic sum of the input bits. When all input bits are 0, the output is 0.
The S output is equal to 1 when only one input is equal to 1 or when all
three inputs are equal to 1. The C output has a carry of 1 if two or three inputs
are equal to 1. The input and output bits of the combinational circuit have
different interpretations at various stages of the problem. On the one hand,
physically, the binary signals of the inputs are considered binary digits to be
added arithmetically to form a two-digit sum at the output. On the
other hand, the same binary values are considered as variables of Boolean
functions when expressed in the truth table or when the circuit is
implemented with logic gates. The maps for the outputs of the full adder are
shown in Fig. 4.6. The simplified expressions are
S = x′y′z + x′yz′ + xy′z′ + xyz,
C = xy + xz + yz.
The logic diagram for the full adder implemented in sum-of-products form is
shown in Fig. 4.7. It can also be implemented with two half adders and one
OR gate, as shown in Fig. 4.8. The S output from the second half adder is the
exclusive-OR of z and the output of the first half adder, giving
S = z ⊕ (x ⊕ y), with output carry C = z(x′y + xy′) + xy.
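The two-half-adder construction of Fig. 4.8 can be sketched as follows (function names are ours):

```python
def half_adder(x, y):
    """Half adder: S = x XOR y, C = xy."""
    return x ^ y, x & y

def full_adder(x, y, z):
    """Full adder built from two half adders and an OR gate (Fig. 4.8)."""
    s1, c1 = half_adder(x, y)   # first half adder adds x and y
    s, c2 = half_adder(s1, z)   # second adds that sum and the carry-in z
    return s, c1 | c2           # OR gate merges the two partial carries
```

One can check that 2C + S always equals the arithmetic sum x + y + z.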

Binary Adder

A binary adder is a digital circuit that produces the arithmetic sum of two
binary numbers. It can be constructed with full adders connected in cascade,
with the output carry from each full adder connected to the input carry of the
next full adder in the chain. Figure 4.9 shows the interconnection of four
full-adder (FA) circuits to provide a four-bit binary ripple carry adder. The
augend bits of A and the addend bits of B are designated by subscript
numbers from right to left, with subscript 0 denoting the least significant bit.
The carries are connected in a chain through the full adders. The input carry
to the adder is C0, and it ripples through the full adders to the output carry
C4. The S outputs generate the required sum bits. An n-bit adder requires n
full adders, with each output carry connected to the input carry of the next
higher order full adder.
To demonstrate with a specific example, consider the two binary numbers A
= 1011 and B = 0011. Their sum S = 1110 is formed with the four-bit adder
as follows:

The bits are added with full adders, starting from the least significant
position (subscript 0), to form the sum bit and carry bit. The input carry
C0 in the least significant position must be 0. The value of Ci+1 in a given
significant position is the output carry of the full adder. This value is
transferred into the input carry of the full adder that adds the bits one higher
significant position to the left. The sum bits are thus generated starting from
the rightmost position and are available as soon as the corresponding
previous carry bit is generated. All the carries must be generated for the
correct sum bits to appear at the outputs.
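The rippling of carries can be sketched in Python; the function name and the least-significant-bit-first list convention are our choices for the sketch.

```python
def ripple_adder(a, b, c0=0):
    """Four-bit ripple-carry adder sketch (Fig. 4.9).

    a, b -- bit lists [A0..A3] and [B0..B3], least significant bit first.
    Returns (s, c4): the sum bits S0..S3 and the output carry C4.
    """
    s = []
    carry = c0
    for ai, bi in zip(a, b):        # carry ripples from stage to stage
        total = ai + bi + carry
        s.append(total & 1)         # sum bit of this stage
        carry = total >> 1          # carry into the next stage
    return s, carry
```

Running it on the worked example A = 1011, B = 0011 (i.e., a = [1,1,0,1], b = [1,1,0,0]) yields the sum bits of S = 1110 with output carry 0.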

The four-bit adder is a typical example of a standard component. It can be
used in many applications involving arithmetic operations. Observe that the
design of this circuit by the
classical method would require a truth table with 2^9 = 512 entries, since
there are nine inputs to the circuit. By using an iterative method of cascading
a standard function, it is possible to obtain a simple and straightforward
implementation.

Carry Propagation
The addition of two binary numbers in parallel implies that all the bits of the
augend and addend are available for computation at the same time. As in any
combinational circuit, the signal must propagate through the gates before the
correct output sum is available in the output terminals. The total propagation
time is equal to the propagation delay of a typical gate times the number of
gate levels in the circuit. The longest propagation delay time in an adder is
the time it takes the carry to propagate through the full adders. Since each bit
of the sum output depends on the value of the input carry, the value of Si at
any given stage in the adder will be in its steady-state final value only after
the input carry to that stage has been propagated. In this regard, consider
output S3 in Fig. 4.9. Inputs A3 and B3 are available as soon as input signals
are applied to the adder. However, input carry C3 does not settle to its final
value until C2 is available from the previous stage.
Similarly, C2 has to wait for C1, and so on down to C0. Thus, only after the
carry propagates and ripples through all stages will the last output S3 and
carry C4 settle to their final correct values. The number of gate levels for the
carry propagation can be found from the circuit of the full adder. The circuit
is redrawn with different labels in Fig. 4.10 for convenience. The input and

output variables use the subscript i to denote a typical stage of the adder.
The signals at Pi and Gi settle to their steady-state values after they
propagate through their respective gates. These two signals are common to all
full adders and depend only on the input augend and addend bits.
The signal from the input carry Ci to the output carry Ci+1 propagates
through an AND gate and an OR gate, which contribute two gate levels. If
there are four full adders in the adder, the output carry C4 would have 2 × 4
= 8 gate levels from C0 to C4. For an n-bit adder, there are 2n gate levels
for the carry to propagate from input to output.
The carry propagation time is an important attribute of the adder because it
limits the speed with which two numbers are added. Although the adder (or,
for that matter, any combinational circuit) will always have some value at its
output terminals, the outputs will not be correct unless the signals are given
enough time to propagate through the gates connected from the inputs to the
outputs. Since all other arithmetic operations are implemented by successive
additions, the time consumed during the addition process is critical.
An obvious solution for reducing the carry propagation delay time is to
employ faster gates with reduced delays. However, physical circuits have a
limit to their capability. Another solution is to increase the complexity of
the equipment in such a way that the carry delay time is reduced. There are
several techniques for reducing the carry propagation time in a parallel
adder. The most widely used technique employs the principle of carry
lookahead logic. Consider the circuit of the full adder shown in Fig. 4.10. If
we define two new binary variables Pi = Ai ⊕ Bi and Gi = AiBi, then the
output sum and carry can be expressed as Si = Pi ⊕ Ci and Ci+1 = Gi + PiCi.

Gi is called a carry generate, and it produces a carry of 1 when both Ai and
Bi are 1, regardless of the input carry Ci. Pi is called a carry propagate because
it determines whether a carry into stage i will propagate into stage i + 1 (i.e.,
whether an assertion of Ci will propagate to an assertion of Ci+1). We now
write the Boolean functions for the carry outputs of each stage and
substitute the value of each Ci from the previous equations:
C1 = G0 + P0C0,
C2 = G1 + P1C1 = G1 + P1G0 + P1P0C0,
C3 = G2 + P2C2 = G2 + P2G1 + P2P1G0 + P2P1P0C0.
Since the Boolean function for each output carry is expressed in sum-of-
products form, each function can be implemented with one level of AND
gates followed by an OR gate (or by a two-level NAND circuit). The three Boolean
functions for C1, C2, and C3 are implemented in the carry lookahead
generator shown in Fig. 4.11. Note that this circuit can add in less time
because C3 does not have to wait for C2 and C1 to propagate; in fact, C3 is
propagated at the same time as C1 and C2. This gain in speed of operation
is achieved at the expense of additional complexity (hardware).

Page 88 of 169
The construction of a four-bit adder with a carry lookahead scheme is shown
in Fig. 4.12. Each sum output requires two exclusive-OR gates. The output
of the first exclusive-OR gate generates the Pi variable, and the AND gate
generates the Gi variable. The carries are propagated through the carry look-
ahead generator (similar to that in Fig. 4.11) and applied as inputs to the
second exclusive-OR gate. All output carries are generated after a delay
through two levels of gates. Thus, outputs S1 through S3 have equal
propagation delay times. The two-level circuit for the output carry C4 is not
shown. This circuit can easily be derived by the equation-substitution
method.

Page 89 of 169
DECODERS

Discrete quantities of information are represented in digital systems by


binary codes. A binary code of n bits is capable of representing up to 2^n
distinct elements of coded information. A decoder is a combinational circuit
that converts binary information from n input lines to a maximum of 2^n
unique output lines. If the n-bit coded information has unused combinations,
the decoder may have fewer than 2^n outputs. The decoders presented here are
called n-to-m-line decoders, where m ≤ 2^n. Their purpose is to generate
the 2^n (or fewer) minterms of n input variables. The name decoder is also used in
conjunction with other code converters, such as a BCD-to-seven-segment
decoder. As an example, consider the three-to-eight-line decoder circuit of
Fig. 4.18. The three inputs are decoded into eight outputs each representing
one of the minterms of the three input variables. The three inverters provide
the complement of the inputs, and each one of the eight AND gates
generates one of the minterms. A particular application of this decoder is
binary-to-octal conversion.

The input variables represent a binary number, and the outputs represent the eight digits of a number in
the octal number system. However, a three-to-eight-line decoder can be used for decoding any three-bit
code to provide eight outputs, one for each element of the code. The operation of the decoder may be
clarified by the truth table listed in Table 4.6. For each possible input combination, there are seven
outputs that are equal to 0 and only one that is equal to 1. The output whose value is equal to 1
represents the minterm equivalent of the binary number currently available in the input lines. Some
decoders are constructed with NAND gates. Since a NAND gate produces the AND operation with an
inverted output, it becomes more economical to generate the decoder minterms in their complemented
form. Furthermore, decoders include one or more enable inputs to control the circuit operation. A two-
to-four-line decoder with an enable input constructed with NAND gates is shown in Fig. 4.19. The
circuit operates with complemented outputs and a complement enable input.

The decoder is enabled when E is equal to 0 (i.e., active-low enable). As indicated by the truth table,
only one output can be equal to 0 at any given time; all other outputs are equal to 1. The output whose
value is equal to 0 represents the minterm selected by inputs A and B. The circuit is disabled when E is
equal to 1, regardless of the values of the other two inputs. When the circuit is disabled, none of the
outputs are equal to 0 and none of the minterms are selected.
In general, a decoder may operate with complemented or uncomplemented outputs. The enable input
may be activated with a 0 or with a 1 signal. Some decoders have two or more enable inputs that must
satisfy a given logic condition in order to enable the circuit.
A decoder with enable input can function as a demultiplexer, a circuit that receives information from a
single line and directs it to one of 2^n possible output lines. The selection of a specific output is
controlled by the bit combination of n selection lines. The decoder of Fig. 4.19 can function as a one-
to-four-line demultiplexer when E is taken as a data input line and A and B are taken as the selection
inputs. The single input variable E has a path to all four outputs, but the input information is directed
to only one of the output lines, as specified by the binary combination of the two selection lines A and
B. This feature can be verified from the truth table of the circuit. For example, if the selection lines AB
= 10, output D2 will be the same as the input value E, while all other outputs are maintained at 1. Because decoder
and demultiplexer operations are obtained from the same circuit, a decoder with an enable input is
referred to as a decoder-demultiplexer. Decoders with enable inputs can be connected together to form
a larger decoder circuit. Figure 4.20 shows two 3-to-8-line decoders with enable inputs connected to
form a 4-to-16-line decoder. When w = 0, the top decoder is enabled and the other is disabled. The
bottom decoder outputs are all 0's, and the top eight outputs generate minterms 0000 to 0111. When w
= 1, the enable conditions are reversed: the bottom decoder outputs generate minterms 1000 to 1111,
while the outputs of the top decoder are all 0's. This example demonstrates the usefulness of enable
inputs in decoders and other combinational logic components. In general, enable inputs are a convenient
feature for interconnecting two or more standard components for the purpose of combining them into
a similar function with more inputs and outputs.

Combinational logic Implementation
A decoder provides the 2^n minterms of n input variables. Each asserted output of the decoder is
associated with a unique pattern of input bits. Since any Boolean function can be expressed in
sum-of-minterms form, a decoder that generates the minterms of the function, together with an
external OR gate that forms their logical sum, provides a hardware implementation of the
function. In this way, any combinational circuit with n inputs and m outputs can be implemented
with an n-to-2^n-line decoder and m OR gates. The procedure for implementing a combinational
circuit by means of a decoder and OR gates requires that the Boolean function for the circuit be
expressed as a sum of minterms. A decoder is then chosen that generates all the minterms of the
input variables. The inputs to each OR gate are selected from the decoder according to the list of
minterms of each function. This procedure will be illustrated by an example that implements a
full-adder circuit. From the truth table of the full adder (see Table 4.4), we obtain the functions
for the combinational circuit in sum-of-minterms form:
S(x, y, z) = Σ(1, 2, 4, 7),
C(x, y, z) = Σ(3, 5, 6, 7).
Since there are three inputs and a total of eight minterms, we need
a three-to-eight-line decoder. The implementation is shown in Fig. 4.21. The decoder generates
the eight minterms for x, y, and z. The OR gate for output S forms the logical sum of minterms 1,
2, 4, and 7. The OR gate for output C forms the logical sum of minterms 3, 5, 6, and 7. A function
with a long list of minterms requires an OR gate with a large number of inputs. A function
having a list of k minterms can be expressed in its complemented form F′ with 2^n − k minterms.
If the number of minterms in the function is greater than 2^n/2, then F′ can be expressed with
fewer minterms. In such a case, it is advantageous to use a NOR gate to sum the minterms of F′.
The output of the NOR gate complements this sum and generates the normal output F. If NAND
gates are used for the decoder, as in Fig. 4.19, then the external gates must be NAND gates
instead of OR gates. This is because a two-level NAND gate circuit implements a sum-of-minterms
function equivalent to a two-level AND-OR circuit.

ENCODERS
An encoder is a digital circuit that performs the inverse operation of a decoder. An encoder has 2^n
(or fewer) input lines and n output lines. The output lines, as an aggregate, generate the binary
code corresponding to the input value. An example of an encoder is the octal-to-binary encoder
whose truth table is given in Table 4.7. It has eight inputs (one for each of the octal digits) and
three outputs that generate the corresponding binary number. It is assumed that only one input has
a value of 1 at any given time. The encoder can be implemented with OR gates whose inputs
are determined directly from the truth table. Output z is equal to 1 when the input octal digit is 1,
3, 5, or 7; output y is 1 for octal digits 2, 3, 6, or 7; and output x is 1 for digits 4, 5, 6, or 7. These
conditions can be expressed by the following Boolean output functions:
z = D1 + D3 + D5 + D7
y = D2 + D3 + D6 + D7
x = D4 + D5 + D6 + D7
The encoder can be implemented with three OR gates. The encoder defined in Table 4.7 has the
limitation that only one input can be active at any given time. If two inputs are active
simultaneously, the output produces an undefined combination.
For example, if D3 and D6 are 1 simultaneously, the output of the encoder will be 111, because all
three outputs are equal to 1. The output 111 does not represent either binary 3 or binary 6. To
resolve this ambiguity, encoder circuits must establish an input priority to ensure that only one
input is encoded. If we establish a higher priority for inputs with higher subscript numbers, and if
both D3 and D6 are 1 at the same time, the output will be 110 because D6 has higher priority than
D3. Another ambiguity in the octal-to-binary encoder is that an output with all 0's is generated
when all the inputs are 0; but this output is the same as when D0 is equal to 1. The discrepancy
can be resolved by providing one more output to indicate whether at least one input is equal to 1.
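The three OR equations, and the ambiguity just described, can be checked with a small sketch (the function name is ours):

```python
def octal_encoder(d):
    """Octal-to-binary encoder from the OR equations above.

    d -- list [D0..D7]; the encoder assumes exactly one input is 1.
    Returns the binary code (x, y, z) of the active input.
    """
    x = d[4] | d[5] | d[6] | d[7]
    y = d[2] | d[3] | d[6] | d[7]
    z = d[1] | d[3] | d[5] | d[7]
    return x, y, z
```

With only D5 active the output is 101; with D3 and D6 both active it yields the undefined combination 111, illustrating why a priority scheme is needed.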

Priority Encoder
A priority encoder is an encoder circuit that includes the priority function. The operation of the priority
encoder is such that if two or more inputs are equal to 1 at the same time, the input having the highest
priority will take precedence. The truth table of a four-input priority encoder is given in Table 4.8. In
addition to the two outputs x and y, the circuit has a third output designated by V, which is a valid-bit
indicator that is set to 1 when one or more inputs are equal to 1. If all inputs are 0, there is no valid
input and V is equal to 0. The other two outputs are not inspected when V equals 0 and are specified as
don't-care conditions. Note that whereas X's in the output columns represent don't-care conditions, the
X's in the input columns are useful for representing a truth table in condensed form. Instead of listing all
16 minterms of four variables, the truth table uses an X to represent either 1 or 0. For example, X100
represents the two minterms 0100 and 1100. According to Table 4.8, the higher the subscript number,
the higher the priority of the input. Input D3 has the highest priority, so, regardless of the values of the
other inputs, the output is 11 (binary 3) when D3 is equal to 1.

For example, the fourth row in the table, with inputs XX10, represents the four minterms 0010, 0110,
1010, and 1110. The simplified Boolean expressions for the priority encoder are obtained from the
maps. The condition for output V is an OR function of all the input variables. The priority encoder is
implemented in Fig. 4.23 according to the following Boolean functions:
x = D2 + D3,
y = D3 + D1D2′,
V = D0 + D1 + D2 + D3.

Multiplexers:
A multiplexer is a combinational circuit that selects binary information from one of many input lines
and directs it to a single output line. The selection of a particular input line is controlled by a set of
selection lines. Normally, there are 2^n input lines and n selection lines whose bit combinations
determine which input is selected.
A two-to-one-line multiplexer connects one of two 1-bit sources to a common destination, as shown in
Fig. 4.24. The circuit has two data input lines, one output line, and one selection line S. When S = 0, the
upper AND gate is enabled and I0 has a path to the output. When S = 1, the lower AND gate is enabled
and I1 has a path to the output. The multiplexer acts like

an electronic switch that selects one of two sources. The block diagram of a multiplexer is sometimes
depicted by a wedge-shaped symbol, as shown in Fig. 4.24(b). It suggests visually how a selected one
of multiple data sources is directed into a single destination. The multiplexer is often labeled "MUX" in
block diagrams. A four-to-one-line multiplexer is shown in Fig. 4.25. Each of the four inputs I0 through
I3 is applied to one input of an AND gate. Selection lines S1 and S0 are decoded to select a
particular AND gate. The outputs of the AND gates are applied to a single OR gate that provides the
one-line output. The function table lists the input that is passed to the output for each combination of
the binary selection values. To demonstrate the operation of the circuit, consider the case when S1S0 =
10. The AND gate associated with input I2 has two of its inputs equal to 1 and the third input
connected to I2. The other three AND gates have at least one input equal to 0, which makes their
outputs equal to 0. The output of the OR gate is now equal to the value of I2, providing a path from the
selected input to the output. A multiplexer is also called a data selector, since it selects one of many
inputs and steers the binary information to the output line. The AND gates and inverters in the
multiplexer resemble a decoder circuit, and indeed they decode the selection input lines. In general, a
2^n-to-1-line multiplexer is constructed from an n-to-2^n decoder by adding 2^n input lines to it, one to
each AND gate. The outputs of the AND gates are applied to a single OR gate. The size of a
multiplexer is specified by the number 2^n of its data input lines and the single output line. The n
selection lines are implied from the 2^n data lines. As in decoders, multiplexers may have an enable
input to control the operation of the unit. When the enable input is in the inactive mode, the outputs are
disabled, and when it is in the active mode, the circuit functions as a normal multiplexer. Multiplexer
circuits can be combined with common selection inputs to provide multiple-bit selection logic.
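The gate-level operation described above can be sketched in a few lines of Python. This is a hedged behavioral model of the four-to-one-line multiplexer of Fig. 4.25, not tied to any particular hardware description language:

```python
from itertools import product

def mux4(i0, i1, i2, i3, s1, s0):
    """Gate-level model of a 4-to-1-line multiplexer: the selection
    lines are decoded into four AND gates feeding a single OR gate."""
    and0 = (1 - s1) & (1 - s0) & i0   # enabled when S1 S0 = 00
    and1 = (1 - s1) & s0 & i1         # enabled when S1 S0 = 01
    and2 = s1 & (1 - s0) & i2         # enabled when S1 S0 = 10
    and3 = s1 & s0 & i3               # enabled when S1 S0 = 11
    return and0 | and1 | and2 | and3

# Function-table check: every selection combination passes exactly
# the chosen data input through to the output.
for data in product((0, 1), repeat=4):
    for sel in range(4):
        assert mux4(*data, sel >> 1, sel & 1) == data[sel]
```

The loop replays the function table: for S1S0 = 10, for instance, only the third AND gate can be nonzero, so the output follows I2.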

The circuit has four multiplexers, each capable of selecting one of two input lines. Output Y0 can be
selected to come from either input A0 or input B0. Similarly, output Y1 may have the value of A1 or B1,
and so on. Input selection line S selects one of the lines in each of the four multiplexers. The enable
input E must be active (i.e., asserted) for normal operation. Although the circuit contains four 2-to-1-
line multiplexers, we are more likely to view it as a circuit that selects one of two 4-bit sets of data
lines. As shown in the function table, the unit is enabled when E = 0. Then, if S = 0, the four A inputs
have a path to the four outputs. If, by contrast, S = 1, the four B inputs are applied to the outputs. The
outputs have all 0's when E = 1, regardless of the value of S.

OTHER TWO-LEVEL IMPLEMENTATIONS

The types of gates most often found in integrated circuits are NAND and NOR gates. For this reason,
NAND and NOR logic implementations are the most important from a practical point of view. Some
(but not all) NAND or NOR gates allow the possibility of a wire connection between the outputs of
two gates to provide a specific logic function. This type of logic is called wired logic. For example,
open-collector TTL NAND gates, when tied together, perform wired-AND logic. (The open-collector
TTL gate is shown in Chapter 10, Fig. 10.1.) The wired-AND logic performed with two NAND gates is
depicted in Fig. 3.28(a). The AND gate is drawn with the lines going through the center of the gate to
distinguish it from a conventional gate. The wired-AND gate is not a physical gate, but only a symbol to
designate the function obtained from the indicated wired connection. The logic function implemented
by the circuit of Fig. 3.28(a) is

F = (AB)'(CD)' = (AB + CD)'
A wired-logic gate does not produce a physical second-level gate, since it is just a wire connection.
Nevertheless, for discussion purposes, we will consider the circuits of Fig. 3.28 as two-level
implementations. The first level consists of NAND (or NOR) gates and the second level has a single
AND (or OR) gate. The wired connection in the graphic symbol will be omitted in subsequent
discussions.

Nondegenerate Forms
It will be instructive from a theoretical point of view to find out how many two-level combinations
of gates are possible. We consider four types of gates: AND, OR, NAND, and NOR. If we assign one
type of gate for the first level and one type for the second level, we find that there are 16 possible
combinations of two-level forms. (The same type of gate can be in the first and second levels, as in
a NAND-NAND implementation.) Eight of these combinations are said to be degenerate forms
because they degenerate to a single operation. This can be seen from a circuit with AND gates in the
first level and an AND gate in the second level. The output of the circuit is merely the AND function
of all input variables. The remaining eight nondegenerate forms produce an implementation in sum-
of-products form or product-of-sums form. The eight nondegenerate forms are as follows:

AND-OR        OR-AND
NAND-NAND     NOR-NOR
NOR-OR        NAND-AND
OR-NAND       AND-NOR
The first gate listed in each of the forms constitutes a first level in the implementation. The second
gate listed is a single gate placed in the second level. Note that any two forms listed on the same
line are duals of each other. The AND-OR and OR-AND forms are the basic two-level forms
discussed in Section 3.4. The NAND-NAND and NOR-NOR forms were presented in
Section 3.6. The remaining four forms are investigated in this section.

AND-OR-INVERT Implementation

The two forms NAND-AND and AND-NOR are equivalent and can be treated together. Both
perform the AND-OR-INVERT function, as shown in Fig. 3.29. The AND-NOR form resembles the
AND-OR form, but with an inversion done by the bubble in the output of the NOR gate. It
implements the function

F = (AB + CD + E)'

By using the alternative graphic symbol for the NOR gate, we obtain the diagram of
Fig. 3.29(b). Note that the single variable E is not complemented, because the only change
made is in the graphic symbol of the NOR gate. Now we move the bubble from the input terminal of
the second-level gate to the output terminals of the first-level gates. An inverter is needed for the single
variable in order to compensate for the bubble. Alternatively, the inverter can be removed, provided
that input E is complemented. The circuit of Fig. 3.29(c) is a NAND-AND form and was shown in Fig.
3.28 to implement the AND-OR-INVERT function. An AND-OR implementation requires an
expression in sum-of-products form. The AND-OR-INVERT implementation is similar, except for the
inversion. Therefore, if the complement of the function is simplified into sum-of-products form (by
combining the 0's in the map), it will be possible to implement F' with the AND-OR part of the
function. When F' passes through the always-present output inversion (the INVERT part), it will
generate the output F of the function. An example of the AND-OR-INVERT implementation will be
shown subsequently.
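As a quick sanity check of the equivalence just described, the following Python sketch exhaustively compares the AND-NOR form of Fig. 3.29(a) with the NAND-AND form of Fig. 3.29(c) for F = (AB + CD + E)':

```python
from itertools import product

def and_nor(a, b, c, d, e):
    # Two AND gates feeding a NOR gate: F = (AB + CD + E)'
    return 1 - ((a & b) | (c & d) | e)

def nand_and(a, b, c, d, e):
    # Two NAND gates plus an inverter on E, feeding an AND gate
    return (1 - (a & b)) & (1 - (c & d)) & (1 - e)

# The two circuits agree on all 32 input combinations.
assert all(and_nor(*v) == nand_and(*v) for v in product((0, 1), repeat=5))
```

The exhaustive comparison is exactly the De Morgan step performed graphically by moving the bubbles.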

OR-AND-INVERT Implementation

The OR-NAND and NOR-OR forms perform the OR-AND-INVERT function, as shown in Fig.
3.30. The OR-NAND form resembles the OR-AND form, except for the inversion done by the
bubble in the NAND gate. It implements the function

F = [(A + B)(C + D)E]'

By using the alternative graphic symbol for the NAND gate, we obtain the diagram of
Fig. 3.30(b). The circuit in (c) is obtained by moving the small circles from the inputs of the second-
level gate to the outputs of the first-level gates. The circuit of Fig. 3.30(c) is a NOR-OR form and was
shown in Fig. 3.28 to implement the OR-AND-INVERT function. The OR-AND-INVERT
implementation requires an expression in product-of-sums form. If the complement of the function is
simplified into that form, we can implement F' with the OR-AND part of the function. When F' passes
through the INVERT part, we obtain the complement of F', or F, in the output.
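The dual equivalence can be checked the same way. This hedged sketch compares the OR-NAND form of Fig. 3.30(a) with the NOR-OR form of Fig. 3.30(c) for F = [(A + B)(C + D)E]':

```python
from itertools import product

def or_nand(a, b, c, d, e):
    # Two OR gates and E feeding a NAND gate: F = [(A + B)(C + D)E]'
    return 1 - ((a | b) & (c | d) & e)

def nor_or(a, b, c, d, e):
    # Two NOR gates plus an inverter on E, feeding an OR gate
    return (1 - (a | b)) | (1 - (c | d)) | (1 - e)

# The two circuits agree on all 32 input combinations.
assert all(or_nand(*v) == nor_or(*v) for v in product((0, 1), repeat=5))
```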

HAZARDS

In designing asynchronous sequential circuits, care must be taken to conform with certain restrictions
and precautions to ensure that the circuits operate properly. The circuit must be operated in
fundamental mode with only one input changing at any time and must be free of critical races. In
addition, there is one more phenomenon, called a hazard, that may cause the circuit to malfunction.
Hazards are unwanted switching transients that may appear at the output of a circuit because different
paths exhibit different propagation delays. Hazards occur in combinational circuits, where they may
cause a temporary false output value. When they occur in asynchronous sequential circuits, hazards
may result in a transition to a wrong stable state. It is therefore necessary to check for possible hazards
and determine whether they can cause improper operations. If so, then steps must be taken to
eliminate their effect.

Hazards In Combinational Circuits


A hazard is a condition in which a change in a single variable produces a momentary change in output
when no change in output should occur. The circuit of Fig. 9.33(a) depicts the occurrence of a hazard.
Assume that all three inputs are initially equal to 1. This causes the output of gate 1 to be 1, that of gate
2 to be 0, and that of the circuit to be 1. Now consider a change in x2 from 1 to 0. Then the output of
gate 1 changes to 0 and that of gate 2 changes to 1, leaving the output at 1. However, the output may
momentarily go to 0 if the propagation delay through the inverter is taken into consideration. The
delay in the inverter may cause the output of gate 1 to change to 0 before the output of gate 2 changes
to 1. In that case, both inputs of gate 3 are momentarily equal to 0, causing the output to go to 0 for the
short time during which the input signal from x2 is delayed while it is propagating through the inverter
circuit. The circuit of Fig. 9.33(b) is a NAND implementation of the same Boolean function, and it has
a hazard for the same reason. Because gates 1 and 2 are NAND gates, their outputs are the complement
of the outputs of the corresponding AND gates. When x2 changes from 1 to 0, both inputs of gate 3
may be equal to 1, causing the output to produce a momentary change to 0 when it should have stayed
at 1. The two circuits shown in Fig. 9.33 implement the Boolean function in sum-of-products form:

Y = x1x2 + x2'x3

This type of implementation may cause the output to go to 0 when it should remain a 1. If, however, the
circuit is implemented instead in product-of-sums form (see Section 3.5), namely,

Y = (x1 + x2)(x2' + x3)

then the output may momentarily go to 1 when it should remain 0. The first case is referred to as a
static 1-hazard and the second case as a static 0-hazard. A third type of hazard, known as a dynamic
hazard, causes the output to change three or more times when it should change from 1 to 0 or from 0 to
1. Figure 9.34 illustrates the three types of hazards. When a circuit is implemented in sum-of-products
form with AND-OR gates or with NAND gates, the removal of static 1-hazards guarantees that no
static 0-hazards or dynamic hazards will occur. A hazard can be detected by inspection of the map of
the particular circuit. To illustrate, consider the map in Fig. 9.35(a), which is a plot of the function
implemented in Fig. 9.33. The change in x2 from 1 to 0 moves the circuit from minterm 111 to minterm
101. The hazard exists because the change in input results in a different product term covering the two
minterms.

Minterm 111 is covered by the product term implemented in gate 1 of Fig. 9.33, and minterm 101 is
covered by the product term implemented in gate 2. Whenever the circuit must move from one product
term to another, there is a possibility of a momentary interval when neither term is equal to 1, giving
rise to an undesirable 0 output. The remedy for eliminating a hazard is to enclose the two minterms in
question with another product term that overlaps both groupings. This situation is shown in the map of
Fig. 9.35(b), where the two minterms that cause the hazard are combined into one product term. The
hazard-free circuit obtained by such a configuration is shown in Fig. 9.36. The extra gate in the circuit
generates the product term x1x3. In general, hazards in combinational circuits can be removed by
covering any two minterms that may produce a hazard with a product term common to both. The
removal of hazards requires the addition of redundant gates to the circuit.
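The effect of the redundant gate can be illustrated with a small unit-delay simulation in Python. The only assumption made here is that the inverter producing x2' is one time step slower than the direct path; all other gates are treated as instantaneous:

```python
def simulate(x2_waveform, redundant_term=False, x1=1, x3=1):
    """Unit-delay model of Y = x1*x2 + x2'*x3: the inverter output
    x2' lags the input x2 by one time step."""
    outputs = []
    prev_x2 = x2_waveform[0]          # value the inverter saw one step ago
    for x2 in x2_waveform:
        x2_not = 1 - prev_x2          # delayed inverter output
        y = (x1 & x2) | (x2_not & x3)
        if redundant_term:            # extra gate generating x1*x3
            y |= x1 & x3
        outputs.append(y)
        prev_x2 = x2
    return outputs

wave = [1, 0, 0, 0]                           # x2 falls from 1 to 0
print(simulate(wave))                         # [1, 0, 1, 1]: static 1-hazard
print(simulate(wave, redundant_term=True))    # [1, 1, 1, 1]: hazard removed
```

With x1 = x3 = 1, the falling edge on x2 momentarily leaves both product terms at 0, producing the false 0; the redundant term x1x3 holds the output at 1 through the transition.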
Hazards In Sequential Circuits
In normal combinational-circuit design associated with synchronous sequential circuits, hazards are of
no concern, since momentary erroneous signals are not generally troublesome. However, if a
momentary incorrect signal is fed back in an asynchronous sequential circuit, it may cause the circuit to
go to the wrong stable state. This situation is illustrated in Fig. 9.37. If the circuit is in total stable
state yx1x2 = 111 and input x2 changes from 1 to 0, the next total stable state should be 110. However,
because of the hazard, output Y may go to 0 momentarily. If this false signal feeds back into gate 2
before the output of the inverter goes to 1, the output of gate 2 will remain at 0 and the circuit will
switch to the incorrect total stable state 010. This malfunction can be eliminated by adding an extra
gate, as is done in Fig. 9.36.

Implementation with SR Latches
Another way to avoid static hazards in asynchronous sequential circuits is to implement the circuit
with SR latches. A momentary 0 signal applied to the S or R inputs of a NOR latch will have no effect
on the state of the circuit. Similarly, a momentary 1 signal applied to the S and R inputs of a NAND
latch will have no effect on the state of the latch. In Fig. 9.33(b) we observed that a two-level sum-of-
products expression implemented with NAND gates may have a static 1-hazard if both inputs of gate 3
go to 1, changing the output from 1 to 0 momentarily. But if gate 3 is part of a latch, the momentary 1
signal will have no effect on the output, because a third input to the gate will come from the
complemented side of the latch; that input will be equal to 0 and thus maintain the output at 1. To
clarify what was just said, consider a NAND SR latch with the following Boolean functions for S and R:
S = AB + CD
R = A'C
Since this is a NAND latch, we must apply the complemented values to the inputs:
S = (AB + CD)' = (AB)'(CD)'
R = (A'C)'

This implementation is shown in Fig. 9.38(a). S is generated with two NAND gates and one AND gate.
The Boolean function for output Q is
Q = (Q'S)' = [Q'(AB)'(CD)']'
This function is generated in Fig. 9.38(b) with two levels of NAND gates. If output Q is equal to 1, then
Q' is equal to 0. If two of the three inputs go momentarily to 1, the NAND gate associated with output Q
will remain at 1 because Q' is maintained at 0. Figure 9.38(b) shows a typical circuit that can be used
to construct asynchronous sequential circuits. The two NAND gates forming the latch normally have
two inputs. However, if the S or R functions contain two or more product terms when expressed as a
sum of products, then the corresponding NAND gate of the SR latch will have three or more inputs.
Thus, the two terms in the original sum-of-products expression for S are AB and CD, and each is



implemented with a NAND gate whose output is applied to the input of the NAND latch. In this way,
each state variable requires a two-level circuit of NAND gates. The first level consists of NAND gates
that implement each product term in the original Boolean expression of S and R. The second level
forms the cross-coupled connection of the SR latch, with inputs that come from the outputs of each
NAND gate in the first level.
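A behavioral sketch of this arrangement in Python may help. It iterates the cross-coupled NAND latch driven by S = AB + CD and R = A'C until it settles; the function names and the fixed iteration count are illustrative assumptions, not part of the figure:

```python
def nand(*inputs):
    # A NAND gate outputs 0 only when all of its inputs are 1.
    return 0 if all(inputs) else 1

def settle(a, b, c, d, q=0, qp=1):
    """Iterate the NAND latch of the S = AB + CD, R = A'C example
    (Fig. 9.38) until it reaches a stable state."""
    for _ in range(4):                    # a few passes suffice to settle
        s_bar = nand(a, b) & nand(c, d)   # (AB)'(CD)' = (AB + CD)'
        r_bar = nand(1 - a, c)            # (A'C)'
        q = nand(s_bar, qp)               # Q = (Q'S)' in latch form
        qp = nand(r_bar, q)
    return q

assert settle(1, 1, 0, 0, q=0, qp=1) == 1   # S = AB + CD = 1 sets the latch
assert settle(0, 0, 1, 0, q=1, qp=0) == 0   # R = A'C = 1 resets it
assert settle(0, 0, 0, 0, q=1, qp=0) == 1   # S = R = 0 holds the state
```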
Essential Hazards

Thus far, we have considered what are known as static and dynamic hazards. Another type of hazard
that may occur in asynchronous sequential circuits is called an essential hazard. This type of hazard is
caused by unequal delays along two or more paths that originate from the same input. An excessive
delay through an inverter circuit in comparison to the delay associated with the feedback path may
cause such a hazard. Essential hazards cannot be corrected by adding redundant gates as in static
hazards. The problem that they impose can be corrected by adjusting the amount of delay in the
affected path. To avoid essential hazards each feedback loop must be handled with individual care to
ensure that the delay in the feedback path is long enough compared with delays of other signals that
originate from the input terminals. This problem tends to be specialized, as it depends on the particular
circuit used and the size of the delays that are encountered in its various paths.



Unit – IV

Part A: Synthesis of Symmetric Networks


Relay Contacts, Analysis and Synthesis of Contact Networks, Symmetric Networks, Identification of
Symmetric Functions and realization of the same.
Relay Contacts: Combinational networks can be constructed using semiconductor switches (such as
BJTs or FETs) or mechanical switches (relay contacts). A comparison of the two types is given below:
Relay contacts:
-Bidirectional (as the contact is mechanical, the flow can be in both directions of the switch).
-Slow (relay switch-ON and switch-OFF times are large).
-Not useful in computer applications, where speed of operation is important; useful in slow applications like traffic-lights control, elevators, etc.

Semiconductor switches:
-Unidirectional.
-Fast.
-Can be used in all applications.

Relay operation:
A relay is an electromechanical device which contains a coil and one or more contacts. When the coil
is excited by applying the rated voltage, the coil becomes an electromagnet and changes the
position of the contact by attracting it. If the excitation is removed, the contact goes back to its normal
position due to spring action. The different types of contacts are NO (normally open), NC (normally
closed), and transfer (changeover) contacts. The symbols are as given below.

Using relays, implementing some Basic logic functions is given below:

Analysis and Synthesis of Contact Networks:


Analysis of a two-terminal contact network means the determination of its transfer function. For
networks which have more than two terminals, the transfer function is determined for each pair of
terminals. Synthesis is the converse of analysis: the desired network performance is specified
by a switching expression, and the corresponding circuit is derived.



a. Analysis of series parallel networks:
As shown above, if two contacts are in series, T = x.y (an AND logic), and if two contacts are in
parallel, T = x + y (an OR logic). If the variable appears complemented, an NC contact is
used, and if it is uncomplemented, an NO contact is used. Any switching function can be implemented
using these series-parallel networks, as described below:

Ex: Analyze the below switching logic circuit.


In the upper portion, there are two parallel branches, each a series circuit: y'.z + z'.y. This is in
series with w', so the upper portion realizes (y'.z + z'.y).w'.
The lower portion has three parallel paths, with switching function w + y' + (z'.x').
The upper portion and the lower portion are in parallel, and these two parallel paths are in series with x'.
Hence, the transfer function of the given circuit is
T = x'.{(y'.z + z'.y).w' + w + y' + z'.x'}
By simplifying the above,
T = w'x'y'z + w'x'yz' + wx' + x'y' + x'z'
  = x'(w + y' + z')
The simplified circuit is
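The simplification can be confirmed exhaustively; the hedged sketch below evaluates both expressions for all sixteen input combinations:

```python
from itertools import product

def original(w, x, y, z):
    # T = x' . {(y'.z + z'.y) . w' + w + y' + z'.x'}
    upper = (((1 - y) & z) | ((1 - z) & y)) & (1 - w)
    lower = w | (1 - y) | ((1 - z) & (1 - x))
    return (1 - x) & (upper | lower)

def simplified(w, x, y, z):
    # T = x'(w + y' + z')
    return (1 - x) & (w | (1 - y) | (1 - z))

# Both transfer functions agree for every input combination.
assert all(original(*v) == simplified(*v) for v in product((0, 1), repeat=4))
```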

b. Analysis of non-series-parallel networks:


The analysis given above is not applicable to non-series-parallel networks (i.e., bridge-type
networks). Their analysis is done in two ways: using tie sets and using cut sets.

Ex: Analyze the following circuit using tie sets:


The analysis is done by tracing all paths from one terminal to the other. The path from terminal i to
terminal j can be completed in different ways, which are known as tie sets. If any one or more of these
paths is ON, the path from input to output exists. These paths are
w.x, w.v.z, y.v.x, and y.z.

Hence, T = wx + wvz + yvx + yz.

In this case, we get the expression in SOP form.
Analysis of the above network using cut sets:



It is the dual of the above analysis. Here, all cuts are shown, such that if any one of these cut paths is
open (OFF), the output is 0. Dotted lines are drawn through the network contacts so as to sever all
possible links between the input and the output. The transfer function in this case is in POS form,
obtained by expressing these cuts as sums. The cuts are w+y, x+z, w+v+z, and x+v+y, and the transfer
function is
T = (w+y).(x+z).(w+v+z).(x+v+y).
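Since the tie-set and cut-set analyses describe the same bridge network, the SOP and POS expressions must agree for every input combination. A short exhaustive check in Python:

```python
from itertools import product

def tie_set_form(w, x, y, z, v):
    # SOP: one product term per closed path from input to output
    return (w & x) | (w & v & z) | (y & v & x) | (y & z)

def cut_set_form(w, x, y, z, v):
    # POS: one sum factor per cut that can disconnect input from output
    return (w | y) & (x | z) & (w | v | z) & (x | v | y)

assert all(tie_set_form(*p) == cut_set_form(*p)
           for p in product((0, 1), repeat=5))
```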

Synthesis of Contact networks:


The requirements of the desired network are to be expressed in the form of a switching function. Based
on the description of the requirement, a truth table is prepared; the function is then simplified, and
a corresponding series-parallel network is obtained. A minimal function gives a cost-effective solution:
minimum number of literals, using transfer contacts (changeover contacts, i.e., both complemented and
uncomplemented contacts). Once the minimal series-parallel function is achieved, it is realized
using relay contacts, as explained above.

Symmetric Networks:
Definitions: A switching function of n variables, f(x1,x2,...,xn), is called symmetric (or totally
symmetric) if and only if it is invariant under any permutation of its variables.
It is called partially symmetric in the variables xi, xj, where {xi,xj} is a subset of {x1,x2,...,xn}, if
and only if the interchange of the variables xi and xj leaves the function unchanged.
Example: f(a,b,c) = a'b'c + ab'c' + a'bc' is totally symmetric because any interchange of variables
leaves the function unchanged, whereas
f(a,b,c) = a'b'c + ab'c' is partially symmetric in the variables a and c.
The variables in which a function is symmetric are called the variables of symmetry.
A symmetric function is denoted S^{a1,a2,...,ak}(x1,x2,...,xn), where S designates the property of
symmetry, the superscripts a1,...,ak designate the a-numbers, and (x1,x2,...,xn) designates the
variables of symmetry.
Example 1: The function f(a,b,c) = a'b'c + a'bc' + ab'c' assumes the value 1 when and only when
exactly one of its three variables is 1.
This function is denoted S^{1}(a,b,c); similarly, the symmetric function S^{1,3}(a,b,c) is
f(a,b,c) = abc + a'b'c + ab'c' + a'bc'.
Example 2: Let f1(a,b,c,d) = S^{0,2,4}(a,b,c,d) and f2(a,b,c,d) = S^{3,4}(a,b,c,d);
then f3(a,b,c,d) = f1 + f2 = S^{0,2,3,4}(a,b,c,d) and f4(a,b,c,d) = f1.f2 = S^{4}(a,b,c,d).
The complement of a symmetric function is also a symmetric function, whose a-numbers are those
included in the set {0,1,...,n} and not included in the original function.
For example, [S^{0,2,4}(a,b,c,d)]' = S^{1,3}(a,b,c,d) for the set {0,1,2,3,4}.

Representation of symmetric functions:



The basic network for a symmetric function is shown below. The network is drawn for four variables
and can be extended to n variables. It is a multi-output network consisting of a single input and 5
outputs numbered from 0 to 4.
• A network which realizes a symmetric function is called a symmetric network. Contacts of a
symmetric network are arranged in such a way that the input can propagate in two directions:
-from bottom to top, and
-from left to right.
Contacts of the operated relays (NO type) shift the input upward to the successive level, while contacts
of the unoperated relays (NC type) shift the input to the right.
These properties of symmetric functions make it possible to simplify the network in various ways.
Realization of Symmetric functions:
Switch Realization of the symmetric function:

Lattice Realization of symmetric function:


Example of symmetric network, Function realization and Synthesis:

Let us realize the symmetric function S^{1,4}(a,b,c,d):

-It is necessary to join the output terminals labeled 1 and 4.
-All unused terminals are then deleted.
-A minimal network of the function can thus be achieved.
-This simplified network represents the Boolean function f = a'b'c'd + a'b'cd' + a'bc'd' + ab'c'd' + abcd.
This simplification of a symmetric network is called synthesis of the symmetric network.

Synthesis of symmetric network:



Observation:

The mirror image of any symmetric network realizes a function that is symmetric in the negations of
the same variables for which the original function is symmetric.
Identification of Symmetric Functions:
The procedure for identifying symmetric function is as follows:
1. Obtain column sums
a. if more than two different sums occur (case 1), the function is not symmetric.
b. If two different sums occur (case 2), compare the total of these two sums with the number of rows in
the table. If they are not equal (case 2a), the function is not symmetric. If they are equal (case 2b),
complement the columns corresponding to either one of the column sums (preferably the one of fewer
occurrences) and continue to step 2.
c. If all column sums are identical (case 3), compare the common column sum with one-half the number
of rows in the table. If they are not equal (case 3a), continue to step 2. If they are equal (case 3b), continue to step 3.
2. Obtain row sums and check each for sufficient occurrence; that is, if a is a row sum and n is the
number of variables, then that row sum must occur n!/((n-a)!.a!) times.
a. If any row sum does not occur the required number of times, the function is not symmetric.
b. If all row sums occur the required number of times, the function is symmetric, its a-numbers are
given by the different row sums in column a. and its variables of symmetry are given at the top of the
table.
3. Obtain row sums and check each for sufficient occurrence.
a. If all row sums occur the required number of times, the function is symmetric.
b. If any row sum does not occur the required number of times, expand the function about any of its
variables; that is, find functions g and h such that f = x'g + xh. Write g and h in tabular form and find
their column sums. Determine all variable complementations required for the identification of
symmetries in g and h. Test f under the same variable complementations. If all row sums occur the
required number of times, f is symmetric; if any row sum does not occur the required number of times,
f is not symmetric.



Examples:

1. f(x,y,z) = ∑ (1,2,4,7).

The truth table is

x y z | Row sum
0 0 1 | 1
0 1 0 | 1
1 0 0 | 1
1 1 1 | 3
Column sums: 2 2 2

All column sums are identical (case 3). The common column sum is 2, which equals one-half the
number of rows (case 3b), so we check the row sums for sufficient occurrence, i.e., n!/((n-a)!.a!):
3!/(2!.1!) = 3 and 3!/(0!.3!) = 1. Both row sums occur the required number of times, so the function is
symmetric and can be expressed as S^{1,3}(x,y,z).

2. f(w,x,y,z) = ∑(0,1,3,5,8,10,11,12,13,15).
The truth table is:

w x y z | Row sum
0 0 0 0 | 0
0 0 0 1 | 1
0 0 1 1 | 2
0 1 0 1 | 2
1 0 0 0 | 1
1 0 1 0 | 2
1 0 1 1 | 3
1 1 0 0 | 2
1 1 0 1 | 3
1 1 1 1 | 4
Column sums: 6 4 4 6

There are two different column sums (case 2). Their total, 6 + 4 = 10, equals the number of rows
(case 2b), so we complement the columns corresponding to one of the two sums, preferably the one of
fewer occurrences: here x and y.

w x' y' z | Row sum
0 1 1 0 | 2
0 1 1 1 | 3
0 1 0 1 | 2
0 0 1 1 | 2
1 1 1 0 | 3
1 1 0 0 | 2
1 1 0 1 | 3
1 0 1 0 | 2
1 0 1 1 | 3
1 0 0 1 | 2
Column sums: 6 6 6 6

Go to step 2: check the row sums for sufficient occurrence, i.e., n!/((n-a)!.a!): 4!/(2!.2!) = 6 and
4!/(1!.3!) = 4. Both row sums occur the required number of times, so the function is symmetric
and can be expressed as f(w,x,y,z) = S^{2,3}(w,x',y',z).

If columns w and z are complemented instead, the function can be expressed as:
f(w,x,y,z) = S^{1,2}(w',x,y,z').
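The conclusion can be verified directly: in the variables of symmetry (w, x', y', z) the function must depend only on how many of them equal 1. A small Python check:

```python
from itertools import product

MINTERMS = {0, 1, 3, 5, 8, 10, 11, 12, 13, 15}

def f(w, x, y, z):
    return int(8 * w + 4 * x + 2 * y + z in MINTERMS)

# f should be 1 exactly when 2 or 3 of the symmetry variables
# (w, x', y', z) equal 1, i.e. f = S^{2,3}(w, x', y', z).
for w, x, y, z in product((0, 1), repeat=4):
    ones = w + (1 - x) + (1 - y) + z
    assert f(w, x, y, z) == int(ones in (2, 3))
```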



Part B: Threshold Logic
Threshold Element, Capabilities and Limitations of Threshold Logic, Elementary Properties, Synthesis of
threshold networks (Unate functions, Linear separability, Identification and realization of threshold
functions, Map-based synthesis of two-level threshold networks).

Introduction:
Threshold elements are another type of switching element. Logic circuits constructed using threshold
elements usually consist of fewer elements and simpler interconnections compared with those built from
conventional gates. Conventional-gate logic is specified by Boolean algebra, whereas the logic of threshold
gates is specified by arithmetic equations. All basic logic gates and universal gates can be implemented using
a threshold gate (an XOR gate cannot be implemented using a single threshold gate). Also, some simple
Boolean functions can be implemented using a single threshold gate.

Threshold Elements:
A threshold element has n two-valued inputs x1, x2, ..., xn and a single two-valued output y. Its internal
parameters are a threshold T and a weight wi associated with each input variable xi. The values of T and the
wi may be any real, finite, positive or negative numbers. The input-output relation of a threshold element is
defined as follows:

y = 1 if and only if w1.x1 + w2.x2 + ... + wn.xn >= T, and y = 0 otherwise,

where the sum and product operations are the conventional arithmetic ones. The sum w1.x1 + ... + wn.xn is
called the weighted sum of the element.
Threshold element Symbol:

Example:
Write the input-output relation of the threshold gate given below and obtain the switching function for the
same.

Ans: The inputs are x1, x2, and x3, with weights -1, 2, and 1 respectively. The threshold value is T =
1/2. The output Y is calculated as in the following table.



For the output Y, 1 is entered if the weighted sum is greater than or equal to 1/2; otherwise, 0 is entered.
Y = f(x1, x2, x3) = ∑(1,2,3,6,7) = x1'.x3 + x2 (after simplification).
A majority gate is a special type of threshold element. A three-input majority gate produces an output
value 1 if a majority of its inputs (i.e., two or three) are 1. It implements a majority function.
A minority gate produces an output value 1 if a majority of its inputs are 0. It implements a minority
function.
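The table-based evaluation above is easy to reproduce in Python. The weighted-sum rule is the defining relation of a threshold element, and the weights and threshold are those of the worked example:

```python
from itertools import product

def threshold_gate(weights, t, inputs):
    # y = 1 if and only if the weighted sum meets or exceeds T
    return int(sum(w * x for w, x in zip(weights, inputs)) >= t)

# Worked example: weights (-1, 2, 1) and T = 1/2.
minterms = [m for m, v in enumerate(product((0, 1), repeat=3))
            if threshold_gate((-1, 2, 1), 0.5, v)]
assert minterms == [1, 2, 3, 6, 7]       # f = x1'x3 + x2
```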

Capabilities and Limitations of Threshold Logic:


Threshold elements are more powerful than conventional gates. Their higher capability is due to the
ability of a single threshold element to realize a larger class of functions than is realizable by any single
conventional gate. Any type of conventional gate can be realized by a threshold gate with appropriate
weights and threshold value. For example, a two-input NAND gate can be realized by a single threshold
element with weights -1, -1 and threshold value T = -3/2. Similarly, an OR gate can be realized with unity
weights and T = 1/2.
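These weight-threshold assignments can be checked exhaustively; the sketch below assumes the >= convention of the defining relation:

```python
from itertools import product

def threshold_gate(weights, t, inputs):
    # y = 1 if and only if the weighted sum meets or exceeds T
    return int(sum(w * x for w, x in zip(weights, inputs)) >= t)

for a, b in product((0, 1), repeat=2):
    # Weights (-1, -1) with T = -3/2 realize a two-input NAND gate.
    assert threshold_gate((-1, -1), -1.5, (a, b)) == 1 - (a & b)
    # Unity weights with T = 1/2 realize a two-input OR gate.
    assert threshold_gate((1, 1), 0.5, (a, b)) == (a | b)
```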
A Threshold gate realizing a NAND gate:

Because of the wide range of available weights and threshold values, a large class of switching functions
can be realized by a single threshold element. However, not every switching function can be realized by a
single threshold element.

Example: A function that cannot be realized by a single threshold element:


Consider the function f(x1, x2, x3, x4) = x1.x2 + x3.x4, with weights w1, w2, w3, w4 and threshold value T.
The output of this element must be 1 for each of the input combinations x1x2x3'x4' and x1'x2'x3x4, and the
output must be 0 for x1'x2x3'x4 and x1x2'x3x4'. Thus,

w1 + w2 >= T and w3 + w4 >= T, while w2 + w4 < T and w1 + w3 < T.

Adding the first pair of inequalities gives w1 + w2 + w3 + w4 >= 2T, while adding the second pair gives
w1 + w2 + w3 + w4 < 2T. The two requirements are conflicting, and no threshold element can realize the
above function.
Hence, given a switching function f, it must first be checked whether it is realizable with a single
threshold element, and if it is realizable, the appropriate weights and threshold value are to be
calculated. This is done by deriving 2^n linear simultaneous inequalities from the truth table and solving
them. For the input combinations for which f = 1, the weighted sums have to exceed or equal T, and for
f = 0, the weighted sums have to be less than T. If a solution (not necessarily unique) to the above
inequalities exists, it provides values for the weights and threshold. If no solution exists, then f is not
a threshold function.
Example: For the function f(x1, x2, x3) = ∑(0, 1, 3), check whether the function is realizable by a single
threshold element and, if so, find the weights.
Ans: The truth table for the above function yields the following constraints. The combination (0, 0, 0), for
which f = 1, requires 0 >= T, so T must be negative (or zero); the combinations (0, 1, 0) and (1, 0, 0), for
which f = 0, then require w2 < T and w1 < T, so w1 and w2 are also negative. Comparing (0, 1, 1), for
which f = 1, with (1, 0, 1), for which f = 0, gives w2 + w3 >= T > w1 + w3, so w2 has to be greater than w1.
Finally, (0, 0, 1) with f = 1 requires that w3 be greater than or equal to T; w3 may be positive. If the weights
are restricted to integer values with smallest magnitudes, then
w2 = -1, w1 = -2, T = -1/2, and w3 = 1.
With the above weights and Threshold value, all the combinations are satisfied and hence, f is a Threshold
function.
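The result can be confirmed exhaustively; this short sketch (helper name illustrative) checks all eight input combinations against the weight-Threshold values found above:

```python
def threshold_gate(inputs, weights, T):
    """Output 1 if the weighted sum of the inputs meets or exceeds T."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= T else 0

w = (-2, -1, 1)        # w1, w2, w3 from the example
T = -0.5
minterms = {0, 1, 3}   # f(x1, x2, x3) = sum(0, 1, 3)

for m in range(8):
    x = ((m >> 2) & 1, (m >> 1) & 1, m & 1)   # (x1, x2, x3)
    assert threshold_gate(x, w, T) == (1 if m in minterms else 0)
```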

A switching function that can be realized by a single threshold element is called a threshold function.

Limitations of Threshold logic: The limitation is its sensitivity to variations in the circuit parameters.
Therefore, the maximum number of inputs and the Threshold value T are to be restricted and care is to be
taken to increase the difference between the values of the weighted sums.

Elementary Properties
Property 1: For a given Threshold function, if one of the inputs is complemented, the same function can be
realized by a single Threshold element by negating the weight of the inverted input and subtracting the
value of that weight from the Threshold value T.
Consider a function f(x1, x2, .., xj, .. , xn) which is realized by V1 = {w1, w2, .., wj, .., wn; T}and if xj input is
complemented, then, f(x1, x2, .., xj', .., xn) can be realized by V2 = {w1, w2, …, -wj, .., wn; T-wj}.
The above property gives various other conclusions like the following:

Property 2: If a function is realizable by a single threshold element, then, by an appropriate selection of
complemented and uncomplemented input variables, it is possible to obtain a realization by a Threshold
element whose weights have any desired sign pattern, with a correspondingly changed Threshold value.
Property 3: If a function is realizable by a single threshold element, then it is realizable by an element with
only positive weights (by allowing input variables in both complemented and uncomplemented forms).
Property 4: If a function f(x1, x2, .., xn) is realizable by a single threshold element whose weight Threshold
vector V1 = { w1, w2, …, wn; T}, then its complement f'(x1, x2, .., xn) is realized by a single Threshold
element with weight-Threshold vector V2 = { -w1, -w2, …, -wn; -T}.
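Property 4 can be demonstrated with the weight-Threshold values from the previous example; note that the property tacitly assumes no input combination gives a weighted sum exactly equal to T (true here, since the sums are integers and T = -1/2):

```python
def threshold_gate(inputs, weights, T):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

w, T = (-2, -1, 1), -0.5                       # realizes f = sum(0, 1, 3), from the example above
neg_w, neg_T = tuple(-wi for wi in w), -T      # V2 = {-w1, -w2, -w3; -T}

for m in range(8):
    x = ((m >> 2) & 1, (m >> 1) & 1, m & 1)    # (x1, x2, x3)
    f = threshold_gate(x, w, T)
    f_comp = threshold_gate(x, neg_w, neg_T)
    assert f_comp == 1 - f                     # V2 realizes the complement f'
```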



Synthesis of Threshold Networks
The methods for the identification and realization of threshold functions and the synthesis of networks of
threshold elements ( called as threshold networks) is done in the following ways:

 Unate function
A function f(x1, x2, .., xn) is said to be positive in a variable xi if there exists a disjunctive or conjunctive
expression for the function in which xi appears only in uncomplemented form. A function f(x1, x2, .., xn) is
said to be negative in xi if there exists a disjunctive or conjunctive expression for f in which xi appears only
in complemented form. If f is either positive or negative in xi, then it is said to be unate in xi; the variable
then appears in only one of its complemented and uncomplemented forms.

Ex 1: The function f= x1x2' + x2x3' is positive in x1 and negative in x3 but is not unate in x2.
If a function f(x1, x2, .., xn) is unate in each of its variables, then it is called Unate function.
Ex 2: The function f = x1'x2 + x1.x2.x3' is unate because, after simplification, f = x1'.x2 + x2.x3' has no
variable in both its complemented and uncomplemented forms.
Ex 3: The function f = x1. x2' + x1'.x2 is clearly not unate.
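For completely specified functions, f is positive (negative) in xi exactly when f is monotone non-decreasing (non-increasing) in xi, which gives a mechanical unateness test; the sketch below applies it to Ex 1 (function names are illustrative):

```python
from itertools import product

def cofactors(f, i, n):
    """Pairs (f with x_i = 0, f with x_i = 1) over all settings of the other variables."""
    pairs = []
    for bits in product((0, 1), repeat=n - 1):
        lo = bits[:i] + (0,) + bits[i:]
        hi = bits[:i] + (1,) + bits[i:]
        pairs.append((f(*lo), f(*hi)))
    return pairs

def unate_in(f, i, n):
    """Return 'pos', 'neg', or None for variable x_i (0-indexed)."""
    pairs = cofactors(f, i, n)
    if all(a <= b for a, b in pairs):
        return 'pos'
    if all(a >= b for a, b in pairs):
        return 'neg'
    return None

f = lambda x1, x2, x3: (x1 and not x2) or (x2 and not x3)   # Ex 1
assert unate_in(f, 0, 3) == 'pos'    # positive in x1
assert unate_in(f, 2, 3) == 'neg'    # negative in x3
assert unate_in(f, 1, 3) is None     # not unate in x2
```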
If f (x1, x2, …, xn) is positive in xi, then it can be expressed as
f = xi.g1 + h1, where g1 and h1 are independent of xi.
If f (x1, x2, .., xn) is negative in xi, then it can be expressed as
f = xi'.g2 + h2, where g2 and h2 are independent of xi.
Hence, the existence of two such functions, g1 and h1 (g2 and h2), is a necessary and sufficient condition for
f to be positive (negative) in xi.

Geometric representation of Unate functions:


Unate functions can be represented conveniently on a cube. For an n-variable function, an n-cube with 2^n
vertices is used, each vertex representing a minterm. A line is drawn between every pair of vertices that
differ in just one variable. If the function assumes the value 1 at a vertex, it is called a true vertex; if the
function assumes the value 0 there, it is called a false vertex.
Ex: Function f = x'.y' + x.z, then the Geometric representation is as follows:
It is a three variable function. Hence, 3 dimensional cube is required. The bolder lines connecting the two
pairs of true vertices i.e. the pair (1,1,1) and (1,0,1) and the pair (0,0,1) and (0,0,0) represent the cubes xz
and x'y' respectively.

A Three-cube representing f=x'y' + xz.

 Linear separability
In the n-cube representation of a threshold function, the linear equation
w1x1 + w2x2 + … + wnxn = T
corresponds to an (n-1)-dimensional hyperplane that cuts the n-cube. Then
f = 0 when w1x1 + w2x2 + … + wnxn < T and
f = 1 when w1x1 + w2x2 + … + wnxn >= T,
and the hyperplane separates the true vertices from the false ones.
A switching function whose true vertices can be separated by a linear equation from its false ones is called a
linearly separable function, and the functional property that makes such a separation possible is called
linear separability.

 Identification and Realization of Threshold Functions:


To find out whether a given function is a Threshold function and if it is, to calculate the values of the
weights and Threshold value:
a. Test for the unateness of the given function: get the minimal form of the given switching function and
check that no variable appears in both its complemented and uncomplemented forms.
Ex: Given function f = x1.x2.x3'.x4 + x2.x3'.x4'
simplifying it, f = x1.x2.x3' + x2.x3'.x4'
This is a unate function, as no variable exists in both complement and non-complement form.

b. Convert the given function into another function ϕ which has all variables in uncomplemented form only.
ϕ = x1.x2.x3 + x2.x3.x4
c. Find all minimal true and maximal false vertices of ϕ.
There are two minimal true vertices: (1, 1, 1, 0) and (0, 1, 1, 1).
The maximal false vertices are (1, 1, 0, 1), (1, 0, 1, 1) and (0, 1, 1, 0).
d. Check whether the function ϕ is linearly separable and, if it is, find an appropriate set of weights and a
threshold, i.e. determine the coefficients of the separating hyperplane.
In the above example, p = 2 and q = 3, so there are six inequalities, each requiring the weighted sum of a
minimal true vertex to exceed that of a maximal false vertex:
w1 + w2 + w3 > w1 + w2 + w4, w1 + w2 + w3 > w1 + w3 + w4, w1 + w2 + w3 > w2 + w3,
w2 + w3 + w4 > w1 + w2 + w4, w2 + w3 + w4 > w1 + w3 + w4, w2 + w3 + w4 > w2 + w3.
From the above, the following constraints must be observed:
w1 > 0, w4 > 0, w3 > w4, w3 > w1, w2 > w4, w2 > w1.
Letting w1 = w4 = 1 and w2 = w3 = 2, then T must be smaller than 5 but larger than 4. Selecting T=9/2, then
the weight-threshold vector for ϕ V = { 1, 2, 2, 1; 9/2}.
e. Convert this weight-threshold vector to one that corresponds to the original function f. Since x3 and x4
were complemented in step b, negate w3 and w4 and subtract them from T (Property 1):
V = { 1, 2, -2, -1; 9/2 - 2 - 1} = { 1, 2, -2, -1; 3/2}.
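Steps a-e can be verified by checking the final weight-threshold vector against the original function on all sixteen input combinations (a sketch; names are illustrative):

```python
def threshold_gate(inputs, weights, T):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= T else 0

def f(x1, x2, x3, x4):
    """The given function of step a: f = x1.x2.x3' + x2.x3'.x4'."""
    return (x1 and x2 and not x3) or (x2 and not x3 and not x4)

V = ((1, 2, -2, -1), 1.5)                           # result of step e
for m in range(16):
    x = tuple((m >> k) & 1 for k in (3, 2, 1, 0))   # (x1, x2, x3, x4)
    assert threshold_gate(x, *V) == int(f(*x))
```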

 Map based synthesis of two level Threshold networks:


If a given switching function cannot be realized with a single Threshold element, then this non-threshold
function is decomposed into two or more factors, each of which is a Threshold function.



For a function of 3 or 4 variables, the identification problem is solved by detecting certain map patterns
which are admissible for Threshold functions, as given below:

If the 1-cells of the given function follow any of the above patterns, then that function can be realized using
a Threshold element. Otherwise, the 1-cells pattern is divided into two admissible patterns.
Ex: Let f (x1, x2, x3, x4) = ∑(2, 3, 6, 7, 10, 12, 14, 15)
The map for this function exhibiting two admissible patterns is

The threshold elements for realizing each of the admissible patterns are as below, as per the realization of
the Threshold function described above.

The Threshold logic realization for the function f is

The weight of g in the second element is determined by computing the minimal weighted sum that can
occur in the second element when g has the value 1. Since f must have the value 1 whenever g does, this
minimal weighted sum must be larger than the threshold of the second element. In this case, the minimal
weighted sum is wg, and it occurs when x1 = x2 = 0 and x3 = x4 = 1. Clearly, wg must be larger than 5/2 and,
therefore, the value wg = 3 has been selected.



UNIT V
Part A: Sequential Machines Fundamentals
Introduction:
In combinational circuits, the output depends upon the present inputs only. In sequential circuits, the
outputs depend not only upon the present inputs, but also upon the previous conditions.
In sequential circuits, memory elements (generally Flip-flops) are used as storage elements, to keep the
previous conditions. This stored condition is given as feedback at the input to generate the next
condition.

Block diagram of a Sequential circuit


Sequential circuits are of two types: Asynchronous sequential circuits where no clock is used for state
transfer and Synchronous Sequential circuits where clock is used for operation.

State table:
In Sequential circuits, the effect of all previous inputs on the outputs is represented by a state of the
circuit. Thus, the output of the circuit at any time depends upon its current state and input. These also
determine the next state of the circuit. The relationship that exists among the inputs, outputs, present
states and next states can be specified by either a State Table or a State Diagram.

The state Table representation of a sequential circuit consists of three sections labeled present state,
next state and output. The present state designates the state of Flip-flops before the occurrence of a
clock pulse. The next state shows the states of Flip-flops after the clock pulse, and the output section
lists the value of the output variables during the present state.
An example of a state table is as follows:

State Assignment:
State Assignment procedures are concerned with methods for assigning binary values to states in such a
way as to reduce the cost of the combinational logic of the sequential circuit. This is helpful when a sequential circuit
is viewed from its external input-output terminals. Such a circuit may follow a sequence of internal



states, but the binary values of the individual states may be of no consequence as long as the circuit
produces the required sequence of outputs for any given sequence of inputs.

Finite State Model – Basic Definitions


A sequential logic function has a "memory" feature and takes past inputs into account in order to
decide on the output. The Finite State Machine is an abstract mathematical model of a sequential logic
function. It has a finite number of inputs, outputs and states.

The sequence of operations is defined by a state table or state diagram. Example state diagrams for
different types of flip-flops are shown below:
SR Flip-flop: T Flip-flop:

JK Flip-flop:

D Flip-flop:

Memory elements and their Excitation Functions:


In sequential circuits, memory elements are required to preserve the previous state, so that it
can be used for the generation of the next state. The most commonly used memory elements are Flip-flops. SR, D,
T and JK flip-flops are studied here with symbol, circuit diagram using NAND gates, Timing Diagram,
Truth table, excitation table and characteristic equation for each of these Flip-flops.

1. SR Flip-flop (Set-Reset Flip-flop)

Symbol
Timing Diagram



Circuit diagram: Truth table:

Operation: It has two inputs S and R.


Whenever the clock is active, if S = 1 (Set is given) and R = 0, the output goes to 1.
If S = 0, R = 1 (Reset is given), the output goes to 0.
If S = 0, R = 0 (neither Set nor Reset is given), the output will not change and remains the same
as the previous state.
If S = 1, R = 1, asking the Flip-flop to Set and Reset simultaneously, the output cannot be predicted. This
input condition is forbidden.
Excitation table:
Qn Qn+1 S R Remarks
0  0    0 X Reset the output / No change
0  1    1 0 Set the output
1  0    0 1 Reset the output
1  1    X 0 Set the output / No change

Characteristic Equation: Qn+1 = S + R'.Qn

2. D Flip-flop (Data Flip-flop / Delay Flip-flop):

Symbol Timing diagram



Circuit diagram

Truth table:

Operation:

It has one input: D. During the clock active time, the output is strobed with whatever input is given at
D.

Excitation Table:
Qn Qn+1 D Remarks
0  0    0 Data loaded as 0
0  1    1 Data loaded as 1
1  0    0 Data loaded as 0
1  1    1 Data loaded as 1

Characteristic Equation: Qn+1 = D

3. JK Flip-flop:

Timing Diagram

Symbol



Circuit Diagram: Truth Table:

Operation:

The JK Flip-flop is an improvement on the SR Flip-flop: it behaves like an SR Flip-flop for the 0-0, 0-1
and 1-0 inputs, while the formerly forbidden 1-1 condition is used to Toggle the output (as in a T
Flip-flop).

Excitation Table:
Qn Qn+1 J (Set) K (Reset) Remarks
0  0    0       X         No change / Reset
0  1    1       X         Set / Toggle
1  0    X       1         Reset / Toggle
1  1    X       0         No change / Set

Characteristic Equation: Qn+1 = J.Qn' + K'.Qn

4. T Flip-flop (Toggle Flip-flop):

Timing Diagram

Symbol



Circuit diagram:
Truth Table:

Operation:
It has one input T. Whenever the clock is given, the output toggles if T = 1; otherwise, there is no change
in the output.
Excitation Table:
Qn Qn+1 T Remarks
0  0    0 No change in the output
0  1    1 Output toggles
1  0    1 Output toggles
1  1    0 No change in the output

Characteristic Equation: Qn+1 = T.Qn' + T'.Qn = T XOR Qn
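The four characteristic equations can be collected into next-state functions and spot-checked against the excitation tables (a sketch; function names are illustrative):

```python
# Next-state (characteristic) equations for the four flip-flops.
def sr_next(Q, S, R):  return S or (not R and Q)                # Qn+1 = S + R'.Qn (S = R = 1 forbidden)
def d_next(Q, D):      return D                                 # Qn+1 = D
def jk_next(Q, J, K):  return (J and not Q) or (not K and Q)    # Qn+1 = J.Qn' + K'.Qn
def t_next(Q, T):      return Q != T                            # Qn+1 = T xor Qn

# Spot checks against the excitation tables above
assert sr_next(0, 1, 0) == 1 and sr_next(1, 0, 1) == 0   # set, reset
assert jk_next(1, 1, 1) == 0 and jk_next(0, 1, 1) == 1   # J = K = 1 toggles
assert t_next(1, 1) == 0 and t_next(0, 0) == 0           # toggle / hold
```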

Clock Timing
Setup time: the setup time is the amount of time that an input signal (to the device) must be stable
(unchanging) before the clock edge in order to be sampled reliably and thus avoid possible
metastability.
Hold time: the hold time is the amount of time that an input signal must remain stable (unchanging) after
the clock edge in order to be sampled reliably and thus avoid possible metastability.

Master slave Flip-flop:


Although the JK Flip-flop is an improvement on the clocked SR Flip-flop, it still suffers from a timing
problem called "race around". With J = K = 1, if the output Q changes state before the timing pulse of
the clock input has had time to go OFF, the output keeps toggling between ON and OFF. This is called the
racing problem. To avoid it, the clock pulse width must be kept as short as possible, which is difficult. The
Master-Slave Flip-flop solves this problem.
In a Master-Slave Flip-flop, two JK Flip-flops are used. The first (master) Flip-flop accepts data during the
+ve edge of the clock cycle, and this is transferred to the final output by the second (slave) Flip-flop during
the -ve edge of the clock cycle. As the final output changes only at the end of the clock cycle, racing is
avoided.



Circuit Diagram:

Block Diagram:

Timing Diagram:

Synthesis of Synchronous Sequential Circuits:


For Synchronous Sequential circuits, the following procedure is used for Design.
Draw the state diagram
Draw the State table (excitation table) for each output.
Draw the K -Map for each output
Draw the circuit.

A. Sequence detector:
Design of the 11011 Sequence Detector
A sequence detector accepts as input a string of bits: either 0 or 1.
Its output goes to 1 when a target sequence has been detected.
There are two basic types: overlap and non-overlap.
In a sequence detector that allows overlap, the final bits of one sequence can be the start of another
sequence.
11011 detector with overlap:     X 11011011011
                                 Z 00001001001
11011 detector with no overlap:  Z 00001000001

Problem: Design a 11011 sequence detector using JK flip-flops. Allow overlap.

Step 1 – Derive the State Diagram and State Table for the Problem
Step 1a – Determine the Number of States
We are designing a sequence detector for a 5-bit sequence, so we need 5 states. We label these states A,
B, C, D, and E. State A is the initial state.
Step 1b – Characterize Each State by What has been Input and What is Expected

State Has Awaiting


A -- 11011
B 1 1011
C 11 011
D 110 11
E 1101 1

Step 1c – Do the Transitions for the Expected Sequence


Here is a partial drawing of the state diagram. It has only the sequence expected. Note that the diagram
returns to state C after a successful detection; the final 11 are used again.

Note the labeling of the transitions: X / Z. Thus the expected transition from A to B has an input of 1
and an output of 0.
The transition from E to C has an output of 1 denoting that the desired sequence has been detected.
The sequence is 1 1 0 1 1.

Step 1d – Insert the Inputs That Break the Sequence The sequence is 1 1 0 1 1.

Each state has two lines out of it – one line for a 1 and another line for a 0.
The notes below explain how to handle the bits that break the sequence.

State A in the 11011 Sequence Detector


State A is the initial state. It is waiting on a 1.
If it gets a 0, the machine remains in state A and continues to remain there while 0’s are input.
If it gets a 1, the machine moves to state B, but with output 0.

State B in the 11011 Sequence Detector


If state B gets a 0, the last two bits input were “10”.
This does not begin the sequence, so the machine goes back to state A and waits on the next 1.
If state B gets a 1, the last two bits input were “11”. Go to state C.

State C in the 11011 Sequence Detector


If state C gets a 1, the last three bits input were “111”.
It can use the last two to be the first two 1’s of the sequence 11011, so the machine stays in state C
awaiting a 0.
If state C gets a 0, the last three bits input were “110”. Move to state D.

State D in the 11011 Sequence Detector


If state D gets a 0, the last four bits input were “1100”. These 4 bits are not part of the sequence, so we
start over.
If state D gets a 1, the last four bits input were “1101”. Go to state E.

State E in the 11011 Sequence Detector


If state E gets a 0, the last five bits input were “11010”. These five bits are not part of the sequence, so
start over.
If state E gets a 1, the last five bits input were “11011”, the target sequence.
If overlap is allowed, go to state C and reuse the last two “11”.
If overlap is not allowed, go to state A, and start over.
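The completed state diagram can be simulated as a Mealy machine; the transition table below follows the state descriptions above, and the final assertion reproduces the overlap example given earlier:

```python
# Mealy machine for the 11011 detector with overlap.
# fsm[state][input bit] -> (next state, output Z)
fsm = {
    'A': {0: ('A', 0), 1: ('B', 0)},
    'B': {0: ('A', 0), 1: ('C', 0)},
    'C': {0: ('D', 0), 1: ('C', 0)},
    'D': {0: ('A', 0), 1: ('E', 0)},
    'E': {0: ('A', 0), 1: ('C', 1)},   # detection: reuse the final "11"
}

def detect(bits, state='A'):
    out = []
    for b in bits:
        state, z = fsm[state][b]
        out.append(z)
    return out

x = [int(c) for c in "11011011011"]
assert detect(x) == [int(c) for c in "00001001001"]
```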

Step 1e – Generate the State Table with Output

Step 2 – Determine the Number of Flip-Flops Required


We have 5 states, so N = 5. Since 2^2 = 4 < 5 <= 2^3 = 8, we need three flip-flops.

Step 3 – Assign a unique 3-bit binary number (state assignment) to each state. Straightforward
assignment:
A = 000
B = 001
C = 010
D = 011
E = 100

Non-sequential assignment:
States A and D are given even numbers. States B, C, and E are given odd numbers. The assignment is
as follows.
A = 000
B = 001
C = 011 States 010, 110, and 111 are not used.
D = 100
E = 101
Step 4 – Generate the Transition Table With Output

Step 4a – Generate the Output Table and Equation


The output table is generated by copying from the table just completed.

The output equation can be obtained by inspection. As is the case with most sequence detectors, the
output Z is 1 for only one combination of present state and input. Thus we get Z = X.Y2.Y1'.Y0.
This can be simplified by noting that the state 111 does not occur, so the answer is Z = X.Y2.Y0.
Step 5 – Separate the Transition Table into 3 Tables, One for Each Flip-Flop. We shall generate a
present state / next state table for each of the three flip-flops, labeled Y2, Y1 and Y0. It is important to
note that each of the tables must include the complete present state, labeled by the three-bit vector
Y2Y1Y0.



D2 = X'.Y1 + X.Y2.Y0'
D1 = X.Y0
D0 = X

Step 6 – Decide on the type of flip-flops to be used. The problem stipulates JK flip-flops, so we use
them.

Steps 7 and 8 are skipped in this lecture.


Step 9 – Summarize the Equations
Z = X.Y2.Y0
J2 = X’.Y1 and K2 = X’ + Y0
J1 = X.Y0 and K1 = X’
J0 = X and K0 = X’ .
Step 10 – Draw the Circuit using JK Flip-flops

Here is the same design implemented with D flip-flops.

A. Serial Binary adder: Please Refer Kohavi Section : 9.1



B. Binary Counter:
Design a 3 bit up counter using T Flip-flops:
Three bits give 8 states. There are no inputs from external circuits; the state changes on every clock
edge.
State Diagram:

State Table: For T Flip-flops, the excitation table is


T = Q XOR Q+

Next state Maps and simplification:


In the present requirement,
QA toggles when QB = QC = 1,
QB toggles when QC = 1,
QC toggles on every clock edge.
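These toggle conditions can be checked by simulation (a sketch; QA is taken as the MSB, as above):

```python
# Simulate the 3-bit up counter built from T flip-flops (QA = MSB, QC = LSB).
qa = qb = qc = 0
seen = []
for _ in range(8):
    seen.append((qa, qb, qc))
    ta = qb & qc          # QA toggles when QB = QC = 1
    tb = qc               # QB toggles when QC = 1
    tc = 1                # QC toggles on every clock edge
    qa, qb, qc = qa ^ ta, qb ^ tb, qc ^ tc

# The sequence is exactly the binary count 000 .. 111
assert seen == [tuple(int(b) for b in format(n, '03b')) for n in range(8)]
```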
The circuit diagram is



C. Parity Bit Generator:
A sequential parity checker: an 8th bit is added to each group of 7 bits such that the total number of 1
bits is odd (odd parity). If any odd number of bits in the 8-bit block changes value, the presence of
this error can be detected. A parity checker for serial data is designed as follows:
Block diagram:

State Diagram: State table:


S0: even number of 1s received so far
S1: odd number of 1s received so far.

Since only two states are there, one flip-flop is sufficient.


The excitation table for T flip-flop is as below: The circuit is as follows:

from this table, T is 1 whenever X is 1.
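The resulting checker reduces to a single T flip-flop with T = X, i.e. the state bit simply toggles on every incoming 1 (a sketch; the function name is illustrative):

```python
# Serial parity checker: one T flip-flop with T = X.
# Q = 1 <=> an odd number of 1s has been received so far (state S1).
def check_parity(bits):
    q = 0              # start in S0 (even number of 1s)
    for x in bits:
        q ^= x         # T flip-flop toggles whenever X = 1
    return q

assert check_parity([1, 0, 1, 1, 0, 0, 1, 1]) == 1   # five 1s: odd
assert check_parity([1, 1, 0, 0]) == 0               # two 1s: even
```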



Unit V
Part B: Counters and Shift Registers
Ripple Counter, Shift Registers and their types, Ring Counters, Twisted Ring Counters.
Introduction:
Counter: Counters are circuits that cycle through a specified number of states.
Two types of counters:
(A) synchronous (parallel) counters
(B) asynchronous (ripple) counters
Ripple counters (Asynchronous counters) allow some flip-flop outputs to be used as a source of clock
for other flip-flops.
Synchronous counters apply the same clock to all flip-flops.
Asynchronous counters: the flip-flops do not change states at exactly the same time as they do not have
a common clock pulse. These are also known as ripple counters, as the input clock pulse “ripples”
through the counter – cumulative delay is a drawback.
n flip-flops give a MOD (modulus) 2^n counter. (Note: A MOD-x counter cycles through x states.)
The output of the last flip-flop (MSB) divides the input clock frequency by the MOD number of the
counter; hence a counter is also a frequency divider.
Example:
4-bit Ripple Binary Counter (Negative Triggering):
Block Diagram using JK Flip-flops:

Timing Diagram:



Operation:
The JK Flip-Flop is configured as T Flip-Flop. In fact, any type of Flip-Flop can be used in Counters, if
they are configured as T Flip-Flop. As both the J and K are shorted together and given to logic 1, the
Flip-Flop output toggles whenever the input clock changes from 1 to 0 (Negative-edge triggering).
Based on the clock CLK given to FF0, the output Q0 of FF0 changes whenever the clock goes
from 1 to 0. Q0 is given as the clock to FF1. Hence Q1, the output of FF1, changes
whenever Q0 changes from 1 to 0 (after every two clock pulses), and so on. The timing diagram is
shown above.
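The ripple behaviour can be modelled by propagating each falling (1 to 0) edge to the next stage (a sketch of the negative-edge-triggered counter above):

```python
# 4-bit ripple counter: each stage toggles on the falling (1 -> 0) edge of the
# previous stage's output; stage 0 toggles on the falling edge of CLK.
q = [0, 0, 0, 0]          # Q0 (LSB) ... Q3 (MSB)
counts = []
for _ in range(16):
    stage = 0
    while stage < 4:
        falling = (q[stage] == 1)   # this stage is about to go 1 -> 0
        q[stage] ^= 1
        if not falling:
            break                   # a 0 -> 1 transition does not ripple onward
        stage += 1
    counts.append(q[3] * 8 + q[2] * 4 + q[1] * 2 + q[0])

assert counts == list(range(1, 16)) + [0]   # 1, 2, ..., 15, then wrap to 0
```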

Shift Registers and their types:


This sequential device loads the data present on its inputs and then moves or “shifts” it to its output
once every clock cycle, hence the name Shift Register.
A shift register basically consists of several single bit “D-Type Data Latches”, one for each data bit,
either a logic “0” or a “1”, connected together in a serial type daisy-chain arrangement so that the
output from one data latch becomes the input of the next latch and so on.
Data bits may be fed in or out of a shift register serially, that is one after the other from either the left or
the right direction, or all together at the same time in a parallel configuration.
The number of individual data latches required to make up a single Shift Register device is usually
determined by the number of bits to be stored with the most common being 8-bits (one byte) wide
constructed from eight individual data latches.
Shift Registers are used for data storage or for the movement of data and are therefore commonly used
inside calculators or computers to store data such as two binary numbers before they are added
together, or to convert the data from either a serial to parallel or parallel to serial format. The individual
data latches that make up a single shift register are all driven by a common clock (Clk) signal making
them synchronous devices.
Generally, shift registers operate in one of four different modes with the basic movement of data
through a shift register being:
A. Serial-in to Parallel-out (SIPO) - the register is loaded with serial data, one bit at a time, with
the stored data being available at the output in parallel form.
B. Serial-in to Serial-out (SISO) - the data is shifted serially “IN” and “OUT” of the register,
one bit at a time in either a left or right direction under clock control.
C. Parallel-in to Serial-out (PISO) - the parallel data is loaded into the register simultaneously
and is shifted out of the register serially one bit at a time under clock control.
D. Parallel-in to Parallel-out (PIPO) - the parallel data is loaded simultaneously into the
register, and transferred together to their respective outputs by the same clock pulse.



The effect of data movement from left to right through a shift register can be presented graphically as:

Also, the directional movement of the data through a shift register can be either to the left, (left
shifting) to the right, (right shifting) left-in but right-out, (rotation) or both left and right shifting within
the same register thereby making it bidirectional.
A: Serial-in to Parallel-out (SIPO) Shift Register
4-bit Serial-in to Parallel-out Shift Register:

The operation is as follows. Let's assume that all the flip-flops (FFA to FFD) have just been RESET
(CLEAR input) and that all the outputs QA to QD are at logic level "0", i.e. no parallel data output.
If a logic "1" is connected to the DATA input pin of FFA, then on the first clock pulse the output of
FFA, and therefore QA, will be set HIGH to logic "1", with all the other outputs still
remaining LOW at logic "0". Assume now that the DATA input pin of FFA has returned LOW again to
logic "0", giving us one data pulse, 0-1-0.
The second clock pulse will change the output of FFA to logic "0" and set the output of FFB (QB)
HIGH to logic "1", as its input D has the logic "1" level on it from QA. The logic "1" has now moved,
or been "shifted", one place along the register to the right, as it is now at QB.
When the third clock pulse arrives this logic “1” value moves to the output of FFC (QC) and so on until
the arrival of the fifth clock pulse which sets all the outputs QA to QD back again to logic level “0”
because the input to FFA has remained constant at logic level “0”.
The effect of each clock pulse is to shift the data contents of each stage one place to the right, and this
is shown in the following table, until the complete data value 0-0-0-1 is stored in the register. This
data value can now be read directly from the outputs QA to QD.
Then the data has been converted from a serial data input signal to a parallel data output. The truth
table and following waveforms show the propagation of the logic “1” through the register from left to
right as follows.



Basic Data Movement Through A Shift Register
Clock Pulse No. QA QB QC QD
0 0 0 0 0
1 1 0 0 0
2 0 1 0 0
3 0 0 1 0
4 0 0 0 1
5 0 0 0 0
Note that after the fourth clock pulse has ended the 4-bits of data (0-0-0-1) are stored in the register and
will remain there provided clocking of the register has stopped. In practice the input data to the register
may consist of various combinations of logic “1” and “0”. Commonly available SIPO IC’s include the
standard 8-bit 74LS164 or the 74LS594.
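The data movement table can be reproduced with a four-tuple register model (a sketch; the helper name clock is illustrative):

```python
# 4-bit SIPO register: on each clock pulse, QA takes DATA and the rest shift right.
def clock(reg, data):
    qa, qb, qc, qd = reg
    return (data, qa, qb, qc)   # QD's old value is shifted out

reg = (0, 0, 0, 0)                  # after RESET (CLEAR)
serial_in = [1, 0, 0, 0, 0]         # one data pulse (0-1-0), then 0s
table = []
for bit in serial_in:
    reg = clock(reg, bit)
    table.append(reg)

# Matches the "Basic Data Movement" table above (clock pulses 1..5)
assert table == [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1), (0, 0, 0, 0)]
```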

B. Serial-in to Serial-out (SISO) Shift Register


This shift register is very similar to the SIPO above, except that whereas before the data was read directly in
parallel form from the outputs QA to QD, this time the data is allowed to flow straight through the
register and out of the other end. Since there is only one output, the DATA leaves the shift register one
bit at a time in a serial pattern, hence the name Serial-in to Serial-out Shift Register, or SISO.
The SISO shift register is one of the simplest of the four configurations as it has only three connections,
the serial input (SI) which determines what enters the left hand flip-flop, the serial output (SO) which is
taken from the output of the right hand flip-flop and the sequencing clock signal (Clk). The logic circuit
diagram below shows a generalized serial-in serial-out shift register.
4-bit Serial-in to Serial-out Shift Register:

This type of Shift Register acts as a temporary storage device or it can act as a time delay device for the
data, with the amount of time delay being controlled by the number of stages in the register, 4, 8, 16 etc
or by varying the application of the clock pulses. Commonly available ICs include the 74HC595 8-bit
shift register, with 3-state outputs.

C. Parallel-in to Serial-out (PISO) Shift Register:


The Parallel-in to Serial-out shift register acts in the opposite way to the serial-in to parallel-out one
above. The data is loaded into the register in a parallel format in which all the data bits enter their
inputs simultaneously, to the parallel input pins PA to PD of the register. The data is then read out
sequentially in the normal shift-right mode from the register at Q representing the data present at PA to
PD.



This data is output one bit at a time on each clock cycle in a serial format. It is important to note that
with this type of data register a clock pulse is not required to load the register in parallel, but four
clock pulses are required to unload the data.

4-bit Parallel-in to Serial-out Shift Register:


As this type of shift register converts parallel data, such as an 8-bit data word into serial format, it can
be used to multiplex many different input lines into a single serial DATA stream which can be sent
directly to a computer or transmitted over a communications line. Commonly available IC’s include the
74HC166 8-bit Parallel-in/Serial-out Shift Registers.

D. Parallel-in to Parallel-out (PIPO) Shift Register


The final mode of operation is the Parallel-in to Parallel-out Shift Register. This type of shift register
also acts as a temporary storage device or as a time delay device similar to the SISO configuration
above. The data is presented in a parallel format to the parallel input pins PA to PD and then transferred
together directly to their respective output pins QA to QD by the same clock pulse. Then one clock
pulse loads and unloads the register. This arrangement for parallel loading and unloading is shown
below.

4-bit Parallel-in to Parallel-out Shift Register


The PIPO shift register is the simplest of the four configurations as it has only three connections, the
parallel input (PI) which determines what enters the flip-flop, the parallel output (PO) and the
sequencing clock signal (Clk).
Similar to the Serial-in to Serial-out shift register, this type of register also acts as a temporary storage
device or as a time delay device, with the amount of time delay being varied by the frequency of the
clock pulses. Also, in this type of register there are no interconnections between the individual flip-
flops since no serial shifting of the data is required.
Universal Shift Register
Today, there are many high speed bi-directional “universal” type Shift Registers available such as the
TTL 74LS194, 74LS195 or the CMOS 4035 which are available as 4-bit multi-function devices that



can be used in either serial-to-serial, left shifting, right shifting, serial-to-parallel, parallel-to-serial, or
as a parallel-to-parallel multifunction data register, hence the name “Universal”.
These universal shift registers can perform any combination of parallel and serial input to output
operations but require additional inputs to specify desired function and to pre-load and reset the device.
A commonly used universal shift register is the TTL 74LS194 as shown below.
Shift Register Summary: to summarize about Shift Registers
(A) A simple Shift Register can be made using only D-type flip-Flops, one flip-Flop for each data
bit. Even if other Flip-Flops are to be used, then they are to be configured as D Flip-Flops.
(B) The output from each flip-Flop is connected to the D input of the flip-flop at its right.
(C) Shift registers hold the data in their memory which is moved or “shifted” to their required
positions on each clock pulse.
(D) Each clock pulse shifts the contents of the register one bit position to either the left or the
right.
(E) The data bits can be loaded one bit at a time in a series input (SI) configuration or be loaded
simultaneously in a parallel configuration (PI).
(F) Data may be removed from the register one bit at a time for a series output (SO) or removed
all at the same time from a parallel output (PO).
(G) One application of shift registers is the conversion of data between serial and parallel
forms, in either direction.
(H) Shift registers are identified individually as SIPO, SISO, PISO, PIPO, or as a Universal Shift
Register with all the functions combined within a single device.

Ring Counter:
A ring counter is a counter built from a circular shift register: the output of the last
flip-flop is fed back to the input of the first.
There are two types of ring counters:
1) A straight ring counter connects the output of the last shift register to the first shift register input
and circulates a single one (or zero) bit around the ring. For example, in a 4-register ring counter,
with initial register values of 1000, the repeating pattern is: 1000, 0100, 0010, 0001, 1000... . Note
that one of the registers must be pre-loaded with a 1 (or 0) in order to operate properly.
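The circulating-one behaviour described above amounts to a rotate operation, which a short Python sketch makes explicit:

```python
# Sketch of the 4-stage straight ring counter: the last flip-flop's
# output feeds the first, circulating a single 1 around the ring.
q = [1, 0, 0, 0]                      # one stage must be pre-loaded with a 1
for _ in range(4):
    print(q)
    q = [q[-1]] + q[:-1]              # each clock pulse rotates right by one
# prints 1000, 0100, 0010, 0001, then the sequence repeats
```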
Circuit Diagram:

Timing Diagram:



2) A twisted ring counter, also called a switch-tail ring counter, walking ring counter, Johnson
counter or Möbius counter, connects the complement of the output of the last shift register to the
input of the first register and circulates a stream of ones followed by zeros around the ring. For
example, in a 4-register counter, with initial register values of 0000, the repeating pattern is: 0000,
1000, 1100, 1110, 1111, 0111, 0011, 0001, 0000...
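The complemented feedback is the only change from the straight ring counter, as this Python sketch shows; note that it yields 8 states from 4 stages and starts correctly from all zeros:

```python
# Sketch of the 4-stage Johnson (twisted ring) counter: the COMPLEMENT
# of the last output is fed back to the first D input.
q = [0, 0, 0, 0]                      # self-starts from the all-zeros state
seen = []
for _ in range(8):
    seen.append(q[:])
    q = [1 - q[-1]] + q[:-1]          # complemented feedback, then shift right
for state in seen:
    print(state)
# cycles through 8 states: 0000, 1000, 1100, 1110, 1111, 0111, 0011, 0001
```

Checking `seen` also confirms the Gray-code property: each state differs from the next (including the wrap-around back to 0000) in exactly one bit.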
Johnson counters are often favoured, not just because they offer twice as many count states from the
same number of shift registers, but because they are able to self-initialize from the all-zeros state,
without requiring the first count bit to be injected externally at start-up. The Johnson counter
generates a Gray code, a code in which adjacent states differ by only one bit.

Four-bit ring counter sequences:

Straight Ring / Overbeck Counter:

State  Q0  Q1  Q2  Q3
  0     1   0   0   0
  1     0   1   0   0
  2     0   0   1   0
  3     0   0   0   1
(the sequence then repeats from state 0)

Twisted Ring / Johnson Counter:

State  Q0  Q1  Q2  Q3
  0     0   0   0   0
  1     1   0   0   0
  2     1   1   0   0
  3     1   1   1   0
  4     1   1   1   1
  5     0   1   1   1
  6     0   0   1   1
  7     0   0   0   1
(the sequence then repeats from state 0)

Block diagram of Twisted Ring Counter:

Note the inversion of the Q signal from the last shift register before feeding back to the first D
input, making this a Johnson counter.
Timing Diagram:

