Basic Logical Functions and Gates

While each logical element or condition must always have a logic value of either "0" or "1", we also need ways to combine different logical signals or conditions to provide a logical result. For example, consider the logical statement: "If I move the switch on the wall up, the light will turn on." At first glance, this seems to be a correct statement. However, if we look at a few other factors, we realize that there's more to it than this. In this example, a more complete statement would be: "If I move the switch on the wall up and the light bulb is good and the power is on, the light will turn on."

If we look at these two statements as logical expressions and use logical terminology, we can reduce the first statement to:

Light = Switch

This means nothing more than that the light will follow the action of the switch, so that when the switch is up/on/true/1 the light will also be on/true/1. Conversely, if the switch is down/off/false/0 the light will also be off/false/0.

Looking at the second version of the statement, we have a slightly more complex expression:

Light = Switch and Bulb and Power

Normally, we use symbols rather than words to designate the AND function that we're using to combine the separate variables of Switch, Bulb, and Power in this expression. The symbol normally used is a dot, which is the same symbol used for multiplication in some mathematical expressions. Using this symbol, our three-variable expression becomes:

Light = Switch · Bulb · Power

When we deal with logical circuits (as in computers), we not only need to deal with logical functions; we also need some special symbols to denote these functions in a logical diagram. There are three fundamental logical operations, from which all other functions, no matter how complex, can be derived. These functions are named AND, OR, and NOT. Each of these has a specific symbol and a clearly-defined behavior, as follows:

The AND Gate

The AND gate implements the AND function. With the gate shown to the left, both inputs must have logic 1 signals applied to them in order for the output to be a logic 1. With either input at logic 0, the output will be held to logic 0.

There is no limit to the number of inputs that may be applied to an AND function, so there is no functional limit to the number of inputs an AND gate may have. However, for practical reasons, commercial AND gates are most commonly manufactured with 2, 3, or 4 inputs. A standard Integrated Circuit (IC) package contains 14 or 16 pins, for practical size and handling. A standard 14-pin package can contain four 2-input gates, three 3-input gates, or two 4-input gates, and still have room for two pins for power supply connections.

The OR Gate

The OR gate is in a sense the reverse of the AND gate. The OR function, like its verbal counterpart, allows the output to be true (logic 1) if any one or more of its inputs are true. Verbally, we might say, "If it is raining OR if I turn on the sprinkler, the lawn will be wet." Note that the lawn will still be wet if the sprinkler is on and it is also raining. This is correctly reflected by the basic OR function. In symbols, the OR function is designated with a plus sign (+). In logical diagrams, the symbol to the left designates the OR gate.

As with the AND function, the OR function can have any number of inputs. However, practical commercial OR gates are mostly limited to 2, 3, and 4 inputs, as with AND gates.

The NOT Gate, or Inverter

The inverter is a little different from AND and OR gates in that it always has exactly one input as well as one output. Whatever logical state is applied to the input, the opposite state will appear at the output. The NOT function, as it is called, is necessary in many applications and highly useful in others. A practical verbal application might be:

The door is NOT locked = You may enter

The NOT function is denoted by a horizontal bar over the value to be inverted, as shown in the figure to the left. In some cases a single quote mark (') may also be used for this purpose: 0' = 1 and 1' = 0. For greater clarity in some logical expressions, we will use the overbar most of the time.

In the inverter symbol, the triangle actually denotes only an amplifier, which in digital terms means that it "cleans up" the signal but does not change its logical sense. It is the circle at the output which denotes the logical inversion. The circle could have been placed at the input instead, and the logical meaning would still be the same.
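The behavior of the three basic gates can be sketched in a few lines of Python. The function names AND, OR, and NOT here are just illustrative labels, not part of any standard library:

```python
def AND(*inputs):
    """Output is 1 only when every input is 1 (any number of inputs)."""
    return int(all(inputs))

def OR(*inputs):
    """Output is 1 when any input is 1 (any number of inputs)."""
    return int(any(inputs))

def NOT(a):
    """Output is the opposite of the single input."""
    return 1 - a

# The light example from the first page: Light = Switch · Bulb · Power
print(AND(1, 1, 1))  # all conditions true: light is on -> 1
print(AND(1, 0, 1))  # bulb is bad: light stays off -> 0
```

Note that AND and OR accept any number of inputs here, mirroring the point that the functions themselves have no input limit even though commercial gates do.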

The logic gates shown above are used in various combinations to perform tasks of any level of complexity. Some functions are so commonly used that they have been given symbols of their own, and are often packaged so as to provide that specific function directly. On the next page, we'll begin our coverage of these functions.

Derived Logical Functions and Gates
While the three basic functions AND, OR, and NOT are sufficient to accomplish all possible logical functions and operations, some combinations are used so commonly that they have been given names and logic symbols of their own. We will discuss three of these on this page. The first is called NAND, and consists of an AND function followed by a NOT function. The second, as you might expect, is called NOR. This is an OR function followed by NOT. The third is a variation of the OR function, called the Exclusive-OR, or XOR function. As with the three basic logic functions, each of these derived functions has a specific logic symbol and behavior, which we can summarize as follows:

The NAND Gate

The NAND gate implements the NAND function, which is exactly inverted from the AND function you already examined. With the gate shown to the left, both inputs must have logic 1 signals applied to them in order for the output to be a logic 0. With either input at logic 0, the output will be held to logic 1.

The circle at the output of the NAND gate denotes the logical inversion, just as it did at the output of the inverter. Also in the figure to the left, note that the overbar is a solid bar over both input values at once. This shows that it is the AND function itself that is inverted, rather than each separate input.

As with AND, there is no limit to the number of inputs that may be applied to a NAND function, so there is no functional limit to the number of inputs a NAND gate may have. However, for practical reasons, commercial NAND gates are most commonly manufactured with 2, 3, or 4 inputs, to fit in a 14-pin or 16-pin package.

The NOR Gate

The NOR gate is an OR gate with the output inverted. Where the OR gate allows the output to be true (logic 1) if any one or more of its inputs are true, the NOR gate inverts this and forces the output to logic 0 when any input is true. In symbols, the NOR function is designated with a plus sign (+), with an overbar over the entire expression to indicate the inversion. In logical diagrams, the symbol to the left designates the NOR gate. As expected, this is an OR gate with a circle to designate the inversion. The NOR function can have any number of inputs, but practical commercial NOR gates are mostly limited to 2, 3, and 4 inputs, as with other gates in this class, to fit in standard IC packages.

The Exclusive-OR, or XOR Gate

The Exclusive-OR, or XOR function is an interesting and useful variation on the basic OR function. Verbally, it can be stated as, "Either A or B, but not both." The XOR gate produces a logic 1 output only if its two inputs are different. If the inputs are the same, the output is a logic 0. In symbols, the XOR function is designated with a plus (+) sign with a circle around it, and the logic symbol, as shown here, is a variation on the standard OR symbol.

Unlike standard OR/NOR and AND/NAND functions, the XOR function always has exactly two inputs, and commercially manufactured XOR gates are the same. Four XOR gates fit in a standard 14-pin IC package.
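Continuing the Python sketch from the basic gates, the three derived functions are simply the basic ones composed together. The helper names are illustrative only:

```python
def NOT(a):
    return 1 - a

def NAND(*inputs):
    # An AND function followed by NOT.
    return NOT(int(all(inputs)))

def NOR(*inputs):
    # An OR function followed by NOT.
    return NOT(int(any(inputs)))

def XOR(a, b):
    # Exactly two inputs: output is 1 only when the inputs differ.
    return int(a != b)

print(NAND(1, 1), NOR(0, 0), XOR(1, 0))  # 0 1 1
```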

The three derived functions shown above are by no means the only ones, but these form the basis of all the others. On the next page we will look at how the XOR function is derived. Then we will begin our look at practical applications for logic gates in various combinations, to see just how these simple gates can be combined to perform every possible operation in a computer.

Deriving the XOR Function
On the previous page we stated that the Exclusive-OR, or XOR function can be described verbally as, "Either A or B, but not both." In the realm of digital logic there are several ways of stating this in a more detailed and precise format. We won't go into such devices as truth tables and graphic representations here. We will stick with the more complete verbal statement, "NOT A and B, or A and NOT B." The circuit required to implement this description is shown below:

The practical problem with the circuit above is that it contains three different kinds of gates: AND, OR, and NOT. While this illustrates a practical application using all three of the basic gate types, it is cumbersome to construct on a printed circuit board. There are commercial packages which contain four XOR gates, but often only a single XOR function is wanted in a given application. What is also wanted is a way to create that function with a single IC package. This can easily be done with a single quad two-input NAND gate package, using all four of its NAND gates, as shown in the circuit below:
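The NAND-only construction can be checked exhaustively in a short sketch. The wiring below, with one NAND feeding the other three, follows the standard four-gate XOR arrangement; the gate names are illustrative:

```python
def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    n1 = nand(a, b)       # first gate, shared by both branches
    n2 = nand(a, n1)      # second gate
    n3 = nand(b, n1)      # third gate
    return nand(n2, n3)   # fourth gate combines the two branches

# Verify against the XOR truth table for all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == (a ^ b)
print("four NAND gates reproduce XOR")
```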

There are many ways in which the simple logic gates we have examined can be combined to perform useful functions. Some of these circuits produce outputs which are only dependent upon the current logic states of all inputs. These are called combinational logic circuits. Other circuits are designed to actually remember the past states of their inputs, and to produce outputs based on those past signals as well as the current states of their inputs. These circuits can act in accordance with a sequence of input signals, and are therefore known as sequential logic circuits.

In these pages, we will look first at combinational circuits. Then we will move on to sequential circuits. If you wish to skip immediately to sequential circuits, use the navigational links at the top of this page to select the type of circuit you would like to examine.

Adding Binary Numbers
A key requirement of digital computers is the ability to use logical functions to perform arithmetic operations. The basis of this is addition; if we can add two binary numbers, we can just as easily subtract them, or get a little fancier and perform multiplication and division. How, then, do we add two binary numbers? Let's start by adding two binary bits. Since each bit has only two possible values, 0 or 1, there are only four possible combinations of inputs. These four possibilities, and the resulting sums, are:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10

Whoops! That fourth line indicates that we have to account for two output bits when we add two input bits: the sum and a possible carry. Let's set this up as a truth table with two inputs and two outputs, and see where we can go from there.

 A  B | CARRY  SUM
 0  0 |   0     0
 0  1 |   0     1
 1  0 |   0     1
 1  1 |   1     0

Well, this looks familiar, doesn't it? The Carry output is a simple AND function, and the Sum is an Exclusive-OR. Thus, we can use two gates to add these two bits together. The resulting circuit is shown below.
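The two-gate circuit translates directly into code. A minimal sketch of the half adder, returning the pair (carry, sum); the function name is illustrative:

```python
def half_adder(a, b):
    carry = a & b   # the AND gate
    s = a ^ b       # the Exclusive-OR gate
    return carry, s

print(half_adder(1, 1))  # (1, 0): sum 0 with a carry, i.e. binary 10
```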

OK, we've got a good start on this circuit. However, we're not done yet. In a computer, we'll have to add multi-bit numbers together. If each pair of bits can produce an output carry, it must also be able to recognize and include a carry from the next lower order of magnitude. This is the same requirement as adding decimal numbers -- if you have a carry from one column to the next, the next column has to include that carry. We have to do the same thing with binary numbers, for the same reason. As a result, the circuit to the left is known as a "half adder," because it only does half of the job. We need a circuit that will do the entire job.

To construct a full adder circuit, we'll need three inputs and two outputs. Since we'll have both an input carry and an output carry, we'll designate them as CIN and COUT. At the same time, we'll use S to designate the final Sum output. The resulting truth table is shown below.

 A  B  CIN | COUT  S
 0  0   0  |  0    0
 0  0   1  |  0    1
 0  1   0  |  0    1
 0  1   1  |  1    0
 1  0   0  |  0    1
 1  0   1  |  1    0
 1  1   0  |  1    0
 1  1   1  |  1    1

Hmmm. This is looking a bit messy. It looks as if COUT may be either an AND or an OR function, depending on the value of A, and S is either an XOR or an XNOR, again depending on the value of A. Looking a little more closely, however, we can note that the S output is actually an XOR between the A input and the half-adder SUM output with B and CIN inputs. Also, the output carry will be true if any two or all three inputs are logic 1.

What this suggests is also intuitively logical: we can use two half-adder circuits. The first will add A and B to produce a partial Sum, while the second will add CIN to that Sum to produce the final S output. If either half-adder produces a carry, there will be an output carry. Thus, COUT will be an OR function of the half-adder Carry outputs. The resulting full adder circuit is shown below.
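The two-half-adder construction can be sketched directly in code; the OR of the two intermediate carries gives COUT:

```python
def half_adder(a, b):
    return a & b, a ^ b          # (carry, sum)

def full_adder(a, b, cin):
    c1, s1 = half_adder(a, b)    # first half adder: A + B
    c2, s = half_adder(s1, cin)  # second half adder: partial sum + CIN
    return c1 | c2, s            # (COUT, S)

print(full_adder(1, 1, 1))  # (1, 1): binary 11, decimal 3
```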






The circuit above is really too complicated to be used in larger logic diagrams, so a separate symbol, shown to the right, is used to represent a one-bit full adder. In fact, it is common practice in logic diagrams to represent any complex function as a "black box" with input and output signals designated. It is, after all, the logical function that is important, not the exact method of performing that function.

Now we can add two binary bits together, accounting for a possible carry from the next lower order of magnitude, and sending a carry to the next higher order of magnitude. To perform multibit addition the way a computer would, a full adder must be allocated for each bit to be added simultaneously. Thus, to add two 4-bit numbers to produce a 4-bit sum (with a possible carry), you would need four full adders with carry lines cascaded, as shown to the right. For two 8-bit numbers, you would need eight full adders, which can be formed by cascading two of these 4-bit blocks. By extension, two binary numbers of any size may be added in this manner.

It is also quite possible to use this circuit for binary subtraction. If a negative number is applied to the B inputs, the resulting sum will actually be the difference between the two numbers. We'll look at this subject in more detail in the page on Negative Numbers and Binary Subtraction.

In a modern computer, the adder circuitry will include the means of negating one of the input numbers directly, so the circuit can perform either addition or subtraction on demand. Other functions are commonly included in modern implementations of the adder circuit, especially in modern microprocessors.
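Cascading full adders in code mirrors the hardware: each stage's carry out feeds the next stage's carry in. A sketch with numbers stored as bit lists, least-significant bit first (the names are illustrative):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)  # carry if any two inputs are 1
    return cout, s

def ripple_add(a_bits, b_bits, cin=0):
    """Add two equal-length bit lists, LSB first; returns (sum_bits, carry_out)."""
    carry, out = cin, []
    for a, b in zip(a_bits, b_bits):
        carry, s = full_adder(a, b, carry)  # carry ripples to the next stage
        out.append(s)
    return out, carry

# 0111 (7) + 0110 (6), written LSB first:
print(ripple_add([1, 1, 1, 0], [0, 1, 1, 0]))  # ([1, 0, 1, 1], 0) = 1101 = 13
```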

Negative Numbers and Binary Subtraction
We have seen how simple logic gates can perform the process of binary addition. It is only logical to assume that a similar circuit could perform binary subtraction. If we look at the possibilities involved in subtracting one 1-bit number from another, we can quickly see that three of the four possible combinations are easy and straightforward. The fourth one involves a bit more:
0 - 0 = 0
1 - 0 = 1
1 - 1 = 0
0 - 1 = 1, with a borrow bit.

That borrow bit is just like a borrow in decimal subtraction: it subtracts from the next higher order of magnitude in the overall number. Let's see what the truth table looks like.

 A  B | BORROW  A-B
 0  0 |   0      0
 0  1 |   1      1
 1  0 |   0      1
 1  1 |   0      0

This is an interesting result. The difference, A-B, is still an Exclusive-OR function, just as the sum was for addition. The borrow is still an AND function, but is A'B instead of AB.

What we'd like to do, now, is find an easy way to use the binary adder to perform subtraction as well. We already have half of it working: the difference output. Can we simply invert the A input so the AND gate will have the right signals? No, we can't, because that would invert the sense of the Exclusive-OR function.

What would be really nice is to convert B to the negative equivalent of its value, and then use the basic adder circuit to handle both addition and subtraction. To do that, we first need a way to represent negative numbers in binary.

As we discovered when looking at binary counters, once a full count is obtained, the next clock pulse will cause the counter to read zero again. Likewise, if we set up a counter to count backwards, the first clock pulse will cause the count to go from all zeroes to all ones. Thinking along these lines, we can see that the binary number 1111 might represent the decimal number 15, or it could represent the number -1.

Below is the counting sequence for a 4-bit binary number, with decimal equivalents expressed in two ways. First we have the unsigned counting sequence, where all numbers are assumed to be positive. Then we see the signed sequence, which includes both positive and negative numbers.

 Binary | Unsigned Decimal | Signed Decimal
  0000  |        0         |       0
  0001  |        1         |       1
  0010  |        2         |       2
  0011  |        3         |       3
  0100  |        4         |       4
  0101  |        5         |       5
  0110  |        6         |       6
  0111  |        7         |       7
  1000  |        8         |      -8
  1001  |        9         |      -7
  1010  |       10         |      -6
  1011  |       11         |      -5
  1100  |       12         |      -4
  1101  |       13         |      -3
  1110  |       14         |      -2
  1111  |       15         |      -1

Looking at the two decimal counting sequences, we note two factors right away:

1. The positive signed numbers are the same as their unsigned counterparts.
2. Negative signed numbers all correspond to the most significant bit of the binary number being a logic 1.

Because positive numbers are the same in both sequences, they can be used together without difficulty. We only need to keep track of how we want to define the system. And the fact that negative numbers all have the binary MSB = 1 is helpful because the MSB can immediately be used to identify the sign of the number. Indeed, the binary MSB is commonly known as the sign bit. The use of this bit to distinguish between positive and negative numbers also allows us to divide the counting sequence evenly between positive and negative numbers.

Now we need to look at the relationship between the binary numbers for positive and negative versions of the same magnitude. Suppose we invert each bit of 0001 (+1) to get 1110 (-2). If we then increment the result, we get 1111 (-1), which is what we wanted. Will this relationship hold for all negative numbers? In fact, it does work, as you can determine for yourself. To form the negative of any number, first complement all bits of the number. The result is the one's complement of the original number. Then, add 1 to the result, thus forming the two's complement of the original number. Arithmetic involving such numbers is known as two's complement arithmetic.
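The complement-and-increment recipe is easy to sketch for 4-bit numbers; the helper name is illustrative:

```python
def twos_complement(x, bits=4):
    mask = (1 << bits) - 1    # 1111 for 4 bits
    ones = x ^ mask           # invert every bit: the one's complement
    return (ones + 1) & mask  # add 1: the two's complement

print(format(twos_complement(0b0001), '04b'))  # 1111, the pattern for -1
print(format(twos_complement(0b0111), '04b'))  # 1001, the pattern for -7
```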

Now that we have an easy way to obtain the negative of any number, we can convert our original 4-bit adder circuit to an adder/subtractor. By leaving the inputs unchanged, we get the result of A + B. But if we invert B and add 1 with the low-order Cin, we get the result of A - B. We can use Exclusive-OR gates, as shown to the right, to control whether we will add or subtract on any given occasion. With a control input of 0, the XOR gates will leave the B input number unchanged, and will also apply a logic 0 as the initial input carry. This is exactly what we want in order to add the two numbers. However, if we apply a logic 1 to the control input, the XOR gates will invert the B input number to form its one's complement, and will also add 1 through the initial input carry. This changes B to its two's complement.

Thus, the output result will actually be A - B. (Note that in two's complement addition, the output carry is ignored. You can also think of it as an inverted "borrow" bit rather than as a carry, so that a carry of 1 corresponds to a borrow of 0. That logic also holds for the input carry, which also represents an input borrow bit of 0.)

When we add or subtract signed numbers, we need to introduce a new concept: overflow. Overflow occurs when the result has the wrong sign bit for the operation that was performed. For example, if we add two positive numbers (7 and 6), we should get a positive result (13). However, using 4-bit binary numbers, we would add 0111 to 0110 and get 1101 as the result. In signed notation, this is a result of -3, not +13. Therefore, an overflow has occurred, where the result would have to have more bits than the original two numbers.

This is not as much of a problem as you might think. An 8-bit number can have signed values in the range -128 to +127. A 16-bit signed number may hold any value from -32,768 to +32,767. These ranges are sufficient for most practical applications. Where they are not, modern computers can easily use 32-bit numbers (±2.14 × 10⁹) or 64-bit numbers (±9.22 × 10¹⁸) for the purpose.

If we add a positive number to a negative number, overflow cannot occur. Likewise, if we are subtracting two numbers of the same sign, overflow is impossible. But if we add like-signed numbers or subtract unlike-signed numbers, we must be aware of the possibility of overflow, and recognize when it occurs. Modern microprocessors are designed to recognize and report when overflow occurs in any arithmetic operation.
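A sketch of the complete 4-bit adder/subtractor, with the XOR-gate inversion of B, the control bit feeding the initial carry, and the sign-based overflow check described above (the function and variable names are illustrative):

```python
def add_sub_4bit(a, b, subtract):
    """Return (result, overflow) for 4-bit two's complement A +/- B.

    subtract is the control input: 0 to add, 1 to subtract.
    """
    mask = 0xF
    b_in = (b ^ mask) if subtract else b  # XOR gates invert B when subtracting
    total = a + b_in + subtract           # control bit is also the initial carry-in
    result = total & mask                 # the output carry is ignored
    # Overflow: the operand signs match but the result's sign differs.
    sign_a, sign_b, sign_r = a >> 3, b_in >> 3, result >> 3
    overflow = (sign_a == sign_b) and (sign_r != sign_a)
    return result, overflow

print(add_sub_4bit(0b0111, 0b0110, 0))  # (13, True): 7 + 6 overflows to "-3"
print(add_sub_4bit(0b0101, 0b0011, 1))  # (2, False): 5 - 3 = 2
```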

The Two-Input Multiplexer
One circuit I've received a number of requests for is the multiplexer circuit. This is a digital circuit with multiple signal inputs, one of which is selected by separate address inputs to be sent to the single output. It's not easy to describe without the logic diagram, but is easy to understand when the diagram is available. A two-input multiplexer is shown below.

The multiplexer circuit is typically used to combine two or more digital signals onto a single line, by placing them there at different times. Technically, this is known as time-division multiplexing. Input A is the addressing input, which controls which of the two data inputs, X0 or X1, will be transmitted to the output. If the A input switches back and forth at a frequency more than double the frequency of either digital signal, both signals will be accurately reproduced, and can be separated again by a demultiplexer circuit synchronized to the multiplexer.

This is not as difficult as it may seem at first glance; the telephone network combines multiple audio signals onto a single pair of wires using exactly this technique, and is readily able to separate many telephone conversations so that everyone's voice goes only to the intended recipient. With the growth of the Internet and the World Wide Web, most people have heard about T1 telephone lines. A T1 line can transmit up to 24 individual telephone conversations by multiplexing them in this manner.

A very common application for this type of circuit is found in computers, where dynamic memory uses the same address lines for both row and column addressing. A set of multiplexers is used to first select the row address to the memory, then switch to the column address. This scheme allows large amounts of memory to be incorporated into the computer while limiting the number of copper traces required to connect that memory to the rest of the computer circuitry. In such an application, this circuit is commonly called a data selector.

Multiplexers are not limited to two data inputs. If we use two addressing inputs, we can multiplex up to four data signals. With three addressing inputs, we can multiplex eight signals.
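In gate terms the two-input multiplexer is OUT = X0·A' + X1·A, which a short Python sketch captures directly (the function name is illustrative):

```python
def mux2(x0, x1, a):
    """Route X0 to the output when A = 0, X1 when A = 1."""
    return (x0 & (1 - a)) | (x1 & a)

print(mux2(1, 0, 0))  # A = 0 selects X0 -> 1
print(mux2(1, 0, 1))  # A = 1 selects X1 -> 0
```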

The 1-to-2 Line Decoder/Demultiplexer
The opposite of the multiplexer circuit, logically enough, is the demultiplexer. This circuit takes a single data input and one or more address inputs, and selects which of multiple outputs will receive the input signal. The same circuit can also be used as a decoder, by using the address inputs as a binary number and producing an output signal on the single output that matches the binary address input. In this application, the data input line functions as a circuit enabler — if the circuit is disabled, no output will show activity regardless of the binary input number. A one-line to two-line decoder/demultiplexer is shown below.

This circuit uses the same AND gates and the same addressing scheme as the two-input multiplexer circuit shown in these pages. The basic difference is that it is the inputs that are combined and the outputs that are separate. By making this change, we get a circuit that is the inverse of the two-input multiplexer. If you were to construct both circuits on a single breadboard, connect the multiplexer output to the data IN of the demultiplexer, and drive the (A)ddress inputs of both circuits with the same signal, you would find that the original X0 input would be transmitted only to OUT0 and the X1 input would reach only OUT1.

The one problem with this arrangement is that one of the two outputs will be inactive while the other is active. To retain the output signal, we need to add a latch circuit that can follow the data signal while it's active, but will hold the last signal state while the other data signal is active. An excellent circuit for this is the D (or Data) Latch. By placing a latch after each output and using the Addressing input (or its inverse) to control them, we can maintain both output signals at all times. If the Address input changes much more rapidly than the data inputs, the output signals will match the inputs faithfully.

Like multiplexers, demultiplexers are not limited to two data signals. If we use two addressing inputs, we can demultiplex up to four data signals. With three addressing inputs, we can demultiplex eight signals.
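The 1-to-2 demultiplexer uses the same addressing terms as the multiplexer, but applied to separate outputs: OUT0 = IN·A', OUT1 = IN·A. A sketch (the function name is illustrative):

```python
def demux2(data, a):
    """Send the data input to OUT0 when A = 0, to OUT1 when A = 1."""
    out0 = data & (1 - a)
    out1 = data & a
    return out0, out1

print(demux2(1, 0))  # (1, 0): data appears on OUT0
print(demux2(1, 1))  # (0, 1): data appears on OUT1
```

Note that the unselected output simply sits at 0, which is exactly the "inactive output" problem the D latch discussion above addresses.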

Boolean Algebra
One of the primary requirements when dealing with digital circuits is to find ways to make them as simple as possible. This constantly requires that complex logical expressions be reduced to simpler expressions that nevertheless produce the same results under all possible conditions. The simpler expression can then be implemented with a smaller, simpler circuit, which in turn saves the price of the unnecessary gates, and reduces the power and the amount of space required by those gates.

One tool for reducing logical expressions is the mathematics of logical expressions, introduced by George Boole in 1854 and known today as Boolean Algebra. The rules of Boolean Algebra are simple and straightforward, and can be applied to any logical expression. The resulting reduced expression can then be readily tested with a Truth Table, to verify that the reduction was valid. The rules of Boolean Algebra are:

AND Operations (·)
0·0 = 0
1·0 = 0
0·1 = 0
1·1 = 1
A·0 = 0
A·1 = A
A·A = A
A·A' = 0

OR Operations (+)
0+0 = 0
1+0 = 1
0+1 = 1
1+1 = 1
A+0 = A
A+1 = 1
A+A = A
A+A' = 1

NOT Operations (')
0' = 1
1' = 0
A'' = A

Associative Law
(A·B)·C = A·(B·C) = A·B·C
(A+B)+C = A+(B+C) = A+B+C

Distributive Law
A·(B+C) = (A·B) + (A·C)

A+(B·C) = (A+B) · (A+C)

Commutative Law
A·B = B·A
A+B = B+A


Precedence
AB = A·B
A·B + C = (A·B) + C
A + B·C = A + (B·C)

DeMorgan's Theorem
(A·B)' = A' + B'
(A+B)' = A' · B'
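Because every variable can only be 0 or 1, any of these rules can be verified by brute force over all input combinations. A sketch checking the distributive laws and DeMorgan's Theorem, using Python's & and | in place of · and +, with a small NOT helper for the overbar:

```python
from itertools import product

def NOT(x):
    return 1 - x

# DeMorgan's Theorem over all four two-variable combinations.
for a, b in product((0, 1), repeat=2):
    assert NOT(a & b) == NOT(a) | NOT(b)     # (A·B)' = A' + B'
    assert NOT(a | b) == NOT(a) & NOT(b)     # (A+B)' = A' · B'

# Both distributive laws over all eight three-variable combinations.
for a, b, c in product((0, 1), repeat=3):
    assert a & (b | c) == (a & b) | (a & c)  # A·(B+C) = (A·B) + (A·C)
    assert a | (b & c) == (a | b) & (a | c)  # A+(B·C) = (A+B) · (A+C)

print("all identities verified")
```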


The Basic RS NAND Latch
In order for a logical circuit to "remember" and retain its logical state even after the controlling input signal(s) have been removed, it is necessary for the circuit to include some form of feedback. We might start with a pair of inverters, each having its input connected to the other's output. The two outputs will always have opposite logic levels. The problem with this is that we don't have any additional inputs that we can use to change the logic states if we want. We can solve this problem by replacing the inverters with NAND or NOR gates, and using the extra input lines to control the circuit.

The circuit shown below is a basic NAND latch. The inputs are generally designated "S" and "R" for "Set" and "Reset" respectively. Because the NAND inputs must normally be logic 1 to avoid affecting the latching action, the inputs are considered to be inverted in this circuit. The outputs of any single-bit latch or memory are traditionally designated Q and Q'. In a commercial latch circuit, either or both of these may be available for use by other circuits. In any case, the circuit itself is:

For the NAND latch circuit, both inputs should normally be at a logic 1 level. Changing an input to a logic 0 level will force that output to a logic 1. The same logic 1 will also be applied to the second input of the other NAND gate, allowing that output to fall to a logic 0 level. This in turn feeds back to the second input of the original gate, forcing its output to remain at logic 1. Applying another logic 0 input to the same gate will have no further effect on this circuit. However, applying a logic 0 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the other way.

Note that it is forbidden to have both inputs at a logic 0 level at the same time. That state will force both outputs to a logic 1, overriding the feedback latching action. In this condition, whichever input goes to logic 1 first will lose control, while the other input (still at logic 0) controls the resulting state of the latch. If both inputs go to logic 1 simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.
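The feedback behavior can be simulated by iterating the two cross-coupled NAND gates until the outputs stop changing. In this sketch the inputs are active-low, as in the text: both idle at 1, and pulling S or R to 0 sets or resets the latch (the function names are illustrative):

```python
def nand(a, b):
    return 1 - (a & b)

def rs_nand_latch(s, r, q, q_bar):
    """Settle the cross-coupled NAND pair, starting from state (q, q_bar)."""
    for _ in range(4):               # the feedback settles within a few passes
        q, q_bar = nand(s, q_bar), nand(r, q)
    return q, q_bar

state = rs_nand_latch(0, 1, 0, 1)    # pulse S low: latch sets
print(state)                          # (1, 0)
state = rs_nand_latch(1, 1, *state)  # both inputs idle at 1: state is remembered
print(state)                          # (1, 0)
state = rs_nand_latch(1, 0, *state)  # pulse R low: latch resets
print(state)                          # (0, 1)
```

The middle call is the key point: with both inputs idle, the feedback alone holds the previous state.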

This circuit has quite a number of limitations, and can be improved in many ways as you'll see shortly. However, it does have a very practical application almost without changes. Any mechanical switch experiences a phenomenon called "contact bounce." Whenever you press the button or change the switch position, the physical contacts will flex a little, causing them to make and break several times before settling down. You don't notice this when turning on a light in your home, but digital circuits are fast enough that they do notice this behavior and transmit it faithfully. If you are trying to test a new digital circuit by sending it one clock pulse at a time, this will cause all sorts of headaches.

The solution is to use an SPDT (Single-Pole, Double-Throw) pushbutton or switch, as shown in the figure to the right. Normally, the switch is of the break-before-make type, so there will be some part of the switch motion when all three contacts are disconnected from each other. Now, the unconnected input is held at a logic 1 through its resistor (an electronic component that allows an electrical connection without causing a dead short), while the connected input is held at logic 0 by the direct connection through the switch. When the switch is moved to the other setting or the button is pressed, the very first contact will cause the latch to change state, but additional bounces will have no further effect. This eliminates the contact bounce and sends a single, clean digital transition to the next circuit. All of the interactive digital demonstrations behave in a debounced fashion, and would use this type of circuit if constructed physically.

One problem with the basic RS NAND latch is that the input levels need to be inverted, sitting idle at logic 1, in order for the circuit to work. It would be helpful, as well as more intuitive, if we had normal inputs which would idle at logic 0, and go to logic 1 only to control the latch. This much we can do simply by placing inverters at the inputs. However, there is another problem we need to address: How to control when the latch is allowed to change state, and when it is not. This is necessary if we have a group of latches and want to be sure they all change state (or not) at the same time. We'll see how both of these concerns can be easily addressed on the next page.

The Basic RS NOR Latch
While most of our demonstration circuits use NAND gates, the same functions can also be performed using NOR gates. A few adjustments must be made to allow for the difference in the logic function, but the logic involved is quite similar. The circuit shown below is a basic NOR latch. The inputs are generally designated "S" and "R" for "Set" and "Reset" respectively. Because the NOR inputs must normally be logic 0 to avoid overriding the latching action, the inputs are not inverted in this circuit.

For the NOR latch circuit, both inputs should normally be at a logic 0 level. Changing an input to a logic 1 level will force that output to a logic 0. The same logic 0 will also be applied to the second input of the other NOR gate, allowing that output to rise to a logic 1 level. This in turn feeds back to the second input of the original gate, forcing its output to remain at logic 0 even after the external input is removed. Applying another logic 1 input to the same gate will have no further effect on this circuit. However, applying a logic 1 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the other way.

Note that it is forbidden to have both inputs at a logic 1 level at the same time. That state will force both outputs to a logic 0, overriding the feedback latching action. In this condition, whichever input goes to logic 0 first will lose control, while the other input (still at logic 1) controls the resulting state of the latch. If both inputs go to logic 0 simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.
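If you want to experiment with this feedback behavior, it can be modeled in a few lines of Python. This is a behavioral sketch (we simply iterate the two gate equations until the cross-coupled feedback settles), not a timing-accurate circuit simulation; the function and variable names are our own.

```python
def nor(a, b):
    """2-input NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def rs_nor_latch(s, r, q, q_bar):
    """Settle the cross-coupled NOR latch for the given S and R inputs.
    Q is produced by the gate fed by R; Q' by the gate fed by S."""
    for _ in range(4):          # iterate until the feedback loop stabilizes
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

q, q_bar = 0, 1                                      # start in the Reset state
q, q_bar = rs_nor_latch(s=1, r=0, q=q, q_bar=q_bar)  # apply Set: Q goes to 1
q, q_bar = rs_nor_latch(s=0, r=0, q=q, q_bar=q_bar)  # release S: state is held
```

Running the snippet leaves q at 1 even after S returns to 0, which is exactly the latching action described above.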

One problem with the basic RS NOR latch is that the input signals actively drive their respective outputs to a logic 0, rather than to a logic 1. Thus, the S input signal is applied to the gate that produces the Q' output, while the R input signal is applied to the gate that produces the Q output. The circuit works fine, but this reversal of inputs can be confusing when you first try to deal with NOR-based circuits.

The Clocked RS NAND Latch
By adding a pair of NAND gates to the input circuits of the RS latch, we accomplish two goals: normal rather than inverted inputs, and a third input common to both gates which we can use to synchronize this circuit with others of its kind. The clocked RS NAND latch is shown below.

The clocked RS latch circuit is very similar in operation to the basic latch you examined on the previous page. The S and R inputs are normally at logic 0, and must be changed to logic 1 to change the state of the latch. However, with the third input, a new factor has been added. This input is typically designated C or CLK, because it is typically controlled by a clock circuit of some sort, which is used to synchronize several of these latch circuits with each other. The output can only change state while the CLK input is a logic 1. When CLK is a logic 0, the S and R inputs will have no effect.

The same rule about not activating both the S and R inputs simultaneously holds true: if both are logic 1 when the clock is also logic 1, the latching action is bypassed and both outputs will go to logic 1. The difference in this case is that if the CLK input drops to logic 0 first, a true race condition will exist, and you cannot tell which way the outputs will come to rest. The example circuit on this page reflects this uncertainty. For correct operation, the selected R or S input should be brought to logic 1, then the CLK input should be made logic 1 and then logic 0 again. Finally, the selected input should be returned to logic 0.

The clocked RS latch solves some of the problems of the basic RS latch circuit, and allows closer control of the latching action. However, it is by no means a complete solution. A major problem remaining is that this latch circuit could easily experience a change in S and R input levels while the CLK input is still at a logic 1 level. This allows the circuit to change state many times before the CLK input returns to logic 0. One way to minimize this problem is to keep the CLK at logic 0 most of the time, and to allow only brief changes to logic 1. However, this approach still cannot guarantee that the latch will only change state once while the clock signal is at logic 1. This signal must have a certain duration to make sure all latches have time to respond to it, and in that time, most latches can respond to multiple changes. A better way is to make sure that the latch can only change its outputs at one instant of the clock cycle. The next page will demonstrate a circuit which solves this problem handily, by changing states only on a particular transition, or edge, of the clock signal.
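The gating action of the CLK input can be sketched the same way. Again, this is a simplified behavioral model of the clocked NAND latch described above, with names of our own choosing:

```python
def nand(*inputs):
    """NAND gate of any width: output is 0 only when all inputs are 1."""
    return 0 if all(inputs) else 1

def clocked_rs_latch(s, r, clk, q, q_bar):
    """S and R are gated by CLK through two input NAND gates, which also
    invert them before they reach the cross-coupled output pair."""
    s_bar = nand(s, clk)
    r_bar = nand(r, clk)
    for _ in range(4):          # let the feedback loop settle
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, q_bar = 0, 1
q, q_bar = clocked_rs_latch(s=1, r=0, clk=1, q=q, q_bar=q_bar)  # Set works
q, q_bar = clocked_rs_latch(s=0, r=1, clk=0, q=q, q_bar=q_bar)  # R is ignored
```

With CLK at logic 0, the attempted Reset has no effect; q remains 1, just as the text describes.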

The Edge-Triggered RS Flip-Flop
To adjust the clocked RS latch for edge triggering, we must actually combine two identical clocked latch circuits, but have them operate on opposite halves of the clock signal. The resulting circuit is commonly called a flip-flop, because its output can first flip one way and then flop back the other way. The clocked RS latch is also sometimes called a flip-flop, although it is more properly referred to as a latch circuit. The two-section flip-flop is also known as a master-slave flip-flop, because the input latch operates as the master section, while the output section is slaved to the master during half of each clock cycle. The edge-triggered RS NAND flip-flop is shown below.

The edge-triggered RS flip-flop actually consists of two identical RS latch circuits, as shown above. However, the inverter connected between the two CLK inputs ensures that the two sections will be enabled during opposite half-cycles of the clock signal. This is the key to the operation of this circuit.

If we start with the CLK input at logic 0 as initially depicted above, the S and R inputs are disconnected from the input (master) latch. Therefore, any changes in the input signals cannot affect the state of the final outputs. When the CLK signal goes to logic 1, the S and R inputs are able to control the state of the input latch, just as with the single RS latch circuit you already examined. However, at the same time the inverted CLK signal applied to the output (slave) latch prevents the state of the input latch from having any effect here. Therefore, any changes in the R and S input signals are tracked by the input latch while CLK is at logic 1, but are not reflected at the Q and Q' outputs.

When CLK falls again to logic 0, the S and R inputs are again isolated from the input latch. At the same time, the inverted CLK signal now allows the current state of the input latch to reach the output latch. Therefore, the Q and Q' outputs can only change state when the CLK signal falls from a logic 1 to logic 0. This is known as the falling edge of the CLK signal; hence the designation edge-triggered flip-flop.
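The sequence just described can be captured in a compact behavioral model. The class below models only the master-follows-then-slave-copies behavior, not the individual gates; treat it as a sketch with names of our own invention:

```python
class MasterSlaveRS:
    """Master latch tracks S/R while CLK is 1; the slave copies the
    master only when CLK falls from 1 to 0 (falling-edge triggering)."""
    def __init__(self):
        self.master = 0     # state of the input (master) latch
        self.q = 0          # state of the output (slave) latch
        self.prev_clk = 0

    def step(self, s, r, clk):
        if clk == 1:                      # master enabled, slave frozen
            if s and not r:
                self.master = 1
            elif r and not s:
                self.master = 0
        if self.prev_clk == 1 and clk == 0:
            self.q = self.master          # falling edge: slave copies master
        self.prev_clk = clk
        return self.q

ff = MasterSlaveRS()
ff.step(s=1, r=0, clk=1)    # master is set, but Q has not changed yet
assert ff.q == 0
ff.step(s=0, r=0, clk=0)    # falling edge: Q finally becomes 1
assert ff.q == 1
```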

By going to a master-slave structure and making the flip-flop edge-triggered, we have made sure that we can precisely control the moment when all flip-flops will change state. We have also allowed plenty of time for the master latch to respond to the input signals, and for those input signals to change and settle following the previous change of state.

There is still one problem left to solve: the possible race condition which may occur if both the S and R inputs are at logic 1 when CLK falls from logic 1 to logic 0. In the example above, we automatically assume that the race will always end with the master latch in the logic 1 state, but this will not be certain with real components. Therefore, we need to have a way to prevent race conditions from occurring at all. That way we won't have to figure out which gate in the circuit won the race on this particular occasion. The solution is to add some additional feedback from the slave latch to the master latch. The resulting circuit is called a JK flip-flop.

The JK Flip-Flop
To prevent any possibility of a "race" condition occurring when both the S and R inputs are at logic 1 when the CLK input falls from logic 1 to logic 0, we must somehow prevent one of those inputs from having an effect on the master latch in the circuit. At the same time, we still want the flip-flop to be able to change state on each falling edge of the CLK input, if the input logic signals call for this. Therefore, which of the S and R inputs is to be disabled must depend on the current state of the slave latch outputs. If the Q output is a logic 1 (the flip-flop is in the "Set" state), the S input can't make it any more set than it already is. Therefore, we can disable the S input without disabling the flip-flop under these conditions. In the same way, if the Q output is logic 0 (the flip-flop is Reset), the R input can be disabled without causing any harm. If we can accomplish this without too much trouble, we will have solved the problem of the "race" condition.

The circuit below shows the solution. To the RS flip-flop we have added two new connections from the Q and Q' outputs back to the original input gates. Remember that a NAND gate may have any number of inputs, so this causes no trouble. To show that we have done this, we change the designations of the logic inputs and of the flip-flop itself. The inputs are now designated J (instead of S) and K (instead of R). The entire circuit is known as a JK flip-flop.

In most ways, the JK flip-flop behaves just like the RS flip-flop. The Q and Q' outputs will only change state on the falling edge of the CLK signal, and the J and K inputs will control the future output state pretty much as before. However, there are some important differences. Since one of the two logic inputs is always disabled according to the output state of the overall flip-flop, the master latch cannot change state back and forth while the CLK input is at logic 1. Instead, the enabled input can change the state of the master latch once, after which this latch will not change again. This was not true of the RS flip-flop. If both the J and K inputs are held at logic 1 and the CLK signal continues to change, the Q and Q' outputs will simply change state with each falling edge of the CLK signal. (The master latch circuit will change state with each rising edge of CLK.) We can use this characteristic to advantage in a number of ways. A flip-flop built specifically to operate this way is typically designated as a T (for Toggle) flip-flop. The lone T input is in fact the CLK input for other types of flip-flops. The JK flip-flop must be edge triggered in this manner. Any level-triggered JK latch circuit will oscillate rapidly if all three inputs are held at logic 1. This is not very useful. For the same reason, the T flip-flop must also be edge triggered. For both types, this is the only way to ensure that the flip-flop will change state only once on any given clock pulse.
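The toggle behavior is easy to demonstrate with a behavioral model that treats the JK flip-flop as a black box acting only on falling clock edges. A sketch, with names of our own:

```python
class JKFlipFlop:
    """Falling-edge-triggered JK flip-flop (behavioral model)."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def clock(self, j, k, clk):
        if self.prev_clk == 1 and clk == 0:   # act on the falling edge only
            if j and k:
                self.q ^= 1                   # J = K = 1: toggle
            elif j:
                self.q = 1                    # set
            elif k:
                self.q = 0                    # reset
        self.prev_clk = clk
        return self.q

ff = JKFlipFlop()
outputs = []
for _ in range(4):                 # four full clock pulses with J = K = 1
    ff.clock(1, 1, 1)
    outputs.append(ff.clock(1, 1, 0))
# outputs is [1, 0, 1, 0]: Q changes state once per falling edge
```

This is exactly the T (Toggle) behavior described above: with J and K held at logic 1, Q alternates on every complete clock pulse.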

Because the behavior of the JK flip-flop is completely predictable under all conditions, this is the preferred type of flip-flop for most logic circuit designs. The RS flip-flop is only used in applications where it can be guaranteed that both R and S cannot be logic 1 at the same time.

At the same time, there are some additional useful configurations of both latches and flip-flops. In the next pages, we will look first at the major configurations and note their properties. Then we will see how multiple flipflops or latches can be combined to perform useful functions and operations.

The D Latch
One very useful variation on the RS latch circuit is the Data latch, or D latch as it is generally called. As shown in the logic diagram below, the D latch is constructed by using the inverted S input as the R input signal. The single remaining input is designated "D" to distinguish its operation from other types of latches. It makes no difference that the R input signal is effectively clocked twice, since the CLK signal will either allow the signals to pass both gates or it will not. For comparison, you can review the RS NAND latch circuit if you wish.

In the D latch, when the CLK input is logic 1, the Q output will always reflect the logic level present at the D input, no matter how that changes. When the CLK input falls to logic 0, the last state of the D input is trapped and held in the latch, for use by whatever other circuits may need this signal. Because the single D input is also inverted to provide the signal to reset the latch, this latch circuit cannot experience a "race" condition caused by all inputs being at logic 1 simultaneously. Therefore the D latch circuit can be safely used in any circuit.

Although the D latch does not have to be made edge triggered for safe operation, there are some applications where an edge-triggered D flip-flop is desirable. This can be accomplished by using a D latch circuit as the master section of an RS flip-flop, as shown on this page. Both types are useful, so both are made commercially available. Except for the change in input circuitry, a D flip-flop works just like the RS flip-flop.
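The transparent-then-hold behavior of the D latch reduces to almost nothing in a behavioral sketch:

```python
class DLatch:
    """Level-sensitive D latch: Q follows D while CLK is 1 and holds
    its last value while CLK is 0."""
    def __init__(self):
        self.q = 0

    def step(self, d, clk):
        if clk == 1:
            self.q = d      # transparent: Q tracks D
        return self.q       # CLK = 0: Q simply holds

latch = DLatch()
latch.step(d=1, clk=1)      # Q follows D up to 1
latch.step(d=0, clk=1)      # Q follows D back down to 0
latch.step(d=1, clk=0)      # CLK low: the change on D is ignored
# latch.q is still 0: the last value of D before CLK fell is trapped
```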

With all of these different types of latches and flip-flops, the logic diagrams we have been using have gotten rather large, especially for the edge-triggered flip-flops. Fortunately, it really isn't necessary to follow and understand the inner workings of any of these circuits when they are used in larger applications. Instead, we use a set of very simple symbols to represent each type of latch or flip-flop in larger logical circuits. That is the subject of a separate page on Flip-Flop Symbols.

The edge-triggered D flip-flop is easily derived from its RS counterpart. The only requirement is to replace the R input with an inverted version of the S input, which thereby becomes D. This is only needed in the master latch section; the slave remains unchanged. One essential point about the D flip-flop is that when the clock input falls to logic 0 and the outputs can change state, the Q output always takes on the state of the D input at the moment of the clock edge. This was not true of the RS and JK flip-flops. The RS master section would repeatedly change states to match the input signals while the clock line is logic 1, and the Q output would reflect whichever input most recently received an active signal. The JK master section would receive and hold an input to tell it to change state, and never change that state until the next cycle of the clock. This behavior is not possible with a D flip-flop. The edge-triggered D NAND flip-flop is shown below.

Flip-Flop Symbols
Although the internal circuitry of latches and flip-flops is interesting to watch on an individual basis, placing all of those logic symbols in a diagram involving multiple flip-flops would rapidly generate so much clutter that the overall purpose of the diagram would be lost. To avoid this problem, we use the "black-box" approach. This is actually just one step further than the "black-box" approach we used in specifying logic gate symbols to represent specific clusters of electronic components — now we are using one symbol to represent a cluster of logic gates connected to perform a specific function. Some typical flip-flop symbols are shown below:

As you have no doubt noticed, the symbols above are nearly identical — only the inputs vary. This is typical of the "black-box" approach. However, there is one other variation, as shown to the right. In each of the symbols above, the clock input is marked by the small angle, rather than by the letters CLK. That little angle marker actually provides two pieces of information, rather than one. First, of course, it marks the clocking input. Second, it specifies that these are edge-triggered flip-flops. The D latch shown to the right uses a rounded marker for the clock input. This signifies that the circuit is controlled by the clock level, not the clock edge. In fact, the symbol to the right would normally be used for the D latch circuit shown separately. If we change that rounded input to a sharp angle, it would indicate an edge-triggered master-slave D flip-flop.

Any of these symbols may be modified according to their actual use within the larger circuit. For example, if only the Q output is used, it may well be the only output shown. Some flip-flops incorporate master preset or reset inputs, which bypass the clock and the master section of an edge-triggered flip-flop and force the output to an immediate known state. This is often used when a circuit comprised of many flip-flops is first powered up, so that all circuits will start in a known state. It is very seldom that a flip-flop will actually be used alone. Such circuits are far more useful when grouped together and acting in concert. There are two general ways in which flip-flops may be interconnected to perform useful functions: counters and registers. When we're done with individual flip-flops, we'll go on to counters and then look at registers.

Converting Flip-Flop Input Types

Sometimes it just happens that you need a particular type of flip-flop for a specific application, but all you have available is another type. This often happens with an application needing T flip-flops, since these are not generally available in commercial packages. Rather, it is necessary to re-wire an available type to perform as a T device. Fortunately, this is not hard. We've already seen that a JK flip-flop with its J and K inputs connected to a logic 1 will operate as a T flip-flop. Converting an RS flip-flop involves a bit more, as shown to the right. However, the simple feedback connections shown will ensure that the S and R inputs will always tell the flip-flop to change state at each clock pulse. Converting a D flip-flop to T operation is quite similar; the Q' output is connected back to the D input.

Another conversion that is required on occasion is to convert an RS flip-flop to D operation. This change eliminates the possibility of an illegal input condition, which could otherwise cause spurious results in some applications. In this case, we do need to add an inverter to supply the R input signal, as shown to the left.

A much more complicated circuit, shown to the right, is the gating structure needed to convert a D flip-flop to JK operation. This circuit implements the logical truth that D = JQ' + K'Q. This input circuit is actually used more frequently than you might think. CMOS flip-flops are typically constructed as D types because of the nature of their internal operation. Commercial CMOS JK flip-flops, such as the 4013, then add this circuit to the input in order to get JK operation. This approach eliminates the internal latching effect, or "ones catching," that occurs with the general JK master-slave flip-flop. The J and K input signals must be present at the time the clock signal falls to logic 0, in order to affect the new output state.
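Since there are only eight input combinations, the identity D = JQ' + K'Q is easy to verify exhaustively. The short Python check below confirms that this gating reproduces the standard JK next-state rule (function names are ours):

```python
def jk_next_state(j, k, q):
    """Standard JK next-state behavior."""
    if j and k:
        return 1 - q        # toggle
    if j:
        return 1            # set
    if k:
        return 0            # reset
    return q                # hold

def d_input(j, k, q):
    """The gating structure described above: D = J*Q' + K'*Q."""
    return int((j and not q) or (not k and q))

# Exhaustive check over all eight J, K, Q combinations.
for j in (0, 1):
    for k in (0, 1):
        for q in (0, 1):
            assert d_input(j, k, q) == jk_next_state(j, k, q)
```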

D Flip-Flop Using NOR Latches
This circuit utilizes three interconnected RS latch circuits, as shown. This example uses NOR gates, but NAND gates can easily be used to perform the same function. The two input latch circuits essentially store the D and D' signals separately, and apply those stored signals to the output latch. While the CLK input is a logic 0, changes to the D input can only affect the state of the lower gate of the lower input latch circuit. The other gates are locked into their output states by their other interconnections.

When CLK goes to logic 1, it inherently forces the outputs of the two middle input gates to logic 0. This effectively isolates the output latch from any input changes. Note that at this time, one or the other of the two input latches will be in an illegal state, depending on the state of the D input. This illegal state overrides the latching action of that input circuit. Now, when CLK falls to logic 0, whichever input latch was in an illegal state will abruptly resume its latching action, and will at once control the state of the output latch. In this manner, the circuit is still an edge-triggered flip-flop that will take on the state of the D input at the moment of the falling clock edge.

CMOS Flip-Flop Construction
CMOS technology allows a very different approach to flip-flop design and construction. Instead of using logic gates to connect the clock signal to the master and slave sections of the flip-flop, a CMOS flip-flop uses transmission gates to control the data connections. (See the CMOS gate electronics page for a closer look at the transmission gate itself.) The result is that a controllable flip-flop can be built with only inverters and transmission gates — a very small and simple structure for an IC. The basic CMOS D flip-flop is shown below.

A Basic Digital Counter
One common requirement in digital circuits is counting, both forward and backward. Digital clocks and watches are everywhere, timers are found in a range of appliances from microwave ovens to VCRs, and counters for other reasons are found in everything from automobiles to test equipment. Although we will see many variations on the basic counter, they are all fundamentally very similar. The demonstration below shows the most basic kind of binary counting circuit.

In the 4-bit counter to the right, we are using edge-triggered master-slave flip-flops similar to those in the Sequential portion of these pages. The output of each flip-flop changes state on the falling edge (1-to-0 transition) of the T input. The count held by this counter is read in the reverse order from the order in which the flip-flops are triggered. Thus, output D is the high order of the count, while output A is the low order. The binary count held by the counter is then DCBA, and runs from 0000 (decimal 0) to 1111 (decimal 15). The next clock pulse will cause the counter to try to increment to 10000 (decimal 16). However, that 1 bit is not held by any flip-flop and is therefore lost. As a result, the counter actually reverts to 0000, and the count begins again.

In future pages on counters, we will use a different input scheme, as shown to the left. Instead of changing the state of the input clock with each click, you will send one complete clock pulse to the counter when you click the input button. The button image will reflect the state of the clock pulse, and the counter image will be updated at the end of the pulse. For a clear view without taking excessive time, each clock pulse has a duration or pulse width of 300 ms (0.3 second). The demonstration system will ignore any clicks that occur within the duration of the pulse.
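The counter just described can be simulated behaviorally: a chain of four toggle flip-flops, each clocked by the Q output of the previous stage. This is a sketch of the behavior, not of the gate-level wiring:

```python
class TFlipFlop:
    """Toggle flip-flop that changes state on a 1-to-0 transition
    of its T input."""
    def __init__(self):
        self.q = 0
        self.prev = 0

    def clock(self, t):
        if self.prev == 1 and t == 0:   # falling edge on T
            self.q ^= 1
        self.prev = t

stages = [TFlipFlop() for _ in range(4)]    # A (low order) through D

def pulse(ffs):
    """Send one complete clock pulse to stage A, letting each change
    ripple down the chain through the Q outputs."""
    for level in (1, 0):
        ffs[0].clock(level)
        for prev, nxt in zip(ffs, ffs[1:]):
            nxt.clock(prev.q)           # T of each stage is the previous Q

for _ in range(5):
    pulse(stages)
count = sum(ff.q << i for i, ff in enumerate(stages))
# count is 5 after five input pulses (binary DCBA = 0101)
```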

A major problem with the counters shown on this page is that the individual flip-flops do not all change state at the same time. Rather, each flip-flop is used to trigger the next one in the series. Thus, in switching from all 1s (count = 15) to all 0s (count wraps back to 0), we don't see a smooth transition. Instead, output A falls first, changing the apparent count to 14. This triggers output B to fall, changing the apparent count to 12. This in turn triggers output C, which leaves a count of 8 while triggering output D to fall. This last action finally leaves us with the correct output count of zero. We say that the change of state "ripples" through the counter from one flip-flop to the next. Therefore, this circuit is known as a "ripple counter."

This causes no problem if the output is only to be read by human eyes; the ripple effect is too fast for us to see it. However, if the count is to be used as a selector by other digital circuits (such as a multiplexer or demultiplexer), the ripple effect can easily allow signals to get mixed together in an undesirable fashion. To prevent this, we need to devise a method of causing all of the flip-flops to change state at the same moment. That would be known as a "synchronous counter" because the flip-flops would be synchronized to operate in unison. That is the subject of the next page in this series.

A Synchronous Binary Counter
In our initial discussion on counters (A Basic Digital Counter), we noted the need to have all flip-flops in a counter operate in unison with each other, so that all bits in the output count would change state at the same time. To accomplish this, we need to apply the same clock pulse to all flip-flops. However, we do not want all flip-flops to change state with every clock pulse. Therefore, we'll need to add some controlling gates to determine when each flip-flop is allowed to change state, and when it is not. This requirement denies us the use of T flip-flops, but does require that we still use edge-triggered circuits. We can use either RS or JK flip-flops for this; we'll use JK flip-flops for the demonstrations on this page.

To determine the gates required at each flip-flop input, let's start by drawing up a truth table for all states of the counter:

    Count   D C B A
      0     0 0 0 0
      1     0 0 0 1
      2     0 0 1 0
      3     0 0 1 1
      4     0 1 0 0
      5     0 1 0 1
      6     0 1 1 0
      7     0 1 1 1
      8     1 0 0 0
      9     1 0 0 1
     10     1 0 1 0
     11     1 0 1 1
     12     1 1 0 0
     13     1 1 0 1
     14     1 1 1 0
     15     1 1 1 1

Looking first at output A, we note that it must change state with every input clock pulse. Therefore, we could use a T flip-flop here if we wanted to. We won't do so, just to make all of our flip-flops the same. But even with JK flip-flops, all we need to do here is to connect both the J and K inputs of this flip-flop to logic 1 in order to get the correct activity.

Flip-flop B is a bit more complicated. This output must change state only on every other input clock pulse. Looking at the truth table again, output B must be ready to change states whenever output A is a logic 1, but not when A is a logic 0. If we recall the behavior of the JK flip-flop, we can see that if we connect output A to the J and K inputs of flip-flop B, we will see output B behaving correctly.

Continuing this line of reasoning, output C may change state only when both A and B are logic 1. We can't use only output B as the control for flip-flop C; that would allow C to change state when the counter is in state 2, causing it to switch directly from a count of 2 to a count of 7, and again from a count of 10 to a count of 15.
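The gating derived above can be checked with a quick behavioral simulation. Note that the derivation stops at flip-flop C; the gating for flip-flop D below (toggle only when A, B, and C are all 1) is our extrapolation of the same pattern, not something taken directly from the discussion:

```python
class JK:
    """Falling-edge-triggered JK flip-flop (behavioral)."""
    def __init__(self):
        self.q = 0

    def edge(self, j, k):
        """Apply one common falling clock edge with the given J/K levels."""
        if j and k:
            self.q ^= 1
        elif j:
            self.q = 1
        elif k:
            self.q = 0

a, b, c, d = JK(), JK(), JK(), JK()
counts = []
for _ in range(16):
    # Sample every gate output *before* the shared clock edge, then
    # clock all four flip-flops at the same moment.
    jb = a.q                      # B toggles only when A = 1
    jc = a.q and b.q              # C toggles only when A and B are 1
    jd = a.q and b.q and c.q      # D: same pattern, one stage further
    a.edge(1, 1)                  # A toggles on every pulse
    b.edge(jb, jb)
    c.edge(jc, jc)
    d.edge(jd, jd)
    counts.append(d.q * 8 + c.q * 4 + b.q * 2 + a.q)
# counts runs 1, 2, 3, ..., 15 and then wraps to 0
```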






When we started our look into counters, we noted a lot of applications involving numeric displays: clocks, ovens, microwave ovens, VCRs, etc. These applications require a decimal count in most cases, and a count from 0 to 5 for some digits in a clock display. Can we use a method of gating, such as we used above in the synchronous binary counter, to shorten the counting sequence to the appropriate extent? Obviously there is a way, since digital clocks and watches do exist and do work. Starting on the next page, we'll see how.

Decimal and Shorter Counts
To create a decimal counter, we need to find a way to cut the counting sequence short. The truth table below shows the actual counting sequence we need:

    Count   D C B A
      0     0 0 0 0
      1     0 0 0 1
      2     0 0 1 0
      3     0 0 1 1
      4     0 1 0 0
      5     0 1 0 1
      6     0 1 1 0
      7     0 1 1 1
      8     1 0 0 0
      9     1 0 0 1

Note that the counting sequence is exactly the same as for the binary counter we saw on the previous page, up through a count of 9. At that point, where the binary counter would continue on to a count of 10, the decimal counter must reset itself to a count of 0.

In this sequence, flip-flops A and C are no problem. Their next states will both be logic 0 whether the next count is 10 or 0. However, flip-flop B would normally switch from logic 0 to logic 1, and must be prevented from doing so. At the same time, flip-flop D, which is at logic 1, must be made to switch back to logic 0. Since flip-flop D is at logic 1 for only two counts, and only flip-flop A will change state going from count 8 to count 9 in any case, perhaps we can use the D and D' outputs, with gates, to force the desired change in sequence. This is in fact the case, as shown in the demonstration below. Note that we have applied different signals to the J and K inputs of flip-flop D. This is perfectly acceptable, and allows us to reset this flip-flop under the control of a simple two-input AND gate.
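This shortened sequence can also be verified behaviorally. The exact gating used below (J_B = A·D', K_B = A; J_D = A·B·C, K_D = A) is one arrangement that produces the required count; treat those particular connections as an illustrative assumption rather than a transcription of the demonstration circuit:

```python
class JK:
    """Falling-edge-triggered JK flip-flop (behavioral)."""
    def __init__(self):
        self.q = 0

    def edge(self, j, k):
        if j and k:
            self.q ^= 1
        elif j:
            self.q = 1
        elif k:
            self.q = 0

a, b, c, d = JK(), JK(), JK(), JK()
seq = []
for _ in range(10):
    # Sample all gating before the common falling edge.
    jb, kb = a.q and not d.q, a.q       # B is blocked from setting when D = 1
    jc = a.q and b.q
    jd, kd = a.q and b.q and c.q, a.q   # D sets at 7 -> 8, resets at 9 -> 0
    a.edge(1, 1)
    b.edge(jb, kb)
    c.edge(jc, jc)
    d.edge(jd, kd)
    seq.append(d.q * 8 + c.q * 4 + b.q * 2 + a.q)
# seq is 1, 2, ..., 9, 0: a decade counter
```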






The base-6 counter (counting from 0-5) is just a shorter version of the decimal counter. Flip-flop D is no longer needed, but the logic of controlling the count is not really different. The demonstration to the right shows a practical counter for the 10s of minutes or seconds in a digital clock.

Now that we've seen counting sequences other than binary, new questions arise: Can we generate other counting sequences for special purposes? And are there other uses for counting circuits than numeric counts? We'll explore these questions on the next page.

Frequency Dividers

If we apply a fixed-frequency pulse train to a counter, rather than individual pulses coming at random intervals, we begin to notice some interesting characteristics, and some useful relationships between the input clock signal and the output signals. Consider a single flip-flop with a continuous succession of clock pulses at a fixed frequency, such as the one shown to the right. We note three useful facts about the output signals seen at Q and Q':

1. They are exactly inverted from each other.
2. They are perfect square waves (50% duty cycle).
3. They have a frequency just half that of the clock pulse train.
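The second and third facts are easy to confirm numerically. Below, a single toggle flip-flop is driven by a square-wave clock, and the Q output is sampled once per clock half-cycle:

```python
q, prev = 0, 0
q_wave = []
for _ in range(8):                    # eight full clock cycles
    for level in (1, 0):              # each cycle: high half, then low half
        if prev == 1 and level == 0:  # Q toggles on each falling edge
            q ^= 1
        prev = level
        q_wave.append(q)

# Q repeats every four samples, i.e. every two clock cycles,
# so its frequency is half the clock frequency...
assert q_wave == [0, 1, 1, 0] * 4
# ...and it spends exactly half its time at logic 1 (50% duty cycle).
assert sum(q_wave) == len(q_wave) // 2
```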

The duty cycle of any rectangular waveform refers to the percentage of the full cycle that the signal remains at logic 1. If the signal spends half its time at logic 1 and the other half at logic 0, we have a waveform with a 50% duty cycle. This describes a perfect, symmetrical square wave.

Frequency division by an odd number is also possible. The circuit to the left is a demonstration of a divide-by-3 counter. No gates are required to control the sequence if JK flip-flops are used; feeding the output signals back to the appropriate inputs is sufficient. Of course, it is not possible to get a symmetrical (50% duty cycle) square wave with this circuit. The A output is at logic 1 for two clock pulses out of three; the B output is at logic 1 for one clock pulse out of three. Thus, duty cycles of 1/3 (33.333%) and 2/3 (66.667%) are available.

This rendition of a divide-by-5 counter actually follows the normal decimal (or binary) count from zero through four. The primary control feature is the feedback from the C' output to flip-flop A's J input. This feedback prevents flip-flop A from switching from logic 0 to logic 1 in an effort to go from a count of four to a count of five. At the same time, the C output is applied to flip-flop C's K input to force flip-flop C to reset on the next clock pulse. This particular arrangement is often combined with a single flip-flop in an IC package. The combination can then be used either as a normal decimal counter or as a divide-by-10 counter with a true square-wave output.

If it is not necessary to maintain a standard binary counting sequence, we can often interconnect the flip-flops so as to eliminate the need for any extra gates, as shown to the left. Note that the K inputs to both flip-flops A and B are connected to logic 1. As a result, outputs A and B will remain at logic 1 for only one clock pulse at a time, and will then reset to logic 0. Output C will toggle after B goes to logic 1. Output C has a 40% duty cycle. Outputs A and B produce two output pulses for each pulse from C, but not at equal intervals. The counting sequence is 0, 1, 2, 5, 6, 0, etc.

This counter circuit actually has a flaw as shown: if it powers up in state 4 (A = 0, B = 0, C = 1), it will remain in that state and be unable to change at all. To correct this, we can disconnect C's K input from output B, and connect it to output A' instead. Now the first clock pulse will force the circuit to state 0 (000), from which the count will proceed normally. This change will not affect the normal counting sequence, because a logic 1 at the K input cannot prevent the flip-flop from changing to a logic 1, and would force C back to a logic 0 at the same time it would change anyway.

Other counting sequences are also possible, of course. If a need exists to have two or more signals in a particular frequency relationship with each other, some extension or variation on the circuits shown here can be designed to supply the need.
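The divide-by-3 behavior can be checked with a small model. The wiring assumed below (J_A = B', K_A = B; J_B = A, K_B = 1) reproduces the duty cycles quoted above, but treat the specific connections as an assumption rather than a reading of the schematic:

```python
class JK:
    """Falling-edge-triggered JK flip-flop (behavioral)."""
    def __init__(self):
        self.q = 0

    def edge(self, j, k):
        if j and k:
            self.q ^= 1
        elif j:
            self.q = 1
        elif k:
            self.q = 0

a, b = JK(), JK()
states = []
for _ in range(6):
    ja, ka = not b.q, b.q     # feedback sampled before the clock edge
    jb = a.q
    a.edge(ja, ka)
    b.edge(jb, 1)
    states.append((a.q, b.q))
# The cycle is (1,0) -> (1,1) -> (0,0): A is high for two pulses of
# every three, B for one of every three.
assert states == [(1, 0), (1, 1), (0, 0)] * 2
```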

Counting in Reverse
Now that we've seen normal counting, let's see how we can count backwards. Countdowns are required in a wide range of applications, including everyday tasks. Remaining cooking time for either a conventional oven or a microwave oven is commonly displayed this way, as distinguished from the time display that shows when the oven isn't doing something else. Here we will modify the standard ripple counter to make it count backwards instead.

In the 4-bit counter to the right, we are still using edge-triggered master-slave flip-flops similar to those in the Sequential portion of these pages. The output of each flip-flop changes state on the falling edge (1-to-0 transition) of the T input. However, note that in this case each T input is triggered by the Q' output of the prior flip-flop, rather than by the Q output. As a result, each flip-flop will change state when the prior one changes from 0 to 1 at its Q output, rather than when it changes from 1 to 0. Because of this, the first pulse will cause the counter to change state from 0000 to 1111.

Since this circuit counts downwards instead of upwards, we are left with a possible question: if the initial state of all zeroes represents a count of zero, what is the value represented by the next state of all ones? Is it 15 this time? Or would it be more accurate to call this state -1 for this circuit?

This in turn leads us to an essential question: how can we deal with negative numbers in binary notation? We must be able to do it, since negative numbers have meaning and must be dealt with mathematically. Since negative numbers fall in the province of mathematical computation, we will deal with them in their own page, on Negative Numbers and Binary Subtraction.
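The ripple-down action described above can be modeled behaviorally. The loop below stands in for the ripple: each stage toggles only when the previous stage's Q output rises (that is, when its Q' output falls).

```python
# Behavioral model of the 4-bit ripple-down counter described above.
def clock_pulse(bits):
    """Apply one falling clock edge to stage A and let the toggles ripple."""
    i = 0
    while i < len(bits):
        bits[i] ^= 1
        if bits[i] == 1:      # this Q rose 0 -> 1, so its Q' fell: next stage toggles
            i += 1
        else:
            break             # Q fell 1 -> 0: the ripple stops here

bits = [0, 0, 0, 0]           # [A, B, C, D]; A is the least significant bit
values = []
for _ in range(5):
    clock_pulse(bits)
    values.append(sum(b << i for i, b in enumerate(bits)))
print(values)                 # [15, 14, 13, 12, 11] -- the count runs backwards
```

The very first pulse takes the counter from 0000 to 1111, just as the text describes, and each following pulse decrements the count by one.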

The Johnson Counter
In some cases, we want a counter that provides individual digit outputs rather than a binary or BCD output. Of course, we can do this by adding a decoder circuit to the binary counter. However, in many cases it is much simpler to use a different counter structure that will permit much simpler decoding of individual digit outputs.

For example, consider the counting sequence to the right. It actually resembles the behavior of a shift register more than a counter, but that need not be a problem. Indeed, we can easily use a shift register to implement such a counter. In addition, we can notice that each legal count may be defined by the location of the last flip-flop to change states, and which way it changed state. This can be accomplished with a simple two-input AND or NOR gate monitoring the output states of two adjacent flip-flops. In this way, we can use ten simple 2-input gates to provide ten decoded outputs for digits 0-9.

Count   A B C D E
  0     0 0 0 0 0
  1     1 0 0 0 0
  2     1 1 0 0 0
  3     1 1 1 0 0
  4     1 1 1 1 0
  5     1 1 1 1 1
  6     0 1 1 1 1
  7     0 0 1 1 1
  8     0 0 0 1 1
  9     0 0 0 0 1

This is known as the Johnson counting sequence, and counters that implement this approach are called Johnson counters. We could also make an octal counter by using four flip-flops in this configuration. In fact, this is done commercially: the CMOS ICs 4017 and 4022 are counters that implement this technique easily and cheaply.

There is one caveat that must be considered here: the 5-stage circuit uses five flip-flops, and therefore has 32 possible binary states, yet we only use ten of them. The 4-stage counter uses only eight of 16 possible states. We must include circuitry that will filter out the illegal states and force this circuit to go towards the correct counting sequence, even if it finds itself in an illegal mode when first powered up. This is not difficult, and the demonstration circuit below includes the necessary gating structure.

The circuit below is logically equivalent to the CMOS 4017 decimal counter, although slightly simplified from the commercial unit. The demonstration initially implements only the legitimate counting sequence of the Johnson counter. To allow for all possible illegal combinations and show how they get straightened out, we would need 66 separate images for the overlays, and each image is about 6.5K bytes in size. That's a bit much to ask of many users. However, you can see the count correction gates operating at the bottom of the counter, and see how they work.

The D input to flip-flop C is not directly driven from the B output. Rather, A' and C' are ANDed together, and that combination is NORed with B'. As a result, improper bits reaching flip-flop B get blocked, and flip-flop C can only take on the correct state to reinstate the correct shifting sequence.

To see this in action, click on any of the individual flip-flops in the figure. This will force a load of all remaining images and change the state of the selected flip-flop without applying a clock pulse. Then you can watch the behavior of the counter as it removes improper counting sequences. Remember that the download of the additional images may take some time, so please be patient.

You can always identify an illegal counting sequence because more than one output will be high (logic 1). Since each output is enabled by a transition from 0 to 1 or from 1 to 0 in a specific position in the counter, more than one transition will produce more than one output, which is illegal in this context. Also note that in order to repeatedly invert the shifting bits as they start from flip-flop A, the E' output is fed back to the A flip-flop's D input.
This shift does not constitute a second transition here; only when all bits are the same does this appear as a transition. Of course, these must necessarily be edge-triggered flip-flops clocked simultaneously. The Reset inputs are asynchronous and override the clocking signal. In addition, the CMOS ICs that serve as the model for this demonstration change state on the rising edge of the clock, so this model does the same.

The COUT signal is the Carry Out, which is a symmetrical square wave at one-tenth of the incoming Clock signal frequency. It is quite suitable for clocking a second counter of the same type, to form a multiple-digit decimal counter.
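The Johnson shifting sequence itself, with E' fed back to flip-flop A's D input, can be sketched as a small behavioral model. The self-correcting gates are omitted here for clarity, so this model assumes a legal starting state.

```python
# Behavioral sketch of the 5-stage Johnson counter (correction gating omitted).
def clock(state):
    a, b, c, d, e = state
    return (1 - e, a, b, c, d)    # E' drives A's D input; the rest simply shift

state = (0, 0, 0, 0, 0)           # outputs A, B, C, D, E at count 0
seq = []
for _ in range(10):
    seq.append(state)
    state = clock(state)

for count, s in enumerate(seq):
    print(count, s)

# Decoding a count needs only one 2-input gate per digit.
# For example, count 0 is decoded as A' AND E':
decode_0 = [(1 - s[0]) * (1 - s[4]) for s in seq]
print(decode_0)                   # high only at count 0
```

After ten clock pulses the register returns to its starting state, so the sequence repeats with a period of ten, and only one bit changes between adjacent counts.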

Serial-to-Parallel Shift Register
The term register can be used in a variety of specific applications, but in all cases it refers to a group of flip-flops operating as a coherent unit to hold data. This is different from a counter, which is a group of flip-flops operating to generate new data by tabulating it. In this context, a counter can be viewed as a specialized kind of register, which counts events and thereby generates data, rather than just holding the data or changing the way it is handled. More commonly, however, counters are treated separately from registers. The two are then handled as separate concepts which work together in many applications, and which have some features in common.

The demonstration circuit below is known as a shift register because data is shifted through it, from flip-flop to flip-flop. If you apply one byte (8 bits) of data to the initial data input one bit at a time, and apply one clock pulse to the circuit after setting each bit of data, you will find the entire byte present at the flip-flop outputs in parallel format. Therefore, this circuit is known as a serial-in, parallel-out shift register. It is also sometimes known as a shift-in register, or as a serial-to-parallel shift register. By standardized convention, the least significant bit (LSB) of the byte is shifted in first.
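The shift-in behavior just described can be sketched behaviorally. The class name below is chosen only for illustration; the model simply copies each stage's output into the next stage on every clock pulse.

```python
# Behavioral sketch of an 8-bit serial-in, parallel-out shift register.
class ShiftInRegister:
    def __init__(self, width=8):
        self.q = [0] * width          # flip-flop outputs; q[0] is the input stage

    def clock(self, serial_in):
        # On each clock pulse every stage copies the previous stage's output.
        self.q = [serial_in] + self.q[:-1]

reg = ShiftInRegister()
byte = 0b10110001
for i in range(8):                    # shift in LSB first, per the convention above
    reg.clock((byte >> i) & 1)

# After eight clock pulses the whole byte is available in parallel:
# q[0] now holds the MSB (last bit in), q[7] the LSB (first bit in).
value = sum(bit << (7 - i) for i, bit in enumerate(reg.q))
print(f"{value:08b}")                 # 10110001
```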

As you would no doubt expect, the counterpart to the shift register above is the parallel-in, serial-out shift register, sometimes called a shift-out register. That circuit is a bit more complex than the shift-in register shown above, but generally operates in a very similar fashion, as we'll see on the next page.

Parallel-to-Serial Shift Register
Where there is a need for serial-to-parallel conversion, there is also a need for parallel-to-serial conversion. The parallel-in, serial-out register (or parallel-to-serial shift register, or shift-out register), however, is a bit more complex than its counterpart. Since each flip-flop in the register must be able to accept data from either a serial or a parallel source, a small two-input multiplexer is required in front of each input. An extra input line selects between serial and parallel input signals, and as usual the flip-flops are loaded in accordance with a common clock signal.

The shift-out demonstration circuit below is limited to four bits so it can fit horizontally on a reasonable page. This is also a practical size for commercial ICs: a 4-bit shift register with parallel and serial inputs and outputs will fit nicely into a 14-pin DIP IC.

In addition, this demonstration circuit introduces a new input button: a mode control. The button labelled "S" indicates that the shift-out register is currently in serial mode. Thus, input signals present at the serial input just above the "S" button will be shifted into the register one by one with each clock pulse. If you click on the "S" button, it will change state as expected, but will also change to a "P" to indicate that the register now operates in parallel mode. This enables you to load the entire register at once from the parallel inputs just below the multiplexers. Thus, we can have a parallel input and a serial output.

The inclusion of a serial input makes it possible to cascade multiple circuits of this type in order to increase the number of bits in the total register. This is common practice in real-world circuits. Because this circuit has both parallel and serial inputs and outputs, it can serve as either a shift-in register or a shift-out register. This capability can have advantages in many cases. The least significant bit (LSB) is always available first at the serial output.
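As a rough behavioral model of the shift-out register, the mode flag below stands in for the S/P button, and the names are illustrative. Parallel mode loads all four multiplexer-fed stages at once; serial mode shifts the register one place per clock.

```python
# Behavioral sketch of a 4-bit parallel-in, serial-out register.
class ShiftOutRegister:
    def __init__(self, width=4):
        self.q = [0] * width          # q[-1] drives the serial output

    def clock(self, mode, serial_in=0, parallel_in=None):
        if mode == 'P':               # parallel load through the multiplexers
            self.q = list(parallel_in)
        else:                         # 'S': serial shift toward the output end
            self.q = [serial_in] + self.q[:-1]

reg = ShiftOutRegister()
reg.clock('P', parallel_in=[1, 0, 1, 1])   # load 0b1011; LSB nearest the output
out = []
for _ in range(4):
    out.append(reg.q[-1])             # read the serial output...
    reg.clock('S')                    # ...then shift to bring up the next bit
print(out)                            # [1, 1, 0, 1]: the LSB emerges first
```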

555 Application: Pulse Sequencer
One requirement in certain digital circuits is for the generation of a series of pulses in time sequence, but on different lines. The pulse widths may or may not be the same, but they must occur one after the other, and therefore cannot come from the same source. Sometimes pulses are required to overlap, or there must be a set delay following the end of one pulse before another pulse begins. Or one pulse must begin a set time after another begins. The possible variations in timing requirements are almost endless, and many different approaches have been used to provide pulses with the necessary timing relationships. One inexpensive method that is perfectly satisfactory in many applications is to use interconnected 555 timers to generate the necessary timing intervals.

In the circuit shown above, we see three 555 timers, all configured in monostable mode. Each one, from left to right, triggers the next at the end of its timing interval. The resulting pulse timing is shown in the timing diagram to the right. The sequence starts with the falling edge of the incoming trigger pulse. That edge triggers timer A, causing output A to go high. At this point, the incoming trigger pulse can either stay low or go high; it is no longer important. Output A will remain high for the duration of its timing interval, and then fall back to its low state. At this time, it triggers timer B. Output B therefore goes high as output A falls. The same thing happens again at the end of the B timing interval; output B falls and triggers timer C. At the end of the C timing interval, the sequence is over and all timers are quiescent, awaiting the arrival of the next triggering signal.
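The three chained intervals can be estimated with the standard 555 monostable approximation, T = 1.1 × R × C. The component values below are hypothetical, chosen only to show how the intervals add up end to end.

```python
# Pulse width of a 555 timer in monostable mode: T = 1.1 * R * C (approx.).
def pulse_width(r_ohms, c_farads):
    return 1.1 * r_ohms * c_farads

# Hypothetical R and C values for the three daisy-chained timers A, B, C.
stages = [("A", 100e3, 10e-6), ("B", 47e3, 10e-6), ("C", 220e3, 10e-6)]

t = 0.0
edges = {}
for name, r, c in stages:
    w = pulse_width(r, c)
    edges[name] = (t, t + w)          # each output's falling edge triggers the next
    print(f"Output {name}: high from {t:.3f} s to {t + w:.3f} s")
    t += w
```

With these values, output A is high for 1.1 s, B for about 0.52 s, and C for 2.42 s, one after the other, just as the timing diagram shows.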

This time, the circuit is wired so that the incoming trigger signal is applied to both timers A and B. However, B's timing interval is very short, so it triggers timer C while the A output is still high. Depending on the designed timing intervals, C can easily be triggered and finish its timing interval while A is still active. Or, C can remain active after A falls back to its quiescent state. Any number of 555 timers can be sequentially or jointly triggered with this kind of arrangement, and each timer still has its own individually-controlled timing interval. The possible combinations are endless, and independent pulses can overlap or not according to the needs of the application.

The initial triggering signal can come from any source. It can even be from another 555 timer operating in astable mode; in that case, this kind of circuit is self-starting and will operate continuously. It is also possible to trigger this sort of circuit manually, using a momentary-contact pushbutton to provide the triggering pulse. Or it could be some sort of sensor device, triggering the circuit upon recognition of some external condition. Thus, the circuit can be triggered by any kind of event, just as it can be used to control any kind of circuit.

There is one drawback to using daisy-chained 555 timers in this fashion. Once a 555 timer in monostable mode has been triggered, it cannot be retriggered and the timing interval cannot be changed (at least without the addition of external circuitry to accomplish that). Therefore, if a second trigger pulse arrives while the input timer(s) are still active, it will be ignored. A worse possibility exists for the second example above: if the B interval has been completed but the A interval has not when a new trigger pulse is received, timer B will be triggered but timer A will ignore it. If the timing relationship between the A and C pulses is critical, this could cause problems. Nevertheless, this type of circuit can be highly versatile and useful, provided appropriate precautions are taken to ensure that such errors will not occur.
