14
Testability Concepts and DFT

Nick Kanopoulos
Atmel, Multimedia and Communications

14.1 Introduction: Basic Concepts
14.2 Design for Testability

14.1 Introduction: Basic Concepts

Physical faults or design errors may alter the behavior of a digital circuit. Design errors are tackled by
redesigning the circuit, whereas physical faults can be reduced by determining appropriate operating
conditions.[1,2]

There are many sources of physical faults: improper interconnections between parts, improper assembly,
missing parts, and erroneous parts may occur while the circuit is being manufactured. After manufacturing,
the circuit may fail due to excessive heat dissipation or for mechanical reasons associated with corrosion
and, in general, bad maintenance. Short-circuit faults are those caused by unintended connections between
signal lines that should be separate. Conversely, disconnections on lines that should be connected cause
open-circuit faults.[1,3]

Failures in the operation of digital circuits are addressed in the testing process, which is abstracted in
Fig. 14.1. Typically, the testing process determines the presence of faults. The circuit being tested is often
called the circuit under test (CUT). Errors are detected by applying test patterns to the inputs of the CUT
and analyzing the responses on its outputs. A test pattern is typically a vector of 0s and 1s, where each bit
corresponds to an input of the CUT. A test pattern is generated by a test pattern generator (TPG) tool.
The responses are analyzed using an output response verification (ORV) tool. The ORV tool is a
comparator circuit.
The testing process is performed periodically during the circuit's life span. It is first done after fabrication,
while the CUT is still on the wafer. Testing is repeated when the die is removed from the wafer, and later
when it operates as part of a printed circuit board (PCB).
Testing is done either at the transistor level or at the logical level. We consider here logical-level testing,
in which TPG and ORV operate on binary signal values. The components are gates and flip-flops (or
latches). We do not consider parametric testing, which analyzes waveforms at the transistor level. A circuit
C = (V, E) is modeled as a collection V of components and a collection E of lines. Figure 14.2 depicts a
combinational circuit at the logic level. The components represent gates.
The integer value on each circuit line indicates its label. The circuit inputs are lines 1, 2, 3, 6, 7, 23, and 24.
The test patterns may be precomputed by a pattern generator program, often referred to as an automatic
test pattern generator (ATPG). The goal of an ATPG program is to quickly compute a small set of test
patterns that detects all faults. The design of ATPG tools is a difficult task. Once the patterns are generated,
they are stored in the memory of an automatic test equipment (ATE) mechanism that applies the test
patterns and analyzes the responses using the ORV tool. In order for ATE tools to test PCBs or complex
digital systems, they must be controlled by computer programs.


ATE equipment is often very expensive. Thus, some circuits are designed so that they can test themselves.
This concept is called built-in self-testing (BIST). In BIST, the TPG and ORV tools are on-chip, and the
concern is twofold: accuracy and hardware cost. Chapter 15 reviews popular ATPG tools and BIST
mechanisms. Furthermore, the complexity of current application-specific integrated circuits (ASICs) has
led to the development of sophisticated CAD tools that automate the design of BIST mechanisms. Such
tools are presented in Chapter 16.
The testing process requires fault models that precisely define the behavior of the (logic-level) circuit.
The standard model for logical-level testing is the stuck-at fault model. This model associates two types
of faults with each line l of the circuit: the stuck-at 0 fault and the stuck-at 1 fault. The stuck-at 0 fault
assumes that line l is permanently stuck at the logic value 0. Similarly, the stuck-at 1 fault assumes it is
stuck at 1. The single stuck-at fault model assumes that only one such fault is present at a time. Under
the single stuck-at fault model, a circuit with L lines can have at most 2L faults. Although the stuck-at
fault model appears to be simplistic, it has been shown to be very effective, and a set of patterns that
detects all single stuck-at faults covers most (physical) faults as well.
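
To make the model concrete, the following Python sketch (the netlist format, the circuit, and the
helper names are illustrative assumptions, not taken from this chapter) enumerates the 2L single
stuck-at faults of a toy combinational netlist and simulates the circuit with one fault injected:

# Hypothetical gate-level netlist: each gate maps its output line to
# (function, input lines). The circuit itself is an illustrative example.
GATES = {
    "e": ("AND", ("a", "b")),
    "f": ("NOT", ("c",)),
    "g": ("OR", ("e", "f")),   # g is the primary output
}
INPUTS = ("a", "b", "c")
LINES = INPUTS + tuple(GATES)  # every signal line of the circuit

def evaluate(pattern, fault=None):
    """Simulate the circuit; fault is (line, stuck_value) or None."""
    values = dict(zip(INPUTS, pattern))
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]          # fault on a primary input
    for out, (op, ins) in GATES.items():     # gates listed in topological order
        x = [values[i] for i in ins]
        v = {"AND": all(x), "OR": any(x), "NOT": not x[0]}[op]
        values[out] = fault[1] if fault and fault[0] == out else int(v)
    return values["g"]

# Under the single stuck-at model: two faults per line, 2L in total.
FAULTS = [(line, v) for line in LINES for v in (0, 1)]
print(len(FAULTS), "single stuck-at faults")  # 2 * 6 = 12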
However, the stuck-at fault model is of limited use for faults associated with delays in the operation of
the CUT. Such faults are called delay faults. Although it has been shown that testing for delay faults can
be theoretically reduced to testing for stuck-at faults in an auxiliary circuit, the size of the latter circuit
is prohibitively large. Instead, an alternative fault model, the path delay fault model, is applied successfully.
Discussion of the path delay fault model is postponed until Chapter 16.
In order for a test pattern to detect a stuck-at fault on line l, it must guarantee that the complementary
logic value is applied on l. In addition, it must apply appropriate logic values to the other lines in the
circuit so that the erroneous behavior of the circuit at line l is propagated all the way to an output line.
This way, the fault is observed and detected. The problem of generating a test pattern that detects a given
stuck-at fault is intractable; that is, it requires algorithms whose worst-case complexity is exponential in
the size of the input circuit. ATPG algorithms for the stuck-at fault model are described in Chapter 15.
They are very efficient, requiring only seconds per stuck-at fault, even for very large circuits.
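
The exponential worst case is visible in the brute-force formulation below, which searches all 2^|I|
input patterns for one on which the fault-free and faulty responses differ. It reuses the hypothetical
evaluate() and INPUTS names from the previous sketch; practical ATPG algorithms (Chapter 15)
avoid this exhaustive search:

from itertools import product

def generate_test(fault):
    """Brute-force deterministic TPG: O(2^|I|) patterns in the worst case."""
    for pattern in product((0, 1), repeat=len(INPUTS)):
        if evaluate(pattern) != evaluate(pattern, fault):
            return pattern       # activates the fault and propagates it to g
    return None                  # no pattern works: the fault is undetectable

print(generate_test(("e", 1)))   # a pattern detecting "line e stuck-at 1"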
The stuck-at fault model is easy to use, involves only 2L faults, and requires at most 2L test patterns.
Once a pattern is applied by the ATE equipment, a process called fault simulation is performed in order
to determine how many faults are detected by the applied test pattern. A key measure of the effectiveness
of a set of test patterns is its fault coverage, defined as the percentage of faults detected by the set of
patterns.

Fault simulation is needed in order to determine the fault coverage of a set of test patterns. Fault
simulation is important in testing with ATE as well as in the design of on-chip test mechanisms. Fault
simulation is an inherently polynomial process for the stuck-at fault model; nevertheless, sophisticated
fault simulation techniques have been developed, and an overview is presented in Chapter 16.

FIGURE 14.1  The testing process.

FIGURE 14.2  A circuit at the logic level.
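
A minimal fault-simulation loop, again built on the hypothetical helpers above, applies each pattern,
drops the faults it detects, and reports the fault coverage of the pattern set as a percentage:

def fault_coverage(patterns):
    """Percentage of the single stuck-at faults detected by `patterns`."""
    remaining = set(FAULTS)
    for p in patterns:
        good = evaluate(p)                    # fault-free response
        remaining -= {f for f in remaining if evaluate(p, f) != good}
    return 100.0 * (len(FAULTS) - len(remaining)) / len(FAULTS)

print(fault_coverage([(0, 0, 1), (1, 1, 0), (0, 1, 0)]))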

Exhaustive TPG applies all possible test patterns at the circuit inputs, that is, 2^|I| test patterns for a
circuit with |I| inputs. Instead, pseudo-exhaustive TPG guarantees that all stuck-at faults are covered with
fewer than 2^|I| patterns. BIST schemes are often designed so that pseudo-exhaustive TPG is guaranteed.
(See also Chapter 15.)
However, sometimes we need to generate patterns only for a given set of stuck-at faults. This type of
TPG is called deterministic TPG, and the generated test patterns must detect the predefined set of faults.
A good pseudo-exhaustive or deterministic TPG tool must guarantee that a compact test set is generated.
Consider a three-input NAND gate where lines a, b, and c are the three inputs and line d is the output.
There are three directly controllable lines and one observable line. Let us describe a test pattern as a
binary vector of three values applied to lines a, b, and c, respectively. There are 2 × 4 = 8 stuck-at faults.
By applying all 2^3 patterns, all the faults are covered. However, a compact test set contains at least four
test patterns, and four suffice. Consider the following order of pattern application. Pattern (111) is applied
first and covers four stuck-at faults. Pattern (110) covers two additional stuck-at faults. Finally, patterns
(101) and (011) are needed to cover the last two faults. The number of applied patterns is also called the
test length. The problem of minimizing the test length while guaranteeing 100% fault coverage is
intractable.
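
The NAND example can be checked mechanically. The standalone sketch below (the helper names are
ours, not the chapter's) injects each of the eight stuck-at faults into a three-input NAND and confirms
that the four patterns, applied in the order described, cover all of them:

LINES = ("a", "b", "c", "d")                        # three inputs, output d
FAULTS = [(l, v) for l in LINES for v in (0, 1)]    # 2 x 4 = 8 faults

def nand(pattern, fault=None):
    """Three-input NAND with an optional single stuck-at fault injected."""
    vals = dict(zip(("a", "b", "c"), pattern))
    if fault and fault[0] in vals:
        vals[fault[0]] = fault[1]
    d = int(not (vals["a"] and vals["b"] and vals["c"]))
    return fault[1] if fault and fault[0] == "d" else d

remaining = set(FAULTS)
for p in [(1, 1, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]:
    newly = {f for f in remaining if nand(p, f) != nand(p)}
    remaining -= newly
    print(p, "covers", len(newly), "new fault(s)")  # 4, 2, 1, 1 respectively
print("all faults covered:", not remaining)         # True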
Heuristic methods can be applied to reduce the test length. Two faults are called indistinguishable if
they are detected by the same set of test patterns. Identification of indistinguishable faults is an important
concept in test set compaction.
A stuck-at fault is called undetectable if it cannot be detected by any pattern. Any circuit that has at
least one undetectable fault is called redundant. A redundant circuit can be simplified by removing the
line that contains the undetectable fault, and possibly other lines, without changing its functionality.
In the above, the CUT was assumed to be a combinational circuit. The TPG process is significantly
more difficult for sequential logic. In order for a stuck-at fault to be detected, a sequence of test patterns,
rather than a single pattern, must be applied. Generating sequences of patterns with ATPG or on-chip
TPGs is a tedious job. These concepts are discussed in more detail in Chapter 15.

14.2 Design for Testability

Design for testability (DFT) is applied to reduce the difficulties associated with the TPG process for
sequential circuits. DFT suggests that the digital circuit be designed with built-in features that assist the
testing process. The goal in DFT is to maximize fault coverage while minimizing the test pattern
generation effort, the time required to apply the generated patterns, and the built-in hardware overhead.
By definition, DFT is needed for BIST, where TPG and ORV are on-chip. However, the majority of the
proposed DFT methods target the simplification of the ATPG process for sequential circuits and assume
that ATE is used.
There are some guidelines, developed by experienced engineers, that lead the insertion of the built-in
mechanisms so that the input sequential CUT becomes testable with ATPG tools.
1. Set the circuit at a known state before and during testing. This is achieved by a RESET control
line that is connected to the asynchronous CLEAR of each flip-flop in the CUT.
2. Partition the CUT into subcircuits that are easier to test.
3. Simplify the circuit to avoid redundancies.
4. Control and observe lines on feedback paths, lines that are far from inputs and outputs, and lines
with high fan-in and fan-out.
One way to implement the first guideline is to insert test points that control and observe lines x that
break all feedback paths. A test point on line x = (x_in, x_out) is a simple circuit that implements the
function f(x, s, c) = s′(x + c), where s′ denotes the complement of s. The output of this circuit feeds
x_out. Input signals s and c are controlling. When s = 0 and c = 0, we have f = x; that is, this combination
can be used in the operation mode. When s = 0 and c = 1, function f evaluates to 1. When s = 1 and
c = 0, f evaluates to 0. The last two combinations can be used in the testing mode, and they guarantee
that the line is fully controllable. It can be made observable by simply adding a new primary output at
signal x.
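
The behavior of the test point is easy to verify exhaustively; the short Python sketch below is a direct
transcription of the function f(x, s, c) = s′(x + c) given above, tabulated for each control combination:

def test_point(x, s, c):
    """f(x, s, c) = (NOT s) AND (x OR c); the result drives x_out."""
    return int((not s) and (x or c))

for s, c, mode in [(0, 0, "operation mode: f = x"),
                   (0, 1, "testing mode: f forced to 1"),
                   (1, 0, "testing mode: f forced to 0")]:
    outs = [test_point(x, s, c) for x in (0, 1)]
    print(f"s={s} c={c}: f(x=0), f(x=1) = {outs}   ({mode})")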
Another mechanism is to use bypass latches, also referred to as bypass storage elements (bses). These
latches are bypassed during the operation mode and serve as fully controllable and observable points in
the testing mode. This dual functionality is easily obtained with simple multiplexing circuitry (see
Fig. 14.3).

FIGURE 14.3  The structure of a bypass storage element.

In both cases, the total hardware must be minimized, subject to a lower bound on the enhancement
of the circuit's testability. This optimization criterion requires sophisticated CAD tools, some of which
are described in Chapter 16.
The most popular DFT approach is the scan design. The approach is a variation of the bypass latch
approach discussed earlier. Instead of adding new latches, as the bypass latch approach suggests, the scan
design approach enhances every flip-flop in the circuit with a multiplexing mechanism that allows the
following. In the operation mode, the flip-flop behaves as usual. In the testing mode, all the flip-flops
are connected into a single shift chain. The input of this chain is a single controllable point and its output
is a single observable point.
In the testing mode, each scanned flip-flop is a fully controllable and observable point. Observe that
the testing phase amounts to testing combinational logic. Therefore, the ATPG (or the on-chip TPG)
needs to generate single patterns instead of sequences of patterns. Each generated pattern is serially shifted
into the scan chain. Typically, this process requires as many clock cycles as there are flip-flops. Once
every flip-flop holds its controlling value, the circuit is switched to operation mode for a single cycle.
The flip-flops are now disconnected from the scan chain, and at the end of the clock cycle they are
loaded with the values that are to be observed and analyzed. The circuit is then switched back into the
testing mode (i.e., all flip-flops again form a scan chain). At this point, the states of the flip-flops are
shifted out and analyzed. This requires no more clock cycles than the number of flip-flops.
The described scan approach is also called full scan because all flip-flops in the circuit are scanned.
The advantage of the full scan approach is that it requires only two additional I/O pins: the input and
the output of the scan chain. The disadvantage is that it is time-consuming, due to the shift-in and
shift-out processes for each applied pattern, especially for circuits with many flip-flops. For such circuits,
it is also hardware intensive, because every flip-flop must have dual operation-mode capability. The
hardware cost and the application time can be reduced by employing CAD tools (see also Chapter 16).
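
The full-scan procedure (shift in a pattern, capture for one cycle, shift out the response) can be
sketched as follows; the scan-chain model and the next-state function here are hypothetical stand-ins
for illustration, not a circuit from this chapter:

def scan_test(pattern, primary_in, next_state):
    """Apply one test pattern through a full scan chain.

    pattern     -- state bits shifted into the flip-flops (testing mode)
    primary_in  -- primary-input values held during the capture cycle
    next_state  -- the combinational logic: (state, primary_in) -> new state
    """
    chain = [0] * len(pattern)
    for bit in reversed(pattern):          # shift-in: one clock per flip-flop
        chain = [bit] + chain[:-1]         # chain now holds `pattern`
    chain = next_state(chain, primary_in)  # one capture cycle, operation mode
    observed = []
    for _ in range(len(chain)):            # shift-out: one clock per flip-flop
        observed.append(chain[-1])         # bits emerge last flip-flop first
        chain = [0] + chain[:-1]
    return observed                        # compared against expected values

# Example: a 3-bit rotation as the (hypothetical) next-state logic.
rotate = lambda state, _pi: state[1:] + state[:1]
print(scan_test([1, 0, 0], [], rotate))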
Another way to reduce application time and hardware cost is through partial scan. In partial scan,
only a subset of the flip-flops is scanned. Selecting the flip-flops and their ordering in the scan chain also
requires sophisticated CAD tools. The trade-off in partial scan is that the ATPG tool may have to generate
test sequences rather than single patterns. A CAD tool is needed in order to select and scan a small
number of flip-flops; this guarantees low hardware overhead and low application time. The flip-flop
selection must also guarantee an upper bound on the length of any generated test sequence. This simplifies
the task of the ATPG tool and has an impact on the test application time.
References
1. M. Abramovici, M.A. Breuer, and A.D. Friedman, Digital Systems Testing and Testable Design,
Computer Science Press, New York, 1990.
2. J.P. Hayes, Introduction to Digital Logic Design, Addison-Wesley, Boston, 1993.
3. P.H. Bardell, W.H. McAnney, and J. Savir, Built-In Test for VLSI: Pseudorandom Techniques, John
Wiley & Sons, New York, 1987.