
Sykatiya Technologies PVT.LTD.

DFT DOCUMENT

- By Sumanth Nayak


Contents

CHAPTER 1 .......................................................................................................................................3
ASIC DESIGN FLOW .......................................................................................................................3
1 What is ASIC? .............................................................................................................................. 3
1.2 ASIC vs FPGA .......................................................................................................................... 6
1.3 Advantages of ASIC over FPGA .............................................................................................. 7
1.4 What is DFT and Why DFT?? .................................................................................................. 7
1.5 Advantages and disadvantages of DFT ..................................................................................... 7
1.5.1 Advantages of DFT..................................................................................................................... 7
1.5.2 Disadvantages of DFT ............................................................................................................... 8
1.6 Difference between verification and Testing ............................................................................ 8
1.6.1 VLSI verification ....................................................................................................................... 8
1.6.2 VLSI Testing ............................................................................................................ 8
1.7 Types of testing .......................................................................................................... 8
CHAPTER 2 .....................................................................................................................................10
2.1.Defect, fault, error, failure ....................................................................................................... 10
A failure is a deviation in the performance of a circuit from its specified behaviour. ..................... 10
2.2. Reasons for defect: ................................................................................................................. 10
2.3.Fabrication defects: ................................................................................................................. 11
2.4.Yield: ....................................................................................................................................... 13
2.5.1.Fault collapsing: ................................................................................................................... 14
2.5.2.Fault equivalence:................................................................................................................. 14
2.5.3.Fault Dominance: ................................................................................................................. 15
2.6.Minimized patterns: ................................................................................................................. 15
2.6.1.AND Gate: ............................................................................................................................ 15
2.6.3.NAND Gate: ......................................................................................................................... 17
2.6.4.NOR Gate: ............................................................................................................................ 17
2.7.Functional testing and Structural testing: ................................................................................ 18
Table 2.7. 1 Functional testing and Structural testing. .................................................................. 18
CHAPTER 3 .....................................................................................................................................19
3.1 Fault Excitation, Fault Propagation and Path Sensitization .................................................... 19
3.1.1 Fault Excitation ....................................................................................................................... 19

Design For Testability ii



3.1.2 Fault Propagation .................................................................................................................... 19


3.1.3 Path Sensitization .................................................................................................................... 20
3.2 D-Algorithm ............................................................................................................................ 20
3.2.1 D-Algebra ................................................................................................................................ 20
3.2.2 D-frontier and J-frontier .......................................................................................................... 21
3.2.3 Idea of D-Algorithm ................................................................................................................ 21
3.2.4 Primitive D-cube Fault (PDCF) .............................................................................................. 21
3.2.5 Propagation D-cube (PDC) ..................................................................................................... 21
3.2.6 D-drive .................................................................................................................................... 21
3.2.7 Singular Cover (SC) ................................................................................................................ 22
3.2.8 Implication .............................................................................................................................. 22
3.2.9 Test cube(TC) .......................................................................................................................... 22
3.2.10 D- Algorithm Flow chart ....................................................................................................... 23
3.2.11 Backtrack ............................................................................................................................... 24
3.2.12 D-Algorithm Example:- ........................................................................................................ 25
3.3 Check-point Theorem .............................................................................................................. 26
CHAPTER 4 .....................................................................................................................................26
4.1 Fault Models ............................................................................................................................ 26
Types of Fault Models ...................................................................................................................... 27
4.2 Fault Coverage and Test Coverage ......................................................................................... 35
4.2.1 Fault Coverage ......................................................................................................................... 35
4.2.2 Test Coverage .......................................................................................................................... 35
4.3 Controllability and Observability ............................................................................................ 35
4.4 Scan: ........................................................................................................................................ 36
4.5 SCAN CELL DESIGNS ......................................................................................................... 38
4.7 Difference between Full Scan and Partial Scan ......................................................... 43
Full-Scan Design................................................................................................................................ 43
Partial-Scan Design ........................................................................................................................... 44
4.8 Automatic Test Pattern Generation ......................................................................................... 45
CHAPTER 5 .......................................................................................................................................46
5.1 DRC violations .......................................................................................................... 46
Violations that prevent Scan Insertion ............................................................................................. 47
Violations that prevent data capture ................................................................................................. 48


Violations that reduce Fault Coverage ............................................................................................. 51


5.2 Lockup latches .......................................................................................................... 53
5.3 Memory shadow logic ............................................................................................... 57
References .........................................................................................................................................63


List of Figures
Figure 1. 1 ASIC design flow .............................................................................................. 3
Figure 2.3. 2 Doping concentration ................................................................................... 11
Figure 2.3. 3 Non-uniform oxide layer .............................................................................. 12
Figure 2.3. 4 Chemical contamination ............................................................................... 13
Figure 2.3. 5 Oxide thickness defect.................................................................................. 13

Figure 3. 1 Fault Excitation ................................................................................................ 19
Figure 3. 2 Fault Propagation ............................................................................................. 20
Figure 3. 3 Line Justification .............................................................................................. 20
Figure 3. 4 Primitive D-cube Fault ..................................................................................... 21
Figure 3. 5 D-Drive............................................................................................................. 22
Figure 3. 6 Singular Cover.................................................................................................. 22
Figure 3. 7 Implication ....................................................................................................... 22
Figure 3. 8 D-Algorithm flow chart .................................................................................... 23
Figure 3. 9 Line Justification flow chart ............................................................................. 24
Figure 3. 10 Backtrack ........................................................................................................ 24
Figure 3. 11 D-algorithm example...................................................................................... 25

Figure 4. 1.2 Transistor fault model.................................................................................... 30

Figure 4.5. 1 LSSD ............................................................................................................ 41

Figure 5. 1.1 Memory shadow logic ................................................................................... 58
Figure 5. 2.1 Forced controllability ..................................................................................... 59
Figure 5. 3.1 Wrapper .......................................................................................................... 60
Figure 5. 4.1 Smart wrapper ................................................................................................. 61


CHAPTER 1
ASIC DESIGN FLOW

1 What is ASIC?
An ASIC (application-specific integrated circuit) is a microchip designed for a specific
application, such as a particular kind of transmission protocol or a hand-held computer.

figure 1. 1 asic design flow


The ASIC design flow is divided into three stages:


1) FRONT END DESIGN
2) BACK END DESIGN
3) FABRICATION

FRONT END DESIGN


SPECIFICATIONS:
The first step in the ASIC design flow is the specification. The customer gives the
specifications to the top-level manager or the ASIC designer. The specification includes chip
area, power, and speed targets.

MICRO ARCHITECTURE:
Once the specifications are received, the top-level manager prepares a micro-architecture
of the design based on them. Suppose the design is a full adder, which consists of two half
adders and standard cells; the task is then divided and assigned to lower-level engineers.
The top-level manager communicates with the customer to finalize the design and the cost.
RTL DESIGN:
When the engineers have all the specifications, the design phase begins. The top-level
manager partitions the design into modules and assigns a task to each lower-level engineer,
and each engineer writes the RTL design for their module. This is the first stage of
technology-independent design. Once the RTL design is done, the succeeding phase is simulation.

SIMULATION:
Once the RTL design is done, its functionality needs to be checked, so simulation is
carried out here. There are two styles of RTL, synthesizable and non-synthesizable; it is
advisable to write synthesizable Verilog code. This is the last stage of
technology-independent design.
SYNTHESIS:
Once simulation is done and the test vectors have verified the functionality of the
design, synthesis is carried out in three stages, namely translation, optimization, and
mapping. Synthesis produces two files: the gate-level Verilog netlist (.vg file) and the
Synopsys Design Constraints (.sdc file). The .vg file contains the gates obtained from the
RTL code, and the SDC captures the input delay, clock period, clock uncertainty, and so on.
Once synthesis is done, the succeeding phase is Design for Testability (DFT).
DESIGN FOR TESTABILITY (DFT):
In DFT the engineer measures the controllability and observability of the design and
checks how testable it is. ATPG stands for Automatic Test Pattern Generation; here the .atpg,
.sdc, and .vg files are generated. This is the last phase of technology-dependent design. The
semiconductor design process is categorized into three stages, namely front


end design, back-end design, and fabrication. The various stages of the back-end design of an
SoC are as follows.
BACK END DESIGN
DATA PREPARATION:

The files needed for data preparation include:

1) .vg file: the Verilog gate-level netlist.

2) .sdc file: the Synopsys Design Constraints file, which the synthesis tool generates.
3) .lib files: the library files that specify the technology we are working in. The library
files usually come in slow, typical, and fast variants, each containing a different PVT corner.
4) .lef file: the Library Exchange Format file, which contains the metal layer information
(wire width, length, and height). Apart from these four files, an I/O (input/output) file is
also needed.

CHECK DESIGN AND TIME DESIGN:


Once the data preparation is done, we need to check whether the synthesis engineer
delivered the right netlist. If there is any bug in the netlist, it has to be corrected and
sent again. After the check design step, the timing needs to be checked. The timing is
verified at five stages: pre-placement, pre-CTS, post-CTS, post-routing, and post-SI. The
different timing paths are 1) input to register, 2) register to register, 3) register to
output, and 4) the pure combinational path. If all the paths have positive slack, there is no
timing problem or bug in the design. The succeeding phase is floorplanning.
FLOOR PLANNING:
On the specified silicon, the hard macros and soft macros need to be placed in the right
locations; this is done in floorplanning. Deciding the number of analog and digital blocks,
their placement, and their identification is floorplanning.

POWER PLANNING:
Every component has a power requirement that depends on the technology; for example, for
180 nm the Vdd can be 1.8 V. If there are about 20,000 components in a chip, a single Vdd
ring is enough, but if there are a million components, one Vdd ring and one Vss ring are not
enough to drive them all. To overcome this problem, horizontal and vertical power stripes are
inserted according to the complexity. A component, instead of taking power from the Vdd and
Vss rings, can take it from the stripe nearest to it. Hence every individual component gets
the power it requires.
PLACE DESIGN:
This step is the actual placement of the design onto the chip, the next stage after the
power plan. Standard cells, macros, and modules from various vendors are placed onto the
chip; this is called hard placement. After this phase, the engineer again performs time
design, which is called pre-


CTS. Before clock tree synthesis the timing is checked once again, and if the slack is
positive, the clock tree synthesis (CTS) phase takes place.

CLOCK TREE SYNTHESIS:


In this phase we try to obtain zero skew. Skew is the difference in clock arrival times
between two interacting flip-flops. Here we have to make sure that all the flip-flops get the
clock at the same time. In CTS we add two kinds of components, clock buffers and clock
inverters. Since we added these components, the timing needs to be checked again in the
succeeding phase.
ROUTING:
There are two types of routing: global routing and detailed routing. In global routing we
find the optimized way of routing between the blocks. Based on this, the detailed router
places and routes the three kinds of nets, namely signal nets, clock nets, and power nets.
Once routing is done, STA is validated again, which is termed post-route time design.
SIGNAL INTEGRITY:
At technology nodes below 45 nm, issues such as crosstalk, transmission-line effects, and
noise arise. Here we have to make sure that there are no such issues, and for that signal
integrity analysis is done. Once signal integrity is done, static timing analysis is
performed again, which is termed post-SI timing verification.

1.2 ASIC vs FPGA


The Application Specific Integrated Circuit is a unique type of IC that is designed with a certain
purpose in mind. This type of IC is very common in most hardware nowadays, since building
with standard IC components would lead to big and bulky circuits. An FPGA (Field Programmable
Gate Array) is also a type of IC, but it does not have its programming built in during
production. As the name implies, the IC can be programmed by the user, provided they have the
right tools and proper knowledge.

An ASIC can no longer be altered after it gets out of the production line. That is why the designers
need to be totally sure of their design, especially when making large quantities of the same ASIC.
The programmable nature of an FPGA allows the manufacturers to correct mistakes and to even
send out patches or updates after the product has been bought. Manufacturers also take advantage
of this by creating their prototypes in an FPGA so that it can be thoroughly tested and revised in the
real world before actually sending out the design to the IC foundry for ASIC production.

ASICs have a great advantage in terms of recurring costs as very little material is wasted due to the
fixed number of transistors in the design. With an FPGA, a certain number of transistor elements
are always wasted as these packages are standard. This means that the cost of an FPGA is often
higher than that of a comparable ASIC. Although the recurring cost of an ASIC is quite low, its


non-recurring cost is relatively high, often reaching into the millions. Since it is
non-recurring, though, its value per IC decreases with increased volume. If you analyze the
cost of production in relation to volume, you will find that at lower production numbers an
FPGA actually becomes cheaper than an ASIC.
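The crossover described above can be sketched numerically. All figures below (the NRE and the unit prices) are hypothetical illustrations chosen for the arithmetic, not real pricing; the text itself gives no numbers.

```python
# Cost-crossover sketch: at what volume does an ASIC become cheaper than
# an FPGA? All numbers are hypothetical illustrations, not real pricing.

ASIC_NRE = 2_000_000.0   # one-time (non-recurring) engineering cost, in $
ASIC_UNIT = 5.0          # recurring cost per ASIC unit
FPGA_UNIT = 30.0         # per-unit FPGA cost (no NRE)

def asic_total(volume: int) -> float:
    """Total cost of an ASIC run: NRE plus per-unit cost."""
    return ASIC_NRE + ASIC_UNIT * volume

def fpga_total(volume: int) -> float:
    """Total cost of shipping on FPGAs: purely per-unit."""
    return FPGA_UNIT * volume

# The two cost lines cross at NRE / (FPGA unit cost - ASIC unit cost).
crossover = ASIC_NRE / (FPGA_UNIT - ASIC_UNIT)
print(f"crossover at {crossover:.0f} units")  # crossover at 80000 units

# Below the crossover the FPGA is cheaper; above it the ASIC wins.
assert fpga_total(10_000) < asic_total(10_000)
assert asic_total(1_000_000) < fpga_total(1_000_000)
```

At the crossover volume the two totals are equal; this is why prototyping and low-volume products favour FPGAs while high-volume products favour ASICs.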

1.3 Advantages of ASIC over FPGA


 Cost: for very high-volume designs, the per-unit cost comes out very low.
 Speed: ASICs are faster than FPGAs.
 Power: ASICs require less power.
 Analog circuits and mixed-signal designs can be implemented in an ASIC.
 An ASIC is made for a particular function, so more optimization is possible in an ASIC.

1.4 What is DFT and Why DFT??


DFT is a technique that makes a design testable after production. It is the extra logic
that we put into the normal design, during the design process, to help its post-production
testing. DFT is structural testing; we do not test any functionality in DFT. The goal of DFT
is to provide controllability and observability for each node in the design.
Controllability is the measure of how difficult it is to set a node to a specific logic
value from the primary inputs. Observability is the measure of how difficult it is to
propagate the value of a node to a primary output.
 Observability: ease of observing a node by watching the external output pins of the chip.
 Controllability: ease of forcing a node to 0 or 1 by driving the input pins of the chip.
No chip will perform 100% to the specification: there are variations in the fabrication
process, and during manufacturing there is a chance of pin misconnections (pin shorts, etc.).
After fabrication, post-production testing is done. While testing, we could find the faults
by applying all possible input vectors, but that is time-consuming. Here is the need for DFT:
it helps detect whether the device is faulty in less time. A design with DFT improves the
quality of the device and reduces both the testing time and the cost of testing. DFT also
helps generate the test patterns for testing the chip.

1.5 Advantages and disadvantages of DFT

1.5.1 Advantages of DFT


 Yield increases.
Yield = number of acceptable chips / total number of chips fabricated
 Testing time is reduced.
 Testing complexity is reduced.
 DFT generates the necessary test vectors easily.
 DFT increases the ability to measure quality.


1.5.2 Disadvantages of DFT


 DFT increases power, area, timing, and package pin count.
 DFT adds complication to the design flow.
 DFT adds risk to the design schedule.
 Cost increases.

1.6 Difference between verification and Testing

1.6.1 VLSI verification: VLSI verification is done before manufacturing, even before tape-out.
It verifies that the chip design works as expected. Put more simply, verification is done
before the design is implemented in actual hardware, to make sure the product works before
you have created it.
1.6.2 VLSI Testing: VLSI testing is done after manufacturing. After the chips are made, we
look for any structural damage or mistakes in the chip. At this stage we check whether the
chip passes the tests: if it does not, we throw the chip away; if it passes, we can use
(sell) it. To do testing, we have to put some extra special logic into the chip before it is
taped out. This is called DFT.

1.7 Types of testing
Characterization testing
 Also called design debug or verification testing.
 Performed on a new design before it is sent to production.
 Verifies that the design is correct and that the device will meet all specifications.
 Functional tests and comprehensive AC and DC measurements are made.
 A characterization test determines the exact limits of device operating values.

DC Parameter tests
 Measure steady-state electrical characteristics.
 DC parametric tests include the short test, open test, maximum current test, leakage test,
drive current test, threshold level test, contact test, functional and layout-related tests,
and wafer test.

AC parametric tests
 Measure transient electronic characteristics.


 AC parametric tests include propagation delay test, setup and hold test, functional speed
test, refresh and pause time test, and rise and fall time test.

Production testing
 Every fabricated chip is subjected to production tests
 The test patterns may not cover all possible functions and data patterns, but must have
high fault coverage of the modelled faults.
 The main driver is cost, since every device must be tested; test time must be absolutely
minimized.
 Tests whether the device-under-test parameters meet the device specifications under
normal operating conditions.

Burn-In testing
 Ensures the reliability of tested devices.
 Detects devices with potential failures.
 Devices with infant-mortality failures can be screened out by a short-term burn-in test
at elevated temperature and voltage.

System test
 Testing of the product in the environment where it is operating to ensure that it works
correctly when interconnected with other components.

Prototype test
 Testing to check for design faults during the system development phase. Diagnosis is
required.

Incoming Inspection (Acceptance Test)


 System manufacturers perform incoming inspection on the purchased devices before
integrating them into the system.
 Depending upon the context, this testing can be similar to production testing, more
comprehensive than production testing, or even tuned to the specific system's application.
Also, the incoming inspection may be done for a random sample with the sample size
depending on the device quality and the system requirement.
 The most important purpose of this testing is to avoid placing a defective device in a system
assembly where the cost of diagnosis may far exceed the cost of incoming inspection.


Functional Tests:
 They check for proper operation of a verified design by testing the internal chip nodes.
Functional tests cover a very high percentage of modeled (e.g., stuck type) faults in logic
circuits and their generation is the main topic of this tutorial. Often, functional vectors are
understood as verification vectors, which are used to verify whether the hardware actually
matches its specification.

CHAPTER 2
2.1.Defect, fault, error, failure
Defect:
A defect is the unintended difference between the implemented hardware and its intended
design. Defects occur either during manufacture or during the use of devices. Example: a
short between an input node and ground.

Fault:
A representation of a defect at the abstracted function level. Example: a faulty input line
that has the permanent value 0 (stuck-at-0).

Error:
A wrong output signal produced by a defective system. An error is caused by a fault or a
design error. Example: for the input pattern (1,1), a faulty NAND gate outputs 1 although the
correct output value is 0.

Failure:

A failure is a deviation in the performance of a circuit from its specified behaviour.

2.2. Reasons for defect:


1. Manufacturing process: missing contacts, parasitic transistors, gate-oxide shorts, oxide
breakdown, metal-to-silicon shorts, missing or faulty components, broken or shorted
tracks, etc.
2. Process fabrication marginalities: line-width variation, etc.


3. Material and age defects: bulk defects (cracks, crystal imperfections), surface impurities,
dielectric breakdown, electro-migration, etc.
4. Packaging: contact degradation, seal leaks, etc.
5. Environmental influence: temperature related defects, high humidity, vibration, electrical
stress, crosstalk, radiation, etc.

2.3 Fabrication defects:
1. Crystalline defect.

Figure 2.3. 1 Crystalline Defect

2. If the doping concentration is more or less than the required level, a defect will occur;
here p-type impurities are doped.

Figure 2.3. 2 Doping concentration


3. SiO2 is deposited over the substrate; the oxide layer may be non-uniform.

Figure 2.3. 3 Non-uniform oxide layer

4. The photoresist layer is exposed to UV light through a mask; dust particles during masking
can cause defects.

Figure 1.3.3 dust particles during masking

5. Etching removes the SiO2 layer that is in direct contact with the etching solution.


Figure 2.3. 4 Chemical contamination

6. Metal-to-silicon short: if the oxide thickness is reduced, there can be direct contact
between the gate and the substrate. This defect may occur at this position.

2.4.Yield:
In simple terms, yield in fabrication is how many good dies are produced per wafer. Yield
is usually expressed as a percentage: the total number of good dies produced divided by the
total number of dies expected. A chip with no manufacturing defect is called a good chip. The
fraction (or percentage) of good chips produced in a manufacturing process is called the
yield, denoted by the symbol Y.

So what is a good die? It is a part of the wafer slice with no defects, whose printed
circuit works as expected. In some cases a die may have no shorts or opens and the circuit
works properly, but not at the specifications it was designed for. Say you designed your chip
to work at 2 GHz, but after manufacturing it works only at 1.6 GHz; then only the dies
working at 2 GHz are counted as good dies.
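The yield formula above can be written out as a one-line sketch. The die counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Yield sketch: Y = good chips / total chips fabricated.
def yield_fraction(good_dies: int, total_dies: int) -> float:
    """Fraction of dies that are fully good (including speed spec)."""
    return good_dies / total_dies

# Hypothetical wafer: 400 dies fabricated, 360 meet every specification.
Y = yield_fraction(360, 400)
print(f"Yield = {Y:.0%}")  # Yield = 90%
```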

2.4.1 DPPM:

Figure 2.4. 1 Yield calculation

Defective Parts Per Million (DPPM) = (defects observed / total size of the sample or
population) x 1,000,000

Example: in a total of 500 units, 5 units were rejected for some reason. Hence,
DPPM = (5 / 500) x 1,000,000 = (1/100) x 1,000,000 = 10,000
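The worked DPPM example can be checked with a few lines of Python (an illustrative sketch; the function name is our own, not from the text):

```python
def dppm(defects: int, sample_size: int) -> float:
    """Defective Parts Per Million: defects / sample size x 1,000,000."""
    return defects / sample_size * 1_000_000

# The worked example from the text: 5 rejects out of 500 units.
print(dppm(5, 500))  # 10000.0
```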

2.5.1.Fault collapsing:
The basic idea behind fault collapsing is to reduce the number of faults that must be
considered during test generation, which in turn reduces the size of the test vector set.
Fault collapsing eliminates those faults that are detected by tests generated for other
faults.

Figure 2.5.1 fault collapsing

2.5.2.Fault equivalence:
Two faults of a Boolean circuit are called equivalent if they have exactly the same set of
tests, i.e., the two faulty circuits have identical output functions. [1]
Take the example of a two-input AND gate in Fig. 2.5.2.1. A stuck-at-0 fault on either input line
forces the output to zero, which is the same faulty behaviour as a stuck-at-0 fault on the output
line, and all three faults are detected by the same test vector (1,1). Therefore, for a two-input
AND gate, the two input stuck-at-0 faults and the output stuck-at-0 fault are equivalent. Because
equivalent faults are indistinguishable, only one of them needs to be tested.
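This equivalence can be verified by brute-force fault simulation of the gate. The sketch below (Python, illustrative only) compares the faulty truth tables:

```python
from itertools import product

def and_gate(a, b, fault=None):
    """2-input AND with an optional single stuck-at fault,
    written as 'line/value', e.g. 'A/0' for A stuck-at-0."""
    if fault == 'A/0': a = 0
    if fault == 'A/1': a = 1
    if fault == 'B/0': b = 0
    if fault == 'B/1': b = 1
    y = a & b
    if fault == 'Y/0': y = 0
    if fault == 'Y/1': y = 1
    return y

def truth_table(fault=None):
    """Output for inputs 00, 01, 10, 11 in order."""
    return tuple(and_gate(a, b, fault) for a, b in product((0, 1), repeat=2))

# A/0, B/0 and Y/0 produce identical faulty output functions -> equivalent.
assert truth_table('A/0') == truth_table('B/0') == truth_table('Y/0') == (0, 0, 0, 0)
# The single test vector (1,1) detects all three.
assert and_gate(1, 1) != and_gate(1, 1, 'A/0')
```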


Figure 2.5.2. 1 Fault Equivalence

2.5.3.Fault Dominance:
If all tests of fault F1 detect another fault F2, then F2 is said to dominate F1 [1], and fault F2
can be deleted from the fault list. Take the example of an AND gate in Fig. 2.5.3.1: to test F1, we
must apply the test (0,1). To test F2, we could apply any one of the tests (0,0), (0,1), (1,0),
which include the test (0,1) used for F1. Therefore F2 dominates F1, and F2, as the dominating
fault, can be deleted. Normally, the gate output stuck-at fault dominates the gate input
stuck-at faults.

Figure 2.5.3. 1 Fault Dominance

2.6.Minimized patterns:

2.6.1.AND Gate:

Figure 2.6. 1 AND Gate


Table 2.6.1 AND Gate Minimized patterns:

A  B  Y  AS0  BS0  AS1  BS1  YS0  YS1
0  0  0  0/0  0/0  0/0  0/0  0/0  0/1
0  1  0  0/0  0/0  0/1  0/0  0/0  0/1
1  0  0  0/0  0/0  0/0  0/1  0/0  0/1
1  1  1  1/0  1/0  1/1  1/1  1/0  1/1

Minimized patterns = 11, 10, 01


2.6.2.OR Gate:

Figure 2.6. 2 OR Gate

Table 2.6.2 OR Gate Minimized patterns

A  B  Y  AS0  BS0  AS1  BS1  YS0  YS1
0  0  0  0/0  0/0  0/1  0/1  0/0  0/1
0  1  1  1/1  1/0  1/1  1/1  1/0  1/1
1  0  1  1/0  1/1  1/1  1/1  1/0  1/1
1  1  1  1/1  1/1  1/1  1/1  1/0  1/1


Minimized patterns = 00, 10, 01

2.6.3.NAND Gate:

Figure 2.6. 3 NAND Gate

Table 2.6.3 NAND Gate Minimized patterns

A  B  Y  AS0  BS0  AS1  BS1  YS0  YS1
0  0  1  1/1  1/1  1/1  1/1  1/0  1/1
0  1  1  1/1  1/1  1/0  1/1  1/0  1/1
1  0  1  1/1  1/1  1/1  1/0  1/0  1/1
1  1  0  0/1  0/1  0/0  0/0  0/0  0/1

Minimized patterns = 11, 10, 01

2.6.4.NOR Gate:

Figure 2.6.4 NOR Gate


Table 2.6.4 NOR Gate Minimized patterns

A  B  Y  AS0  BS0  AS1  BS1  YS0  YS1
0  0  1  1/1  1/1  1/0  1/0  1/0  1/1
0  1  0  0/0  0/1  0/0  0/0  0/0  0/1
1  0  0  0/1  0/0  0/0  0/0  0/0  0/1
1  1  0  0/0  0/0  0/0  0/0  0/0  0/1

Minimized patterns = 00, 10, 01
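The minimized pattern sets in the tables above can be recomputed mechanically: simulate every single stuck-at fault of a gate and search for the smallest pattern subset that detects them all. An illustrative brute-force sketch (Python, our own helper names):

```python
from itertools import product, combinations

def gate_fault_sim(gate, a, b, fault=None):
    """Evaluate a 2-input gate ('AND','OR','NAND','NOR') with an optional
    single stuck-at fault 'A/0','A/1','B/0','B/1','Y/0','Y/1'."""
    if fault in ('A/0', 'A/1'): a = int(fault[-1])
    if fault in ('B/0', 'B/1'): b = int(fault[-1])
    y = {'AND': a & b, 'OR': a | b,
         'NAND': 1 - (a & b), 'NOR': 1 - (a | b)}[gate]
    if fault in ('Y/0', 'Y/1'): y = int(fault[-1])
    return y

FAULTS = ['A/0', 'A/1', 'B/0', 'B/1', 'Y/0', 'Y/1']
PATTERNS = list(product((0, 1), repeat=2))

def min_test_set(gate):
    """Smallest pattern set detecting every single stuck-at fault."""
    detects = {f: {p for p in PATTERNS
                   if gate_fault_sim(gate, *p) != gate_fault_sim(gate, *p, fault=f)}
               for f in FAULTS}
    for size in range(1, len(PATTERNS) + 1):
        for subset in combinations(PATTERNS, size):
            if all(detects[f] & set(subset) for f in FAULTS):
                return set(subset)

# Three patterns suffice per gate, matching the tables above.
assert min_test_set('AND') == {(1, 1), (0, 1), (1, 0)}   # 11, 01, 10
assert min_test_set('OR') == {(0, 0), (0, 1), (1, 0)}    # 00, 01, 10
```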

2.7.Functional testing and Structural testing:


Table 2.7. 1 Functional testing and Structural testing.
FUNCTIONAL TESTING                                STRUCTURAL TESTING

Done before implementation of the design          Done after implementation of the design
on actual hardware                                on hardware
Requires more patterns (2^n)                      Requires fewer patterns
Takes more time                                   Takes less time

Structural Testing is done after implementation of the design on hardware to make sure the product
works AFTER you have created it. It is done to uncover defects in a chip and to make sure a
defective chip is not shipped to the customer.

Functional Testing is done before implementation of the design on actual hardware to make sure the
product works BEFORE you have created it. It is done to check whether the chip conforms to its
functional specification.

Consider a circuit with 25 inputs: exhaustive functional testing needs 2^25 ≈ 33.5 million test
patterns. If we apply 1,000,000 patterns per second (a 1 MHz tester), the time required is about
33.5 seconds per chip. In a typical scenario about 1 million chips are tested in a run, taking
about 33.5 million seconds, roughly 9,300 hours, or more than a year of tester time. One can then
understand the complexity when a circuit has 100+ inputs. So, for a typical circuit, even
functional testing cannot be performed due to the extremely high test time.
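Assuming the pattern count comes from exhaustively testing a 25-input circuit (2^25 patterns), the estimate works out as follows (illustrative numbers):

```python
n_inputs = 25
patterns = 2 ** n_inputs                 # 33,554,432 exhaustive patterns
tester_rate = 1_000_000                  # patterns per second (1 MHz tester)
secs_per_chip = patterns / tester_rate   # ~33.5 s per chip

chips = 1_000_000                        # chips tested in one production run
total_days = chips * secs_per_chip / 86_400
print(round(secs_per_chip, 1))  # 33.6
print(round(total_days))        # 388
```

With 100 inputs, 2^100 patterns make the same calculation astronomically worse, which is why structural testing is used instead.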


Figure 2.7. 1 Functional and Structural test

CHAPTER 3

3.1 Fault Excitation, Fault Propagation and Path Sensitization

3.1.1 Fault Excitation


 In this step, a stuck-at fault is activated by setting the signal driving the faulty net to the
opposite of the fault value.

 This is necessary to ensure a behavioural difference between the good circuit and the faulty
circuit.

figure 3. 1 Fault Excitation

3.1.2 Fault Propagation


 Fault effect should propagate through one or more paths to a primary output of circuit.


 For some faults, it is necessary to propagate the fault effect simultaneously over multiple
paths to test them.

figure 3. 2 Fault Propagation

3.1.3 Path Sensitization


It is a method which determines the input pattern that makes a fault controllable (excites the
fault) and observable (makes its effect visible at an output).
It consists of three steps:-

 Fault Excitation
 Fault Propagation
 Line Justification:- assign primary inputs so as to achieve the desired internal values.

figure 3. 3 Line Justification

3.2 D-Algorithm

3.2.1 D-Algebra
 Five value logic 1, 0, D, D’, X
 D = 1/0 (1 in the fault-free circuit, 0 in the faulty circuit)


 D’ = 0/1 (0 in the fault-free circuit, 1 in the faulty circuit)
 X means “not specified yet”
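The five-valued algebra can be captured compactly by representing each symbol as a good/faulty value pair. An illustrative sketch (X omitted for brevity; names are our own):

```python
# Each five-valued symbol is a (good, faulty) value pair.
FIVE = {'0': (0, 0), '1': (1, 1), 'D': (1, 0), "D'": (0, 1)}
NAMES = {v: k for k, v in FIVE.items()}

def d_and(x, y):
    """AND in the five-valued D-algebra, computed per circuit copy."""
    gx, fx = FIVE[x]
    gy, fy = FIVE[y]
    return NAMES[(gx & gy, fx & fy)]

assert d_and('D', '1') == 'D'    # non-controlling 1 propagates D
assert d_and('D', '0') == '0'    # controlling 0 blocks propagation
assert d_and('D', "D'") == '0'   # (1&0)/(0&1) = 0/0 = 0
```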

3.2.2 D-frontier and J-frontier


 D-frontier:- the set of gates whose output value is currently X but which have one or more D
(or D’) values at their inputs.
 J-frontier:- the set of gates whose output value is assigned but whose input values have not
been decided yet.

3.2.3 Idea of D-Algorithm


 Create D-frontier (fault excitation)
 Drive D-frontier towards output (fault effect propagation)
 Justify J-frontiers
 Backtrack if any conflict occurs

3.2.4 Primitive D-cube Fault (PDCF)


 Specifies the minimal input conditions at the gate inputs that produce an error at the gate output.

figure 3. 4 Primitive D-cube Fault

3.2.5 Propagation D-cube (PDC)


 The minimum gate input assignments required to propagate a D or D’ from a gate input to the
gate output.

3.2.6 D-drive
 Selects an element in D-frontier and attempts to propagate a D or D’ from gate input to gate
output.


figure 3. 5 D-Drive

3.2.7 Singular Cover (SC)


 Minimum gate input assignment for gate output =0 or =1.
 Singular Cover for AND gate

figure 3. 6 Singular Cover

3.2.8 Implication
 Forward implication:- partially (or fully) specified input values uniquely determine the
output value.
 Backward implication:- knowing the output value (and some input values) can uniquely
determine the unspecified values.
 Examples

figure 3. 7 Implication

3.2.9 Test cube(TC)


 Partially specified Boolean values for testing a fault.
 Notation: TC(n) = test cube at ATPG step n.
Intersection of Test Cubes
 Two bits intersect if their logic values do not conflict.


3.2.10 D- Algorithm Flow chart

figure 3. 8 D- Algorithm flow chart

 Line Justification


figure 3. 9 Line Justification flow chart

3.2.11 Backtrack
 When a conflict occurs, backtrack to the last decision point and change the choice.
 To avoid spending too much time on a single fault, use a backtrack limit.
 The fault is aborted if the backtrack limit is reached.

figure 3. 10 Backtrack


3.2.12 D-Algorithm Example:-

figure 3. 11 D-algorithm example


3.3 Check-point Theorem

STATEMENT 1:-
Fault detection in a fan-out-free circuit: a test set that detects all single stuck-at faults on
all primary inputs of a fan-out-free circuit must detect all single stuck-at faults in that circuit.
Example:-

STATEMENT 2:-

A test set that detects all single stuck-at faults of the check-points(Primary inputs and Fan-outs)
of a combinational circuit detects all single stuck-at faults in that circuit.
Example:-

CHAPTER 4

4.1 Fault Models


 Fault model is the foundation of structural testing methods.


 Due to defects during manufacturing of an integrated circuit, there is a need to model the
possible faults that might occur during the fabrication process; this is called fault modelling.

Why Fault Model ?


 A fault model identifies target faults for testing, and the number of faults can easily be calculated.
 A fault model limits the scope of test generation.
 A fault model makes testing effective and automated.
 Fault coverage can be computed for a specific test pattern set to measure its effectiveness.
 A fault model makes analysis possible, associating specific defects with specific test patterns.
 A fault model is an abstraction of the error caused by a particular physical fault. The
purpose of the fault model is to simplify the testing procedure and reduce its cost, while still
retaining the capability of detecting the presence of the modelled fault.
 Fault models are necessary for generating and evaluating a set of test vectors. Generally, a
good fault model should satisfy two criteria:
o It should accurately reflect the behaviour of defects, and
o it should be computationally efficient in terms of fault simulation and test pattern
generation.

 Many fault models have been proposed but, unfortunately, no single fault model accurately
reflects the behaviour of all possible defects that can occur. As a result, a combination of
different fault models is often used in the generation and evaluation of test vectors and
testing approaches developed for VLSI devices
 The use of fault models has some advantages and also some disadvantages.

Advantages
 Technology independent.
 Works quite well in practice.

Disadvantages
 May fail to identify certain process specific faults, for instance CMOS floating gates.
 Detects static faults only.

Types of Fault Models


 Stuck at Faults
 Bridging fault
 Path-delay fault
 Transition fault
 Transistor Fault


o Stuck-open fault
o Stuck-short fault

 Stuck-at fault model: a net is fixed at a constant value (0 or 1); stuck-at-0 and stuck-at-1.
o Single stuck-at fault model
o Multiple stuck at fault model

Single Stuck-at Fault Model: Fanouts

figure 4.1.1 stuck-at model

For a given fault model there will be k different types of faults that can occur at each potential
fault site (k = 2 for most fault models). A given circuit contains n possible fault sites, depending
on the fault model. Assuming that there can be only one fault in the circuit, the total number of
possible single faults, referred to as the single-fault model or single-fault assumption, is given by:
Number of single faults = k × n
In reality, of course, multiple faults may occur in the circuit. The total number of possible
combinations of multiple faults, referred to as the multiple-fault model, is given by:
Number of multiple faults = (k + 1)^n − 1
since each fault site can be fault-free or have any one of the k fault types, and the case where
all sites are fault-free is excluded.
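Both counts follow directly from the standard formulas k × n and (k + 1)^n − 1; a quick illustrative check for a 9-line circuit with k = 2 (the function names are ours):

```python
def single_faults(n, k=2):
    """Single-fault model: k fault types at each of n fault sites."""
    return k * n

def multiple_faults(n, k=2):
    """Multiple-fault model: each site is fault-free or has one of k
    fault types, excluding the all-fault-free circuit: (k+1)^n - 1."""
    return (k + 1) ** n - 1

print(single_faults(9))    # 18, as in the 9-line example below
print(multiple_faults(9))  # 19682
```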

Stuck-At Faults
The stuck-at fault is a logical fault model that has been used successfully for decades. A stuck-at
fault affects the state of logic signals on lines in a logic circuit, including primary inputs (PIs),
primary outputs (POs), internal gate inputs and outputs, fanout stems (sources), and fanout
branches. A stuck-at fault transforms the correct value on the faulty signal line to appear to be stuck


at a constant logic value, either a logic 0 or a logic 1, referred to as stuck-at-0 (SA0) or stuck-at-1
(SA1), respectively. Consider the example circuit shown in Figure 1.7, where the nine signal lines
representing potential fault sites are labeled alphabetically. There are 18 (2×9) possible faulty
circuits under the single-fault assumption. Table 1.1 gives the truth tables for the fault-free circuit
and the faulty circuits for all possible single stuck-at faults. It should be noted that, rather than a
direct short to a logic 0 or logic 1 value, the stuck-at fault is emulated by disconnecting the
source for the signal and connecting it to a constant logic 0 or 1 value. This can be seen in Table 1.1,
where SA0 on fanout branch line d behaves differently from SA0 on fanout branch line e, while the
single SA0 fault on the fanout source line b behaves as if both fanout branches, line d and line e,
are SA0.

Although it is physically possible for a line to be SA0 or SA1, many other defects within a circuit
can also be detected with test vectors developed to detect stuck-at faults. The idea of N-detect
single stuck-at fault test vectors was proposed to detect more defects not covered by the stuck-at
fault model. In an N-detect set of test vectors, each single stuck-at fault is detected by at least N
different test vectors; however, test vectors generated using the stuck-at fault model do not
necessarily guarantee the detection of all possible defects, so other fault models are needed.

Transistor Faults
At the switch level, a transistor can be stuck-open or stuck-short, also referred to as
stuck-off or stuck-on, respectively. The stuck-at fault model cannot accurately reflect the behavior
of stuck-open and stuck-short faults in CMOS logic circuits because of the multiple transistors used
to construct CMOS logic gates. To illustrate this point, consider the two-input CMOS NOR gate
shown in Figure 4.1.2. Suppose transistor N2 is stuck-open. When the input vector AB = 01 is applied,
output Z should be a logic 0, but the stuck-open fault causes Z to be isolated from ground
(VSS). Because transistors P2 and N1 are not conducting at this time, Z keeps its previous state,
either a logic 0 or 1. In order to detect this fault, an ordered sequence of two test vectors AB =


00→01 is required. For the fault-free circuit, the input 00 produces Z = 1 and 01 produces Z = 0,
so that a falling transition appears at Z. For the faulty circuit, however, while the test vector 00
produces Z = 1, the subsequent test vector 01 will retain Z = 1 without a falling transition, so
that the faulty circuit behaves like a level-sensitive latch. Thus, a stuck-open fault in a CMOS
combinational circuit requires a sequence of two vectors for detection rather than the single test
vector of a stuck-at fault. Stuck-short faults, on the other hand, produce a conducting path
between VDD and VSS. For example, if transistor N2 is stuck-short, there will be a conducting path
between VDD and VSS for the test vector 00. This creates a voltage divider at the output node Z,
where the logic-level voltage is a function of the resistances of the conducting transistors. This
voltage may or may not be interpreted as an incorrect logic level by the gate inputs driven by the
gate with the transistor fault; however, stuck-short transistor faults may be detected by monitoring
the power supply current during steady state, referred to as IDDQ. This technique of monitoring the
steady-state power supply current to detect transistor stuck-short faults is referred to as IDDQ
testing.
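The memory behaviour of the stuck-open NOR gate, and why the ordered pair 00→01 detects it, can be reproduced with a toy switch-level model (illustrative only; the transistor naming follows the text):

```python
def nor_switch_level(a, b, prev_z, n2_open=False):
    """Toy switch-level 2-input CMOS NOR. Pull-up network: P1, P2 in
    series (conducts when A=0 and B=0); pull-down: N1, N2 in parallel.
    With N2 stuck-open its pull-down branch never conducts, and if
    neither network conducts the output node floats and keeps its
    previous value (charge retention)."""
    pull_up = (a == 0) and (b == 0)
    pull_down = (a == 1) or ((b == 1) and not n2_open)
    if pull_up and not pull_down:
        return 1
    if pull_down and not pull_up:
        return 0
    return prev_z  # floating node: latch-like memory behaviour

# Fault-free: 00 -> Z=1, then 01 -> Z=0 (a falling transition at Z).
z = nor_switch_level(0, 0, prev_z=0)
assert z == 1
assert nor_switch_level(0, 1, prev_z=z) == 0
# N2 stuck-open: 01 leaves Z floating, so Z stays 1 -> fault detected.
z = nor_switch_level(0, 0, prev_z=0, n2_open=True)
assert nor_switch_level(0, 1, prev_z=z, n2_open=True) == 1
```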

figure 4.1.2 transistor fault model

Bridging Fault
A short between two elements is commonly referred to as a bridging fault. These elements can be
transistor terminals or connections between transistors and gates. The case of an element being
shorted to power (VDD) or ground (VSS) is equivalent to the stuck-at fault model; however, when
two signal wires are shorted together, bridging fault models are required. In the first bridging fault
model proposed, the logic value of the shorted nets was modelled as a logical AND or OR of the
logic values on the shorted wires. This model is referred to as the wired-AND/wired-OR bridging


fault model. The wired-AND bridging fault means the signal net formed by the two shorted lines
will take on a logic 0 if either shorted line is sourcing a logic 0, while the wired-OR bridging fault
means the signal net will take on a logic 1 if either of the two lines is sourcing a logic 1.
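The two bridging models reduce to simple AND/OR composition of the shorted drivers; an illustrative sketch:

```python
def wired_and(a, b):
    """Wired-AND bridging: the shorted net resolves to 0 if either
    driver sources a 0."""
    return a & b

def wired_or(a, b):
    """Wired-OR bridging: the shorted net resolves to 1 if either
    driver sources a 1."""
    return a | b

# When the two drivers disagree, the two models predict opposite values,
# which is why the correct model depends on the drive strengths involved.
assert wired_and(0, 1) == 0
assert wired_or(0, 1) == 1
```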

figure 4.1.3 bridging fault model

TABLE 1.3 Truth Tables for Bridging Fault Models

Path Delay
 A circuit path that fails to transition within the required time between the launch and
capture clocks is said to have a path delay fault.


 The delay defect in the circuit is assumed to cause the cumulative delay of a combinational
path to exceed some specified duration. The combinational path begins at a primary input or
a clocked flip-flop, contains a connected chain of gates, and ends at a primary output or a
clocked flip-flop. The specified time duration can be the duration of the clock period (or
phase), or the vector period. The propagation delay is the time that a signal event (transition)
takes to traverse the path. Both switching delays of devices and transport delays of
interconnects on the path contribute to the propagation delay.
 The path delay fault model tests and characterizes critical timing paths in a design. Path
delay fault tests exercise the critical paths at-speed (the full operating speed of the chip) to
detect whether a path is too slow because of manufacturing defects or variations.
 Path delay fault testing targets physical defects that might affect distributed regions of a
chip. For example, incorrect field oxide thicknesses could lead to slower signal propagation
times, which could cause transitions along a critical path to arrive too late.
In order to examine the timing operation of a circuit we should examine signal transitions. Delay
tests therefore consist of vector pairs: an initialization vector followed by a launch vector.
• All input transitions occur at the same time in Figure 12.1. Thus, the duration of the transient
region at the input is zero. This, of course, is an idealized illustration though it closely represents
the real situation. The transient region at the output contains multiple transitions that are separated
in time. The position of each output transition depends upon the delay of some input to output
combinational path.
• The right edge of the output transition region (grey shaded area in Figure 12.1) is determined by
the last transition, or the delay of the longest combinational path activated by the current input
vector-pair. Considering all possible input vector-pairs, “the longest delay combinational path” of
the circuit is known as the critical path. There can be more critical paths than one if several paths
meet the maximum delay criterion. The delay of critical paths determines the smallest clock period
at which the circuit can function correctly.
• For a manufactured circuit to function correctly, the output transition region for any input vector-
pair must not expand beyond the clock period. Otherwise, the circuit is said to have a delay fault. A
delay fault means that the delay of one or more paths (not necessarily the critical path) exceeds the
clock period.

figure 4.1 .4 path delay model


Figure 4.1.5 path delay pattern

Transition Delay
A transition fault on a line makes the signal change on that line slow. The two possible faults are
slow-to-rise and slow-to-fall. To detect a slow-to-rise fault on a line, we take a test for a
stuck-at-0 fault on that line. This test sets the line to 1 in the fault-free circuit and propagates
the state of the line to a primary output. Let us call this vector V2 and precede it with any vector
V1 that sets the line to 0. Now the vector pair (V1, V2) is a test for the slow-to-rise transition
fault on the line. Note that V1 sets the line to 0 and V2 sets it to 1; V2 also creates an
observation path to a primary output. If the line is slow to rise, the effect will be observed as a
0 at the output instead of the expected 1. The basic assumption in this test is that the faulty
delay of the signal rise has to be large, since the observation path may be, and often is, a short
path. Besides, the effects of hazards and glitches can interfere with the observation of the output
value. As a result, tests for transition faults can detect localized (spot) delay defects of large
(gross) delay amounts. Because they sensitize short paths, these tests may fail to detect
distributed defects.
The transition delay fault model is used to generate test patterns to detect single-node slow-to-rise
and slow-to-fall faults. For this model, TetraMAX ATPG launches a logical transition upon
completion of a scan load operation and uses a capture clock procedure to observe the transition
results.
The transition-delay fault model is similar to the stuck-at fault model, except that it attempts to
detect slow-to-rise and slow-to-fall nodes rather than stuck-at-0 and stuck-at-1 nodes. A slow-to-rise
fault at a node means that a transition from 0 to 1 on that node does not produce the correct
results at the maximum operating speed of the device. Similarly, a slow-to-fall fault means that a
transition from 1 to 0 on a node does not produce the correct results at the maximum operating
speed of the device.
To detect a slow-to-rise or slow-to-fall fault, the ATPG process launches a transition with one
clock edge and then captures the effect of that transition with another clock edge. The amount of
time between the launch and capture edges should test the device for correct behavior at the
maximum operating speed.


IDDQ
Why Do IDDQ Testing?
IDDQ testing can detect certain types of circuit faults in CMOS circuits that are difficult or
impossible to detect by other methods. IDDQ testing, when used to supplement standard functional
or scan testing, provides an additional measure of quality assurance against defective devices.
IDDQ testing detects circuit faults by measuring the amount of current drawn by a CMOS device
in the quiescent state (a value commonly called “IddQ”). If the circuit has been manufactured
correctly, this amount of current is extremely small. A significant amount of current indicates the
presence of one or more defects in the device.
The IDDQ fault model assumes that a circuit defect will cause excessive current drain due to an
internal short circuit from a node to ground or to a power supply. For this model, TetraMAX ATPG
does not attempt to observe the logical results at the device outputs. Instead, it tries to toggle
as many nodes as possible into both states while avoiding conditions that violate quiescence, so
that defects can be detected by the excessive current drain that they cause.
IDDQ Testing Methodology
IDDQ testing is different from traditional circuit testing methods such as functional or stuck-at
testing. Instead of looking at the logical behavior of the device, IDDQ testing checks the integrity
of the nodes in the design. It does this by measuring the current drain of the whole chip at times
when the circuit is quiescent. Even a single defective node can easily cause a measurable amount of
excessive current drain. In order to place the circuit into a known state, the IDDQ test sequence
uses ATPG techniques to scan in data, but it does not scan out any data.


4.2 Fault Coverage and Test Coverage

4.2.1 Fault Coverage


 Fault coverage is the most widely used quantitative measure of test set quality. Fault
coverage is defined as the percentage of detected faults out of all faults:

Fault Coverage = Number of Detected Faults / Total Number of Faults
 The quality of the fault coverage depends on how well a device’s circuitry can be observed
and controlled.

4.2.2 Test Coverage


 Test coverage represents a measure of test pattern quality. Test coverage is defined as the
percentage of detected faults out of detectable faults:

Test Coverage = Number of Detected Faults / Number of Detectable Faults

 Number of Detectable Faults = Total Faults – Undetectable Faults

4.3 Controllability and Observability


Testability is a relative measure of the effort or cost of testing a logic circuit. The whole of
DFT is built on two pillars: controllability and observability. Controllability for a digital
circuit is defined as the difficulty of setting a particular logic signal to a 0 or a 1.
Observability is defined as the difficulty of observing the state of a logic signal.
Controllability: the ability to establish a desired value (0 or 1) at any particular node of the
design. A node is said to be controllable if we can force a value of either 0 or 1 on it.
Observability: the ability to actually observe the value at a particular node, whether 0 or 1, by
applying some pre-defined inputs. An SoC is a colossal design, and one can observe a node only via
the output ports; there must be a mechanism to excite a node, propagate its value out of the SoC
via some output port, and then observe it.
Ideally, every node of the design should be both controllable and observable. These measures are
important for circuit testing because, while there are methods of observing the internal signals of
a circuit directly, they are prohibitively expensive. Internal signals are controlled by setting
signals at primary inputs (PIs), and we must observe internal signals by arranging to propagate
their values to primary outputs (POs). The controllability and observability measures are


useful because they approximately quantify how hard it is to set and observe internal signals of a
circuit.
Goldstein invented an algorithm to determine the difficulty of controlling (called controllability)
and observing (called observability) signals in digital circuits. Thigpen and Goldstein implemented
a computer program to compute controllabilities and observabilities. Goldstein was the first to
propose a systematic, efficient algorithm to compute these measures, which is called SCOAP.
It is still widely used.

In the above example, if we can control the flops such that the combinational cloud produces 1 at
both inputs of the AND gate, we say that node X is controllable for 1. Similarly, if we can control
any input of the AND gate to 0, we say that node X is controllable for 0.
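SCOAP controllability can be sketched for simple gates: primary inputs get CC0 = CC1 = 1; an AND output needs all inputs at 1 (sum of input CC1s) but only one input at 0 (minimum of input CC0s), plus 1 for the gate level; OR is the dual. An illustrative sketch (the example circuit is our own):

```python
# SCOAP combinational controllability (CC0, CC1): lower = easier to control.
PI = (1, 1)  # (CC0, CC1) of a primary input

def cc_and(*ins):
    cc0 = min(i[0] for i in ins) + 1   # one 0 input suffices
    cc1 = sum(i[1] for i in ins) + 1   # all inputs must be 1
    return (cc0, cc1)

def cc_or(*ins):
    cc0 = sum(i[0] for i in ins) + 1   # all inputs must be 0
    cc1 = min(i[1] for i in ins) + 1   # one 1 input suffices
    return (cc0, cc1)

def cc_not(i):
    return (i[1] + 1, i[0] + 1)        # invert roles of 0 and 1

# Example: X = AND(OR(a, b), c) with a, b, c primary inputs.
g = cc_or(PI, PI)   # (3, 2): forcing the OR to 0 needs both inputs at 0
x = cc_and(g, PI)   # CC0 = min(3, 1) + 1 = 2; CC1 = 2 + 1 + 1 = 4
assert g == (3, 2)
assert x == (2, 4)
```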

4.4 Scan:
Scan design, the most widely used structured DFT methodology, attempts to improve the testability
of a circuit by improving the controllability and observability of the storage elements in a
sequential design. Typically, this is accomplished by converting the sequential design into a scan
design with three modes of operation: normal mode, shift mode, and capture mode.
Scan testing is a method to detect various manufacturing faults in the silicon. Although many types
of manufacturing faults may exist, the focus is mainly on detecting faults such as shorts and opens.
Method

Add test mode control signal(s) to circuit


Connect flip-flops to form shift registers in test mode

Make inputs/outputs of the flip-flops in the shift register controllable and observable

A multiplexer is added at the input of the flip-flop with one input of the multiplexer acting as the
functional input D, while other being Scan-In (SI). The selection between D and SI is governed by
the Scan Enable (SE) signal.

Figure 4.4.1 Scan flip-flop

Advantages:
• Design automation
• High fault coverage; helpful in diagnosis
• Hierarchical – scan-testable modules are easily combined into large scan-testable systems
• Moderate area (~10%) and speed (~5%) overheads
Reason for Scan:
 Sequential circuits have poor controllability and poor observability. The best fault coverage
results are achieved when all nodes in the design are controllable and observable, but test
generation for sequential circuits is difficult. To make all the flip-flops directly controllable
and observable, scan is used.
 Scan design helps in identifying the design practices which affect the targeted fault coverage
in order to achieve design PPM.
 As scan design provides access to internal storage elements, test generation complexity is
reduced.

Using this basic scan flip-flop as the building block, all the flops are connected in the form of a
chain, which effectively acts as a shift register. The first flop of the scan chain is connected to
the scan-in port and the last flop is connected to the scan-out port. The figure depicts one such
scan chain where


the clock signal is depicted in red, the scan chain in blue, and the functional path in black. Scan
testing is done in order to detect any manufacturing fault in the combinational logic block. In
order to do so, the ATPG tool tries to excite each and every node within the combinational logic
block by applying input vectors at the flops of the scan chain.
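The shift/capture behaviour of such a chain can be modelled in a few lines (illustrative sketch; a real chain is just D flip-flops with input multiplexers):

```python
def scan_cycle(chain, scan_in, scan_enable, data_in):
    """One clock edge for a chain of muxed-D scan flops.
    In shift mode (SE=1) the chain acts as a shift register fed by
    scan_in; in capture mode (SE=0) each flop loads its functional
    data input. Returns (new_chain_state, scan_out)."""
    scan_out = chain[-1]          # last flop drives the scan-out port
    if scan_enable:
        new_chain = [scan_in] + chain[:-1]
    else:
        new_chain = list(data_in)
    return new_chain, scan_out

# Shift the pattern into a 3-flop chain through the scan-in port:
# the last bit shifted in ends up in the first flop.
state = [0, 0, 0]
for bit in (1, 0, 1):
    state, _ = scan_cycle(state, bit, scan_enable=1, data_in=state)
assert state == [1, 0, 1]
# One capture cycle (SE=0) loads the functional inputs instead.
state, out = scan_cycle(state, 0, scan_enable=0, data_in=[1, 1, 0])
assert (state, out) == ([1, 1, 0], 1)
```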

4.5 SCAN CELL DESIGNS


A scan cell has two different input sources that can be selected. The first input, data input, is driven
by the combinational logic of a circuit, while the second input, scan input, is driven by the output of
another scan cell in order to form one or more shift registers called scan chains. These scan chains
are made externally accessible by connecting the scan input of the first scan cell in a scan chain to a
primary input and the output of the last scan cell in a scan chain to a primary output. Because there
are two input sources in a scan cell, a selection mechanism must be provided to allow a scan cell to
operate in two different modes: normal/capture mode and shift mode. In normal/capture mode, data
input is selected to update the output. In shift mode, scan input is selected to update the output. This
makes it possible to shift in an arbitrary test pattern to all scan cells from one or more primary
inputs while shifting out the contents of all scan cells through one or more primary outputs. There
are three widely used scan cell designs. They are muxed-D scan, clocked-scan, and level-sensitive
scan design (LSSD).

4.5.1 Muxed-D Scan Cell


The D storage element is one of the most widely used storage elements in logic design. Its basic
function is to pass a logic value from its input to its output when a clock is applied. A D flip-flop is


an edge-triggered D storage element, and a D latch is a level-sensitive D storage element. The most
widely used scan cell replacement for the D storage element is the muxed-D scan cell. Figure 4.5.1
shows an edge-triggered muxed-D scan cell design. This scan cell is composed of a D flip-flop
and a multiplexer. The multiplexer uses a scan enable (SE) input to select between the data input
(DI) and the scan input (SI).
In normal/capture mode, SE is set to 0; the value present at the data input DI is captured into the
internal D flip-flop when a rising clock edge is applied. In shift mode, SE is set to 1; SI is now
used to shift new data into the D flip-flop while the content of the D flip-flop is shifted out.
Sample operation waveforms are shown in Figure 4.5.1.
The next figure shows a level-sensitive/edge-triggered muxed-D scan cell design, which can be used
to replace a D latch in a scan design. This scan cell is composed of a multiplexer, a D latch, and
a D flip-flop. Again, the multiplexer uses a scan enable input SE to select between the data input
DI and the scan input SI. In this case, however, the shift operation is conducted in an
edge-triggered manner, while the normal and capture operations are conducted in a level-sensitive
manner.
Major advantages of using muxed-D scan cells are their compatibility to modern designs using
single-clock D flip-flops, and the comprehensive support provided by existing design automation
tools. The disadvantage is that each muxed-D scan cell adds a multiplexer delay to the functional
path.

Figure 4.5.1 Edge-triggered muxed-D scan cell design and operation


Fig. Level-sensitive/edge-triggered muxed-D scan cell design.

4.5.2 Clocked-Scan Cell


An edge-triggered clocked-scan cell can also be used to replace a D flip-flop in a scan design.
Similar to a muxed-D scan cell, a clocked-scan cell also has a data input DI and a scan input SI;
however, in the clocked-scan cell, input selection is conducted using two independent clocks, the
data clock DCK and the shift clock SCK, as shown in Figure 4.5.2a.
In normal/capture mode, the data clock DCK is used to capture the value present at the data input
DI into the clocked-scan cell. In shift mode, the shift clock SCK is used to shift in new data from the
scan input SI into the clocked-scan cell, while the current content of the clocked-scan cell is being
shifted out. Sample operation waveforms are shown in Figure 4.5.2b.

Figure 4.5.2 Clocked-scan cell design and operation: (a) clocked-scan cell, and (b) sample waveforms.
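The two-clock selection can be sketched the same way (an illustrative software model, not RTL; the names are my own): instead of an SE mux, whichever clock is pulsed decides which input is sampled.

```python
# Behavioral model of a clocked-scan cell: pulsing DCK samples the data
# input, pulsing SCK samples the scan input. Only one of the two clocks
# is pulsed at a time.

class ClockedScanCell:
    def __init__(self):
        self.q = 0  # stored cell value

    def pulse_dck(self, di):
        """Normal/capture mode: the data clock captures DI."""
        self.q = di
        return self.q

    def pulse_sck(self, si):
        """Shift mode: the shift clock shifts in SI."""
        self.q = si
        return self.q

cell = ClockedScanCell()
cell.pulse_dck(1)  # capture: cell now holds 1
cell.pulse_sck(0)  # shift: cell now holds 0
```

Because DI goes straight into the flop with no mux, the model reflects the zero-delay data path mentioned below; the cost is the second clock (SCK) that must be routed to every cell.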
As in the case of muxed-D scan cell design, a clocked-scan cell can also be made to support scan
replacement of a D latch. The major advantage of using a clocked-scan cell is that it results in no
performance degradation on the data input. The major disadvantage, however, is that it requires
additional shift-clock routing.


4.5.3 LSSD Scan Cell


While muxed-D scan cells and clocked-scan cells are generally used for edge-triggered, flip-flop-based
designs, an LSSD scan cell is used for level-sensitive, latch-based designs.
Figure 4.5.3 shows a polarity-hold shift register latch (SRL) design that can be used as an LSSD scan
cell. This scan cell contains two latches, a master two-port D latch L1 and a slave D latch L2.
Clocks C, A, and B are used to select between the data input D and the scan input I to drive +L1
and +L2. In an LSSD design, either +L1 or +L2 can be used to drive the combinational logic of the
design. In order to guarantee race-free operation, clocks A, B, and C are applied in a
non-overlapping manner. In designs where +L1 is used to drive the combinational logic, the master latch
L1 uses the system clock C to latch system data from the data input D and to output this data onto
+L1. In designs where +L2 is used to drive the combinational logic, clock B is used after clock A to
latch the system data from latch L1 and to output this data onto +L2. In both cases, capture mode
uses both clocks C and B to output system data onto +L2. Finally, in shift mode, clocks A and B are
used to latch scan data from the scan input I and to output this data onto +L1, and then to latch the
scan data from latch L1 and to output this data onto +L2, which is then used to drive the scan input
of the next scan cell. Sample operation waveforms are shown in Figure 4.5.3.

Figure 4.5.3 LSSD scan cell design and operation
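The clocking sequence described above can be sketched as a small behavioral model (illustrative only, not RTL; the latch and clock names follow the text):

```python
# Behavioral model of an LSSD shift-register latch (SRL): clock C latches
# system data D into L1, scan clock A latches scan data I into L1, and
# clock B copies L1 into L2. In shift mode, A and B are pulsed alternately
# in a non-overlapping fashion.

class LSSDScanCell:
    def __init__(self):
        self.l1 = 0  # master two-port D latch L1 (drives +L1)
        self.l2 = 0  # slave D latch L2 (drives +L2 / next cell's scan input)

    def pulse_c(self, d):   # system clock C: data input D -> L1
        self.l1 = d

    def pulse_a(self, si):  # scan clock A: scan input I -> L1
        self.l1 = si

    def pulse_b(self):      # clock B: L1 -> L2
        self.l2 = self.l1

cell = LSSDScanCell()
cell.pulse_a(1)  # shift mode: scan data enters L1
cell.pulse_b()   # ...and moves to L2, ready to drive the next scan cell
```

The model also shows the capture path: pulsing C then B moves system data D through L1 onto +L2, matching the "clocks C and B" capture sequence in the text.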

The major advantage of using an LSSD scan cell is that it allows us to insert scan into a latch-based
design. In addition, designs using LSSD are guaranteed to be race-free, which is not the case for
muxed-D scan and clocked-scan designs. The major disadvantage, however, is that the technique
requires routing for the additional clocks, which increases routing complexity.
4.6 Scan Chains
The modified sequential cells are chained together to form one or more large shift registers,
called scan chains or scan paths.

Figure 4.6.1 scan chain

Scan operation:

Figure 4.6.2 scan chain operation

• Steps to follow in a scan operation:


 SE=1: Shift in the test data.
 SE=0: Apply stimulus to the primary inputs.
 SE=0: Measure the primary outputs.
 SE=0: Pulse the clock to capture the data.
 SE=1: Shift out the captured data and load new data.
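The steps above can be sketched as one load-capture-unload cycle (a plain software model; the chain is a list of flip-flop states and `capture_fn` stands in for the combinational logic — both names are my own):

```python
# One scan cycle: shift a stimulus in (SE=1), pulse the clock once to
# capture the combinational response (SE=0), then read the chain, which
# would be shifted out while the next stimulus is shifted in (SE=1).

def scan_cycle(chain, stimulus, capture_fn):
    for bit in stimulus:          # SE=1: serial shift, one bit per clock
        chain.insert(0, bit)
        chain.pop()               # oldest bit falls off the scan-out end
    chain[:] = capture_fn(chain)  # SE=0: one capture clock pulse
    return chain.copy()           # SE=1: response to be shifted out

# Toy "combinational logic": invert every pseudo-primary input.
chain = [0, 0, 0]
response = scan_cycle(chain, [1, 0, 1], lambda bits: [1 - b for b in bits])
# response == [0, 1, 0]
```

The tester compares the shifted-out response against the expected value computed by the ATPG tool; any mismatch flags a defect in the combinational block between the scan cells.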


4.7 Difference between Full Scan and Partial Scan


A design where all storage elements are selected for scan insertion is called a full-scan design. A
design where almost all (e.g., more than 98%) storage elements are selected is called an
almost-full-scan design. A design where some storage elements are selected and sequential ATPG is
applied is called a partial-scan design. A partial-scan design where storage elements are selected
in such a way as to break all sequential feedback loops, and to which combinational ATPG can be
applied, is further classified as a pipelined, feed-forward, or balanced partial-scan design. As
silicon prices have continued to drop since the mid-1990s with the advent of deep-submicron
technology, the dominant scan architecture has shifted from partial-scan design to full-scan design.

Full-Scan Design
With a full-scan design technique, all sequential cells in the design are modified to perform a serial
shift function. Sequential elements that are not scanned are treated as black box cells (cells with
unknown function). Full scan divides a sequential design into combinational blocks, as shown in
Figure 4.7.1. Ovals represent combinational logic; rectangles represent sequential logic. The full-scan
diagram shows the scan path through the design.

Figure 4.7.1 full scan design


Through pseudo-primary inputs, the scan path enables direct control of inputs to all combinational
blocks. The scan path enables direct observability of outputs from all combinational blocks through
pseudo-primary outputs. You can use the efficient combinational capabilities of TetraMAX ATPG
to achieve high test coverage results on a full-scan design.


Full Scan Problems


 Area overhead
 Possible performance degradation
 High test application time
 Power dissipation

Partial-Scan Design
 The basic idea is to select a subset of flip-flops for scan, which lowers the overhead (area and
speed).
 Select scan flip-flops so as to simplify sequential ATPG.
 Overhead is about 25% less than full scan.
 Allows simultaneous optimization of area, timing, and testability.

With a partial-scan design technique, the scan chains contain some, but not all, of the sequential
cells in the design. A partial-scan technique offers a trade-off between the maximum achievable
test coverage and the effect on design size and performance.
The default ATPG mode of TetraMAX ATPG, called Basic-Scan ATPG, performs combinational
ATPG. To get good test coverage in partial-scan designs, you need to use Fast-Sequential or
Full-Sequential ATPG. The sequential ATPG processes perform propagation of faults through
nonscan elements. For more information, see “ATPG Modes”.
Partial scan divides a complex sequential design into simpler sequential blocks, as shown in Figure 3.
Ovals represent combinational logic; rectangles represent sequential logic. The partial-scan diagram
shows the scan path through the design after sequential ATPG has been performed.
Typically, a partial-scan design does not allow test coverage to be as high as for a similar full-scan
design. The level of test coverage for a partial-scan design depends on the location and number of
scan registers in that design, and the ATPG effort level selected for the Fast-Sequential or
Full-Sequential ATPG process.


Figure 3 Scan Path Through a Partial-Scan Design

4.8 Automatic Test Pattern Generation


ATPG generates test patterns and provides test coverage statistics for the generated pattern
set.
ATPG for combinational circuits is well understood; it is usually possible to generate test vectors
that provide high test coverage for combinational designs.
Combinational ATPG tools can use both random and deterministic techniques to generate test
patterns for stuck-at faults. By default, TetraMAX ATPG only uses deterministic pattern
generation; using random pattern generation is optional.
During random pattern generation, the tool assigns input stimuli in a pseudo-random manner,
then fault-simulates the generated vector to determine which faults are detected. As the number
of faults detected by successive random patterns decreases, ATPG can change to a
deterministic technique.
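The random-pattern phase described above can be sketched on a toy circuit (illustrative only; a 2-input AND gate with a six-fault stuck-at universe, and a short pre-drawn pattern list stands in for the pseudo-random source):

```python
# Fault-simulation loop: evaluate the fault-free circuit, then each
# undetected fault, for every pattern; a fault is detected when its
# output differs from the good-machine output. A real tool would switch
# to deterministic ATPG once detections per pattern taper off.

def and_gate(a, b, fault=None):
    """Evaluate the gate, optionally with one stuck-at fault injected."""
    values = {'a': a, 'b': b}
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]          # input stuck at 0 or 1
    z = values['a'] & values['b']
    if fault and fault[0] == 'z':
        z = fault[1]                         # output stuck at 0 or 1
    return z

faults = [(node, v) for node in ('a', 'b', 'z') for v in (0, 1)]
patterns = [(1, 1), (0, 1), (1, 0), (0, 0)]  # stand-in for random draws
detected = set()
for a, b in patterns:
    good = and_gate(a, b)                    # fault-free response
    for f in faults:
        if f not in detected and and_gate(a, b, f) != good:
            detected.add(f)

print(f"{len(detected)}/{len(faults)} faults detected")  # 6/6
```

Pattern (1,1) alone detects the three stuck-at-0 faults; the remaining patterns pick up the stuck-at-1 faults, which is exactly the per-pattern fault-simulation bookkeeping described above.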
During deterministic pattern generation, the tool uses a pattern generation process based on

Design For Testability Page 45


Sykatiya Technologies PVT.LTD.

path-sensitivity cone timepts to generate a test vector that detects a specific fault in the design.
After generating a vector, the tool fault-simulates the vector to determine the complete set of
faults detected by the vector. Test pattern generation continues until all faults either have been
detected or have been identified as undetectable by the process.
Because of the effects of memory and timing, ATPG is much more difficult for sequential circuits
than for combinational circuits. It is often not possible to generate high test coverage test vectors
for complex sequential designs, even when you use sequential ATPG. Sequential ATPG tools
use deterministic pattern generation algorithms based on extended applications of the
path-sensitization concepts.

CHAPTER 5

5.1 DRC Violations

Violations that prevent scan insertion:
•Uncontrollable Clocks
•Asynchronous Uncontrollable Pins
•Clock Used As Data
•Black Box Feeds Into Clock or Asynchronous Control

Violations that prevent data capture:
•Source Register Launch Before Destination Register Capture
•Registered Clock-Gating Circuitry
•Three-State Contention
•Clock Feeding Multiple Register Inputs
•Combinational Feedback Loops

Violations that reduce fault coverage:
•Clocks That Interact With Register Input
•Multiple Clocks That Feed Into Latches and Flip-Flops
•Black Boxes


Violations that prevent Scan Insertion


Violations that prevent data capture


• Recommended Solution
 During the shift operation, certain modifications must be made to each three-state bus in order
to ensure that only one driver controls the bus.
 The value of a floating bus is unpredictable, so a pull-up, pull-down, or bus-keeper cell is used
to hold it at a known value.


Violations that reduce Fault Coverage


5.2 Lock-up Latches


What are lock-up latches: A lock-up latch is an important element in scan-based designs, especially for hold
timing closure in shift mode. Lock-up latches are necessary to avoid skew problems during the shift phase of
scan-based testing. A lock-up latch is nothing more than a transparent latch used intelligently in places
where clock skew is very large and meeting hold timing is a challenge due to a large uncommon clock path.
That is why lock-up latches are used to connect two flops in a scan chain that have excessive clock
skew or uncommon clock paths, as the probability of hold failure is high in such cases. For instance, the
launching and capturing flops may belong to two different domains (as shown in the figure below).
Functionally, they might not be interacting, so the clocks of these two domains will not be balanced and
will have a large uncommon path. But in scan-shift mode, they interact while shifting data in and out. Had
there been no lock-up latches, it would have been very difficult for the STA engineer to close timing on a scan
chain crossing domains. Also, the probability of chip failure would have been high, as the large uncommon
path between the clocks of the two flops leads to large on-chip variations. That is why lock-up latches can
be referred to as the soul mate of scan-based designs.

Figure 5.2.1 lockup latch


Where to use a lock-up latch: As mentioned above, a lock-up latch is used where there is a high probability
of hold failure in scan-shift mode. So, possible scenarios where lock-up latches are to be inserted are:

 Scan chains from different clock domains: In this case, since the two domains do not interact
functionally, both the clock skew and the uncommon clock path will be large.
 Flops within the same domain, but at remote places: Flops within a scan chain which are at remote places are
likely to have more uncommon clock path.

In both of the above cases, there is a great chance that the skew between the launch and capture
clocks will be high. Either the launch or the capture clock may have the greater latency. If the
capture clock has greater latency than the launch clock, then the hold check will be as shown in the timing
diagram in Figure 5.3.3. If the skew is large, it will be a tough task to meet hold timing without lock-up
latches.

Figure 5.2.2 scope for a lock-up latch insertion

Figure 5.3.3: Timing diagram showing setup and hold checks for path crossing from domain 1 to domain 2

Positive or negative level latch? It depends on the path into which you are inserting the lock-up latch.
Since lock-up latches are inserted for hold timing, they are not needed where the path starts at a positive
edge-triggered flop and ends at a negative edge-triggered flop. It is to be noted that you will never find scan
paths originating at a positive edge-triggered flop and ending at a negative edge-triggered flop, due to
DFT-specific reasons. Similarly, they are not needed where the path starts at a negative edge-triggered flop and
ends at a positive edge-triggered flop. For the remaining two kinds of flop-to-flop paths, lock-up latches are
required. The polarity of the lock-up latch needs to be such that it remains open during the inactive phase of the
clock. Hence,

 For flops triggering on the positive edge of the clock, you need a latch that is transparent when the clock is
low (negative level-sensitive lock-up latch).

 For flops triggering on the negative edge of the clock, you need a latch that is transparent when the clock
is high (positive level-sensitive lock-up latch).
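The rule in these two bullets reduces to a one-line lookup (an illustrative helper; the function name and string labels are my own, not from any tool's API):

```python
# Lock-up latch polarity: the latch must be transparent during the
# inactive phase of the clock that triggers the surrounding flops.

def lockup_latch_polarity(flop_edge):
    """Map the flops' active clock edge to the latch transparency level."""
    return {"posedge": "negative-level",        # transparent while clock low
            "negedge": "positive-level"}[flop_edge]  # transparent while high

lockup_latch_polarity("posedge")  # -> 'negative-level'
```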

Who inserts a lock-up latch: These days, tools exist that automatically add lock-up latches where a scan
chain crosses domains. However, for cases where a lock-up latch is to be inserted in an intra-domain scan
chain (i.e., for flops having an uncommon path), it has to be inserted during physical implementation itself, as
physical information is not available during scan chain implementation.

Which clock should be connected to the lock-up latch: There are two possible ways in which we can connect
the clock pin of the inserted lock-up latch. It can have either the same clock as the launching flop or the
same clock as the capturing flop. Connecting the clock pin of the lock-up latch to the clock of the capturing
flop will not solve the problem, as discussed below.
 Lock-up latch and capturing flop having the same clock (will not solve the problem): In this case,
the setup and hold checks will be as shown in Figure 5.3.5. As is apparent from the waveforms, the hold check
between the domain 1 flop and the lock-up latch is still the same as it was between the domain 1 flop and the
domain 2 flop before. So, this is not the correct way to insert a lock-up latch.


Figure 5.3.4: Lock-up latch clock pin connected to clock of capturing flop

Figure 5.3.5: Timing diagrams for Figure 5.3.4

 Lock-up latch and launching flop having the same clock: As shown in Figure 5.3.6, connecting the
lock-up latch to the launch flop's clock reduces the skew between the domain 1 flop and the lock-up latch.
This hold check can be easily met, as both the skew and the uncommon clock path are low. The hold check between
the lock-up latch and the domain 2 flop is already relaxed, as it is a half-cycle check. So, we can say that the
correct way to insert a lock-up latch is to insert it closer to the launching flop and connect the launch domain
clock to its clock pin.

Figure 5.3.6: Lock-up latch clock pin connected to clock of launch flop

Figure 5.3.7: Waveforms for Figure 5.3.6


Why don't we add buffers: If the clock skew is large, it will take a number of buffers to meet the hold
requirement. In a typical scenario, the number of buffers becomes so large that it becomes a concern
for power and area. Also, since the skew/uncommon clock path is large, the variation due to OCV will be high,
so it is recommended to keep a bigger hold margin when signing off for timing. A lock-up latch
provides an area- and power-efficient solution for what a number of buffers together would not be able to
achieve.

Advantages of inserting lockup latches:


 Inserting lock-up latches helps in easier hold timing closure for scan-shift mode
 Robust method of hold timing closure where uncommon path is high between launch and capture
flops
 Power efficient and area efficient
 It improves yield as it enables the device to handle more variations.

5.3 Memory Shadow Logic

Why is shadow logic so problematic for ATPG tools to fault-grade properly? Let's first define the
concepts of observability and controllability as they relate to ATPG testing.
• Observability: A node is observable if you can predict the response on it and propagate the fault
effect to the primary outputs, where you can measure the response. A primary output is an output
that can be directly observed in the test environment.
• Controllability: A node is controllable if you can drive it to a specified logic value by setting the
primary inputs to specific values. A primary input is an input that can be directly controlled in the
test environment.
In Figure 5.1.1, we see that the input shadow logic of a memory cell is not observable, since it cannot be
captured by a scan chain or a primary output, but only by the memory cell. Further, the
output shadow logic of a memory cell is not controllable, since it cannot be driven by a scan chain
or a primary input, but only by the memory outputs.


Figure 5.1.1: Memory shadow logic

As SoC devices incorporate more memory components with deeper arrays and wider data
buses, the portion of the fault universe associated with shadow logic grows. If the
Design-for-Test (DFT) engineer does not handle this, the fault coverage of such devices becomes
unacceptable.
Standard Approaches
Let’s first review some of the standard approaches that have become popular in managing the
shadow logic surrounding embedded memories.
Mux Bypass
With this strategy, a bypass mux is placed on the RAM output data, which allows the RAM’s
data-in bus to be tied directly to its data-out bus as shown in Figure 3-1. This technique is very
easy to implement in logic, does a fine job of allowing the ATPG tool to cover both the input and
output shadow logic, and requires no special handling by the ATPG tool or the tester program.
However, it introduces a large number of gates and requires a lot of additional routing resources.
It also introduces a not-insignificant static delay on the read data bus.
Note that the “scan_mode” signal is active during both the scan update and scan shift portions of
ATPG. In a design with no other test modes, this is identical to the TESTMODE primary input
of the device.


Figure 5.2.1: Forced controllability


Forced Controllability
Another implementation worthy of consideration is simply to qualify the data out of the RAM
with the scan mode signal such that during scan, the data-out bus always contains known values.
The block diagram in Figure 3-2 details this concept. This concept is simple to implement, small
in gate count, and has a lower routing impact than the mux bypass method. However, it does
make the RAM data-out flip-flop uncontrollable for some stuck-at logic values so some fault
coverage is lost through this technique. Further, it doesn’t provide any observability to the input
shadow logic. And again, a static delay component is added to the read data bus, although
perhaps less than with the mux bypass method.

Figure 3-2 shows that during functional operation, the flip-flop after the data out of the RAM
receives the contents of the RAM unmodified. The scan signal is de-asserted and thus does not
affect the OR gate. However, during scan mode, a logic '1' is ORed with all of the
bits on the data-out bus, allowing for limited stuck-at testing of the output shadow logic.
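A sketch of this OR-gating in software (illustrative only, not RTL; the function name is my own):

```python
# Forced controllability: in scan mode each RAM data-out bit is ORed with
# a logic 1, so downstream capture flops always see a known value. The
# trade-off is that stuck-at-1 faults on these bits become untestable,
# which is the coverage loss noted in the text.

def forced_dout(ram_dout_bits, scan_mode):
    """Qualify the RAM read data with the scan-mode signal."""
    return [(bit | 1) if scan_mode else bit for bit in ram_dout_bits]

forced_dout([0, 1, 0], scan_mode=True)   # -> [1, 1, 1]: known values in scan
forced_dout([0, 1, 0], scan_mode=False)  # -> [0, 1, 0]: functional data intact
```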
Wrapper or Register Collar
A register collar is the most obvious way to add full controllability and observability to an
embedded RAM’s shadow logic. With this method, a set of non-functional flip-flops is added to
the design. On the input side of the RAM, these flops allow the ATPG tool to capture and shift
out the input shadow logic. On the output side, they allow the tool to control the driving of the
output shadow logic. These additional flops are shown in Figure 3-3 below.
DFTC can create and insert this wrapper automatically, assuming the proper DFTC options are
available, or it can be created manually. This additional circuitry allows the input shadow logic to
be observed, since a scan chain sinks the data out of this logic. It also allows the output shadow
logic to be controlled, since a scan chain sources this combinatorial logic. However, it is clear to
see in Figure 3-3 that this method adds a lot of overhead to the die and routing needs on the chip,
along with the static delays inherent in the bypass mux method.

Figure 5.3.1: Register collar wrapper

Smart Wrapper
In this implementation, a single shadow flop is used to create both the observability of the input
shadow logic and the controllability of the output shadow logic. Using an XOR of the inputs to
the RAM reduces the width of the bypass circuitry to the size of the read data. This method
inserts some logic and some routing, although less than the full collar approach, and there is still
the delay penalty due to the mux on the read data path


Figure 5.4.1: Smart wrapper
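The XOR reduction at the heart of the smart wrapper can be sketched as follows (an illustrative model, not RTL; the signal grouping and names are my own):

```python
# Smart-wrapper idea: XOR-reduce all of the RAM's input-side signals down
# to a single observation bit, so one shadow flop can make the entire
# input shadow logic observable by the scan chain.

from functools import reduce
from operator import xor

def observe_inputs(addr_bits, data_in_bits, ctrl_bits):
    """Compress every RAM input pin into one bit for the shadow flop."""
    return reduce(xor, addr_bits + data_in_bits + ctrl_bits, 0)

# Any single stuck-at fault on an input pin flips the compressed bit:
good = observe_inputs([1, 0], [1, 1, 0, 1], [1])
bad  = observe_inputs([1, 0], [1, 1, 0, 0], [1])  # one data bit stuck at 0
assert good != bad
```

Because XOR parity flips for any odd number of faulty bits, a single stuck-at fault on any input always changes the captured bit, which is what lets one shadow flop stand in for a full input-side register collar.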


References
 “VLSI Test Principles and Architectures” by Laung-Terng Wang
 Tessent Scan and ATPG User's Manual, Mentor Graphics
 Tessent Shell User's Manual, Mentor Graphics
 DFTMAX User Guide, Synopsys
 http://vlsiuniverse.blogspot.in/2013/06/lockup-latches-soul-mate-of-scan-based.html
 https://anysilicon.com/lock-latch-implication-timing/
 “ATPG Methods that Improve Fault Coverage of SoC Devices” by Michael Lewis and Leah Clark
