
An Accurate Timing-aware Diagnosis Algorithm for Multiple Small Delay Defects*

Po-Juei Chen, Wei-Li Hsu, and James C.-M. Li
Department of Electrical Engineering
National Taiwan University
Taipei, Taiwan

Nan-Hsin Tseng, Kuo-Yin Chen, Wei-pin Changchien, and Charles C. C. Liu
Test-Chip Design Department, Design Methodology Division
Taiwan Semiconductor Manufacturing Company
Hsin-Chu, Taiwan
Abstract -- This paper presents a novel diagnosis algorithm for small delay defects (SDD). Faster-than-at-speed test sets are generated by masking long paths in the circuit for testing SDD. The proposed diagnosis technique uses timing upper and lower bounds to improve diagnosis resolution. Also, a timing-aware single location at a time (TA-SLAT) technique is proposed to diagnose multiple SDD. Test results of different test speeds, if available, can be combined to further improve the diagnosis results. Experimental results on five advanced industrial designs show the accuracy of the proposed technique.

* This research is supported by TSMC JDP under contract # 98-FS-C14.

1. INTRODUCTION

Small delay defects (SDD) are gaining more and more attention in modern nanometer technologies due to both increased frequency and shrinking geometry [Mitra 04][Kruseman 04][Sato 05a][Sato 05b][Ahmed 06]. Small delay defects cause a very small amount of extra delay that may escape slow-speed testing. Circuits with SDD may fail in the system or cause reliability problems in the future. Diagnosis of SDD helps to improve both the design rules and the process technology. Therefore, an accurate diagnosis technique for SDD is highly needed.

Successful diagnosis of SDD is difficult for several reasons. First of all, circuits with SDD cannot be diagnosed if they escape the test, so a high quality test set must be applied. Timing-aware ATPG has been proposed to generate test patterns that detect delay faults via long paths. Besides quality test patterns, faster-than-at-speed testing also helps to detect SDD. Second, circuits with SDD usually produce very few failing outputs compared with circuits with large delay defects. As a result, the number of diagnosed candidates is often very large (i.e., poor diagnosis resolution for SDD). Finally, if SDD are caused by systematic problems, it is highly likely that multiple SDD occur together on the same die. Multiple SDD make diagnosis an even more challenging task.

This paper presents a novel timing-aware diagnosis algorithm for SDD. The diagnosis technique is based on the transition fault model with several novel features. First of all, the technique considers timing information to improve the diagnosis resolution: upper bounds and lower bounds (UB/LB) of each candidate are calculated for each pattern. Second, the technique extends the concept of single location at a time (SLAT) to diagnose multiple SDD [Huisman 04][Schuermyer 05]. Timing-aware SLAT patterns are identified for each candidate by considering the timing UB/LB. Finally, test results of different test speeds, if available, can be combined to generate more accurate diagnosis results. Simulation and silicon experiments on an advanced nanometer technology demonstrate the effectiveness of this technique.

The structure of this paper is as follows. Section 2 introduces the related previous work. Section 3 describes our technique in detail. Section 4 shows the experimental results. Finally, Section 5 concludes the paper.

2. BACKGROUND

2.1 Previous Work in SDD Test Generation

Due to the large number of paths, generating a path delay fault test set for SDD is very difficult. In practice, high quality transition fault test patterns have been widely used for SDD [Shao 02]. As-late-as-possible ATPG has been proposed to detect transition faults via long paths [Gupta 04]. Because there are too many paths in a circuit, KLPG selects the K longest paths per gate when generating patterns [Qiu 04]. Alternatively, a transition fault ATPG based on SOCRATES with propagation-first or activation-first heuristics was proposed in [Kajihara 06].

The statistical delay quality model (SDQM) was first proposed to measure the quality of test patterns for SDD [Sato 05a][Sato 05b][Hamada 06]. To detect SDD, faster-than-at-speed testing was used [Kruseman 04]. To apply faster-than-at-speed testing, ATPG needs to avoid long paths so that a good CUT can pass the test [Turakhia 07]. Timing-aware ATPG utilizes timing information of the circuit to generate test patterns along long paths for detecting SDD, such as [Cadence 11][Synopsys 11]. However, timing-aware ATPG requires long CPU time and the pattern count is large. Instead of timing-aware ATPG, timing-unaware ATPG can be used with some modification [Ahmed 06]: short and intermediate paths are masked to force ATPG to generate patterns along the long paths, and a subset of patterns is selected after test generation. Alternatively, selecting patterns from a large N-detect test set is also effective for SDD [Lee 06][Goel 09][Peng 10].

2.2 Previous Work in SDD Diagnosis

Traditional timing-unaware path delay fault diagnoses based on critical path tracing (CPT) have been proposed, such as [Girard 92][Pant 99]. Path delay fault diagnosis is difficult because the number of paths in a circuit is exponential. Besides, CPT is not always correct when there are fanout reconvergences in the circuit.

Based on SDQM, a timing-aware diagnosis for SDD was proposed in [Aikyo 07]; this technique used 7-valued simulation to consider the timing information. SDD diagnosis test generation was proposed in [Guo 10], but diagnosis test generation may not be applicable in the production test environment. Timing-aware diagnosis based on satisfiability [Lu 07] is not applicable to large circuits. Timing upper and lower bounds were used for delay fault diagnosis based on the transition fault model [Wang 05][Mehta 09]: upper bounds of the fault size can be calculated from the timing of passing patterns, while lower bounds can be calculated from the timing of failing patterns.

SLAT (Single Location at a Time) diagnosis was proposed by IBM and has been successfully demonstrated on industrial designs [Huisman 04][Schuermyer 05]. Because it is very difficult to find a fault model for all defects, SLAT uses only those patterns whose outputs match those of one single stuck-at fault. The technique first identifies SLAT patterns, followed by a minimum covering to find a small set of candidate faults that explains as many SLAT patterns as possible. The original SLAT patterns were proposed for single stuck-at faults. In this research, we extend the idea of SLAT to diagnose multiple SDD.

3. PROPOSED TECHNIQUE

3.1 Overall Diagnosis Flow

Figure 1 illustrates the overall flow of the proposed diagnosis technique. There are four required input files: the SDF file, test patterns, netlist, and test failures from ATE. First of all, structure analysis is performed by backtracing from failing observed points (primary outputs or scan flip-flops) to obtain an initial candidate list. Transition fault simulation and timing analysis are then performed for each candidate in the initial candidate list to acquire simulation failures and candidate timing slacks. After that, timing-aware SLAT (TA-SLAT) patterns are identified by considering candidate timing slacks during the comparison between test failures and simulation failures. Minimum covering then selects minimum sets of candidates that explain all TA-SLAT patterns. Finally, the ranking list of those candidate sets is reported as the diagnosis result.

Figure 1 Overall Diagnosis Flow

3.2 Structure Analysis and Timing Analysis

Structure analysis traces back from failing observed points to obtain an initial candidate list (ICL). Candidates in the ICL are gate I/O pins, which are the potential locations of culprit faults. The selection of the structure analysis approach is a tradeoff between correctness and runtime. A possible aggressive approach takes the intersection of all backtrace cones as the ICL, where a backtrace cone consists of the fan-in candidates traced back from a failing observed point. The size of this ICL is small, but the actual locations of the culprit faults may not be included in the presence of multiple defects. A possible conservative approach takes the union of the backtrace cones as the ICL. Although this conservative approach copes with multiple defects, the runtime of fault simulation and timing analysis is long.

Our structure analysis technique copes with multiple defects by analyzing the netlist structure and the test failures of each failing pattern. The idea is similar to the Failing PO Partition Technique proposed in [Wang 03]. First of all, an undirected failing observed point graph is constructed. Each vertex in the graph is a failing observed point. An edge between two vertices indicates that the intersection of their backtrace cones is not empty. Disjoint sets on the graph are then identified by performing depth-first search. Given a failing pattern Pi, a disjoint set DSj, and an observed failure point Ok, the ICL is obtained as follows:

    Ii,j = ∩ { Ck : Ok ∈ DSj and Ok ∈ OP(Pi) }    (1)

    ICL = ∪i ∪j Ii,j                              (2)

where OP(Pi) represents the set of failing observed points for failing pattern Pi, and Ck is the backtrace cone from observed point Ok. Ii,j is the intersection of the cones of all observed points in OP(Pi) that belong to DSj. The ICL is obtained by taking the union of all Ii,j over all patterns and all disjoint sets.

Figure 2(a) shows an example circuit to demonstrate our structure analysis technique. Suppose there are three culprit faults f1, f2, and f3 (located on candidates F1, F2, and F3) in the circuit. TABLE I shows the test failures of these culprit faults. Each 'X' in the table indicates a failure discovered at the observed point (column) by applying the test pattern (row). In this example, there are six failing observed points: O1 ~ O6. Figure 2(b) shows the corresponding failing observed point graph, which consists of six vertices (O1 ~ O6) and five edges (O1O2, O3O4, O4O5, O4O6, O5O6). Two disjoint sets, DS1 = {O1, O2} and DS2 = {O3, O4, O5, O6}, are identified in the graph. TABLE II summarizes the results of our structure analysis. Take the first row for example: the two failing observed points O1 and O2 of test pattern P1 belong to DS1, so I1,1 = {A, B, F1} for test pattern P1 is acquired by taking the intersection of backtrace cones C1 and C2. For the second row, failing observed points O1 and O2 belong to DS1, whereas failing observed points O3 and O4 belong to DS2. Therefore I2,1 = {A, B, F1} and I2,2 = {C, D, F2} for test pattern P2 are acquired by taking the intersections of backtrace cones C1, C2 and C3, C4 respectively. The last row shows the ICL of our structure analysis, {A, B, F1, C, D, F2, E, H, F3}, which is much better than the ICLs obtained by the other two approaches. The aggressive approach obtains an empty ICL by taking the intersection of backtrace cones C1 to C6. The conservative approach, which takes the union of backtrace cones C1 to C6, obtains an ICL consisting of all twenty-six candidates in the circuit.

Figure 2 (a) Example Circuit (b) Failing Observed Point Graph

TABLE I. TEST FAILURES OF FIVE FAILING TEST PATTERNS

    Test Pattern   O1   O2   O3   O4   O5   O6
    P1             X    X
    P2             X    X    X    X
    P3                                 X    X
    P4                       X    X
    P5                            X    X    X

TABLE II. RESULT OF STRUCTURE ANALYSIS

    Test Pattern   Candidates                       Comment
    P1             {A, B, F1}                       I1,1 = C1 ∩ C2
    P2             {A, B, F1, C, D, F2}             I2,1 ∪ I2,2 = (C1 ∩ C2) ∪ (C3 ∩ C4)
    P3             {E, H, F3}                       I3,2 = C5 ∩ C6
    P4             {C, D, F2}                       I4,2 = C3 ∩ C4
    P5             {E}                              I5,2 = C4 ∩ C5 ∩ C6
    All patterns   {A, B, F1, C, D, F2, E, H, F3}   ICL = I1,1 ∪ I2,1 ∪ I2,2 ∪ I3,2 ∪ I4,2 ∪ I5,2

Timing analysis is performed to calculate the candidate timing slacks for each candidate in the ICL obtained by structure analysis. The candidate timing slack of a candidate d for an observed point Ok, denoted CTSd,Ok, is the slack of the longest path that goes through candidate d to observed point Ok.

TABLE III summarizes the candidate timing slacks for each candidate in the ICL of TABLE II. Suppose the test cycle time is 1.0 unit, and the gate delays are specified inside each gate symbol in Figure 2 (wire delays are ignored in this demonstration example). The first row of TABLE III shows the candidate timing slacks CTSF1,O1 and CTSF1,O2. The delay of the longest path through F1 to O1 is 0.4 unit (see Figure 2), so CTSF1,O1 is 0.6 unit. The delay of the longest path through F1 to O2 is 0.6 unit, so CTSF1,O2 is 0.4 unit.

TABLE III. CANDIDATE TIMING SLACKS

    Candidate   O1    O2    O3    O4    O5    O6
    F1          0.6   0.4   N/A   N/A   N/A   N/A
    A           0.6   0.4   N/A   N/A   N/A   N/A
    B           0.6   0.4   N/A   N/A   N/A   N/A
    F2          N/A   N/A   0.5   0.5   N/A   N/A
    C           N/A   N/A   0.5   0.5   N/A   N/A
    D           N/A   N/A   0.5   0.5   N/A   N/A
    F3          N/A   N/A   N/A   N/A   0.7   0.4
    H           N/A   N/A   N/A   N/A   0.7   0.4
    E           N/A   N/A   N/A   0.8   0.7   0.4

3.3 Timing-aware SLAT Pattern Identification

Traditional SLAT diagnosis does not consider timing information during the comparison between transition fault simulation failures and test failures, so the diagnosis resolution decreases in the presence of SDD. Our proposed technique identifies the TA-SLAT patterns of a candidate by considering the UB/LB of the defect size for the candidate during this comparison. The defect size for a candidate is the extra delay caused by the culprit fault located on the candidate. Figure 3 shows the algorithm of TA-SLAT identification. The notations used in our algorithms are defined as follows: UBd / LBd represent the upper/lower bound on the defect size of a candidate d. UBd,Pi / LBd,Pi represent the upper/lower bound on the defect size of a candidate d for a test pattern Pi. The input arguments of TA-SLAT identification are a candidate d and two lists of patterns: TPSF_PList and TFSF_PList. TPSF_PList consists of test pass simulation fail (TPSF) patterns, which contain only TPSF observed points. TFSF_PList consists of test fail simulation fail (TFSF) patterns, which contain at least one TFSF observed point.

Initially, UBd is set to the test cycle time and LBd is set to 0.0 unit (line 2). For each TPSF pattern Pi in TPSF_PList, UBd,Pi is initialized to the test cycle time (line 4). Then both UBd and UBd,Pi are updated by the function UpdateUB(d, Pi, Ok) for each TPSF observed point Ok (lines 5, 6). For each TFSF pattern Pi in TFSF_PList, LBd,Pi and UBd,Pi are initialized to 0.0 unit and the test cycle time respectively (line 9). For each observed point Ok of pattern Pi, if Ok is a TPSF observed point then both UBd and UBd,Pi are updated by UpdateUB(d, Pi, Ok) (lines 12, 13). If Ok is a TFSF observed point then both LBd and LBd,Pi are updated by UpdateLB(d, Pi, Ok) (lines 14, 15). The details of these two functions are described in Figure 4. If Ok is a TFSP observed point then TFSP_flag is set to true to indicate the existence of TFSP observed points (lines 16, 17). If TFSP_flag is false and LBd,Pi is smaller than or equal to UBd,Pi, then test pattern Pi is added to the pattern list TA_SLAT_PList (lines 19, 20). For each test pattern Pi in TA_SLAT_PList, if LBd,Pi is larger than UBd, then although the timing UB/LB is consistent for pattern Pi (LBd,Pi <= UBd,Pi), the UB/LB is inconsistent across all test patterns. The inconsistent LBd,Pi indicates that, in the presence of multiple defects, the test failures of pattern Pi could actually result from other defects rather than from candidate d. Therefore Pi is removed from TA_SLAT_PList (lines 22 to 24). Finally, all remaining test patterns in TA_SLAT_PList are identified as TA-SLAT patterns for candidate d (line 26).

    TA-SLAT Identification (d, TPSF_PList, TFSF_PList) {
     1:   TA_SLAT_PList = EMPTY;
     2:   UBd = test_cycle_time; LBd = 0;
     3:   foreach pattern Pi in TPSF_PList {
     4:     UBd,Pi = test_cycle_time;
     5:     foreach observed point Ok of Pi
     6:       UpdateUB(d, Pi, Ok);
     7:   }
     8:   foreach pattern Pi in TFSF_PList {
     9:     LBd,Pi = 0; UBd,Pi = test_cycle_time;
    10:     TFSP_flag = false;
    11:     foreach observed point Ok of Pi {
    12:       if ( Ok is TPSF )
    13:         UpdateUB(d, Pi, Ok);
    14:       if ( Ok is TFSF )
    15:         UpdateLB(d, Pi, Ok);
    16:       if ( Ok is TFSP )
    17:         TFSP_flag = true;
    18:     }
    19:     if ( TFSP_flag == false && LBd,Pi <= UBd,Pi )
    20:       push Pi into TA_SLAT_PList;
    21:   }
    22:   foreach Pi in TA_SLAT_PList {
    23:     if ( LBd,Pi > UBd )
    24:       remove Pi from TA_SLAT_PList;
    25:   }
    26:   return TA_SLAT_PList;
    }

Figure 3 TA-SLAT Pattern Identification
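The procedure of Figure 3 can be expressed as a short executable sketch. This is an illustration, not the authors' implementation: the candidate timing slacks are assumed to be supplied as a dictionary from timing analysis, and each pattern is assumed to carry its observed points already classified as TPSF, TFSF, or TFSP.

```python
def identify_ta_slat(d, tpsf_plist, tfsf_plist, cts, test_cycle_time=1.0):
    """TA-SLAT pattern identification for one candidate d (after Figure 3).

    cts[(d, ok)] -- candidate timing slack CTSd,Ok from timing analysis.
    Each pattern is a (name, [(observed_point, kind)]) pair, where kind is
    'TPSF', 'TFSF', or 'TFSP'. Returns the TA-SLAT pattern names for d.
    """
    ub_d, lb_d = test_cycle_time, 0.0   # global UBd / LBd (line 2)
    ub_p, lb_p = {}, {}                 # per-pattern UBd,Pi / LBd,Pi
    ta_slat = []

    def update_ub(pi, ok):              # UpdateUB: keep the smallest slack
        nonlocal ub_d
        s = cts[(d, ok)]
        ub_p[pi] = min(ub_p[pi], s)
        ub_d = min(ub_d, s)

    def update_lb(pi, ok):              # UpdateLB: keep the largest slack
        nonlocal lb_d
        s = cts[(d, ok)]
        lb_p[pi] = max(lb_p[pi], s)
        lb_d = max(lb_d, s)

    for pi, points in tpsf_plist:       # TPSF patterns only shrink the UB
        ub_p[pi] = test_cycle_time
        for ok, _ in points:
            update_ub(pi, ok)

    for pi, points in tfsf_plist:       # TFSF patterns set the LB as well
        lb_p[pi], ub_p[pi] = 0.0, test_cycle_time
        tfsp_seen = False
        for ok, kind in points:
            if kind == 'TPSF':
                update_ub(pi, ok)
            elif kind == 'TFSF':
                update_lb(pi, ok)
            elif kind == 'TFSP':
                tfsp_seen = True        # a failure the candidate cannot explain
        if not tfsp_seen and lb_p[pi] <= ub_p[pi]:
            ta_slat.append(pi)

    # Drop patterns whose per-pattern LB contradicts the global UB (lines 22-24)
    return [pi for pi in ta_slat if lb_p[pi] <= ub_d]


# Candidate E from TABLE III / TABLE V. The TPSF observed point of the assumed
# extra pattern P6 is taken to be O5 so that UBE,P6 = 0.7, matching TABLE V.
cts = {("E", "O4"): 0.8, ("E", "O5"): 0.7, ("E", "O6"): 0.4}
result = identify_ta_slat(
    "E",
    [("P6", [("O5", "TPSF")])],
    [("P3", [("O5", "TFSF"), ("O6", "TFSF")]),
     ("P5", [("O4", "TFSF"), ("O5", "TFSF"), ("O6", "TFSF")])],
    cts)
print(result)  # ['P3'] -- P5 is rejected because LBE,P5 (0.8) > UBE (0.7)
```

The final filtering line corresponds to the consistency check of lines 22 to 24: P5's lower bound (0.8) exceeds the global upper bound (0.7) learned from the passing pattern P6, so P5 is not a TA-SLAT pattern for E, as in TABLE V.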
Figure 4 describes the details of the functions UpdateUB and UpdateLB. UpdateUB updates UBd,Pi and UBd with the smallest CTSd,Ok; UpdateLB updates LBd,Pi and LBd with the largest CTSd,Ok.

    // update UBd,Pi and UBd by        // update LBd,Pi and LBd by
    // the smallest CTSd,Ok            // the largest CTSd,Ok
    UpdateUB (d, Pi, Ok) {             UpdateLB (d, Pi, Ok) {
    1: if ( CTSd,Ok < UBd,Pi )         5: if ( CTSd,Ok > LBd,Pi )
    2:   UBd,Pi = CTSd,Ok;             6:   LBd,Pi = CTSd,Ok;
    3: if ( CTSd,Ok < UBd )            7: if ( CTSd,Ok > LBd )
    4:   UBd = CTSd,Ok;                8:   LBd = CTSd,Ok;
    }                                  }

Figure 4 UpdateLB and UpdateUB

TABLE IV demonstrates an example of TA-SLAT pattern identification for pattern P1. The first row shows the transition fault simulation failures for candidate F1 when pattern P1 is applied. The second row shows the test failures of the culprit fault f1 with defect size 0.8 unit. Observed points O1 and O2 are both TFSF observed points, and CTSF1,O1 is larger than CTSF1,O2, so LBF1,P1 is determined by CTSF1,O1, which is 0.6 unit (see TABLE III). LBF1,P1 is smaller than UBF1,P1, which equals the test cycle time (1.0 unit), and there are no TFSP observed points, so P1 is identified as a TA-SLAT pattern for candidate F1. The third row shows the test failures of culprit fault f1 with a different defect size of 0.5 unit. Observed point O1 is a TPSF observed point, so UBF1,P1 is set to CTSF1,O1 (0.6 unit). Observed point O2 is a TFSF observed point, so LBF1,P1 is set to CTSF1,O2 (0.4 unit). Because LBF1,P1 is smaller than UBF1,P1 and there are no TFSP observed points, P1 is still identified as a TA-SLAT pattern for candidate F1. This example also demonstrates the problem that the number of SLAT patterns decreases in the presence of SDD: P1 is not a SLAT pattern if the defect size of culprit fault f1 is 0.5 unit. This problem leads to the poor resolution of traditional SLAT diagnosis.

TABLE IV. TA-SLAT PATTERN IDENTIFICATION FOR PATTERN P1

    Failures                                              O1   O2   O3   O4   O5   O6
    Simulation failures of transition fault on F1         X    X
    Test failures (defect size = 0.8 unit)                X    X
    Test failures (defect size = 0.5 unit)                     X

TABLE V demonstrates an example of TA-SLAT pattern identification for candidate E. Suppose that the simulation failures of candidate E perfectly match the test failures of two failing patterns: P3 and P5. For the first two rows, P3 and P5 are identified as TA-SLAT patterns for candidate E, because LBE,P3 is smaller than UBE,P3 and LBE,P5 is smaller than UBE,P5. LBE is then updated by LBE,P5 to 0.8 unit. Suppose that there is an additional pattern P6 which is a TPSF pattern for candidate E. The third row shows that UBE is then updated by UBE,P6, which is 0.7 unit. The fourth row shows that LBE,P5 is larger than UBE, so P5 is removed from TA_SLAT_PList for candidate E (see Figure 3, lines 22 to 24). Therefore TA_SLAT_PList includes only P3 at the end of TA-SLAT pattern identification for candidate E. The reason P5 is not identified as a TA-SLAT pattern for candidate E is that a test failure observed at a TFSF observed point Ok whose CTSE,Ok is smaller than UBE could actually be caused by culprit faults located on other candidates. Indeed, the test failures of P5 (see TABLE I and Figure 2) actually result from both culprit faults f2 and f3. This technique reduces the number of TA-SLAT patterns explained by equivalent candidates, so the diagnosis resolution can be improved.

TABLE V. TA-SLAT PATTERN IDENTIFICATION FOR CANDIDATE E

    Pattern   LB    UB    Comment
    P3        0.7   1.0   TFSF pattern
    P5        0.8   1.0   TFSF pattern
    P6        N/A   0.7   TPSF pattern
    P3, P5    0.8   0.7   LBE,P5 is larger than UBE
    P3        0.7   0.7   P3 is a TA-SLAT pattern

3.4 Minimum Covering

Minimum covering selects a minimum set of candidates that covers all TA-SLAT patterns. If a pattern is identified as a TA-SLAT pattern for a candidate, then the candidate covers the pattern. TABLE VI shows an example of a covering table used by minimum covering. Every row in the covering table represents a candidate and every column is a TA-SLAT pattern. An 'S' indicates that the candidate covers the corresponding TA-SLAT pattern. A greedy covering algorithm is used to find a minimum set of candidates that covers all TA-SLAT patterns. The algorithm first selects essential candidates, which have at least one unique S in a column. The algorithm then selects the candidate that covers the largest number of TA-SLAT patterns. When multiple candidates cover the same number of TA-SLAT patterns, the algorithm selects the candidate with the smallest number of TPSF patterns. In TABLE VI, the essential candidates F1 and F2 are first selected. F3 rather than E is then selected to cover P3, because F3 has no TPSF pattern. Therefore the result of minimum covering is {F1, F2, F3}.

TABLE VI. COVERING TABLE

    Candidate   P1   P3   P4   TPSF pattern
    F1          S              None
    F2                    S    None
    F3               S         None
    E                S         P6

4. EXPERIMENTAL RESULTS

Experiments are conducted on five advanced industrial designs. The individual gate count of each design is around 0.3 million gates. The number of scan flip-flops in each design is around 30 thousand.

4.1 SDD Test Generation

An at-speed (AS) test set and a faster-than-at-speed (FS) test set are generated for detecting SDD. First of all, the slack distribution of each design, which consists of the slacks of the longest path to each scan flip-flop, is acquired by performing static timing analysis. After that, cut-off slacks for the AS and FS test sets are determined based on the slack distribution. The AS and FS test sets are then generated by masking those scan flip-flops whose corresponding slack values are smaller than the cut-off slack for the AS and FS test set respectively. Considering inter-die process variation, the test cycle time of each die should be different. The feasible test cycle time for the AS test set of each die is calculated by monitoring the test structure inside the die. The test cycle time for the FS test set is equal to the test cycle time of the AS test set minus the cut-off slack for the FS test set.

TABLE VII summarizes the information about the AS and FS test sets generated for five industrial designs d1 to d5. 'TL' represents the test length and 'FC' represents the transition fault coverage. The 'masked FF' column shows the number of masked scan flip-flops. The 'speed up' column shows the speed-up percentage from AS to FS.

TABLE VII. SUMMARY OF AS TEST SET AND FS TEST SET

              AS                        FS
    Design   TL    FC       masked    TL    FC       masked    speed up
                            FF                       FF
    d1       300   79.83%   0         300   30.19%   25,007    10%
    d2       300   78.62%   108       300   29.42%   25,193    14%
    d3       300   79.63%   32        300   25.01%   25,287    10%
    d4       300   78.03%   153       300   27.52%   25,494    14%
    d5       300   78.66%   81        300   29.86%   25,044    14%

4.2 Simulation Results

Twenty-five faulty cases of design d1 are generated by inserting small extra delays into the SDF file. Timing simulations using a commercial simulator, VCS, are then performed for three test pattern sets: AS, FS, and a timing-aware (TA) test pattern set generated by a commercial tool. Cases 1 ~ 9 are single SDD with extra delay size 0.2 ns. Cases 10 ~ 20 are single SDD with extra delay size 0.4 ns. Cases 21 ~ 25 are double SDD with extra delay size 0.4 ns. TABLE VIII summarizes the test results of the three test sets for these simulation cases. The 'detection' columns show the number of cases that fail the simulation. Notice that no case among cases 1 ~ 9 is detected by the TA test set, because the inserted extra delays are too small to fail the circuit at the normal operating speed. Such small SDDs are also known as timing-redundant SDD, which can still cause reliability issues due to the aging effect. Therefore, detection of those small SDD is still important for yield improvement. It can be observed that the FS test set detects all of the cases with timing-redundant SDD inserted.

TABLE VIII. NUMBER OF DETECTIONS FOR THREE TEST SETS

                              detection
    Cases    Defect size   AS    FS    TA
    1-9      0.2 ns        0     9     0
    10-20    0.4 ns        6     10    6
    21-25    0.4 ns        4     5     4
    Total    N/A           10    24    10

To compare the overall flow of test and diagnosis, the TA test set is applied at-speed and the failure logs are diagnosed by a commercial tool with delay fault diagnosis settings. For our proposed flow, the AS test set is applied at-speed and the FS test set is applied faster-than-at-speed; both failure logs are then diagnosed by our proposed algorithm. Experimental results are summarized in TABLE IX. 'Rank-1 hit rate' is the ratio of inserted SDDs that are successfully diagnosed as the most likely candidate. 'Success rate' is the percentage of cases in which at least one inserted SDD is successfully diagnosed. 'Average resolution' is the average number of candidates ranked prior to or equal to the inserted SDD. It can be observed that not only the detection ability but also the diagnosis accuracy of our proposed flow is much better than that of the traditional flow. The average success rate of our proposed flow is 80% higher than that of the traditional flow.

TABLE IX. COMPARISON OF OVERALL FLOW

             Traditional (TA)                                    Ours (AS+FS)
    cases    number of    rank-1     success      average        number of    rank-1     success       average
             detection    hit rate   rate         resolution     detection    hit rate   rate          resolution
    1-9      0            0/9        0%           N/A            9            6/9        77% (7/9)     1.71
    10-20    6            1/11       9% (1/11)    2              11           10/11      91% (10/11)   1.90
    21-25    4            1/10       20% (1/5)    2              5            8/10       100% (5/5)    3.80
    Total    10           2/30       8% (2/25)    2              25           24/30      88% (22/25)   2.27

To compare the diagnosis accuracy separately, the same failure logs from applying the FS test set are fed to the commercial tool and to our diagnosis technique. TABLE X shows that our proposed technique produces a higher rank-1 hit rate, a better success rate, and finer resolution than traditional diagnosis. For cases 1 ~ 9, the success rate of our diagnosis is much higher than that of traditional diagnosis. For cases 10 ~ 20, excluding case 18 which escapes the FS test set, the rank-1 hit rate and the average resolution of our diagnosis are still better than those of traditional diagnosis. For cases 21 ~ 25 with double SDD inserted, traditional diagnosis identifies only one inserted SDD as the rank-1 candidate in four cases, whereas our diagnosis correctly identifies both inserted SDD as rank-1 candidates in three cases. Moreover, the resolution of our diagnosis is much finer than that of traditional diagnosis. Notice that the average resolution using both the AS and FS test sets (TABLE IX) is better than that using only the FS test set (TABLE X), because fail logs of the AS and FS test sets, if both available, can be combined by our diagnosis technique to improve the diagnosis resolution.

4.3 Silicon Results

The AS and FS test sets are applied to the five designs on a wafer. Twenty-five SDD suspects that passed slow-speed DC testing are selected for diagnosis. The suspects are classified into three classes based on the confidence of our diagnosis results. Class I (the highest confidence) consists of dies whose failing patterns are all identified as TA-SLAT patterns and which have no TPSF patterns. Class II consists of dies whose failing patterns are all identified as TA-SLAT patterns but which have TPSF patterns. Class III consists of dies whose failing patterns are not all TA-SLAT patterns.

TABLE XI summarizes the diagnosis results of each class. Ten dies belong to Class I, and their average resolution is one, which means only the top-ranked candidate meets the constraints of Class I (i.e., all failing patterns are TA-SLAT patterns and there are no TPSF patterns). One of the ten dies in Class I has been verified by physical failure analysis (PFA). However, the PFA photo is not available for publication.

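The three-class confidence scheme of Section 4.3 amounts to two simple checks per die. The following is an illustrative helper, not the authors' tooling; the argument names and data shapes are our own assumptions.

```python
def classify_die(failing_patterns, ta_slat_patterns, tpsf_patterns):
    """Assign the Section 4.3 confidence class to one die.

    failing_patterns -- all failing patterns observed on the die
    ta_slat_patterns -- the subset identified as TA-SLAT patterns
    tpsf_patterns    -- test-pass/simulation-fail patterns, if any
    """
    all_ta_slat = set(failing_patterns) <= set(ta_slat_patterns)
    if all_ta_slat and not tpsf_patterns:
        return "I"    # highest confidence: all TA-SLAT, no TPSF patterns
    if all_ta_slat:
        return "II"   # all TA-SLAT, but TPSF patterns exist
    return "III"      # some failing patterns are not TA-SLAT patterns


print(classify_die(["P3"], ["P3"], []))            # I
print(classify_die(["P3"], ["P3"], ["P6"]))        # II
print(classify_die(["P3", "P5"], ["P3"], ["P6"]))  # III
```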
TABLE X. COMPARISON OF DIAGNOSIS RESULTS

                  Traditional (FS)                         Ours (FS)
    cases         rank-1     success       average         rank-1     success       average
                  hit rate   rate          resolution      hit rate   rate          resolution
    1-9           0/9        0% (0/9)      N/A             6/9        77% (7/9)     1.71
    10-17,19-20   8/10       80% (8/10)    4.75            9/10       90% (9/10)    2.11
    21-25         4/10       100% (5/5)    7.80            8/10       100% (5/5)    4.80
    Total         12/29      54% (13/24)   5.92            23/29      88% (21/24)   2.62

TABLE XI. SILICON DIAGNOSIS RESULTS

    Class   TA-SLAT   TPSF    number     average      dies with    PFA verified
                              of dies    resolution   multi. SDD   dies
    I       All       None    10         1            0            1
    II      All       Exist   10         5.1          2            0
    III     Partial   Exist   5          N/A          2            0
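The candidate sets behind the resolutions reported above come from the greedy minimum covering of Section 3.4. A compact sketch of that heuristic, rerun on the TABLE VI example, is shown below; the data structures (sets of covered patterns, TPSF counts) are our own illustrative choice, not the paper's implementation.

```python
def greedy_min_cover(covers, tpsf_count):
    """Greedy minimum covering of Section 3.4.

    covers     -- candidate -> set of TA-SLAT patterns it covers
    tpsf_count -- candidate -> number of TPSF patterns (tie-breaker)
    """
    uncovered = set().union(*covers.values())
    chosen = set()

    # Step 1: essential candidates, i.e. the sole coverer of some pattern.
    for p in set(uncovered):
        coverers = [c for c, ps in covers.items() if p in ps]
        if len(coverers) == 1:
            chosen.add(coverers[0])
    for c in chosen:
        uncovered -= covers[c]

    # Step 2: greedily pick the candidate covering the most remaining
    # patterns; on ties, prefer the one with the fewest TPSF patterns.
    while uncovered:
        best = max((c for c in covers if c not in chosen),
                   key=lambda c: (len(covers[c] & uncovered),
                                  -tpsf_count.get(c, 0)))
        if not covers[best] & uncovered:
            break  # remaining patterns cannot be covered
        chosen.add(best)
        uncovered -= covers[best]
    return chosen


# TABLE VI: F1 covers P1, F2 covers P4, F3 and E both cover P3; E has one
# TPSF pattern (P6), so F3 wins the tie and the cover is {F1, F2, F3}.
covers = {"F1": {"P1"}, "F2": {"P4"}, "F3": {"P3"}, "E": {"P3"}}
tpsf = {"F1": 0, "F2": 0, "F3": 0, "E": 1}
print(sorted(greedy_min_cover(covers, tpsf)))  # ['F1', 'F2', 'F3']
```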
5. SUMMARY

This paper presents a novel diagnosis algorithm for small delay defects. Both at-speed and faster-than-at-speed test sets have been applied. The proposed timing-aware diagnosis technique uses timing upper and lower bounds to improve the diagnosis resolution, and extends the SLAT technique to diagnose multiple SDD. Test results of the AS and FS test sets are combined to further improve the diagnosis resolution. Experimental results on five advanced industrial designs show the accuracy of the proposed technique. One of the failing dies has been verified by physical failure analysis.

REFERENCES

[Ahmed 06] N. Ahmed, M. Tehranipoor, and V. Jayaram, "A Novel Framework for Faster-than-at-Speed Delay Test Considering IR-drop Effects," Proc. IEEE Int'l Conf. on Computer-Aided Design, pp. 198-203, 2006.
[Aikyo 07] T. Aikyo, H. Takahashi, Y. Higami, J. Otsu, K. Ono, and Y. Takamatsu, "Timing-Aware Diagnosis for Small Delay Defects," Proc. IEEE Defect and Fault Tolerance in VLSI Systems Symp., pp. 223-231, 2007.
[Cadence 11] Cadence Inc., "Encounter True Time ATPG," 2011. [Online]. Available: http://www.cadence.com/products/ld/true_time_test/pages/default.aspx
[Girard 92] P. Girard, C. Landrault, and S. Pravossoudovitch, "Delay-Fault Diagnosis by Critical Path Tracing," IEEE Design & Test of Computers, Vol. 9, No. 4, pp. 27-32, 1992.
[Goel 09] S. K. Goel, N. Devta-Prasanna, and R. P. Turakhia, "Effective and Efficient Test Pattern Generation for Small Delay Defect," Proc. IEEE VLSI Test Symp., pp. 111-116, 2009.
[Guo 10] R. Guo, W. T. Cheng, T. Kobayashi, and K. H. Tsai, "Diagnostic Test Generation for Small Delay Defect Diagnosis," Proc. IEEE VLSI Design Automation and Test Symp., pp. 224-227, 2010.
[Gupta 04] P. Gupta and M. S. Hsiao, "ALAPTF: A New Transition Fault Model and the ATPG Algorithm," Proc. IEEE Int'l Test Conf., pp. 1053-1060, 2004.
[Hamada 06] S. Hamada, T. Maeda, A. Takatori, Y. Noduyama, and Y. Sato, "Recognition of Sensitized Longest Paths in Transition Delay Test," Proc. IEEE Int'l Test Conf., paper 11.1, 2006.
[Huisman 04] L. M. Huisman, "Diagnosing Arbitrary Defects in Logic Designs Using Single Location at a Time (SLAT)," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, Vol. 23, No. 1, pp. 91-101, 2004.
[Kajihara 06] S. Kajihara, S. Morishima, A. Takuma, X. Wen, T. Maeda, S. Hamada, and Y. Sato, "A Framework of High-Quality Transition Fault ATPG for Scan Circuits," Proc. IEEE Int'l Test Conf., paper 2.1, 2006.
[Kruseman 04] B. Kruseman, A. K. Majhi, G. Gronthoud, and S. Eichenberger, "On Hazard-free Patterns for Fine-delay Fault Testing," Proc. IEEE Int'l Test Conf., pp. 213-222, 2004.
[Lee 06] H. Lee, S. Natarajan, S. Patil, and I. Pomeranz, "Selecting High-quality Delay Tests for Manufacturing Test and Debug," Proc. IEEE Int'l Symp. on Defect and Fault Tolerance in VLSI Systems, pp. 59-70, 2006.
[Lu 07] S. Y. Lu, M. T. Hsieh, and J. J. Liou, "An Efficient SAT-based Path Delay Fault ATPG with a Unified Sensitization Model," Proc. IEEE Int'l Test Conf., paper 10.3, 2007.
[Mehta 09] V. J. Mehta, M. Marek-Sadowska, K. H. Tsai, and J. Rajski, "Timing-Aware Multiple-Delay-Fault Diagnosis," IEEE Trans. on Computer-Aided Design, Vol. 28, No. 2, pp. 245-258, 2009.
[Mitra 04] S. Mitra, E. Volkerink, E. J. McCluskey, and S. Eichenberger, "Delay Defect Screening Using Process Monitor Structures," Proc. IEEE VLSI Test Symp., pp. 43-52, 2004.
[Pant 99] P. Pant and A. Chatterjee, "Efficient Diagnosis of Path Delay Faults in Digital Logic Circuits," Proc. IEEE Int'l Conf. on Computer-Aided Design, pp. 471-476, 1999.
[Peng 10] K. Peng, J. Thibodeau, M. Yilmaz, K. Chakrabarty, and M. Tehranipoor, "A Novel Hybrid Method for SDD Pattern Grading and Selection," Proc. IEEE VLSI Test Symp., pp. 45-50, 2010.
[Qiu 04] W. Qiu, J. Wang, D. M. H. Walker, D. Reddy, X. Lu, Z. Li, W. Shi, and H. Balachandran, "K Longest Paths Per Gate (KLPG) Test Generation for Scan-Based Sequential Circuits," Proc. IEEE Int'l Test Conf., pp. 223-231, 2004.
[Sato 05a] Y. Sato, S. Hamada, T. Maeda, A. Takatori, and S. Kajihara, "Evaluation of the Statistical Delay Quality Model," Proc. IEEE Asian and South Pacific Design Automation Conf., pp. 305-310, 2005.
[Sato 05b] Y. Sato, S. Hamada, T. Maeda, A. Takatori, and S. Kajihara, "Invisible Delay Quality -- SDQM Model Lights Up What Could Not Be Seen," Proc. IEEE Int'l Test Conf., pp. 1202-1210, 2005.
[Schuermyer 05] C. Schuermyer, K. Cota, R. Madge, and B. Benware, "Identification of Systematic Yield Limiters in Complex ASICs Through Volume Structural Test Fail Data Visualization and Analysis," Proc. IEEE Int'l Test Conf., paper 7.1, 2005.
[Shao 02] Y. Shao, I. Pomeranz, and S. M. Reddy, "On Generating High Quality Tests for Transition Faults," Proc. IEEE Asian Test Symp., pp. 1-8, 2002.
[Synopsys 11] Synopsys Inc., "TetraMAX DSMTest for Small Delay Defect Testing," 2011. [Online]. Available: http://www.synopsys.com/Tools/Implementation/RTLSynthesis/Pages/TetraMAXATPG.aspx
[Turakhia 07] R. Turakhia, W. R. Daasch, M. Ward, and J. van Slyke, "Silicon Evaluation of Longest Path Avoidance Testing for Small Delay Defects," Proc. IEEE Int'l Test Conf., pp. 1-10, 2007.
[Wang 03] Z. Wang, K. H. Tsai, M. Marek-Sadowska, and J. Rajski, "An Efficient and Effective Methodology on the Multiple Fault Diagnosis," Proc. IEEE Int'l Test Conf., paper 12.3, 2003.
[Wang 05] Z. Wang, M. Marek-Sadowska, K. H. Tsai, and J. Rajski, "Delay-fault Diagnosis Using Timing Information," IEEE Trans. on Computer-Aided Design, Vol. 24, No. 9, pp. 1315-1325, 2005.
