
Encounter® Test: Guide 4: Faults

Product Version 15.12
October 2015

© 2003–2015 Cadence Design Systems, Inc. All rights reserved.
Portions © IBM Corporation, the Trustees of Indiana University, University of Notre Dame, the Ohio State
University, Larry Wall. Used by permission.
Printed in the United States of America.
Cadence Design Systems, Inc. (Cadence), 2655 Seely Ave., San Jose, CA 95134, USA.
Product Encounter® Test and Diagnostics contains technology licensed from, and copyrighted by:
1. IBM Corporation, and is © 1994-2002, IBM Corporation. All rights reserved. IBM is a Trademark of
International Business Machines Corporation.
2. The Trustees of Indiana University and is © 2001-2002, the Trustees of Indiana University. All rights
reserved.
3. The University of Notre Dame and is © 1998-2001, the University of Notre Dame. All rights reserved.
4. The Ohio State University and is © 1994-1998, the Ohio State University. All rights reserved.
5. Perl Copyright © 1987-2002, Larry Wall
Associated third party license terms for this product version may be found in the README.txt file at
downloads.cadence.com.
Open SystemC, Open SystemC Initiative, OSCI, SystemC, and SystemC Initiative are trademarks or
registered trademarks of Open SystemC Initiative, Inc. in the United States and other countries and are
used with permission.
Trademarks: Trademarks and service marks of Cadence Design Systems, Inc. contained in this document
are attributed to Cadence with the appropriate symbol. For queries regarding Cadence’s trademarks,
contact the corporate legal department at the address shown above or call 800.862.4522. All other
trademarks are the property of their respective holders.
Restricted Permission: This publication is protected by copyright law and international treaties and
contains trade secrets and proprietary information owned by Cadence. Unauthorized reproduction or
distribution of this publication, or any portion of it, may result in civil and criminal penalties. Except as
specified in this permission statement, this publication may not be copied, reproduced, modified, published,
uploaded, posted, transmitted, or distributed in any way, without prior written permission from Cadence.
Unless otherwise agreed to by Cadence in writing, this statement grants Cadence customers permission to
print one (1) hard copy of this publication subject to the following conditions:
1. The publication may be used only in accordance with a written agreement between Cadence and its
customer.
2. The publication may not be modified in any way.
3. Any authorized copy of the publication or portion thereof must include all original copyright,
trademark, and other proprietary notices and this permission statement.
4. The information contained in this document cannot be used in the development of like products or
software, whether for internal or external use, and shall not be used for the benefit of any other party,
whether or not for consideration.
Disclaimer: Information in this publication is subject to change without notice and does not represent a
commitment on the part of Cadence. Except as may be explicitly set forth in such agreement, Cadence does
not make, and expressly disclaims, any representations or warranties as to the completeness, accuracy or
usefulness of the information contained in this document. Cadence does not warrant that use of such
information will not infringe any third party rights, nor does Cadence assume any liability for damages or
costs of any kind that may result from use of such information.
Restricted Rights: Use, duplication, or disclosure by the Government is subject to restrictions as set forth
in FAR52.227-14 and DFAR252.227-7013 et seq. or its successor

Encounter Test: Guide 4: Faults

Contents
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
About Encounter Test and Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Typographic and Syntax Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Encounter Test Documentation Roadmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Getting Help for Encounter Test and Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Contacting Customer Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Encounter Test And Diagnostics Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Using Encounter Test Contrib Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
What We Changed for This Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1
Building Fault Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Build Fault Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Build Fault Model Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Build Fault Model Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Build Fault Model Examples for Including/Excluding Faults . . . . . . . . . . . . . . . . . . . . 17
Build Fault Model Examples for Cell Boundary Fault Model . . . . . . . . . . . . . . . . . . . . 18
Build Fault Model Examples with Fault Rule Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Build Fault Model Examples with Special Handling of Ignored Faults . . . . . . . . . . . . 21
Build Fault Model Example for Hierarchical Test - Chip Level . . . . . . . . . . . . . . . . . . 22
Build Alternate Fault Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Prepare Path Delay Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Prepare Path Delay Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Prepare Path Delay Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Build Package Test Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Build Package Test Objectives Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Build Package Test Objectives Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Build Bridge Fault Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


Build Bridge Fault Model Input . . . . . . . . . . . . 27
Build Bridge Fault Model Output . . . . . . . . . . . . 29
Build Bridge Fault Model Scenarios . . . . . . . . . . . . 30

2
Modifying Fault Model/Fault Status . . . . . . . . . . . . 33
Using Model Attributes to Add/Remove Faults . . . . . . . . . . . . 33
Fault Attribute Specification in Verilog Design Source . . . . . . . . . . . . 33
Fault Attribute Specification in Edit Model File . . . . . . . . . . . . 35
Using Fault Rule Specification to Add/Remove Faults . . . . . . . . . . . . 35
Preparing a Fault Subset . . . . . . . . . . . . 36
Using Fault Subset for ATPG / Fault Simulation . . . . . . . . . . . . 39
Removing Scan Faults from Consideration . . . . . . . . . . . . 40
Preparing Detected Faults File . . . . . . . . . . . . 40
Preparing an Ignore Faults File . . . . . . . . . . . . 42
Prepare Core Migration Faults (Hierarchical Test - Core Level) . . . . . . . . . . . . 43

3
Reporting Faults, Test Objectives, and Statistics . . . . . . . . . . . . 45
Report Faults . . . . . . . . . . . . 45
Reporting Low Power Faults . . . . . . . . . . . . 52
Reporting Bridge Faults . . . . . . . . . . . . 53
Report Fault Coverage Statistics . . . . . . . . . . . . 54
Options for Hierarchical Fault Statistics . . . . . . . . . . . . 59
Reporting Low Power Fault Statistics . . . . . . . . . . . . 60
Reporting Bridge Fault Statistics . . . . . . . . . . . . 62
Report Domain Faults . . . . . . . . . . . . 62
Report Domain Fault Coverage Statistics . . . . . . . . . . . . 63
Report Path Faults . . . . . . . . . . . . 64
Report Path Fault Coverage Statistics . . . . . . . . . . . . 65
Report Package Test Objectives (Stuck Driver and Shorted Nets) . . . . . . . . . . . . 66
Report Package Test Objectives Coverage Statistics . . . . . . . . . . . . 67

4
Analyzing Faults . . . . . . . . . . . . 69
Fault Analysis Process . . . . . . . . . . . . 69
Analyze Deterministic Faults . . . . . . . . . . . . 70
Input Files . . . . . . . . . . . . 71
Outputs . . . . . . . . . . . . 72
Sample Methodology for Large Parts . . . . . . . . . . . . 73
Analyze Faults . . . . . . . . . . . . 74
Restrictions . . . . . . . . . . . . 74
Input Files . . . . . . . . . . . . 75
Output . . . . . . . . . . . . 75
Analyzing TFA Messages from Create Logic Tests or Analyze Faults . . . . . . . . . . . . 75
GUI Schematic Fault Analysis . . . . . . . . . . . . 76
Performing Deterministic Testability Analysis . . . . . . . . . . . . 76
Deterministic Testability Measurements for Sequential Test . . . . . . . . . . . . 77
Input Files . . . . . . . . . . . . 77
Output . . . . . . . . . . . . 77
Possible Value Set (PVS) Concepts . . . . . . . . . . . . 78
Deterministic Controllability/Observability (CO) Measure Concepts . . . . . . . . . . . . 78
Sequential Depth Measure Concepts . . . . . . . . . . . . 79
Latch Tracing Analysis Concepts . . . . . . . . . . . . 79
Random Resistant Fault Analysis (RRFA) . . . . . . . . . . . . 80
Analyze Random Resistance . . . . . . . . . . . . 80
Test Points . . . . . . . . . . . . 82

5
Deleting Fault Model Data . . . . . . . . . . . . 89
Delete Fault Model . . . . . . . . . . . . 89
Delete Fault Status for All Existing Test Modes . . . . . . . . . . . . 89
Delete Committed Fault Status for a Test Mode . . . . . . . . . . . . 90
Delete Fault Model Analysis Data . . . . . . . . . . . . 90
Delete Alternate Fault Model . . . . . . . . . . . . 90
Delete Package Test Objectives . . . . . . . . . . . . 90

6
Concepts . . . . . . . . . . . . 91
Fault Types . . . . . . . . . . . . 91
Static (Stuck-at) . . . . . . . . . . . . 91
Dynamic (Transition) . . . . . . . . . . . . 98
Parametric (Driver/Receiver) . . . . . . . . . . . . 101
IDDq . . . . . . . . . . . . 101
Bridge . . . . . . . . . . . . 102
Fault Attributes . . . . . . . . . . . . 103
Collapsed (C) . . . . . . . . . . . . 103
Ignored Faults (I, Iu, Ib, It) . . . . . . . . . . . . 106
Pre-Collapsed . . . . . . . . . . . . 106
Grouping (&, I) . . . . . . . . . . . . 107
Active and Inactive (i) . . . . . . . . . . . . 107
Possibly testable at best faults (PTAB) . . . . . . . . . . . . 108
Fault Test Status . . . . . . . . . . . . 108
Tested (T) . . . . . . . . . . . . 108
Possibly Tested (P) . . . . . . . . . . . . 110
Aborted (A) . . . . . . . . . . . . 110
Redundant (R) . . . . . . . . . . . . 110
Untested – Not Processed (u) . . . . . . . . . . . . 110
Untestable . . . . . . . . . . . . 110
Fault/Test Coverage Calculations . . . . . . . . . . . . 113
Fault Modeling . . . . . . . . . . . . 115
Pin Faults and Pattern Faults . . . . . . . . . . . . 116
Other Test Objectives . . . . . . . . . . . . 116
Cross-Mode Markoff (MARKOFF) . . . . . . . . . . . . 118
Package Test Objectives . . . . . . . . . . . . 118
Path Delay . . . . . . . . . . . . 119

A
Pattern Faults and Fault Rules . . . . . . . . . . . . 125
Fault Rule File Syntax . . . . . . . . . . . . 125
Fault Rule File Element Descriptions . . . . . . . . . . . . 128
How to Code a Fault Rule File . . . . . . . . . . . . 136
Composite Fault Support . . . . . . . . . . . . 138
Creating Bridge Fault Definitions . . . . . . . . . . . . 140
Specifying Bridge Faults in a Fault Rule File . . . . . . . . . . . . 140
Creating Shorted Net Fault Definitions . . . . . . . . . . . . 140
Specifying a Shorted Net Fault in a Fault Rule File . . . . . . . . . . . . 140

B
Hierarchical Fault Processing Flow . . . . . . . . . . . . 153
Core Level Flow . . . . . . . . . . . . 153
SOC Level Flow . . . . . . . . . . . . 154

C
Building Register Array and Random Resistant Fault List Files for Pattern Compaction . . . . . . . . . . . . 157
Input Files . . . . . . . . . . . . 157
Output Files . . . . . . . . . . . . 157

Index . . . . . . . . . . . . 159

List of Figures

Figure 1-1 Fault Selection . . . . . . . . . . . . 15
Figure 1-2 Fault Model Compatibility with Other Tools . . . . . . . . . . . . 19
Figure 2-1 Fault Attributes in Verilog . . . . . . . . . . . . 34
Figure 3-1 Fault Status Key . . . . . . . . . . . . 46
Figure 3-2 Fault Type key . . . . . . . . . . . . 47
Figure 3-3 Representation of Faults 40 and 41 . . . . . . . . . . . . 51
Figure 3-4 Representation of Faults 43 and 65 . . . . . . . . . . . . 52
Figure 3-5 Report Bridge Fault Output - available through msgHelp TFM-705 or optionally included in log (reportkey=yes) . . . . . . . . . . . . 54
Figure 3-6 Bridge Fault Statistics Report - available through msgHelp TFM-705 or optionally included in log (reportkey=yes) . . . . . . . . . . . . 62
Figure 4-1 Simple AND Block for Fault Analyzer Example . . . . . . . . . . . . 69
Figure 4-2 An Observe Test Point . . . . . . . . . . . . 83
Figure 4-3 Control-1 Test Point . . . . . . . . . . . . 83
Figure 4-4 Test Point Insertion Flow . . . . . . . . . . . . 85
Figure 6-1 S-A-1 AND gate . . . . . . . . . . . . 92
Figure 6-2 XOR Static Pattern Faults Automatically Generated by Encounter Test . . . . . . . . . . . . 95
Figure 6-3 TSD Static Pattern Faults Automatically Generated by Encounter Test . . . . . . . . . . . . 95
Figure 6-4 Latch Static Pattern Faults Automatically Generated by Encounter Test . . . . . . . . . . . . 96
Figure 6-5 Status Faults for MUX2 Primitive . . . . . . . . . . . . 97
Figure 6-6 XOR Dynamic Pattern Faults Automatically Generated by Encounter Test . . . . . . . . . . . . 99
Figure 6-7 TSD Dynamic Pattern Faults Automatically Generated by Encounter Test . . . . . . . . . . . . 100
Figure 6-8 Latch Dynamic Pattern Faults Automatically Generated by Encounter Test . . . . . . . . . . . . 101
Figure 6-9 Test Coverage Formulas Used By Encounter Test . . . . . . . . . . . . 114
Figure 6-10 Test Coverage Formulas with Ignore Faults . . . . . . . . . . . . 115
Figure 6-11 Path Delay Fault Example . . . . . . . . . . . . 119
Figure A-1 Fault Rule File Syntax . . . . . . . . . . . . 125
Figure A-2 Example of NET Statement . . . . . . . . . . . . 131
Figure A-3 Dependent Composite Fault . . . . . . . . . . . . 139
Figure B-1 Core Level Flow . . . . . . . . . . . . 153
Figure B-2 SOC Level Flow . . . . . . . . . . . . 155

Preface

About Encounter Test and Diagnostics

Encounter® Test uses breakthrough timing-aware and power-aware technologies to enable customers to manufacture higher-quality power-efficient silicon, faster and at lower cost. Encounter Test is integrated with Encounter RTL Compiler global synthesis and inserts a complete test infrastructure to assure high testability while reducing the cost-of-test with on-chip test data compression. Encounter Test uses XOR-based compression architecture to allow a mixed-vendor flow, giving flexibility and options to control test costs. It works with all popular design libraries and automatic test equipment (ATE).

Encounter Test also supports manufacturing test of low-power devices by using power intent information to automatically create distinct test modes for power domains and shut-off requirements. It also inserts design-for-test (DFT) structures, such as level shifters and isolation cells, to enable control of power shut-off during test. The power-aware ATPG engine targets low-power structures and generates low-power scan vectors that significantly reduce power consumption during test. Cumulatively, these capabilities minimize power consumption during test while still delivering the high quality of test for low-power devices.

Encounter Diagnostics identifies critical yield-limiting issues and locates their root causes to speed yield ramp.

Typographic and Syntax Conventions

The Encounter Test library set uses the following typographic and syntax conventions.
■ Variables appear in Courier italic type.
Example: Use TB_SPACE_SCRIPT=input_filename to specify the name of the script that determines where Encounter Test binary files are stored.
■ Text that you type, such as commands, filenames, and dialog values, appears in Courier type.
Example: Type build_model -h to display help for the command.
■ Optional arguments are enclosed in brackets.
Example: [simulation=gp|hsscan]
■ User interface elements, such as field names, button names, menus, menu commands, and items in clickable list boxes, appear in Helvetica italic type.
Example: Select File - Delete - Model and fill in the information about the model.

Encounter Test Documentation Roadmap

The following figure depicts a recommended flow for traversing the documentation structure.
(Figure: Encounter Test documentation roadmap — Getting Started: Overview and New User Quickstart; Guides: Models, Testmodes, Test Structures, Faults, ATPG, Test Vectors, Diagnostics, PMBIST Pattern Generation, PMBIST Analysis, ET Flows; Reference: Flow Commands, GUI, Messages, Test Pattern Formats, Extension Language, Glossary; Expert Documents.)

Getting Help for Encounter Test and Diagnostics

Use the following methods to obtain help information:
1. Click the Help or ? buttons on Encounter Test forms to navigate to help for the form and its related topics. Refer to the following in the Encounter Test: Reference: GUI for additional details:
■ “Help Pull-down” describes the Help selections for the Encounter Test main window.
■ “View Schematic Help Pull-down” describes the Help selections for the Encounter Test View Schematic window.
2. From the <installation_dir>/tools/bin directory, type cdnshelp at the command prompt. To view a book, double-click the desired product book collection and double-click the desired book title in the lower pane to open the book.

Contacting Customer Service

Use the following methods to get help for your Cadence product.
■ Cadence Online Customer Support
Cadence online customer support offers answers to your most common technical questions. It lets you search more than 40,000 FAQs, notifications, software updates, and technical solutions documents that give step-by-step instructions on how to solve known problems. It also gives you product-specific e-mail notifications, software updates, up-to-date release information, software update ordering, service request tracking, full site search capabilities, and much more. Go to http://support.cadence.com/support/pages/default.aspx for more information on Cadence Online Customer Support.
■ Cadence Customer Response Center (CRC)
A qualified Applications Engineer is ready to answer all of your technical questions on the use of this product through the Cadence Customer Response Center (CRC). Contact the CRC through Cadence Online Support. Go to http://www.cadence.com and click the Contact Customer Support link to view contact information for your region.
■ IBM Field Design Center Customers
Contact IBM EDA Customer Services at 1-802-769-6753, FAX 1-802-769-7226. From outside the United States call 001-1-802-769-6753, FAX 001-1-802-769-7226. The e-mail address is edahelp@us.ibm.com.

Encounter Test And Diagnostics Licenses

Refer to “Encounter Test and Diagnostics Product License Configuration” in Encounter Test: Release: What’s New for details on product license structure and requirements.

Using Encounter Test Contrib Scripts

The files and Perl scripts shipped in the <ET installation path>/etc/tb/contrib directory of the Encounter Test product installation are not considered as "licensed materials". These files are provided AS IS and there is no express, implied, or statutory obligation of support or maintenance of such files by Cadence. These scripts should be considered as samples that you can customize to create functions to meet your specific requirements.

What We Changed for This Edition

There are no significant modifications specific to this version of the manual.

1
Building Fault Models

Building a fault model identifies the faults that ATPG, simulation, and diagnostics processes use. The fault models you need depend on your test generation methodology. Refer to Concepts for descriptions of the Encounter Test fault types, status, attributes, and modeling options.

Figure 1-1 shows when to select various types of fault models and where building fault models fits in the processing flow. Note that fault models may be built either before or after test modes are built. However, it is recommended that you build your test modes and verify them before building the fault model in order to avoid locking issues. Each fault model and associated status is unique; faults detected in one fault model are not marked in any other fault model.

Figure 1-1 Fault Selection

The following sections describe how to build each of these types of fault or objective models:

Build Fault Model

The fault model can include Static (Stuck-at), Dynamic (Transition), Iddq, and Parametric (Driver/Receiver) faults and other pattern faults defined with fault rules. By default, the fault model includes static, dynamic, and Iddq faults. Dynamic faults can be excluded if you are not going to do any delay testing on this design. Parametric (driver/receiver) faults are included only if they are explicitly selected.

To build a fault model using the graphical interface, select Models - Fault Model from the Verification pulldown on the main GUI window. Refer to “Build Fault Model” in the Encounter Test: Reference: GUI. To build a fault model using the command line, use “build_faultmodel”, which is documented in Encounter Test: Reference: Commands.

The process of identifying active faults per testmode and initializing the testmode fault status is done during build_faultmodel for all existing testmodes and is done during build_testmode for any testmodes built after the fault model exists.

Example syntax for various uses of the build_faultmodel command are provided in the following sections:
■ Build Fault Model Examples for Including/Excluding Faults
■ Build Fault Model Examples for Cell Boundary Fault Model
■ Build Fault Model Examples with Fault Rule Files
■ Build Fault Model Examples with Special Handling of Ignored Faults
■ Build Fault Model Example for Hierarchical Test - Chip Level

Build Fault Model Input

An Encounter Test Model is the only required input for creating the default fault model. If test modes exist, they are also used as input to build fault model. The process of building the fault model is independent of any testmode(s) that might be created and therefore can be run either before or after defining the testmode(s). Fault Rule Files are used as input to building the fault model if you have pattern faults to add or want to modify the status or inclusion of specific faults (that is, ignore faults, mark faults redundant, or mark faults detected).
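A minimal sketch of the default usage described above (the working directory value is a placeholder) builds the static, dynamic, and Iddq faults and initializes fault status for any testmodes that already exist:

build_faultmodel workdir=<directory>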

Tip: It is recommended that you build the fault model after creating and verifying the testmodes in order to enable building multiple testmodes simultaneously and to avoid locking issues. However, if you need to build additional testmodes after the fault model exists, you can do that without having to re-build the fault model.

Build Fault Model Output

The binary data files are created and stored in the workdir/tbdata directory for use by subsequent fault oriented tasks such as ATPG, fault simulation, and diagnostics.

Build Fault Model Examples for Including/Excluding Faults

Example 1: Excluding Dynamic Faults
The following syntax turns off dynamic faults in the fault model. The default is to include dynamic faults.
build_faultmodel workdir=<directory> includedynamic=no

Example 2: Including Faults for Parametric Tests
The following syntax is used to include the driver and receiver faults which are the target of parametric tests:
build_faultmodel workdir=<directory> includedrvrcvr=yes

Example 3: Including Precollapsed Faults
Identifying Pre-Collapsed faults is common in the industry. However, some test tools include the faults in their fault models and simply mark their test status based on the faults they target for test generation. To provide a similar capability, Encounter Test allows specification of the following syntax to include the pre-collapsed faults for each AND/NAND, OR/NOR, and BUF/INV in the fault model and simply marks the extra faults as collapsed unless collapsefaults=no is also specified as in Example 4: Including Collapsed Faults.
build_faultmodel workdir=<directory> precollapsed=yes

Example 4: Including Collapsed Faults

. build fault model also analyzes the design and collapses faults that are logically equivalent between gates (such as faults on a string of buffers where the test for the s-a-0 fault on the output of the first BUF in the string will also test the s-a-0 faults on the outputs of all the other BUFs in that string). Encounter Test: Guide 4: Faults Building Fault Models In addition to the pre-collapsed faults discussed in the previous example. The default is to fault at the primitive level. prior to build_faultmodel. Example 5: Excluding Scan Chain Faults Excluding scan chain faults reduces the size of the fault model and reduces simulation time during create_scanchain_tests. An alternative that reduces test generation time while keeping the faults in the coverage is to use a priori fault markoff. October 2015 18 Product Version 15. Notes Although this technique is supported. Test generation will not have any scan faults to work on in this case so scanchain tests need to be run with good machine simulation (create_scanchain_tests gmonly=yes or create_scanchain_delay_tests gmonly=yes). create a full scan test mode and then specify its name in the excludescanmode keyword as shown in the example below: build_faultmodel excludescanmode=FULLSCAN Build fault model processes the specified testmode and removes faults along the scan chain that would be easily tested by scanchain or logic tests.12 © 1999-2015 All Rights Reserved. See Removing Scan Faults from Consideration. ECR) and then fault simulation marks all the equivalent faults with the same status as the Independent fault. allows the test generator to work on a single fault (called the Independent fault or the Equivalence Class Representative. it is rarely used. Build Fault Model Examples for Cell Boundary Fault Model Encounter Test provides comprehensive fault modeling capabilities that includes modeling at technology cell boundaries and at the primitive level. The following syntax turns off fault collapsing so every fault is independent. To exclude scan faults from the fault model. Fault collapsing. build_faultmodel workdir=<directory> collapsefaults=no Note that this does not include the pre-collapsed faults unless you include them with precollapsed=yes on the same command line. which is industry standard.

.12 © 1999-2015 All Rights Reserved. This fault modeling is the default behavior in when the model is built with industrycompatible=yes. This is depicted in Figure 1-2 on page 19 (industrycompatible=no). Encounter Test: Guide 4: Faults Building Fault Models Example 6: Building Industry Compatible Fault Model The following syntax on build_model causes Encounter Test to model faults at the cell boundary level with a single pair of faults at the cell boundary even when a net fans out internally from a cell input pin. The build_model process inserts a buffer into the flattened model so a single pair of faults can be built at the cell boundary. build_faultmodel workdir=<directory> cellfaults=yes Figure 1-2 Fault Model Compatibility with Other Tools Build Fault Model Examples with Fault Rule Files Example 8: Using Fault Rule Files named by Verilog Module Name October 2015 19 Product Version 15. build_model workdir=<directory> industrycompatible=yes build_faultmodel workdir=<directory> Example 7: Building Cell Fault Model without Industry Compatible The following syntax causes Encounter Test to model faults at the cell boundary level with multiple pairs of faults included when a net fans out internally from a cell input pin. as shown in Figure 1-2 (industrycompatible=yes).

The following syntax selects the fault rule files to be applied based on the file names in the FAULTS attributes and the directories listed in the faultpath keyword. Encounter Test: Guide 4: Faults Building Fault Models The following syntax selects fault rule files to be applied based on cell block name (cellnamefaultrules=yes causes build_faultmodel to search the directories specified in faultpath looking for each cell in the design) : build_faultmodel cellnamefaultrules=yes faultpath=/dir1/faultrules1:/dir1/ faultrules2:/dir2/faultrules3 This assumes the fault rule files are named based on the cell name (Verilog module name). this fault rule will be applied to modify the default faulting assumptions on that cell. build_faultmodel faultpath=/dir/faultrule1:/dir/faultrule2:/dir2/faultrule3 The following syntax selects the fault rule file to be applied based on the names of the files in the faultrulefile keyword: build_faultmodel faultrulefile=/dir1/myfaultrule1. . Example 9: Using Fault Rule Files with User-selected Names A user-specified fault rule name may be identified with the FAULTS attribute in the Verilog (see Using Design Source Attributes to Add/Remove Faults) or may be identified directly on the command line with the faultrulefile keyword on the build_faultmodel command line./dir1/myfaultrule2./dir2/ myfaultrule3 October 2015 20 Product Version 15. That is. File name is: tech_AND Fault rule file contains: Entity = tech_AND When any instance of tech_AND is found in the model. the file name and the name on the Entity statement in the fault rule match.12 © 1999-2015 All Rights Reserved.
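The two selection mechanisms are not mutually exclusive. As a simple sketch (directory and file names are placeholders), both keywords can be given on one command line, which is the situation addressed by the precedence notes that follow:

build_faultmodel faultrulefile=/dir1/myfaultrule1 faultpath=/dir2/faultrules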

the one found in the first directory is used (searching left to right in the list provided to the keyword).12 © 1999-2015 All Rights Reserved. If you use FAULTS attributes and the files are not found during build_faultmodel. If the same filename happens to exist in more than one of the directories specified in FAULTPATH. no message is printed for missing names. such as NOFAULT or FAULTONLY statements. This is especially important for fault rules containing statements where the order of the statements matters. All faults defined in a fault rule are included regardless of any NOFAULTing or other fault removal. If is recommended that you not include more than one fault rule that contains rule data for the same entity (Verilog module. Encounter Test: Guide 4: Faults Building Fault Models Notes If faultpath and faultrulefile are both specified on the command line files specified with the faultrulefile keyword are processed before the rules from the faultpath keyword. regardless of whether dynamic faults are requested to be created when the fault model is built. Build Fault Model Examples with Special Handling of Ignored Faults Example 10: Including Ignored Faults in the Fault Model By default. A count of the number of uncollapsed ignored faults that would have been included is calculated and printed. Dynamic faults defined in a fault rule file are always included in a fault model. This behavior reduces the size of the fault model for customers who don’t need details about these faults. hierBlock in the model). unless the model is built with industrycompatible=yes. An example of the report in the Build Fault Model log is below: Global Ignored Static Fault Count: 100 Global Ignored Dynamic Fault Count: 200 October 2015 21 Product Version 15. If you use cellnamefaultrules=yes and name your files according to the cell name. build_faultmodel checks for fault rules for every cell and since it doesn’t expect to find them for every cell. it is assumed that you want it to be included. . Build Fault Model accepts compressed or gzipped versions of fault rule files to conserve memory when processing large files. If the fault is defined in a fault rule. a message is printed to let you know it is missing. but the actual faults cannot be reported. ignored faults are not included in the Encounter Test fault model.

specifying ignoredfaultstatistics=yes also: ■ causes the output of report_faults ignored=yes to categorize the ignored faults as It (tied). The output from prepare_core_migration_faults is October 2015 22 Product Version 15. build_faultmodel includeignore=yes Example 11: Accounting for Ignored Faults in Reports/Statistics The following syntax analyzes the globally ignored static and dynamic faults and categorizes the ignored attribute. Global Static Total Ignored (undetectable) Fault Count: 100 Ignored-Tied Fault Count: 40 Ignored-Unconnected Fault Count: 20 Ignored-Blocked Fault Count: 40 Global Dynamic Total Ignored (undetectable) Fault Count: 200 Ignored-Tied Fault Count: 80 Ignored-Unconnected Fault Count: 40 Ignored-Blocked Fault Count: 80 Notes Using includeignore=yes without ignorefaultstatistics (or with ignoredfaultstatistics=no) causes the output of report_faults ignored=yes to report the ignored faults as I without any additional categorization. Encounter Test: Guide 4: Faults Building Fault Models The following syntax includes the ignored faults in the fault model and provides the ability to list them. In addition to categorizing the total fault counts in the build_faultmodel log.Chip Level Build Fault Model supports hierarchical test with the coremigrationpath keyword. See Report Faults ■ causes the ignored faults to be included in the global test coverage calculation. and are not included in any fault calculations. An example is below. . Iu (unconnected). or Ib (blocked). build_faultmodel includeignore=yes ignorefaultstatistics=yes On specifying the above keywords. However. The output in the Build Fault Model log looks the same as without includeignore=yes.12 © 1999-2015 All Rights Reserved. build fault model accumulates global data (uncollapsed) fault counts for each of the categories for static and dynamic faults separately. Build Fault Model Example for Hierarchical Test . These are then printed in the Build Fault Model log in place of the 2 line fault counts. This keyword is used to identify the location of the core migration directory for each core that is instanced on the chip (SOC). the same as the 2 lines above that provide the counts. are not categorized. the ignored faults are not collapsed.

One example of how the alternate fault model could be advantageous is during diagnostic simulation where shorted net fault descriptions not typically found in the standard fault model. build_alternate_faultmodel workdir=<mydir> ALTFAULT=SHORTEDNET faultpath=<path to faultrules created by prepare_shorted_nets> cellrulefaultnames=yes October 2015 23 Product Version 15.12 © 1999-2015 All Rights Reserved. An example of the command line for this purpose is: build_faultmodel coremigrationpath=/hierTest/core1migr:/hierTest/core2migr Refer to Build Fault Model in Encounter Test: Reference: GUI Refer to build_faultmodel in the Encounter Test: Reference: Commands for complete syntax of the command line. could be found in an alternate fault model. The alternate fault model file names are differentiated from the standard fault model files by adding your own identifier as a file name qualifier using either the Build Alternate Fault Model window or command line. Encounter Test: Guide 4: Faults Building Fault Models found in these directories and is used to provide a count of core internal faults that are not visible in the chip model due to use of core migration models for the cores. . The following syntax builds an alternate fault model named SHORTEDNET with the shorted- nets faults from prepare_shorted_nets included in with the standard faults. and thus able to be considered in diagnostic simulation. These counts are added into the totals for the chip (SOC). Build Alternate Fault Models Encounter Test supports the creation of alternate fault models as a means to temporarily replace or augment the standard Encounter Test fault model without having to rebuild the files associated with the standard fault model.
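Downstream commands can then be pointed at this alternate fault model by name. For example, a sketch that reuses the SHORTEDNET name from the command above and assumes an existing testmode and experiment:

simulate_vectors testmode=<testmode> altfault=SHORTEDNET experiment=<experiment>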

the ALTFAULT= user_specified_name keyword must be specified or set in the environment. the standard Encounter Test fault model (assuming it has been created) is made available to the application. For a complete description of the build_alternate_faultmodel syntax. The pathname is an optional keyword that allows you to name the path delay fault model. Encounter Test: Guide 4: Faults Building Fault Models Notes For alternate fault models to be accessed by Encounter Test applications. you may have the path delay faults built as part of the create_path_delay_tests command by specifying a pathfile. . The only difference in the output is the identification of the data by the ALTFAULT name. build_alternate_faultmodel has the same fault selection options and input file requirements as build_faultmodel. To perform Build Alternate Fault Model using the graphical interface. by default it is named by the name of the pathfile. prepare_path_delay workdir=myworkdir testmode=FULLSCAN pathfile=/dir/mypathfile maxnumberpaths=10 pathname=mypaths The maxnumberpaths is an optional keyword that indicates how many paths are to be created along the paths specified in the pathfile. Refer to prepare_path_delay in the Encounter Test: Reference: Commands for complete syntax. If commit_tests is run. To gain credit in the master falut model built by build_faultmodel. refer to Build Alternate Fault Model in the Encounter Test: Reference: GUI. Or. patterns are commited but not the fault coverage. Prepare Path Delay is run using a command line such as the one below. If this is not set. October 2015 24 Product Version 15. refer to build_alternate_faultmodel in the Encounter Test: Reference: Commands. Prepare Path Delay Faults The faults for path delay test are identified by specifying a list of paths to be included and are saved by a user specified path name. Uncommitted fault status results for an alternate fault model cannot be committed against a master as there is no master alternate fault model.12 © 1999-2015 All Rights Reserved. In either case you can run one or more create_path_delay_tests experiments against those faults by specifying the pathname. the patterns must be resimulated without the ALTFAULT keyword prior to commit_tests. You may build the path delay faults using prepare_path_delay.

shorted-nets and optionally slow-to-disable objectives described in Concepts.<pathfile name> Example 1: prepare_path_delay pathfile=/dir/mypathfile create_path_delay_tests pathname=mypathfile GUI ALTFAULT: Paths. To reference this path delay fault model on the GUI schematic. To reference this path delay fault model in create_path_delay_tests use pathname=<pathname specified on prepare_path_delay> or. set the analysis context ALTFAULT to Paths. The following syntax builds the stuck-driver and shorted-nets test objectives as well as the slow-to-disable objectives: build_sdtsnt_objectives workdir=myworkdir objectivefile=myslowdisables October 2015 25 Product Version 15. Use report_path_faults for more information about the paths. INFO (TPT-315): 18 path groups are defined. pathname=<name of the pathfile specified on prepare_path_delay>.12 © 1999-2015 All Rights Reserved. Encounter Test: Guide 4: Faults Building Fault Models Prepare Path Delay Inputs ■ Encounter Test Model from Build Model ■ Test Mode from Build Test Mode ■ Path file with the identification of paths for which path faults are to be created Refer to Path File in Encounter Test: Guide 5: ATPG for syntax of this file. The command is build_sdtsnt_objectives (sdtsnt means stuck-driver test/shorted-nets test).<pathname> or Paths.mypaths The output log reports the number of path groups created as shown below. Prepare Path Delay Outputs A path delay fault model identified by the pathname or pathfile name. .mypathfile Example 2: prepare_path_delay pathfile=/dir/mypathfile pathname=mypaths create_path_delay_tests pathname=mypaths GUI ALTFAULT: Paths. [end TPT_315] Build Package Test Objectives The package test objectives are stuck-driver. if no pathname was specified.

■ Objective file (optional). 3D or Interposer test) or IOWRAP for chip test. If you want to analyze the stuck-driver objectives on the GUI or report the stuck-driver test faults.e. Identfies slow-to-disable faults for interconnect tests. Two objectives are defined for each pin pair where 1->Z and 0->Z are the required transitions on the driver being tested for slow disable. Build Package Test Objectives Input ■ Encounter Test Model from Build Model ■ Test Modes built with a mode definition file that indicates the test type is INTERCONNECT (for MCM. See Mode Definition file Statement TEST_TYPES in Encounter Test: Guide 2: Testmodes.12 © 1999-2015 All Rights Reserved. . Each pair of pin names identifies a single net driven by both chips. The objectivefile is used to notify Encounter Test where to define slow-to-disable objectives. they are known from the existence of the package test objectives model. to be slow-to-disable tested) and the second short_chip_pin_name2 identifies the backtrace point for the driver getting the enable transition. Build Package Test Objectives Output There are multiple representations of these objectives: ■ The package test objectives model ■ The stuck-driver test faults included in alternate fault model ##TB_SDT When you run create_interconnect_tests or create_iowrap_tests. Encounter Test: Guide 4: Faults Building Fault Models Note: The package test objectives can also be built with the fault model by specifying sdtsnt=yes on the build_faultmodel command line. the objectives are built and made active for every applicable testmode. The syntax for the objective file: PIN<short_chip_pin_name1> PIN<short_chip_pin_name2> where the first short_chip_pin_name1 identifies the backtrace point for the driver getting the disable transition (i. specify ALTFAULT in the analysis context or on the command line as ##TB_SDT. Note that the testmode keyword is not specified on the build_sdtsnt_objectives command line. there is no need to identify the objectives. October 2015 26 Product Version 15. This net is the target slow-to-disable objective net.
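Returning to the objective file syntax described earlier in this section, each line of the file pairs the two backtrace pins for one slow-to-disable objective net. The pin names below are purely hypothetical placeholders, and a space is assumed between the PIN keyword and each pin name:

PIN core1.bus_drv_en PIN core2.bus_drv_en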

When building an alternate faultmodel. When building a standard faultmodel. Refer to Net Pair File on page 28 for more information. A net pair is a pair of nets that are adjacent to each other on one or more metal layers. This faultrule file would not contain bridge faults but might contain some other type of pattern fault. The build_bridge_faultmodel command can automatically identify the potential bridge candidates based on net adjacency using the physical design information from OpenAccess (OA). the active faults for each existing testmode are identified. If includedynamicbridge=yes. the testmode processing is the same as for build_faultmodel. the testmodelist keyword is used to identify the testmode(s) to which this alternate fault model applies. identified with the innetpairfile keyword. a text file containing net pairs is used as input. A faultrule file also may be used as input if you are building a standard fault model. October 2015 27 Product Version 15. Bridge faults are built using the net pairs extracted from OA or from the input net pair file. This process extracts neighboring nets for every logical net from the OA database to create net pairs. Refer to build_bridge_faultmodel in the Encounter Test: Reference: Commands for details of the command line. By default. the command builds four static bridge faults for each net pair. they are also used as input to build_bridge_faultmodel. Encounter Test: Guide 4: Faults Building Fault Models Build Bridge Fault Model Bridge faults can be included in the standard Encounter Test fault model or can be included separately in an alternate fault model using the build_bridge_faultmodel command. then four dynamic bridge faults are also built.12 © 1999-2015 All Rights Reserved. If you do not have an OA database. . These optionally may be written out as a faultrule file. Build Bridge Fault Model Input There are two required inputs to build_bridge_faultmodel: ■ The Encounter Test Model ■ An identification of the source of net pairs (OA database or net pair file) If testmodes exist.
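A minimal sketch of the two net-pair sources (values are placeholders; complete command sequences are shown in the scenarios later in this section):

build_bridge_faultmodel workdir=<directory> oapath=<oa_directory> oalib=<oa_library> oaview=layout oacell=<top_cell>
build_bridge_faultmodel workdir=<directory> innetpairfile=<net_pair_file>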

The net pair file is used for building a bridge fault model for use by Encounter Test.12 © 1999-2015 All Rights Reserved.cell name of the top level physical design in the OA database The following keywords are used to determine the adjacent nets.directory containing the OA database (the cds. ■ proximity . . Net Pair File The objective of creating a net pair file is to identify to Encounter Test pairs of nets which are most prone to bridge defects. You can use any Design for Manufacturability (DFM) strategies available in the industry to identify candidate bridge net pairs. Refer to Physical Diagnostics in Encounter Test: Guide 7: Diagnostics for information on Creating the OA Database and on how to report Physical Correlation. The following is a sample format: October 2015 28 Product Version 15. Encounter Test: Guide 4: Faults Building Fault Models OpenAccess Database If you have an OpenAccess (OA) database containing physical layout.in microns. you may use this as input to build the bridge faults. the identification of proable candidates for a bridge defect can be done accurately.cell view associated with the physical design in the OA database (typically layout) ■ oacell .greatest distance .the maximum number of bridge faults to be included in the fault model. for which to model bridge faults: ■ coverage .library name associated with the physical design in the OA database ■ oaview .lib file should be in this directory) ■ oalib -. between two shapes to have them considered adjacent ■ layerminspacing . in the physical design described by the OA database. As this is based on the physical layout of the design. You should analyze the Physical Correlation results to ensure good correlation prior to running build_bridge_faultmodel.percent target coverage of adjacent metal for the bridge faults in the fault model. Encounter Test accepts a generic format for identifying such net pairs as shown in the following example. The following keywords are used to identify the OA database: ■ oapath .choose whether the minimum spacing or maximum spacing value for each metal layer is to be used to determine the proximity ■ faults .

n_2403 THE_REG_FILE.n_2403 THE_REG_FILE.n_2403 SCAN_ENABLE THE_REG_FILE.n_2403 The net pair file is identified with the innetpairfile keyword on the command line.n_88 THE_REG_FILE.n_539 THE_REG_FILE. In this case. Build Bridge Fault Model Output The binary files are created and stored in the workdir/tbdata directory.creates tests for dynamic bridge faults. In this case.n_44 THE_REG_FILE. static bridge faults ■ create_bridge_delay_tests . The following fault commands can process bridge faults: October 2015 29 Product Version 15. It is assumed that you are writing the netpairfile out to a file so that you can edit it and then use it as input to build_bridge_faultmodel with the innetpairfile keyword.n_2403 THE_REG_FILE.12 © 1999-2015 All Rights Reserved.creates tests for. Encounter Test: Guide 4: Faults Building Fault Models THE_REG_FILE.n_1447 THE_REG_FILE.n_4030 THE_REG_FILE.n_2403 THE_REG_FILE.n_2403 THE_REG_FILE. Applications that Support Bridge Faults The following Encounter Test commands support simulation of bridge faults when selected with the faulttype keyword: ■ create_logic_tests / create_logic_delay_tests ■ create_random_tests ■ simulate_vectors / analyze_vectors The following Encounter Test commands target bridge faults for top-off test generation: ■ create_bridge_tests . Note: Tests generated and/or simulated against bridge faults can be committed. You may request that the internally generated bridge faults be written out as a fault rule file specified with the outfaultrulefile keyword.n_2403 THE_REG_FILE. the fault model is not created in tbdata.n_1065 THE_REG_FILE. simulation of these tests is done against dynamic and static bridge faults unless overridden with the faulttype keyword. You may request that the net pair file generated from OA be written out to a file specified with the netpairfile keyword.n_180 THE_REG_FILE.n_2403 THE_REG_FILE. and simulates.n_4156 THE_REG_FILE. the fault model is not created in tbdata. .

Applications that Support Bridge Faults

The following Encounter Test commands support simulation of bridge faults when selected with the faulttype keyword:
■ create_logic_tests / create_logic_delay_tests
■ create_random_tests
■ simulate_vectors / analyze_vectors

Note: Tests generated and/or simulated against bridge faults can be committed.

The following Encounter Test commands target bridge faults for top-off test generation:
■ create_bridge_tests - creates tests for, and simulates, static bridge faults
■ create_bridge_delay_tests - creates tests for dynamic bridge faults. Simulation of these tests is done against dynamic and static bridge faults unless overridden with the faulttype keyword.

The following fault commands can process bridge faults:
■ prepare_detected_faults / prepare_ignore_faults
■ report_faults
■ report_fault_statistics

Diagnostics using bridge faults continues to be supported.

Build Bridge Fault Model Scenarios

Using the build_bridge_faultmodel command to create a bridge fault model is useful in the following scenarios. Each scenario includes a sample set of commands using either the OA database or a net pair file as input to building the bridge fault model. Note that there is no restriction on selecting either OA or a net pair file in these scenarios; the choice is based on availability.

Scenario 1: Use Model for ATPG with Bridge Faults

You can generate test patterns for stuck and/or delay fault models while simulating these test vectors against the bridge fault model using create_logic_tests or create_logic_delay_tests with the faulttype keyword. This will generate a higher-quality test set. Additional patterns can be generated using the create_bridge_tests or create_bridge_delay_tests commands to target any remaining untested bridge faults.

build_bridge_faultmodel includestaticbridge=yes \
oapath=/path/to/oa oalib=myoalib oaview=layout oacell=mytop
create_logic_tests testmode=xxx experiment=tg1 faulttype=static,staticbridge
commit_tests testmode=xxx inexperiment=tg1
create_bridge_tests testmode=xxx experiment=topoff
commit_tests testmode=xxx inexperiment=topoff

Scenario 2: Simulate Vectors with Bridge Faults

You can simulate traditional stuck and/or delay patterns using simulate_vectors or analyze_vectors with the bridge fault model to generate bridge fault coverage. Bridge fault coverage is a quality metric that you can use to evaluate the quality of test patterns. This may be done by including the bridge faults in the standard fault model or by creating an alternate fault model (ALTFAULT) that includes only the bridge faults.

Example using standard fault model that also includes bridge faults:

build_bridge_faultmodel innetpairfile=./mynetpairs \
includestaticbridge=yes includedynamicbridge=yes
simulate_vectors testmode=xxx faulttype=staticbridge,dynamicbridge \
experiment=exp1 <options>

Example using alternate fault model with only bridge faults:

build_bridge_faultmodel innetpairfile=./mynetpairs altfault=bridgeflt \
includestaticbridge=yes includedynamicbridge=yes
simulate_vectors testmode=xxx altfault=bridgeflt \
experiment=exp1 <options>

Scenario 3: Performing diagnostics with no additional pattern generation and resimulation

In this scenario, if the fail data was caused by a bridge defect, and that defect is modeled in the bridge fault model and is not present in the scan chain, diagnose_failset_logic will identify the bridge defect with a maximum score.

build_bridge_faultmodel altfault=bridgeflt \
oapath=/path/to/oa oalib=myoalib oaview=layout oacell=mytop \
includestaticbridge=yes includedynamicbridge=yes
diagnose_failset_logic testmode=xxx altfault=bridgeflt <options>
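For any of these scenarios, the resulting bridge fault coverage can then be reviewed with report_fault_statistics (described in Chapter 3). For example, for the experiment from Scenario 2 - mywd and the testmode and experiment names here are the same placeholders used above, not values specific to bridge faults:

report_fault_statistics workdir=mywd testmode=xxx experiment=exp1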


2 Modifying Fault Model/Fault Status

Encounter Test allows specification of attributes in the model to add faults that would not be included by default or to remove faults that would be included by default.

Using Model Attributes to Add/Remove Faults

This section discusses the following topics:
■ Fault Attribute Specification in Verilog Design Source
■ Fault Attribute Specification in Edit Model File

Fault Attribute Specification in Verilog Design Source

Encounter Test allows specification of attributes in the technology library or design source to add faults that would not be included by default or to remove faults that would be included by default. The following is a set of fault specifications that can be placed on instance pins of logic primitives, and the values that are allowed for each attribute:
■ PFLT (Pin FauLT) - these are stuck-at (also known as static) faults
❑ PFLT="NO" – remove all stuck-at faults from this pin
❑ PFLT="0" – include only a stuck-at-0 on this pin
❑ PFLT="1" – include only a stuck-at-1 on this pin
❑ PFLT="01" – include both stuck-at-0 and stuck-at-1 on this pin
■ TFLT (Transition FauLT) – these are transition (also known as delay or dynamic) faults
❑ TFLT="NO" – remove all transition faults from this pin
❑ TFLT="0" – include a slow-to-fall transition fault on this pin
❑ TFLT="1" – include a slow-to-rise transition fault on this pin

(* PFLT="0" *) b3). b1.specified on a cell (Verilog module) to include faults in the fault rule with the specified filename. and nb1_G (nb1_S001. a3. b2. b1. b3. (* FAULTS="NO" *) nor y_G001(y. na1_S003. a3. Encounter Test: Guide 4: Faults Modifying Fault Model/Fault Status ❑ TFLT=”01” – include both slow-to-fall and slow-to-rise transition faults on this pin ■ DFLT (Driver FauLT) -. b3). a2. (* PFLT="01" *) b2. output y. and na1_G (na1_S003. .these are driver objectives for parametric test ❑ DFLT=”NO” – remove the driver objective from this pin ■ RFLT (Receiver FauLT) – these are receiver objectives for parametric test ❑ RFLT=”NO” – remove the receiver objective from this pin The following attributes can be placed on the instance of a primitive or a cell (Verilog module) depending on the value of the attribute: ■ FAULTS=NO . a2. a2. b2. endmodule In the above example. a1.specified on an instance of a primitive to remove all faults from that primitive ■ FAULTS=<filename> . The following is an example of Verilog using PFLT and FAULT syntax – this example is unrealistic and is included just to show placement of the attributes: Figure 2-1 Fault Attributes in Verilog (* FAULTS="mypflt" *) module pflt_test (y. input a1. a1.12 © 1999-2015 All Rights Reserved.(* PFLT="NO" *) a3). na1_S003. ■ Pattern faults defined in file mypflt in one of the directories specified with the build_faultmodel FAULTPATH keyword will be included for each instance of this module ■ All faults will be removed from the nor gate y_G001 for all instances of this module ■ Faults SA0 and SA1 will be placed on pin b2 of instance nb1_G for each instance of this module (instead of only SA1 that would be included by default) ■ Fault SA0 will be placed on pin b3 of instance nb1_G for each instance of this module (instead of both SA1 and SA0 that would be included by default) ■ All static faults will be removed from pin a3 of instance na1_G for each instance of this module (instead of both SA1 and SA0 that would be included by default) October 2015 34 Product Version 15. nb1_S001). wire nb1_S001. b1.

Notes
■ The right parenthesis and the semicolon on the and instances MUST NOT be on the same line as the PFLT attribute as this is illegal Verilog. Encounter Test will issue a syntax error in this case.
■ The PFLT, TFLT, DFLT, RFLT, and FAULTS attributes are valid only on primitives.

Fault Attribute Specification in Edit Model File

The PFLT, TFLT, DFLT, RFLT, and FAULTS attributes discussed in the previous section can be included in the model during build_model by specifying them in an Edit Model file rather than directly in the Verilog Design Source. To include the same attributes as shown in the Verilog in the previous section, the file would look like this:

ADD ATTRIBUTE FAULTS=mypflt on CELL pflt_test .
ADD ATTRIBUTE FAULTS=NO on INSTANCE y_G001 of CELL pflt_test .
ADD ATTRIBUTE PFLT=01 on PIN b2 on INSTANCE nb1_G of CELL pflt_test .
ADD ATTRIBUTE PFLT=0 on PIN b3 on INSTANCE nb1_G of CELL pflt_test .
ADD ATTRIBUTE PFLT=NO on PIN a3 on INSTANCE na1_G of CELL pflt_test .

Then:
■ run: build_model editfile=<filename>
■ run: build_faultmodel <your normal options>
and the faults defined with the attributes in the edit file will be included/removed as appropriate.

Note: Attributes in the Verilog source can be overridden with an edit file with CHANGE ATTRIBUTE or DELETE ATTRIBUTE edit model statements.

Using Fault Rule Specification to Add/Remove Faults

A fault rule may be used to:

■ add user-defined pattern faults to the fault model. These faults are defined with a pattern of required module input values and expected module output values with and without the presence of the fault. For dynamic pattern faults, there is also an initial value on the module inputs.
■ identify faults as IGNORE, REDUNDANT, PTAB, or DETECTED (i.e., Tested)
■ remove all faults from an instance within a module (NOFAULT INSTANCE name IN MODULE name)
■ remove all faults from an instance of a module (NOFAULT BLOCK name)
■ remove all faults from the entire module (NOFAULT MODULE name)
■ include faults only on an instance within a module (FAULTONLY INSTANCE name IN MODULE name)
■ include faults only in an instance of a module (FAULTONLY BLOCK name)
■ include faults only in a module (FAULTONLY MODULE name)
There can be multiple NOFAULT and FAULTONLY statements to identify the complete set of blocks to be faulted or not faulted.

See Fault Rule File Syntax in Appendix A for complete details on the syntax and usage of fault rule file statements, including examples.

Preparing a Fault Subset

A fault subset is a set of static, dynamic, and/or parametric faults that are used as input to a Test Generation or Fault Simulation task. Preparation of a fault subset includes reading an input fault list which specifies which faults to include and optionally assigns status to the specified faults. These options are discussed in more detail in the Prepare Fault Subset Input section below. The output is an experiment with the fault subset for use as input to Test Generation/Fault Simulation runs.

The command line to use is shown in the following syntax:

prepare_fault_subset workdir=myworkdir testmode=FULLSCAN experiment=fltsubset1 /
faultlist=/dir/faults4fs.subset1

The fault list input may be specified as a file using the faultlist keyword as shown above or may be created with report_faults and piped into the prepare_fault_subset command.

Notes
If the status of the faults are updated during prepare_fault_subset, the vectors created using this subset will not be able to be committed.
A GUI form is not available for invocation of prepare_fault_subset; it must be run with a command line.

For a complete description of the prepare_fault_subset syntax, refer to "prepare_fault_subset" in the Encounter Test: Reference: Commands.

Prepare Fault Subset Input
■ Encounter Test model from Build Model
■ Encounter Test fault model from Build Faultmodel
■ Encounter Test test mode from Build Testmode
■ A fault list created manually or from report faults. See the following sections for more information.

The input to prepare_fault_subset must be referenced by pin name and fault type or by fault index. The default is to use the pin name and fault type. If the input is to be processed using fault index, specify prepare_fault_subset -I.

Using the Output from Report Faults to Create an Input Fault List

Typical output from report_faults uses pin names, fault type, and indexes to identify the faults. Use one of the following methods to process the output of report_faults with prepare_fault_subset:
■ Direct the output of the report_faults command to prepare_fault_subset using the following syntax:
report_faults faults=1:10 | prepare_fault_subset -I
■ The output of the report_faults command can be written to a file, edited (if desired), then read by prepare_fault_subset using the faultlist=<keyword> as shown in the following steps:

report_faults faults=1:10 > report_faults.out
prepare_fault_subset -I faultlist=report_faults.out
or
report_faults faults=1:10 > report_faults.out
prepare_fault_subset faultlist=report_faults.out
■ The output of report_faults can be directed to a file using the following syntax:
report_faults faults=1:10 > myfaultlist
myfaultlist can then be edited, then either piped or passed into prepare_fault_subset using the following syntax:
cat myfaultlist | prepare_fault_subset <parameter>
or
prepare_fault_subset faultlist=myfaultlist <other parameters>
report_faults output can also be piped directly into prepare_fault_subset.

Refer to the Encounter Test: Reference: Commands for additional report_faults and prepare_fault_subset options.

Notes:
1. The list shows faults for the specified testmode and also shows the global status of all faults. If you use the list with two sets of status as input to prepare_fault_subset, the last one should win, but it is not recommended since this will cause confusion.
2. The previous examples all show how to obtain the global status for the faults. To obtain the status from a test mode, specify testmode=testmode_name.
3. The fault status characters processed by prepare_fault_subset are the characters viewed in the output of Report Faults and are limited to U (untested), T (tested), P (possibly tested), R (redundant), or C (possibly tested at best).
4. The selection of faults in report_faults may be done with ranges (as shown in the previous examples, report_faults faults=1:10 includes 10 faults with indexes 1 through 10), lists (faults=1,3,5,7 includes only those 4 faults), or a combination of these (faults=1:10,13,27:29,50 includes 15 faults: 1 through 10, 13, 27 through 29, and 50).

Manually Creating an Input Faultlist File

An input file for Report Faults and Prepare Fault Subset can be manually created. The following two file formats are acceptable:

The file must include at least two columns, fault index and fault status, for fault index references, as shown in the following example:

21 U
22 T
23 U

or

The file must include pin name, fault type (SA0, SA1, SR, SF), and a fault status character for fault pin name references, as shown in the following example:

A OSA0 T
A OSA1 U
hadd12.carry2.A0 ISA1 U

Prepare Fault Subset Output

An experiment in tbdata consisting of the fault subset data and an empty vectors file identified by the specified experiment name.

Using Fault Subset for ATPG / Fault Simulation

When you run ATPG or fault simulation, specify the same experiment name as you specified for prepare_fault_subset and specify append=yes. For example, if you run:

prepare_fault_subset workdir=dir testmode=FULLSCAN_DELAY experiment=subset1 faultlist=/dir/mysubsetfile

To use this fault subset in one of these commands you would run:
■ create_logic_tests testmode=FULLSCAN_DELAY experiment=subset1 append=yes
■ create_logic_delay_tests testmode=FULLSCAN_DELAY experiment=subset1 append=yes
■ create_parametric_tests testmode=FULLSCAN_DELAY experiment=subset1 append=yes
■ analyze_vectors testmode=FULLSCAN_DELAY inexperiment=2analyze experiment=subset1 append=yes

Note: Once you have run a command using the fault subset, the status of the faults in that subset has changed to match the results from the command you just ran. And the vectors generated by that run are identified with the experiment from the subset. You may append again using the same or another command, but the vectors and status from the preceding run will be used as input (like a normal append run).

Removing Scan Faults from Consideration

A high number of scan faults can be detrimental to the speed of throughput of downstream test generation applications. The command, prepare_apriori_faults, can be used to mark scan faults as tested prior to beginning the test generation process. prepare_apriori_faults marks scan faults as tested but does not remove them from the fault model. An alternative method to removing scan faults from consideration is to exclude them from the fault model. Refer to "Example 5: Excluding Scan Chain Faults" on page 18 for details.

Refer to "prepare_apriori_faults" in the Encounter Test: Reference: Commands for syntax information. The syntax for the command is shown below:

prepare_apriori_faults WORKDIR=<directory> TESTMODE=<testmode name>

Prepare Apriori Faults Inputs
■ Encounter Test model from build_model
■ Full Scan Test mode from build_testmode
■ Fault model from build_faultmodel

Prepare Apriori Faults Outputs

The fault status for the testmode updated with scan faults marked as tested.
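As a concrete illustration of the syntax above, a run on the full scan test mode of an existing work directory might look like the following; mywd and FULLSCAN are placeholder names consistent with the other samples in this chapter:

prepare_apriori_faults WORKDIR=mywd TESTMODE=FULLSCAN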

Preparing an Ignore Faults File

The prepare_ignore_faults command creates a fault rule file for a cell with a list of faults to be marked as ignored when processing the cell at a higher packaging level where the faults were already proven untestable (redundant) at the cell level. Faults considered by this command are stuck-at and transition (slow-to-rise and fall). Driver and receiver faults are not considered. By default, the command processes Redundant faults, but options exist to select Possibly Testable at Best (PTAB) faults or faults untestable due to X-source. There is also an option to mark the redundant faults as Redundant instead of Ignored.

To prepare ignore faults, refer to "prepare_ignore_faults" in the Encounter Test: Reference: Commands. A sample command line is shown below:

prepare_ignore_faults WORKDIR=mywd TESTMODE=FULLSCAN OUTFAULTDIR=./myfaultdir

Prepare Ignore Faults Inputs
■ Encounter test model from build_model
■ Test mode from build_testmode
■ Fault model and status from build_faultmodel
■ For redundant or untestable due to X-source faults to be processed, an experiment from ATPG or committed ATPG data must be provided.

Prepare Ignore Faults Outputs

The output from prepare_ignore_faults is the file specified by the outfaultfile and/or OUTFAULTDIR keywords. The following is an example of the fault rule data in the output file:

Entity = BTTP
/* Start prepare_ignore_faults fault rule creation, Version 7.3, Date 19970306215136 */
IGNORE { PATTERN 1 BLOCK BLTI_p_model_p_blth_p_B_LATCH_p_BLT_DATAI_OUT$OMX104.EnbAnd.0000101.master }
IGNORE { PATTERN 2 BLOCK BLTI_p_model_p_blth_p_B_LATCH_p_BLT_DATAI_OUT$OMX104.EnbAnd.0000100.master }
IGNORE { SA0 PIN BLTI_p_DRV103.01 }
IGNORE { SA0 PIN BLTI_p_DRV106.01 }
IGNORE { SA0 PIN BLTI_p_DRV110.01 }
IGNORE { SA1 PIN btext_p_mb_data_cio000.A2 }
IGNORE { SA1 PIN btext_p_mb_data_cio001.A2 }
IGNORE { SA1 PIN btext_p_mb_data_cio002.A2 }
IGNORE { SA1 PIN btext_p_mb_data_cio003.A2 }
IGNORE { SA1 PIN btext_p_mb_data_cio004.A2 }

Note: If you edit this fault rule file or generate your own fault rule file, keep in mind that the ENTITY statement is required for all fault rule files.

Preparing Detected Faults File

The prepare_detected_faults command creates a fault rule file for a cell (ordinarily a full chip) with a list of faults to be marked as detected when processing the cell at a higher packaging level (ordinarily the SCM containing just this chip). This capability is useful if it is known that the same tests applied to the chip are to be applied to the SCM. The fault rule file created by this step can be read in when the fault model is later created for the SCM. This will cause the faults that were detected at the chip level to be initialized to detected status in the fault model for the SCM, thus circumventing the need to explicitly perform test generation for these faults at the SCM level. Refer to "Test Generation and Fault Simulation" in the Encounter Test: Guide 5: ATPG.

Faults considered by this command are stuck-at and transition (slow-to-rise and fall). Driver and receiver faults are not considered.

To prepare detected faults, refer to "prepare_detected_faults" in the Encounter Test: Reference: Commands. A sample of the command is shown below:

prepare_detected_faults WORKDIR=mywd TESTMODE=FULLSCAN EXPERIMENT=tg1 OUTFAULTDIR=./myfaultdir

Prepare Detected Faults Inputs
■ Encounter test model from build_model
■ Test mode from build_testmode
■ Fault model from build_faultmodel
■ Fault status from test generation on the cell, experiment or committed

Prepare Detected Faults Outputs

The output from prepare_detected_faults is the file specified by the outfaultfile and/or OUTFAULTDIR keywords. The following is an example of the fault rule data in the output file:

Entity = newpart
DET { PATTERN 1 BLOCK c1.f1.b1.dff_primitive.slave }
DET { PATTERN 4 BLOCK c1.f1.b1.dff_primitive.slave }
DET { PATTERN 5 BLOCK c1.f1.b1.dff_primitive.slave }

Prepare Core Migration Faults (Hierarchical Test - Core Level)

This preparation process derives information from the core faultModel that will be used when the faultModel is built at the chip level. The information it gathers is for faults that are associated with logic that is not included in the core migration model (that is, the faults that are in the INTEST testmodes but not in the EXTEST testmodes - refer to Test Modes for Hierarchical Test in Encounter Test: Guide 2: Testmodes). The information includes the total number of faults and the number of detected faults in the logic that is not included in the core migration model. It writes the out-of-context core fault information into the core migration directory as a fault rule file that is read in during build_faultmodel at the chip (SOC) level. Refer to Build Fault Model Example for Hierarchical Test - Chip Level on page 22.

Note: This command is run for each testmode on the out-of-context core.

Refer to prepare_core_migration_faults in the Encounter Test: Reference: Commands for complete syntax of the command line. Refer to Prepare Core Migration Faults in Encounter Test: Reference: GUI for information on running the command using the graphical user interface.

Example of prepare_core_migration_faults:

prepare_core_migration_faults workdir=/hierTest/core1 \
coremigrationdir=/hierTest/core1migr testmode=COMPRESSION_INTEST

The output of this command (in the <coremigrationdir>/<Verilog_Module_for_Core_Top>) is used as input to build_faultmodel when processing the chip (SOC).


3 Reporting Faults, Test Objectives, and Statistics

This chapter covers the Encounter Test tasks for reporting individual test objectives and reporting test coverage statistics.

Report Faults

Report Faults displays the contents of a fault model and/or an objective model in the Encounter Test Log window.

To produce a fault report using the graphical interface, refer to "Report Faults" in the Encounter Test: Reference: GUI. To view a list of faults in the fault model or objective model, select Report - Faults - Logic Faults from the main menu.

The syntax for the report_faults command is given below:

report_faults workdir=<directory> testmode=<name> experiment=<name> /
faultstatus=untested faulttype=static inputcommitted=no ignored=yes

For a complete description of the report_faults syntax, refer to report_faults in the Encounter Test: Reference: Commands.

Report Faults Output

Report Faults produces the following information:
■ Fault - Each fault is assigned a unique identifier called the fault index.
■ Status - The status of the fault.
■ Type - The type of the fault in the fault model.
■ Func - The simulation function of the logic driving the net where the fault effect originates.
■ Fault pin name/Propagation net name

Additional output may be displayed through the use of Advanced options.

Report_faults Options

The fault status (status) and fault type (type) keywords are used to specify the data to report. Refer to "report_faults" in the Encounter Test: Reference: Commands for more information.
■ Fault Status
Fault status refers to the set of attributes of a fault that change during processing by Encounter Test applications. Of primary interest among these attributes are those that denote whether a test has been generated for the fault, but there are several other attributes that affect Encounter Test processing or which may be useful in the analysis of a design's testability. The report faults function optionally prints a status key as shown in Figure 3-1, which explains the notation that is used in the fault listing to denote fault status. The key includes some Fault Attributes and Fault Test Status.

Figure 3-1 Fault Status Key - available through msgHelp TFM-705 or optionally included in log (reportkey=yes)

Status key:
i= inactive in testmode
c= collapsed - listed after equiv. class representative fault
u= untested - not processed
I= ignored (unclassified)
It= ignored (tied)
Iu= ignored (unconnected)
Ib= ignored (blocked)
T= tested by simulation
Ti= tested by implication
P= possibly tested (limit not reached)
Tpdl=tested (possibly detected limit reached)
Tm= tested in another mode
Tus= tested user specified (DETECTED statement)
A= Aborted
R= redundant
Rus= redundant: user specified
Ud= untestable: undetermined reason
Uus= untestable: user specified
Ulh= untestable: linehold inhibits fault control/observe
Utm= untestable: testmode inhibits fault control/observe
Ucn= untestable: constraint
Uxs= untestable: X source or sinked
Usd= untestable: sequential depth (may be testable with more sequential depth)
Ugt= untestable: Global termination
Unio=untestable: seq or control not specified to target fault (PI to PO Path)
Unil=untestable: seq or control not specified to target fault (PI to LATCH Path)
Unlo=untestable: seq or control not specified to target fault (LATCH to PO Path)

Uner=untestable: seq or control not specified to target fault (interdomain LATCH to LATCH)
Unra=untestable: seq or control not specified to target fault (intradomain LATCH to LATCH)
PTAB (Possibly Tested at Best) status:
3 = PTAB 3-State:Untested
P3 = PTAB 3-State:Possibly Tested
T3 = PTAB 3-State:Tested (limit reached)
X = PTAB X-state:Untested
PX = PTAB X-state:Possibly Tested
TX = PTAB X-state:Tested (limit reached)
C = PTAB Clock Stuck Off:Untested
PC = PTAB Clock Stuck Off:Possibly Tested
TC = PTAB Clock Stuck Off:Tested (limit reached)
TiC = PTAB Clock Stuck Off:Tested by implication

Refer to Fault Attributes on page 103 and Fault Test Status on page 108 for more information.

■ Fault Type
Fault type refers to the types of faults to be reported. The report_faults command optionally prints a type key, as shown in Figure 3-2, which explains the notation in the listing.

Figure 3-2 Fault Type key - available through msgHelp TFM-705 or optionally included in log (reportkey=yes)

Type key:
& = AND Grouped
| = OR Grouped
ISA0 = stuck-at-zero fault on an input pin
ISA1 = stuck-at-one fault on an input pin
ISR = slow-to-rise fault on an input pin
ISF = slow-to-fall fault on an input pin
OSA0 = stuck-at-zero fault on an output pin
OSA1 = stuck-at-one fault on an output pin
OSR = slow-to-rise fault on an output pin
OSF = slow-to-fall fault on an output pin
DPAT = dynamic pattern fault
SPAT = static pattern fault
DRV0 = driver fault driving 0
DRV1 = driver fault driving 1
RCV0 = receiver fault receiving 0
RCV1 = receiver fault receiving 1
SBRG0= static net shorted to 0 bridge fault
SBRG1= static net shorted to 1 bridge fault
DBRG0= dynamic net shorted to 0 bridge fault
DBRG1= dynamic net shorted to 1 bridge fault
IDDQ0= Iddq logic 0 fault
IDDQ1= Iddq logic 1 fault
QPAT = Iddq pattern fault

Refer to Fault Types on page 91 for more information.

■ Faults
This keyword may be used to report specific fault indexes. Use status=all with faults.

DLX_CORE.Y 385 u ISA1 NOR2BX1 DLX_CORE. as shown in the following sample output: INFO (TFM-705): Global Fault List: Fault Status Type Sim Func/Cell Name Fault pin name 1 u OSA0 PI DLX_CHIPTOP_READY 2 u OSA1 PI DLX_CHIPTOP_READY 5 u OSA0 PI DLX_CHIPTOP_RESET 6 u OSA1 PI DLX_CHIPTOP_RESET 45 u OSA0 PO DLX_CHIPTOP_A[0] 46 u OSA1 PO DLX_CHIPTOP_A[0] 363 u OSA1 PI DLX_CHIPTOP_DATA[9] 364 u OSA0 PI DLX_CHIPTOP_DATA[9] 381 u OSA0 AO22X2 DLX_CORE.f.DLX_CORE.i_07661.nl.CLOCK_GATE.nl.DLX_TOP. udp_1_mux.f.Q_N33_reg_31.DLX_CHIPTOP_DATA[9] 381 u OSA0 OR Pin.DLX_CORE.A_REG.l.A2 397 u ISA0 MUX Pin.nl.DLX_CHIPTOP_READY 5 u OSA0 PI Net.f.DLX_CHIPTOP_DATA[9] 364 u OSA0 PI Pin.STORAGE. the output log reports the highest level cell boundary pin associated with the fault for a cell boundary fault model.DLX_TOP. as shown in the following sample output: INFO (TFM-705): Global Fault List: Fault Type Func Propagation net name Status 1 u OSA0 PI Net.DLX_TOP.DLX_TOP.f.l.CLOCK_GATE.DLX_CHIPTOP_A[0] 363 u OSA1 PI Pin.A_REG.DLX_CHIPTOP_A[0] 46 u OSA1 PO Net.l.STORAGE.nl.i_3. Test Objectives.DLX_TOP.I2.i_07661.Q_N33_reg_31.l.f.DLX_CHIPTOP_DATA[9] October 2015 48 Product Version 15. the output log reports the name of the net where the fault effect originates.A_REG.A_REG.f.CLOCK_GATE.f.DLX_CHIPTOP_RESET 6 u OSA1 PI Net.DLX_TOP. udp_1_mux.Q_N33_reg_31.DLX_TOP.B 397 u ISA0 SDFFQX1 DLX_CORE.I1.DLX_TOP. Encounter Test: Guide 4: Faults Reporting Faults.DLX_TOP.CLOCK_GATE.f.l.01 385 u ISA1 NBUF Pin.DLX_CORE.DLX_CHIPTOP_RESET 45 u OSA0 PO Pin.f.nl.nl.DLX_CHIPTOP_A[0] 363 u OSA1 PI Net.DLX_CHIPTOP_RESET 45 u OSA0 PO Net.f.l.DLX_TOP.I2.nl.nl.A_REG.DLX_TOP.l.DATA0 398 u ISA1 MUX Pin.f.l.f.CLOCK_GATE.DLX_CHIPTOP_RESET 6 u OSA1 PI Pin.nl.nl.l.nl.A_REG.CLOCK_GATE.A_REG.l.A_REG.f.l.l.DLX_TOP.Y 382 u OSA1 AO22X2 DLX_CORE.nl.f.l.i_3.l.l.nl.f.D ■ Faultlocation=pin If the faultlocation keyword is set as pin.nl.A_REG.i_3.DLX_TOP.DLX_TOP.l.f.DLX_CORE.nl.DLX_TOP.STORAGE.i_07661.STORAGE.l.DLX_TOP.nl.l.DLX_CHIPTOP_READY 5 u OSA0 PI Pin.DLX_TOP.DLX_CHIPTOP_A[0] 46 u OSA1 PO Pin. the output log reports the primitive pin name associated with the fault.AN 389 u ISA0 NOR2BX1 DLX_CORE.nl.DLX_CHIPTOP_READY 2 u OSA1 PI Pin.__i0.i_07661.l.f.f.A_REG.DLX_TOP.DLX_TOP.nl.A0 389 u ISA0 NOR Pin.DLX_CHIPTOP_READY 2 u OSA1 PI Net. and Statistics ■ Faultlocation=cellpin If the faultlocation keyword is set as cellpin.Q_N33_reg_31.DLX_TOP.DLX_CORE.DLX_TOP.nl.f.f.D 398 u ISA1 SDFFQX1 DLX_CORE.12 © 1999-2015 All Rights Reserved. as shown in the following sample output: INFO (TFM-705): Global Fault List: Fault Type Sim Func/ Fault pin name Status Cell Name 1 u OSA0 PI Pin.CLOCK_GATE.01 382 u OSA1 OR Pin.DATA0 ■ Faultlocation=net If the faultlocation keyword is set as net.nl.A_REG.__i1.l. .CLOCK_GATE.i_3.I1.A_REG.l.f.nl.

CLOCK_GATE.B 135105 u OSA0 NOR2BXL TBB_BSC_MERGED_82.DFT__0.udp_net_out 398 u ISA1 MUX Net.CLOCK_GATE.nl.DFT__0.B0 1407 u ISA1 AO22X1 TBB_BSC_MERGED_82.AN 135102 u ISA1 NOR2BXL TBB_BSC_MERGED_82.l.mux_50_33.CLOCK_GATE.nl.DLX_TOP.i_07661.A1 1417 u ISA1 AO22X1 TBB_BSC_MERGED_82.DLX_CORE.mux_50_33.mux_50_33.A0 1429 u ISA0 AO22X1 TBB_BSC_MERGED_82.DLX_CORE. udp_net_out ■ hierstart Specifies the index value or the name of the hierblock. The default value is primitive.i_07661.l.DFT__0.A_REG.nl.DLX_CORE.nl. The default value is a non-hierarchical format.Q_N33_reg_31.DFT__0.DFT__0.Y 1430 u ISA1 AO22X1 TBB_BSC_MERGED_82.i_07661.Y 397 u ISA0 MUX Net.A_REG.DFT__0. This keyword is used to request the fault list for the specific heirarchical block.DLX_TOP.l. Following is a sample of the report format for a hierarchical fault listing.DFT__0.l.Y 382 u OSA1 OR Net.f.i_07661.DLX_CHIPTOP_DATA[9] 381 u OSA0 OR Net. or primitive.A_REG.A_REG.mux_50_33.mux_50_33.DLX_TOP.nl.l.f.I1.CLOCK_GATE. I1.A_REG. This keyword can be set to values <depth>. Hier Block ID: 0 Fault Status Type Sim Func/Cell Name Fault pin name 1 u OSA0 PI DLX_CHIPTOP_READY 2 u OSA1 PI DLX_CHIPTOP_READY 5 u OSA0 PI DLX_CHIPTOP_RESET 6 u OSA1 PI DLX_CHIPTOP_RESET 9 u OSA0 PI DLX_CHIPTOP_RESET2 45 u OSA0 PO DLX_CHIPTOP_A[0] 46 u OSA1 PO DLX_CHIPTOP_A[0] 381 u OSA0 AO22X2 DLX_CORE.A0 1420 u ISA1 AO22X1 TBB_BSC_MERGED_82.l.f.mux_50_33. techcell.DLX_CORE.B1 135182 u ISA0 AO22X1 TBB_BSC_MERGED_82.DFT__0.B1 1402 u OSA0 AO22X1 TBB_BSC_MERGED_82.DLX_TOP.DFT__0.f.mux_50_33.B0 1414 u ISA0 AO22X1 TBB_BSC_MERGED_82.DFT__0. Test Objectives.nl.Y 363 u OSA1 PI DLX_CHIPTOP_DATA[9] Hier Block/Cell: TBB_BSC_MERGED_82/BC_ENAB_NT.AN 135098 u ISA0 NOR2BXL TBB_BSC_MERGED_82.f. report_faults hierstart=top hierend=macro INFO (TFM-705): Global Fault List: Hier Block/Cell: f.l.A1 135174 u ISA0 AO22X1 TBB_BSC_MERGED_82. Encounter Test: Guide 4: Faults Reporting Faults.DLX_TOP.Y 1398 u OSA1 AO22X1 TBB_BSC_MERGED_82.l.CLOCK_GATE.mux_50_33.A_REG.DLX_CORE.DLX_CORE.mux_50_33.i_3. .DLX_TOP.Y 382 u OSA1 AO22X2 DLX_CORE.DFT__0.DLX_TOP.Y 385 u ISA1 NBUF Net.i_3.A_REG.STORAGE.STORAGE.Y 135106 u OSA1 NOR2BXL TBB_BSC_MERGED_82.DLX_TOP. The name of the hierblock is displayed before each hierblock fault heading and fault list. ■ hierend Specifies the number of hierarchical levels up to which faults are to be listed.A_REG.nl/DLX_TOP.f. and Statistics 364 u OSA0 PI Net.12 © 1999-2015 All Rights Reserved.DFT__0.Y October 2015 49 Product Version 15. Hier Block ID: 64320 Fault Status Type Sim Func/Cell Name Fault pin name 1508 u ISA1 NOR2BXL TBB_BSC_MERGED_82.f.DFT__0.DFT__0.A 389 u ISA0 NOR Net.DFT__0.Q_N33_reg_31.mux_50_33.nl.CLOCK_GATE.

A0 1330 u ISA1 AO22X1 TBB_BSC_MERGED_82.mux_55_33.B0 1322 u ISA0 AO22X1 TBB_BSC_MERGED_82.mux_55_33.B1 1310 u OSA0 AO22X1 TBB_BSC_MERGED_82. . ■ The inputs are positional from top to bottom. ■ For dynamic faults. then the number is the index of the pin that feeds that input pin.l.DFT__0.12 © 1999-2015 All Rights Reserved. ■ When the specific known value does not matter. If it mentions net.A1 135234 u ISA0 AO22X1 TBB_BSC_MERGED_82. Some points to note: ■ For static faults. Test Objectives.mux_55_33. it is represented as V and its opposite is represented as ~V (so if V=1 then ~V=0 and vice versa). you see a pin initialized to a value to set up the transition then a dashed line.DFT__0.mux_55_33. Hier Block ID: 64389 Fault Status Type Sim Func/Cell Name Fault pin name 1336 u ISA1 AO22X1 TBB_BSC_MERGED_82.f.mux_55_33.f.B0 1314 u ISA1 AO22X1 TBB_BSC_MERGED_82.A0 1335 u ISA0 AO22X1 TBB_BSC_MERGED_82. Fault Status Type Sim Func Fault pin name /Cell Name 40 u ISA1 AND Pin. For outputs.Y 1309 u OSA1 AO22X1 TBB_BSC_MERGED_82.mux_55_33.and2.l.DFT__0. ■ The output is the pin on the primitive.DFT__0.DFT__0.mux_55_33. followed by a set of pins/nets set to value (one of which is the same pin that was previously set to the opposite value).littlepart.nl.nl.mux_55_33.A2 net 18 1/1 pin 29 0/X ----------- pin 13 0/1 41 u ISR AND Pin. the inputs are above the dashed line and the output is below the dashed line.mux_55_33/mux_55. The use of net versus pin as the input to be stim'd is based on how the fault is represented. and then the output is below the second dashed line. Encounter Test: Guide 4: Faults Reporting Faults.A1 1323 u ISA1 AO22X1 TBB_BSC_MERGED_82.A2 pin 29 0/0 ----------- October 2015 50 Product Version 15. then the number is the index of the net connected to that input pin.Y ■ reportdetails Specifies that the listing of faults should include the details of the embryonic test for the fault. the good machine value is the expected value and the fault machine value is the value in the presence of the fault.DFT__0. and Statistics Hier Block/Cell: TBB_BSC_MERGED_82.DFT__0.DFT__0. there are two dashed lines. ■ The values are shown as good-machine-value / fault-machine-value.DFT__0. Following is an example of the report with reportdetails=yes.DFT__0.mux_55_33. but that distinction actually matters only for the output value. If it mentions pin.littlepart.and2.B1 135242 u ISA0 AO22X1 TBB_BSC_MERGED_82.mux_55_33.

nl.DOUT PATTERN 1: auto MUX SEL SA1 | D0 SA1 pin 25 0/X pin 32 X/1 pin 29 0/X ----------- pin 39 0/1 The following Figures represent the four faults from the previous report. the output will be 1. Figure 3-3 Representation of Faults 40 and 41 In good machine.l. But if fault 41 occurs. Encounter Test: Guide 4: Faults Reporting Faults. the bottom input will not transition in time and the output will be 0. October 2015 51 Product Version 15.SEL pin 29 1/1 ----------- pin 25 V/X pin 32 X/~V pin 29 0/X ----------- pin 39 V/~V 65 u SPAT MUX Pin. If fault 40 occurs.littlepart.select. machine. and Statistics net 18 1/X pin 29 1/X ----------- pin 13 1/0 43 u ISF MUX Pin. the top input is 1 and the the output is 0.littlepart. Test Objectives. The encircled numbers show the embryonic test required to test the faults.f. In good will both be 1 and the output will be 1.l. the inputs bottom input transitions from 0 to 1.nl.select. the inputs are 1 and 0 so In good machine.12 © 1999-2015 All Rights Reserved.f. .

.nl..l....top.f. Reporting Low Power Faults Example 3-1 on page 52 illustrates a portion of sample output using the report_faults powercomponent=srpg keyword value to obtain a listing of low power faults that contain state retention logic.l.. Encounter Test: Guide 4: Faults Reporting Faults.nl. and the output will be ~V.l.f. the transition will not occur on time curs.l. Test Objectives. 7429 u!sq SPAT 5766 LATCH 1 Pin..top.nl.top.nl. the output will be 1.f.. Hier Block ID: 1862 7412 Cp OSA1 5759 BUF 2 Pin.nl.top.nl.top... 7461 p!ti OSA1 5772 NBUF 2 Pin..nl.top..atpg INFO (TLP-603): Getting Power Component instances active in Power Mode ’PM2’. the value on the select line will In good machine..l.top. 7518 u!ti ISA1 5843 AND 1 Pin. .l. 7419 u#im SPAT 5766 LATCH 1 Pin. 7454 u#im OSA1 5768 NBUF 1 Pin.nl.f. the value on the select line will transition from 1 to 0 so the data on the top input be 0 so the data on the top input will be selected will be selected and the output will be V... 7453 Cp OSA0 5768 NBUF 1 Pin. For example..f... 7430 u!sq SPAT 5766 LATCH 1 Pin. [end TLP_603] INFO (TLP-604): Found 30 active instances of Power Component type(s) ’srpg’ in Power Mode ’PM2’.l. If fault and the output will be 0..f. 7512 Xp OSA1 5841 NBUF 1 Pin.f...top.\out_reg[0]/SDRFFRX1M.l. If pattern fault 65 oc- 43 occurs. In either case.top...top.nl.. 7418 u#im SPAT 5766 LATCH 1 Pin. and Statistics Figure 3-4 Representation of Faults 43 and 65 In good machine. Example 3-1 Power Component Report from report_faults Experiment Fault List: PM2. In the report.nl...l.f.l..f. October 2015 52 Product Version 15.f..reg_bank_1. the Wt (weight) field represents how many faults are represented by each fault type..[end TLP_604] Fault Status Type Pin Func Wt Fault pin name Hier Block/Cell: inst_C.12 © 1999-2015 All Rights Reserved. then either the select line will be stuck-at 1 and the data on the second input will be selected or the top input will be stuck-at-1.

top.....reg_bank_1.f.. ...nl....dynamicbridge (to get both static and dynamic bridge October 2015 53 Product Version 15..top.f. Hier Block/Cell: inst_D. Test Objectives.l... Maximum Storage used during the run and Cumulative Time in hours:minutes:seconds: Working Storage = 10.nl.nl..l. 13026 u!sq SPAT 10983 LATCH 1 Pin.nl...top.nl.top.. 1 INFO (TLP-604): Found 30 active instances of Power Component type(s) ’srpg’ in Power Mode ’PM2’.f.top.f..nl. 7610 p!ti OSA1 5922 NBUF 2 Pin. 13057 p!ti OSA1 10989 NBUF 2 Pin.f..\out_reg[1]/SDRFFRX1M.top.10 Elapsed Time = 0:00:00. Hier Block ID: 1910 7578 u!sq SPAT 5916 LATCH 1 Pin.12 [end TDA_001] ******************************************************************************* * Message Summary * ******************************************************************************* Count Number First Instance of Message Text ------.l.856 bytes (Paging) Swap Space = 15..485.l. Encounter Test: Guide 4: Faults Reporting Faults..nl..l.f. use report_faults faulttype=staticbridge. 7667 u!ti ISA1 5993 AND 1 Pin.reg_bank_3..f.-----... 1 INFO (TDA-001): System Resource Statistics.nl.nl..f.f.f.top.12 © 1999-2015 All Rights Reserved.nl...nl.... 1 INFO (TLP-603): Getting Power Component instances active in Power Mode ’PM2’.l.top.top. INFO (TDA-001): System Resource Statistics.l.l.nl.. Maximum Storage used during the run 1 INFO (TLP-602): Reading CPF information from the Encounter Test database.. 13114 u!ti ISA1 11060 AND 1 Pin.l. and Statistics 7533 p!ti OSA0 5857 BUF 1 Pin.top. ------------------------------ INFO Messages. 7683 p!ti OSA1 6007 BUF 1 Pin.f. . . ******************************************************************************* Reporting Bridge Faults To report bridge faults...... 7534 p!ti OSA1 5857 BUF 1 Pin.l.l.nl. .\out_reg[4]/SDRFFRX1M..l..188 bytes Mapped Files = 249.top.nl.top.974. 7579 u!sq SPAT 5916 LATCH 1 Pin.l... 7682 p!ti OSA0 6007 BUF 1 Pin.top.l... Hier Block ID: 3502 13000 p!ti ISA1 10970 AND 3 Pin.f..f..588 bytes CPU Time = 0:00:00.top. 766 Xp OSA1 5991 NBUF 1 Pin. Hier Block/Cell: inst_C.l.f.top. 13025 u!sq SPAT 10983 LATCH 1 Pin.. 13108 Xp OSA1 11058 NBUF 1 Pin..f.nl..

faults).

The output for bridge faults is in the format shown in Figure 3-5 irrespective of the setting of faultlocation.

Figure 3-5 Report Bridge Fault Output

Fault Status Type Func Propagation Net Name
1 u SBRG1 cellA NETA=0/1 NETB=1/1 AND_group=2, 3, 4
2 u SBRG1 cellA NETB=0/1 NETA=1/1 AND_group=3, 4, 1
3 u SBRG0 cellA NETA=1/0 NETB=0/0 AND_group=4, 1, 2
4 u SBRG0 cellA NETB=1/0 NETA=0/0 AND_group=1, 2, 3
5 u DBRG1 cellA NETA=1->0/1 NETB=1/1 AND_group=6, 7, 8
6 u DBRG1 cellA NETB=1->0/1 NETA=1/1 AND_group=7, 8, 5
7 u DBRG0 cellA NETA=0->1/0 NETB=0/0 AND_group=8, 5, 6
8 u DBRG0 cellA NETB=0->1/0 NETA=0/0 AND_group=5, 6, 7

The Propagation Net Name shows:
■ names of the 2 nets that are shorted in the bridge fault
■ propagation values of the first net (good machine/fault machine); this is the result of the bridge
■ required values of the second net (good and fault machine should be the same)
■ identification of the other three faults included in the AND group. All four faults must be tested to get credit for testing any of them, but fault status is reported individually.

If the input to build_bridge_faultmodel was an OpenAccess database, additional information about the adjacency of the net pairs is provided from report_faults. The information is as follows:

Total Adjacent Net Run Length (microns): 21.2342
Total (Bridge) Faulted Adjacent Net Run Length (microns): 10.2342 (48.196776% of total)
Net Adjacency Parameter: layerminspacing=maximum

Report Fault Coverage Statistics

The effectiveness of the test vectors produced by logic test generation is expressed as the percentage of modeled faults that were detected by simulation of the test patterns. This percentage is generally called test coverage or fault coverage.

To produce a fault statistics report using the graphical interface, refer to "Report Fault Statistics" in the Encounter Test: Reference: GUI.

To produce a fault statistics report using command line, refer to "report_fault_statistics" in the Encounter Test: Reference: Commands. For more information, refer to "Fault/Test Coverage Calculations" on page 113. A brief description of each calculation and an example syntax for the report_fault_statistics command is given below.

Encounter Test calculates and prints the following calculations:

■ TCov
Percentage Test Coverage calculated as the number of active faults with a status of tested or tested in another mode divided by the total number of active faults.
report_fault_statistics workdir=mywd testmode=FULLSCAN experiment=tg1 coveragecredit=tested

■ PCov
Percentage Possibly Detected Coverage calculated as the number of active faults with a status of tested or tested in another mode or possibly tested divided by the total number of active faults.
report_fault_statistics workdir=mywd testmode=FULLSCAN experiment=tg1 coveragecredit=possibly
Note: possibly includes tested, so coveragecredit=tested,possibly is the same as coveragecredit=possibly.

Encounter Test can calculate and print the following calculations depending on the value specified for coveragecredit:

■ ATCov
Percentage Adjusted Test Coverage calculated as the number of active faults with a status of tested or tested in another mode divided by the number of non-redundant, active faults.
report_fault_statistics workdir=mywd testmode=FULLSCAN experiment=tg1 coveragecredit=redundant
or, as coveragecredit defaults to redundant:
report_fault_statistics workdir=mywd testmode=FULLSCAN experiment=tg1
Note: redundant includes tested, so coveragecredit=tested,redundant is the same as coveragecredit=redundant.

■ APCov

Percentage Adjusted Possibly Detected Coverage calculated as the number of active faults with a status of tested or tested in another mode or possibly tested divided by the number of non-redundant, active faults.
report_fault_statistics workdir=mywd testmode=FULLSCAN experiment=tg1 coveragecredit=redundant,possibly
Note: redundant and possibly include tested, so coveragecredit=tested,redundant,possibly is the same as coveragecredit=redundant,possibly.

These statistics may be reported for:
■ The global design
Global fault statistics reflect the committed results for all test modes combined, including only the committed results for all test modes.
■ The committed results for a test mode
Committed test mode fault statistics reflect the cumulative results for that test mode, including only results that have been committed. Note that the statistics for a test mode are affected by the results of some other test mode if some faults have tested in another mode status.
■ An experiment on a test mode
Uncommitted fault statistics reflect the cumulative results for that test mode up to and including the experiment. This includes all results that were committed at the time the experiment was created, and includes the tested in another mode status.
■ A comet
Comet fault statistics reflect the cumulative results for all of its member test modes combined. The calculation and interpretation of comet fault statistics is the same for both cross-mode markoff comets and statistics-only comets.

The statistics are reported for each of several different types of faults: static, dynamic, pattern, PI, PO, etc. Each variable includes only faults that are active, that is:
■ For global statistics, all non-ignored faults are counted
■ For test mode statistics, only faults active in the test mode are counted
■ For comet statistics, a fault is counted if it is active in any test mode in that comet

Following are some sample outputs of fault statistics:
■ Default output of fault statistics is shown below:
Fault Statistics for Global :

00 4 0 0 0 4 4 0 0 0 4 PO Dynamic 0.00 42 0 0 0 42 42 0 0 0 42 PI Dynamic 0.00 4 0 0 0 4 4 0 0 0 4 POStatic 0.00 30 0 0 0 30 Collapsed Static 0.00 2 0 0 0 2 Total Dynamic 0.00 42 0 0 0 42 42 0 0 0 42 CollapsedDynamic0.00 0.00 30 0 0 0 30 30 0 0 0 30 CollapsedStatic 0. and Statistics INFO (TFM-701): Fault Statistics for Global: -.ATCov ----.ATCov -.00 30 0 0 0 30 30 0 0 0 30 PIStatic 0.12 © 1999-2015 All Rights Reserved.00 42 0 0 0 42 PI Dynamic 0.00 2 0 0 0 2 Parametric IDDq 0.00 0.00 0.00 0. Test Objectives. Encounter Test: Guide 4: Faults Reporting Faults.00 0.-------- Global Faults ----------- Testmode Global Total Tested Possibly Redundant Untested Total Tested Possibly Redundant Untested TotalStatic 0.00 0.00 0 0 0 0 0 0 [end TFM_701] ■ Fault statistics output with options reportpiporows=yes.possibly: INFO (TFM-701): Fault Statistics for Global: October 2015 57 Product Version 15.0 42 0 0 0 42 Collapsed Dynamic 0.00 0. .00 0.00 4 0 0 0 4 PO Static 0. ------------. -------.00 0.dynamic reportptab=yes and coveragecredit=tested.00 2 0 0 0 2 2 0 0 0 2 Dynamic 0.00 4 0 0 0 4 PO Dynamic 0. reporttype=static.00 2 0 0 0 2 2 0 0 0 2 Parametric IDDq 0.00 0.Testmode Faults-------------.00 30 0 30 ■ Fault statistics output with testmode specified: INFO (TFM-701): Fault Statistics for Testmode:COMPRESSION --.00 30 0 30 30 0 30 Path 0.Global Faults ------------------ Global Total Tested Possibly Redundant Untested Total Static 0.00 30 0 0 0 30 PI Static 0.

00 16 0 0 0 16 PO Static 0. ----Global Faults----- Global 3-state TIE X CSO Static 0.00 64 0 0 0 64 PI Dynamic 0.12 © 1999-2015 All Rights Reserved.Global Faults ---------------------- Global Total Tested Possibly Redundant Untested Total Static 80. Encounter Test: Guide 4: Faults Reporting Faults.------------.00 74 0 0 0 74 Collapsed Static 0.00 61 0 0 0 61 PI Static 0.Global Faults ---------------- Global Total Tested Possibly Redundant Untested Total Static 0. Test Objectives.00 2 0 0 0 2 [end TFM_701] INFO (TFM-702): Possibly Testable at Best Fault Statistics and Reasons Report for Global: -PTBCov.------------.00 74 0 0 0 74 Collapsed Dynamic 0. and Statistics -.TCov --.00 2 0 0 0 2 Total Dynamic 0.00 2000 1600 50 50 100 Collapsed Dynamic 77. .00 1000 800 25 25 50 Collapsed Static 77.00 16 0 0 0 16 PO Dynamic 0.50 800 620 20 20 40 Total Dynamic 80.00 10 0 0 [end TFM_702] Following is an example of the report_fault_statistics output for Global TCov statistics with ignored fault statistics: Global Static Total Ignored (undetectable) Fault Count: 100 Ignored-Tied Fault Count: 40 Ignored-Unconnected Fault Count: 20 Ignored-Blocked Fault Count: 40 Global Dynamic Total Ignored (undetectable) Fault Count: 200 Ignored-Tied Fault Count: 80 Ignored-Unconnected Fault Count: 40 Ignored-Blocked Logic Fault Count:80 INFO (TFM-701): Fault Statistics for Global [end TFM_701] --.00 10 0 0 Dynamic 0.PCov -.50 1600 1240 40 40 80 Following is an example of the report_fault_statistics output for Global ATCov statistics with ignored fault statistics: Global Static Total Ignored (undetectable) Fault Count: 100 Ignored-Tied Fault Count: 40 Ignored-Unconnected Fault Count: 20 Ignored-Blocked Fault Count: 40 Global Dynamic Total Ignored (undetectable) Fault Count: 200 Ignored-Tied Fault Count: 80 Ignored-Unconnected Fault Count: 40 Ignored-Blocked Logic Fault Count: 80 October 2015 58 Product Version 15.

Following is a sample output of the heirachical fault statistics for a given testmode. In this the test modes are listed after the statistics and the details of the statistics is available in the help for the message TFM-074 and not printed in the output log.00 2 2 0 October 2015 59 Product Version 15.Faults -------------------------- ---------------- Depth Global Total Untested Tested Possibly Redundant HierName 1 0. The default value is 1. The default value is a non- hierarchical format. Encounter Test: Guide 4: Faults Reporting Faults.00 4 4 0 0 0 t11 / test2 2 0. and Statistics INFO (TFM-701): Fault Statistics for Global [end TFM_701] --. This keyword can be set to values <depth>.00 30 30 0 PI Static 100. Test Objectives. .00 8 8 0 0 0 t2 / test1 2 0. techcell.00 30 30 0 0 0 topcell / topcell 2 0.ATCov --. Hierarchical Fault Stats selected:Total Static -APCov.------------. A sample output of the Maximum Global Test Coverage Statistics is shown below: INFO (TFM-704): Maximum Global Test Coverage Statistics: %Active #Faults #Active #Inactive Total Static 100.--------------------------.00 30 30 0 Collapsed Static 100.18 1600 1240 40 40 80 Options for Hierarchical Fault Statistics ■ hierstart Specifies the index value or the name of the hierblock.43 2000 1600 50 50 100 Collapsed Dynamic 91.00 8 8 0 0 0 t2/test2 2 0. ■ hierend Specifies the number of hierarchical level upto which fault statistics are to be displayed.43 1000 800 25 25 50 Collapsed Static 91. or primitive.12 © 1999-2015 All Rights Reserved. This keyword is used to request the fault statistics for the specific heirarchical block.00 4 4 0 PO Static 100.18 800 620 20 20 40 Total Dynamic 91.00 4 4 0 0 0 t22 / test2 [end TFM_701] The maximum Global Test Coverage statistics is a part of the Global Fault Statistics.Global Faults -------------------- Global Total Tested Possibly Redundant Untested Total Static 91.

[end TLP_603] INFO (TLP-604): Found 30 active instances of Power Component type(s) ’srpg’ in Power Mode ’PM2’.00 42 42 0 PI Dynamic 100. [end TLP_604] Hierarchical Fault Stats selected: Total Static October 2015 60 Product Version 15. Depth : (Hier block stats only) Number of hier levels down in the design a hier block is relative to the first block printed. #Tested : Number of Active Faults marked tested.atpg INFO (TLP-603): Getting Power Component instances active in Power Mode ’PM2’. #Untested: Number of Active Faults untested. Encounter Test: Guide 4: Faults Reporting Faults.12 © 1999-2015 All Rights Reserved. .00 42 42 0 Collapsed Dynamic 100.00 4 4 0 PO Dynamic 100. %TCov (%Test Coverage) : #Tested / #Faults %ATCov (%Adjusted TCov) : #Tested / (#Faults-#Redund) %PCov (%Possibly Detected Coverage) : (#Tested+#Possibly) / #Faults %APCov (%Adjusted PCov) : (#Tested+#Possibly) / (#Faults-#Redund) %PTBCov (%Possibly Testable at Best Coverage): (#TestPTB+#Tested) / #Faults %APTBCov (%Adjusted PTBCov) : (#TestPTB+#Tested) / (#Faults-#Redund) Experiment Statistics: PM2. and Statistics Total Dynamic 100. Example 3-2 Power Component Report from report_fault_statistics Global Ignored Static Fault Count 1854 Global Ignored Dynamic Fault Count 1833 Coverage Definitions: #Faults : Number of Active Faults (observable). Test Objectives." #TestPTB : Number of "possibly tested at best" faults marked possibly tested or marked tested by implication. #PTB : Number of Active Faults marked "possibly testable at best. fault value is X). #Possibly: Number of Active Faults marked possibly tested (good value is 0 or 1.00 2 2 0 Parametric There are 2 test mode(s) defined: FULLSCAN COMPRESSION [end TFM_704] There are no PPIs for Test Mode: COMPRESSION Information for Test Mode: COMPRESSION -------------------------- Scan Type = GSD Reporting Low Power Fault Statistics Example 3-2 illustrates a portion of sample output using the report_fault_statistics powercomponent=srpg keyword value to obtain a listing of low power fault statistics for faults that contain state retention logic. #Redund : Number of Active Faults untestable due to redundancy.

52 3310 \out_reg[0]/SDRFFRX1M 0 54 3 43 79.63 3406 \out_reg[2]/SDRFFRX1M 0 54 3 43 79.36 2540 \out_reg[4]/SDRFFRX1M 0 56 5 45 80.00 2105 \out_reg[0]/SDRFFRX1M 0 56 3 48 85.71 2006 \out_reg[3]/SDRFFRX1M 0 56 6 35 62. Maximum Storage used during the run 1 INFO (TLP-602): Reading CPF information from the Encounter Test database. Encounter Test: Guide 4: Faults Reporting Faults.-----. Maximum Storage used during the run and Cumulative Time in hours:minutes:seconds: Working Storage = 10.36 2824 \out_reg[0]/SDRFFRX1M 0 56 3 48 85.71 3163 \out_reg[2]/SDRFFRX1M 0 56 4 48 85.07 2348 \out_reg[0]/SDRFFRX1M 0 56 3 45 80.36 2396 \out_reg[1]/SDRFFRX1M 0 56 3 45 80.71 3016 \out_reg[4]/SDRFFRX1M 0 56 6 45 80.856 bytes (Paging) Swap Space = 15.63 3358 \out_reg[1]/SDRFFRX1M 0 54 3 43 79.71 2872 \out_reg[1]/SDRFFRX1M 0 56 3 48 85.71 2153 \out_reg[1]/SDRFFRX1M 0 56 4 48 85.588 bytes CPU Time = 0:00:00.00 1862 \out_reg[0]/SDRFFRX1M 0 56 3 48 85.974.700 bytes Mapped Files = 249.71 3259 \out_reg[4]/SDRFFRX1M 0 54 9 37 68.63 3454 \out_reg[3]/SDRFFRX1M 0 54 3 43 79.71 3211 \out_reg[3]/SDRFFRX1M 0 56 4 48 85.71 2201 \out_reg[2]/SDRFFRX1M 0 56 4 48 85.36 2444 \out_reg[2]/SDRFFRX1M 0 56 3 45 80. ------------------------------ INFO Messages. 1 INFO (TLP-603): Getting Power Component instances active in Power Mode ’PM2’. Test Objectives.71 1910 \out_reg[1]/SDRFFRX1M 0 56 3 48 85.12 © 1999-2015 All Rights Reserved.71 2920 \out_reg[2]/SDRFFRX1M 0 56 3 48 85.71 2249 \out_reg[3]/SDRFFRX1M 0 56 10 35 62.36 3067 \out_reg[0]/SDRFFRX1M 0 56 4 48 85. 1 INFO (TLP-604): Found 30 active instances of Power Component type(s) ’srpg’ in Power Mode ’PM2’.10 [end TDA_001] ******************************************************************************* * Message Summary * ******************************************************************************* Count Number First Instance of Message Text ------.36 2492 \out_reg[3]/SDRFFRX1M 0 56 6 45 80.71 2968 \out_reg[3]/SDRFFRX1M 0 56 3 48 85..459. October 2015 61 Product Version 15. .05 Elapsed Time = 0:00:00.71 3115 \out_reg[1]/SDRFFRX1M 0 56 4 48 85.50 2054 \out_reg[4]/SDRFFRX1M 0 56 6 42 75.. 1 INFO (TDA-001): System Resource Statistics. and Statistics Depth #Faults #Untested #Tested %TCov HierIndex SimpleName/CellName 0 56 6 42 75.36 INFO (TDA-001): System Resource Statistics.63 3502 \out_reg[4]/SDRFFRX1M #Faults #Tested #Untested %TCov Total Power Comp Stats 1670 1342 131 80.50 2297 \out_reg[4]/SDRFFRX1M 0 56 8 37 66.71 1958 \out_reg[2]/SDRFFRX1M 0 56 3 48 85.

Reporting Bridge Fault Statistics

Figure 3-6 shows a sample from the default output for report_fault_statistics when there are static and dynamic bridge faults in the fault model. The information near the top, about net adjacency, is included only if the input to build_bridge_faultmodel was an OpenAccess (OA) database.

Figure 3-6 Bridge Fault Statistics Report

Total Adjacent Net Run Length (microns): 195591.953125
Total (Bridge) Faulted Adjacent Net Run Length (microns): 164683.312500 (84.197388% of total)
Net Adjacency Parameter: layerminspacing=minimum
Global Ignored Static Fault Count 2
Global Ignored Dynamic Fault Count 2

INFO (TFM-701): Fault Statistics for Global :
---------------------- Global Faults ----------------------
Global             %TCov  Total   Tested  Possibly  Redundant  Untested
Total Static       0.00   69162   0       0         0          69162
Collapsed Static   0.00   51808   0       0         0          51808
PI Static          0.00   110     0       0         0          110
PO Static          0.00   184     0       0         0          184
Total Dynamic      0.00   90840   0       0         0          90840
Collapsed Dynamic  0.00   68610   0       0         0          68610
PI Dynamic         0.00   110     0       0         0          110
PO Dynamic         0.00   184     0       0         0          184
Parametric         0.00   7168    0       0         0          7168
Static Bridge      0.00   63400   0       0         0          63400
Dynamic Bridge     0.00   63400   0       0         0          63400
IDDq               0.00   69162   0       0         0          69162

Report Domain Faults

To report a list of faults within a clock domain, use the report_domain_faults function. This function will generate a report that contains the faults that meet the specified criteria. The faults included are in the domain(s) identified by the clock constraints or test sequences. To list faults in an individual domain, select one sequence or use a clockconstraints file with the clock(s) for the single domain. To see the faults in each domain, run the command multiple times selecting an individual domain each time.

The syntax for this function is given below:
report_domain_faults experiment=tg1 testmode=FULLSCAN_TIMED workdir=mywd faultlocation=net clockconstraints=clk_const
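For example, to see the faults in each domain separately, the same command can be run once per domain, each time pointing at a clock constraints file that contains only that domain's clock(s). The file names clkA_const and clkB_const below are placeholders for such single-domain constraint files:

report_domain_faults experiment=tg1 testmode=FULLSCAN_TIMED workdir=mywd faultlocation=net clockconstraints=clkA_const
report_domain_faults experiment=tg1 testmode=FULLSCAN_TIMED workdir=mywd faultlocation=net clockconstraints=clkB_const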

Following is a sample output of this function for a domain in a clock constraint file, clk_const:

INFO (TFM-705): Testmode: FULLSCAN_TIMED Fault List:
Fault List for domains from Clock Constraints File clk_const:

Fault  Status  Type  Sim Func/Cell Name  Fault pin name
379    T       OSR   PI                  Pin.f.l.apb_1.nl.feq.rx_clk
380    T       OSF   PI                  Pin.f.l.apb_1.nl.feq.rx_clk
427    T       OSR   PI                  Pin.f.l.apb_1.nl.feq.wb_clk
428    T       OSF   PI                  Pin.f.l.apb_1.nl.feq.wb_clk
735    Unio    ISR   MUX                 Pin.f.l.apb_1.nl.comp_done_reg.I0.DOUT
736    Unio    ISF   MUX                 Pin.f.l.apb_1.nl.comp_done_reg.I0.DATA0
737    T       ISR   MUX                 Pin.f.l.apb_1.nl.comp_done_reg.I1.DATA1
738    T       ISF   MUX                 Pin.f.l.apb_1.nl.comp_done_reg.I1.DATA1
...

For a complete description of the report_domain_faults syntax, refer to report_domain_faults in the Encounter Test: Reference: Commands.

Report Domain Fault Coverage Statistics

To report fault coverage statistics for a clock domain, use the report_domain_fault_statistics function. This command reports the domain coverage for the superset of faults in all clocking sequences in the clock constraint file or the specified test sequence(s). To get separate statistics for multiple domains, run the command multiple times.

The syntax for this function is given below:
report_domain_fault_statistics experiment=printCLKdom testmode=FULLSCAN /
workdir=mywd coveragecredit=tested testsequence=myclkseq

Note: To get a list of domains with the total number of faults in each domain, run create_logic_tests or create_logic_delay_tests reportsequencefaults=yes.

Inputs
■ Encounter Test model from build_model
■ Test mode from build_testmode
■ Fault model from build_faultmodel
■ Experiment or committed results from ATPG

The following is a sample report for experimental test coverage based on clocking sequences.

Experiment Statistics: FULLSCAN.printCLKdom

#Faults  #Tested  #Possibly  #Redund  #Untested  #PTB  #TestPTB  %TCov  %ATCov  %PCov  %APCov  %PTBCov  %APTBCov
3152     1107     68         0        1977       0     0         35.12  35.12   37.28  37.28   35.12    35.12
1584     690      47         0        847        0     0         43.56  43.56   46.53  46.53   43.56    43.56
1730     612      45         0        1073       0     0         35.38  35.38   37.98  37.98   35.38    35.38
928      411      33         0        484        0     0         44.29  44.29   47.84  47.84   44.29    44.29
802      201      12         0        589        0     0         25.06  25.06   26.56  26.56   25.06    25.06
1568     417      21         0        1130       0     0         26.59  26.59   27.93  27.93   26.59    26.59

Report Path Faults

To report path faults using the graphical user interface, refer to Report Path Faults in the Encounter Test: Reference: GUI.

For a complete description of the report_pathfaults syntax, refer to report_pathfaults in the Encounter Test: Reference: Commands.

The syntax for the report_pathfaults command is given below:
report_pathfaults experiment=<experiment_name> testmode=<testmode_name> /
workdir=<directory> globalscope=no

Sample output for the short and long format (specified using the reportdetails=yes|no keyword) of the report_pathfaults command is shown below:

Short form

- - - - - - - - - - - - - - - - - - - - - - - - - - - -
Listing of individual path faults with status and groups
- - - - - - - - - - - - - - - - - - - - - - - - - - - -

There are 16 path faults and 8 path groups in this fault model

faultID: 1 transition: 0->1 size: 4 status: TG Fail Near Robust group: Path_1_01
faultID: 2 transition: 0->1 size: 4 status: TG Fail Near Robust group: Path_1_01
faultID: 3 transition: 1->0 size: 4 status: TG Fail Near Robust group: Path_1_10
faultID: 4 transition: 1->0 size: 4 status: TG Fail Near Robust group: Path_1_10
faultID: 5 transition: 0->1 size: 4 status: TG Fail Near Robust group: Path_2_01
...
Path group: Path_4_10 has 1 Path Faults
Path group: Path_4_01 has 1 Path Faults
Path group: Path_3_10 has 2 Path Faults
Path group: Path_3_01 has 2 Path Faults
...

Long Form

- - - - - - - - - - - - - - - - - - - - - - - - - - - -
Listing of individual path faults with status and groups


- - - - - - - - - - - - - - - - - - - - - - - - - - - -

There are 16 path faults and 8 path groups in this fault model

faultID: 1 transition: 0->1 size: 4 status: TG Fail Near Robust group: Path_1_01
Pin Pin.f.l.TESTCASE1.nl.FF_2.__i0.dff_primitive.Q,
Pin Pin.f.l.TESTCASE1.nl.FF_2.__i4.01,
Pin Pin.f.l.TESTCASE1.nl.FF_3.mux1.DOUT,
Pin Pin.f.l.TESTCASE1.nl.FF_3.__i0.dff_primitive.Q

faultID: 2 transition: 0->1 size: 4 status: TG Fail Near Robust group: Path_1_01
Pin Pin.f.l.TESTCASE1.nl.FF_2.__i0.dff_primitive.Q,
Pin Pin.f.l.TESTCASE1.nl.FF_2.__i4.01,
Pin Pin.f.l.TESTCASE1.nl.FF_3.mux1.DOUT,
Pin Pin.f.l.TESTCASE1.nl.FF_3.__i0.dff_primitive.Q
...
...
...
Path group: Path_4_10 has 1 Path Faults
Path group: Path_4_01 has 1 Path Faults
Path group: Path_3_10 has 2 Path Faults
Path group: Path_3_01 has 2 Path Faults
...
...
...

The fault status can be any of the following:
■ Tested Hazard Free
■ TG Fail Near Robust
■ Not Processed
■ Test Gen NearRobust

Report Path Fault Coverage Statistics
To report path statistics using the graphical interface, refer to Report Path Fault Statistics in
the Encounter Test: Reference: GUI.

For a complete description of the report_pathfault_statistics syntax, refer to
report_pathfault_statistics in the Encounter Test: Reference: Commands.

The syntax for the report_pathfault_statistics command is given below:
report_pathfault_statistics experiment=<experiment name> /
testmode=<testmode name> workdir=<directory> pathname=<value>

An example of path faults statistics follows:


Path Fault Coverage Definitions

#Faults : Number of Path Faults
#Tested : Number of Path Faults marked tested.
#HzFree : Number of Path Faults marked tested and hazard free
#Robust : Number of Path Faults marked tested and robust
#NrRob : Number of Path Faults marked tested and nearly robust (almost
robust)
#NoRob : Number of Path Faults marked tested and Non Robust
#NoTest : Number of Path Faults for which Path Test Generation could not
generate a test

%TCov (%Test Coverage) : #Tested/#Faults
%HFree (%Hazard Free TCov) : #HzFree/#Faults
%Rob (%Robust TCov) : #Robust/#Faults
%NrRob (%Nearly Robust TCov) : #NrRob/#Faults
%NoRob (%Non Robust TCov) : #NoRob/#Faults
%NoTest (%Untestable) : #NoTest/#Faults

Test Mode Statistics for Path Faults

        #Faults  #Tested  #HzFree  #Robust  #NrRob  #NoRob  #NoTest  %TCov  %HFree  %Rob  %NrRob  %NoRob  %NoTest

Paths   1000     800      200      150      200     250     100      80.0   20.0    15.0  20.0    25.0    10.0
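For example, the percentages in the sample row follow directly from the counts: %TCov = 800/1000 = 80.0, %HFree = 200/1000 = 20.0, %Rob = 150/1000 = 15.0, %NrRob = 200/1000 = 20.0, %NoRob = 250/1000 = 25.0, and %NoTest = 100/1000 = 10.0.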

Report Package Test Objectives (Stuck Driver and Shorted Nets)
Report Package Test Objectives produces information for objectives defined in the
objectiveModel and objectiveStatus files. Stuck driver and shorted net objectives are defined
for I/O wrap and interconnect test purposes. Both of these objectives are based upon target
nets. Stuck driver objectives are defined to check the integrity of the path from a driver to each
of its receivers. Shorted nets objectives are defined to check the integrity of the paths for a
set of target nets.

To report package test objectives, use report_sdtsnt_objectives. Refer to
report_sdtsnt_objectives in Encounter Test: Reference: Commands for more information.

The following information is produced for static and dynamic stuck driver objectives:

For stuck driver objectives:
■ Objective - Each objective is assigned a unique identifier called the objective index.
■ Type - Static or Dynamic
■ Value - The objective value


■ Status - Test status of objective
■ Intrachip - A yes or no indicator. Intrachip indicates whether the driver and receiver are
on the same chip.
■ PkgPinNet - A yes or no indicator. PkgPinNet indicates whether the driver or receiver is
associated with a net which has a primary input, output, or bidi pin.
■ Net - The net which connects the driver and receiver.
■ Driver - The logic model block identified as the driver.
■ Receiver - The logic model block identified as the receiver.

The shorted nets objective list includes the number of target nets and the corresponding
number of objectives. Each net is listed in a table showing the logic value required on that net
for a given objective.

Report Package Test Objectives Coverage Statistics
To report package test objectives coverage, use
report_sdtsnt_objective_statistics. Refer to report_sdtsnt_objective_statistics in
Encounter Test: Reference: Commands for more information.


4 Analyzing Faults

Fault analysis is an early test generation process that determines if there are untestable static logic faults in the design. When the early test generation process ends, data about these untestable faults is provided to you for analysis.

Fault Analysis Process

A simplified view of the early test generation/fault analysis process is shown in the following figure:

Figure 4-1 Simple AND Block for Fault Analyzer Example

The early test generation process includes the following steps:
■ Set up the test for a single fault. For the example, assume the test generator is trying to find a test for the first input of the AND (neta) stuck at one. In order to detect the difference between the good device and the device with the stuck-at-1 fault, the first step is to set neta=0.
■ Next, put a non-controlling value on netb to allow the effect of neta to be seen at the output. Set netb=1.

■ Expect a value of zero on netc when the device is good and a value of one if the defect represented by the fault on neta occurs. This is represented as 0/1 on netc.
■ The input values (on neta and netb) must be justified back to the PIs or to flops or latches whose value can be scanned in.
■ The output values (on netc) must be propagated forward to the POs or to flops or latches whose value can be scanned out.

If the values cannot be set up, or cannot be observed, or if conflicting values are required to do the justification and propagation, then the fault is untestable.

Analyze Deterministic Faults

Use the analyze_deterministic_faults command to analyze untested faults and identify observe and control test points to test such faults. Refer to analyze_deterministic_faults in Encounter Test: Reference: Commands for more information.

In many cases, a high test coverage requires a large number of test points. For example, if there are over 100K untested faults in a design, more than 20K test points might be required to achieve a test coverage of 99.9% or higher. In these cases, a test point might provide testability for only a single fault. The analyze_deterministic_faults command identifies how many faults should be tested with each identified test point, thus allowing you to use only those test points that test a large number of faults.

The following is a sample usage of the analyze_deterministic_faults command:
analyze_deterministic_faults workdir=workdir inexperiment=experiment_name testmode=testmode

The previous command creates all possible test points for the experiment. The command does not identify test points for any untested faults for the following, since test points in this logic could invalidate the design or testmode and cause failures in verify test structures:
■ Inactive logic
■ Outputs of RAMs, ROMs, or non-scan flops
■ Clock paths, Test Inhibit/Test Constraint paths, scan paths, tied logic paths, or the outputs of RAMs, ROMs and non-scan latches

The total number of test points that the command identifies is determined by the number of untested faults in the design and the ability to combine as many test points as possible and still retain the ability to test the faults.

Input Files
■ Encounter Test Model from build_model
■ Testmode from build_testmode
■ Fault model from build_faultmodel
■ Experiment from previous test generation run. This provides the starting point for coverage for untested faults.
■ tgfaultlist - file (optional) containing a list of faults on which analyze_deterministic_faults is to work. This allows you to target specific pieces of the design.
■ notpfile - a file containing locations where testpoints should not be inserted. Locations may be specified with one of the following statements:
block <instance_names> ;
cell <cell_names> ;
net <net_names> ;
pin <pin_names> ;

where:
❑ instance_names: list of names of hierarchical instances in the design. Testpoints will not be placed on any pin on the boundary of the instances, nor anywhere inside the instances.
❑ cell_names: list of names of cells in the design (may be technology cells, macros, or any level of hierarchy). Testpoints will not be placed on (or inside) any instance of these cells.
❑ net_names: list of names of hierarchical nets in the design. Testpoints will not be placed on these nets, or on any net segments that are electrically common to these nets.
❑ pin_names: list of names of hierarchical pins in the design. Testpoints will not be placed on these pins.

Note:
❑ Names containing special characters (characters other than A-Z, a-z, 0-9 and _) should be specified within quotation marks.
❑ The keywords block, cell, net, and pin are case insensitive.
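For example, a notpfile might contain statements such as the following (the instance, cell, net, and pin names shown are placeholders, not names from any real design):

block core_top/cpu0 core_top/cpu1 ;
cell RAM64X8 SENSEAMP ;
net core_top/clk_root ;
pin core_top/cpu0/alu/carry_out ;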

Outputs

TestPointInsertion.<testmode name>.dfa in the testresults or workdir directory. If a testresults directory does not exist then the command places the output file in the workdir.

The output file lists the test points in weighted format, starting with the test points that test the highest number of faults. If an observe and control test point each test the same number of faults, then the observe test point will be listed first. A sample of the file is given below:

entity XYZ is
attribute INTERNAL_TEST_POINT_LIST of XYZ: constant is
"\NB/NBBST/UU_BistPauseCnt_bp0_1_X9 [pin:Z] [type:O] [faults:9]." &
"\NB/NBBST/UU_BistPauseCnt_bp0_1_X8 [pin:Z] [type:O] [faults:8]." &
"\NB/XBPML/UU_NBXBR_PerfMonXbar_2_1 [pin:Y] [type:C0] [faults:7]." &
"\NB/NBBST/UU_BistPauseCnt_bp0_1_X11 [pin:Z] [type:O] [faults:6]." &
"\NB/XBL2S/Ldt2ToXbarBuf/UU_invCrcError_bit1 [pin:Z] [type:O] [faults:6]." &
"\NB/XBPML/UU_NBXBR_PerfMonXbar_3_1 [pin:Z] [type:O] [faults:6]." &
"\NB/NBBST/UU_BistPauseCnt_bp0_1_X10 [pin:Y] [type:C1] [faults:6]." &
"\NB/NBBST/UU_BistCamState_bp0_int3_1 [pin:Z] [type:O] [faults:1]." &
"\NB/XBL1D/XbarToLdt1Buf/UU_tPktCnt_n2_bit2 [pin:Z] [type:O] [faults:1]." &
"\NB/XBL1S/Ldt1ToXbarBuf/UU_LdtToXbarr_xp0/UU_Z0 [pin:Z] [type:O] [faults:1]".
end XYZ.

In the output:
■ The first entry, for example \NB/NBBST/UU_BistPauseCnt_bp0_1_X9 in the given output, is the block name
■ The second entry, for example [pin:Z] in the given output, is the pin name
■ The third entry, for example [type:O] in the given output, is the type of test point. The following types of test points are available:
❑ [type:O] - observe test point
❑ [type:C0] - control to 0 test point
❑ [type:C1] - control to 1 test point
❑ [type:CB] - control both test point
■ The last entry, for example [faults:9] in the given output, depicts the number of faults tested

You can edit this file to remove any test points. However, you must ensure that the last line ends with ".".

If you specify traceinactive=yes, the command creates an additional file named InactiveFaults.<testmode name> in the testresults directory. A sample is given below:

Inactive fault analysis summary
-------------------------------
pin BS1 creates 34 inactive fault(s)
2140 2133 2123 2072 2065 2055 1870 1863 1853 1802 1795 1785 1600 1593 1583 1532 1525 1515 1330 1323 1313 1262 1255 1245 1127 1114 1104 1041 1028 1018 128 118 108 9
pin DI1 creates 42 inactive fault(s)
2211 2210 2209 2208 2193 2192 2189 2186 1949 1948 1947 1946 1931 1930 1927 1924 1917 1916 1679 1678 1677 1676 1661 1660 1657 1654 1647 1646 1409 1408 1407 1406 1391 1390 1387 1384 1377 1376 967 966 30 29
pin RI creates 29 inactive fault(s)
2207 2206 2199 2099 1945 1944 1937 1920 1829 1675 1674 1667 1650 1559 1405 1404 1397 1380 1289 970 191 190 181 180 163 162 148 147 49
pin BS2 creates 12 inactive fault(s)
2109 2096 2086 1839 1826 1816 1569 1556 1546 1299 1286 1276

In the output:
■ Each line starting with pin <pinname> is a cause of the specified number of inactive faults.
■ The line(s) under each pin entry lists the fault indexes of the faults that are inactive because of that pin.

Sample Methodology for Large Parts

For large designs that typically contain thousands of untested faults, it is recommended that you use the following methodology to analyze untested faults:
1. Run report_fault_statistics, as shown below, to identify areas in the design that have low coverage:
report_fault_statistics workdir=workdir testmode=testmode experiment=exp_name hierlevel=macro colsuntested=yes globalscope=no
2. Create a fault list from the identified areas of the design by running report_faults, as shown below:
report_faults workdir=workdir testmode=testmode experiment=exp_name reporthierarchical=yes globalscope=no statuspossibly=no statusincomplete=yes statusaborted=yes statusuntestable=yes typedrvrcvr=no inputcommitted=no hierrange=range
The hierrange keyword is the list of hierblock indexes for which you want the untested faults.

3. Feed this list of faults to the analyze_deterministic_faults command by specifying the tgfaultlist keyword, as shown below:
analyze_deterministic_faults workdir=workdir testmode=testmode inexperiment=exp_name tgfaultlist=fault_list

A disadvantage of this methodology is that it does not target the complete design but only specific areas in the design, resulting in limited test coverage.

Restrictions

Encounter Test applies the following restrictions on deterministic fault analysis:
■ Deterministic Fault Analysis processes only static (stuck-at or pattern) faults.
■ Fault Analysis is generally incapable of resolving all untestable faults in the presence of non-scannable memory elements. Faults that require multiple-time frames to be proven untestable are classified as incomplete.
■ Specific analysis capability is not provided for:
❑ Driver and receiver faults
❑ Testability of faults by IDDq tests
■ Limitations of the interactive analysis of Deterministic Fault Analysis messages:
❑ Analysis of aborted faults is limited. During analysis of an aborted fault, only the associated fault block is displayed.
❑ Only a single set of fault analysis data can exist at any point in time.

Analyze Faults

The analyze_faults command is basically the beginning of the create_logic_tests process; therefore, analyze_faults does not produce vectors, it only produces TFA messages to report testability problems. The messages do not print to the create_logic_tests log but you can view them with GUI message analysis. You also can create TFA messages by running create_logic_tests.

To perform Analyze Faults using the graphical interface, refer to "Analyze Faults" in the Encounter Test: Reference: GUI.

To perform Analyze Faults using command lines, refer to "analyze_faults" in the Encounter Test: Reference: Commands.

Input Files
■ Encounter Test Model from build_model
■ Testmode from build_testmode
■ Fault model from build_faultmodel

Output
■ Fault status information
■ TFA messages in the log that indicate testability problems. The reason is included as the last sentence in the text of the message. These messages can be analyzed interactively with GUI message analysis. There are three messages generated:
❑ TFA-001 - Untestable faults
❑ TFA-020 - Redundant faults
❑ TFA-030 - Aborted faults

Analyzing TFA Messages from Create Logic Tests or Analyze Faults
1. Select the Messages Tab and click the View Messages icon, or select Window - Messages - Analyze Faults. The Analyze Faults Message Summary list is displayed.
2. Select a message number, then select View. The Analyze Faults Specific Message List is displayed. See "Specific Message List Window" in the Encounter Test: Reference: GUI for more information.
3. Select a specific fault to analyze by selecting a specific message and then Analyze. When you analyze a TFA message, an informational message dialog pops up along with the display of the logic containing the fault and any additional logic that was found to contribute to the controlability or observability problem. The message dialog provides additional information about the cause of the testability issue and/or recommends a test point that would improve the testability of that fault. Analyze the message using Actions on the View Schematic Window. Refer to Encounter Test: Reference: GUI for details.

Refer to "Message Analysis Windows" in the Encounter Test: Reference: GUI for additional information.

GUI Schematic Fault Analysis

You can use Analyze Faults on a fault in the Graphical User Interface Schematic to use fault analysis on specific faults. This allows you to graphically view testability problems in the design.

Note: Analyze Fault on the GUI provides an optimistic analysis of the fault. ATPG includes additional processes that may prove a fault untestable that the GUI reports as testable.

Analyze Fault uses a dual-machine nomenclature to represent the good machine and fault machine values associated with each entity. The net values displayed by Deterministic Fault Analysis are expressed in terms of good and fault machine logic value pairs. The first value is for the good machine and the second value is for the fault machine (design behavior in the presence of a given fault). For example, a 1/0 represents a design value of one for the good machine, and a design value of zero in the presence of the fault. The logic values of entities (blocks, pins, or nets) unaffected by a particular fault are normally expressed as V/V, where V can be any of the valid logic values. These values are displayed on the design and in the Information Window.

Deterministic Testability Measurements for Sequential Test

Deterministic Testability Analysis (Testability Measurements) provides four design testability analysis functions. These deterministic testability analysis (TTA) functions are performed for the good design and at the test mode level.
■ The first function provides potentially-attainable logic values (LV) at output pins of each block, by means of Possible Value Set (PVS) simulation. See "Possible Value Set (PVS) Concepts" on page 78 for details.
■ The second function provides deterministic controllability/observability measures at output pins of each primitive block.
■ The third function provides sequentiality analysis at input and output pins of each primitive block.
■ The fourth function provides latch tracing information.

Deterministic Testability Analysis prints out results on output pins based on the pin types being selected. If selected, one results table and one stats table are printed for each of the two analysis functions. The results tables provide the analysis results for each output pin of the selected types. The stats tables provide a summary of the analysis. The analysis data can be included in the GUI Schematic Window by selecting the corresponding Information Window Display options. Note that these options are not retained because of the long run time associated with these tasks. It is therefore recommended that the options be used only when analyzing small sections of sequential logic.

Performing Deterministic Testability Analysis

Notes:
■ To perform Deterministic Testability Analysis, refer to "report_testability_measurements" in the Encounter Test: Reference: Commands.
■ To perform Latch Tracing Analysis using command lines, refer to "report_sequential_trace" in the Encounter Test: Reference: Commands.

Input Files
■ Encounter Test Model from build_model
■ Testmode from build_testmode
■ Fault model from build_faultmodel

Output
Testability information in the log

Note: A blank entry in the controlability column indicates that the net is two-state instead of three-state.

Possible Value Set (PVS) Concepts

While the Encounter Test Logic Values (LV) technique describes signal simulation responses in a logic network and for a given instance, PVS is a logic algebra identifying potentially-attainable logic values at a pin at and beyond a given time instance. These two techniques complement each other, and together they form the complete (static) logic value system of Encounter Test.

PVS uses the five fully-specified logic values (0, 1, Z, L, H) in its base set. To be consistent with the logic scope defined by Encounter Test Logic Values, PVS consists of the 32 unique subsets of these five fully-specified logic values. Initially, the PVS values are {0, 1 }, {0, 1, Z }, and {Z, L, H } for two-state, three-state, and weak signals, respectively, while for PFET and NFET, the initial PVS values include all five fully-specified logic values, {0, 1, Z, L, H }. During TTA calculations, the signal's PVS may result in smaller sets than the initial PVS sets. For example, the PVS at the output of a three-state driver is {Z } if its enable signal's PVS is {0 }. Similarly, for an AND with an X-source at one of its inputs, its output PVS would be {0 }. Information about attainable logic signal values is the basis of Deterministic Testability Analysis.

Deterministic Controllability/Observability (CO) Measure Concepts

CO analysis is a measure combining the number of primary inputs (PI's) and the (compounded) logic depth needed in setting a signal value. Initially, 0-controllability and 1-controllability for all PI's are set to 1 and for all internal signals set to the largest integer supported by the software. After TTA calculations, all internal signals converge to smaller numbers. If a signal's controllability measure remains at the maximum integer, it indicates that it is impossible to control this signal to the respective logic value. A surprisingly large controllability measure, though less than the maximum integer, may also indicate potential testability problems, which may cause difficulties in test generation.

For controlling logic values of primitives, the controllability of the controlling values at the outputs is the minimum controllability of the controlling values at the inputs plus 1 (for the added logic depth going from an input to the output of a primitive logic function). For non-controlling logic values, the controllability of the non-controlling values at the outputs is the sum of the controllabilities of the non-controlling values at the inputs plus 1. For example, with a 2-input AND gate, its 0-controllability measure is 1 plus the smaller 0-controllability of its two inputs, while its 1-controllability measure is 1 plus the sum of the 1-controllabilities of its two inputs.
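For example, if the two inputs of a 2-input AND gate have 0-controllabilities of 3 and 5 and 1-controllabilities of 2 and 4 (hypothetical values), then by the rules above the output 0-controllability is 1 + min(3, 5) = 4 and the output 1-controllability is 1 + (2 + 4) = 7.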


Similar to controllability measures, the observability measure combines the number of PI's
and the (compounded) logic depth needed to observe a signal through primary outputs
(PO's). Initially, observability measures would assume an integer 1 for all PO's and the
maximum integer for all internal signals. During TTA's observability calculations, all
observability internal signals would converge to integer numbers between 1 and the
maximum integer. A maximum integer or a surprisingly large observability measure would
indicate that it is impossible or very difficult to observe the signal at PO's.

CO analysis imposes a SCAN penalty on Scan controllable/measurable signals, to
distinguish these Scan controllable/measurable signals from PI's and PO's for cost
considerations. The default Scan penalty is set to 2. A feedback penalty of 10000 is also
imposed on signals in feedback loops. Non-scan latches and memories would also be
penalized by 10000 to distinguish them from combinational and scan logic. CO calculation
penalties allow user overrides.

Sequential Depth Measure Concepts
Similar to controllability/observability measures, sequential depth analysis measures the
number of sequential elements that must be traversed in order to control or observe a node. Currently, this analysis
computes six sequential depth measures for each node. These six measures are: minimum
and maximum sequential depth from Topological Control Points (TCP), minimum and
maximum sequential depth from Topological Measure Points (TMP), minimum sequential
depth for setting a logic ZERO and ONE to a node.

TCP analysis starts by setting sequential depth of 0 to primary inputs, tied nodes, lineholds,
TI's, fixed value latches and test constraint latches. For the rest of the nodes, the minimum
and maximum TCP measures are calculated as the minimum and maximum TCP measures
among the inputs of a node, except for non-controllable latches and memory for which a 1 is
added to the output nodes. Controllable latches are considered as directly controllable so
their minimum and maximum TCP measures are the same as primary inputs.

Minimum and maximum TMP measures are calculated in a similar way as TCP measures,
except it starts by setting sequential depth of 0 to primary outputs and that measurable
latches are considered as if they were primary outputs.

The minimum sequential depth of setting a logic ZERO and ONE combines the above
concepts of deterministic controllability and minimum TCP measures.

Latch Tracing Analysis Concepts
Contrary to the deterministic controllability/observability measures and sequential depth
measures which would start processing from inputs to outputs or from outputs to inputs, latch


tracing starts with a node and traces to both inputs and outputs. Latches are separated into
the input and output cones of the focal node and are sorted according to the relative levels to
the node.

Random Resistant Fault Analysis (RRFA)
Analyze Random Resistance identifies sections of a design that are resistant to testing by flat,
uniformly distributed random patterns and provides analysis information to help improve the
random pattern testability of the design. This analysis information is based on signal
probabilities computed by simulating random patterns and counting the number of times each
net takes on the values 0, 1, X and Z. Analyze Random Resistance then uses the faults that
were not tested by the random pattern simulation to determine points in the design that tend
to block fault activation and propagation to observable points.
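As a simple illustration (the counts here are hypothetical), if signal probabilities are taken as the relative frequencies observed during the random pattern simulation, a net that takes the value 1 in only 12 of 10,000 simulated patterns has an estimated 1-probability of 0.0012; faults whose activation or propagation requires that net to be 1 are therefore likely to be resistant to flat random patterns.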

The analysis information provided by Analyze Random Resistance falls into four key areas:
■ Testpoint Insertion
Note: The primary use of the information provided by Analyze Random Resistance is for
testpoint insertion, which is discussed in this section.

Analyze Random Resistance
To perform Analyze Random Resistance using the graphical interface, refer to “Analyze
Random Resistance” in the Encounter Test: Reference: GUI.

To perform Analyze Random Resistance using command lines, refer to
“analyze_random_resistance” in the Encounter Test: Reference: Commands.

Input Files
■ Encounter Test Model from build_model
■ Testmode from build_testmode
■ Fault model from build_faultmodel

User Input (Optional)
■ Linehold


This user-created file is an optional input to test generation. It specifies design PIs,
latches, and nets to be held to specific values for each test that is generated. See
“Linehold File” in Encounter Test: Guide 5: ATPG for more information.
■ Test Sequence
This points to user specified clock sequences that have been read in using
read_sequence_definitions. The test sequence is required if you are processing
an LBIST testmode.
See “Coding Test Sequences” in Encounter Test: Guide 5: ATPG for an explanation
of how to manually create test (clock) sequences.
■ Probability input file
A file where controllabilities of inputs and observabilities of outputs are specified for
Analyze Random Resistance to use as starting values. The file is specified using the
analyze_random_resistance keyword probabilityfile.
■ notpfile
A file containing locations where testpoints should not be inserted. Locations may be
specified with one of the following statements:
block <instance_names> ;
cell <cell_names> ;
net <net_names> ;
pin <pin_names> ;

where:
❑ instance_names: list of names of hierarchical instances in the design. Testpoints
will not be placed on any pin on the boundary of the instances, nor anywhere inside
the instances.
❑ cell_names: list of names of cells in the design (may be technology cells, macros,
or any level of hierarchy.) Testpoints will not be placed on (or inside) any instance
of these cells.
❑ net_names: list of names of hierarchical nets in the design. Testpoints will not be
placed on these nets, or on any net segments that are electrically common to these
nets.
❑ pin_names: list of names of hierarchical pins in the design. Testpoints will not be
placed on these pins.
Note:


❑ Names containing special characters (characters other than A-Z, a-z, 0-9 and _) should be specified within quotation marks.
❑ The keywords block, cell, net, and pin are case insensitive.

Output
■ Fault status information
Analyze Random Resistance experiments cannot be committed because no vectors are produced.
■ TRA (Analyze Random Resistance) messages, which can be analyzed with GUI Message Analysis.
■ Signal Probability information
■ TestPointInsertion.testmode.experiment – a file in testresults (or in the workdir if testresults does not exist). It contains the list of recommended test points in a format that can be used for inserting into the Encounter Test Model or in DFT Synthesis. For additional information, refer to Testpoint Insertion.

Test Points

This section discusses the following topics:
■ Recommendations
■ Insertion

Recommendations

In some cases, the random testability of a design may be increased by connecting a net to a scannable latch or primary output, thus improving the observability of the net. Analyze Random Resistance recommends places in the design where these Observe Points should be added. Figure 4-2 shows how an observe point is added in the design.

Figure 4-2 An Observe Test Point

In other cases, the random testability may be increased by adding an OR gate or an AND gate to a particular net with the other input connected to a scannable latch or primary input. Analyze Random Resistance recommends places in the design where these Control-1 and Control-0 test points should be added. Figure 4-3 shows how a control-1 test point is added in the design. For control-0, the dashed OR in the figure would be an AND.

Figure 4-3 Control-1 Test Point

In addition to graphical message analysis showing the recommended location for test points, Analyze Random Resistance creates a TestPointInsertion file containing test point recommendations written in TSDL format. This file may be modified if desired and used as input to RC-DFT Synthesis to insert test points automatically into the design. This file may also be specified on the GUI Schematic Edit pull-down or with the edit_model command to add the test points to the Encounter Test model strictly for experimentation purposes. In this case, the test points are modeled using primary inputs and primary outputs rather than scannable latches.

Refer to the following for additional information:
■ "Testpoint Insertion" on page 84
■ "Edit Test Points" in the Encounter Test: Reference: GUI

Testpoint Insertion

The following figure shows a typical processing flow for Test Point Insertion.


Figure 4-4 Test Point Insertion Flow

(Flow shown in the figure: Build a New Model → Build Test Mode, using Assumed Scan if scan has not been inserted → Build Fault Model → Analyze Random Resistance to Identify Test Points → Insert Test Points into the Encounter Test model → Analyze Random Resistance to Assess Effectiveness of the Inserted Test Points → if the testability results are not satisfactory, repeat the insertion and analysis; otherwise Insert Test Points in the design source with DFT Synthesis.)


1. Build a New Model
Run your normal build_model command to build an Encounter Test model. There are
no unique requirements for building the model for this flow.
For complete information on Build Model, refer to “Performing Build Model” in the
Encounter Test: Guide 1: Models.
2. Build Test Mode
Run build_testmode to build the testmode. The results of Random Resistant Fault
Analysis will be more accurate if you use a testmode that configures the scanchains as
they will be when the design is tested.
However, if the test structures have not been inserted in the design (i.e. the latches are
not scannable), you can use the Assumed Scan feature to simulate the latches as
scannable. This feature can be used to identify specific latches to assume as scannable
(for a partial scan implementation) or will assume all latches to be scannable.
For complete information, refer to:
❑ “Performing Build Test Mode” in the Encounter Test: Guide 2: Testmodes.
❑ “Assumed Scan” in the Encounter Test: Guide 2: Testmodes.
❑ “Build Test Mode” in the Encounter Test: Reference: GUI.
3. Build Fault Model
Run build_faultmodel to build a fault model. As Random Resistant Fault Analysis
analyzes only static faults, the fault model does not need to include any other faults.
However, you can have other types; RRFA will ignore all except the static faults.
Refer to Build Fault Model for more information.
4. Analyze Random Resistance to Identify Test Points
Run analyze_random_resistance tpi=yes (default) to identify the test points to
improve testability.
Ensure you enter the experiment name for the output.
Refer to “Analyze Random Resistance” on page 80 for additional details.
5. Insert Test Points into the Encounter Test model
Command line: use the following command to insert the testpoints into the Encounter
Test model
edit_model tsdlfile=testresults/TestPointInsertion.testmode.experiment


You can edit the test point insertion file prior to using it as input to edit model.
GUI: Use the following scenario to insert test points into the Encounter Test model:

a. Bring up the Encounter Test GUI and set the Analysis Context to select the testmode
and experiment used for analyze_random_resistance. The easiest way to do
this is to select the analyze_random_resistance task from the Task View.

b. Select View Schematic to bring up the schematic display. There is no need to
display any logic on the schematic, but you may display anything you like.

c. Select Edit then select Test Points.

d. In the Edit Test Points window, select the File button next to File Name and
navigate to the TestPointInsertion file containing the testpoints from RRFA (that is,
TPI file). Select that file and click OK.

e. Specify the number of desired test points from the TPI file (you may include All or a
specified number of top test points). Note there is also an additional selection on this
window to add a specific type of testmode on a specific pin; you can ignore that for
this scenario. When you have the desired list, click OK at the bottom of the window.

f. Click the Apply button next to the File button and you will see the top of the window
is populated with the details of the test point information. You can edit the list by
selecting and using the right mouse button.

g. Click OK at the bottom of the window. The edits will be included in the Edits
Pending window and the hierarchical model in memory will be updated to reflect the
changes.
Note: The actual design does not change at this point. Step i actually modifies the
design.

h. You may trace through the design to view what has been done. If you want to add a
test point at a specific pin you may select that pin on the design view and use the
right mouse button to edit test points. A small test point edit window will allow you to
select the type(s) of test points you want to add at that point. These will also be
added to the Edits Pending window.

i. When you have all the desired edits , select Edits Complete, Reinitialize on the
Edits Pending window. The hierarchical model will be written to disk and the flat
model, test mode, and fault model will be recreated.
6. Analyze Random Resistance to Assess Effectiveness of the Inserted Test Points.
Run analyze_random_resistance again and see the test coverage information. If
you inserted test points with the GUI, the easiest way to do this is to go back to the


analyze_random_resistance task on the Task View to bring up the Form that is set
up the way you ran it the last time. Use a different experiment name if you do not want to
change the TestPointInsertion file.
If the results are not satisfactory, you can insert additional test points.
7. Insert Test Points in the design source with DFT Synthesis
Bring up RC-DFT Synthesis and use the command:
insert_dft rrfa_test_points -input_tp_file ./testresults/TRAtestPointInsertionData.testmode.experiment

To select fewer than all of the test points in the file, use -max_number_of_testpoints <integer>.
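For example, to take only the 100 highest-weighted test points from the file (100 is an arbitrary value chosen for illustration):
insert_dft rrfa_test_points -input_tp_file ./testresults/TRAtestPointInsertionData.testmode.experiment -max_number_of_testpoints 100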
Refer to Using Encounter Test to Automatically Select and Insert Test Points in Design
For Test in Encounter RTL Compiler Guide for more information.


5 Deleting Fault Model Data

This chapter covers the Encounter Test tasks for deleting fault model, fault status, and test objectives data.

Delete Fault Model

This action removes all the faultModel and faultStatus files and all dependencies on these files for the specified fault model. If the command does not find the faultModel file to delete, it continues to remove files that are dependent on these files and registration records under the assumption that the file has been manually deleted. This provides a way of removing registration records and dependency records from globalData even if the files do not exist.

Note: Deleting the fault model does not affect test mode data and does not require any rebuilding of test modes.

To delete a fault model using the graphical interface, refer to "Delete Fault Model" in the Encounter Test: Reference: GUI.

To delete a fault model using command line, refer to "delete_faultmodel" in the Encounter Test: Reference: Commands.

An example of the delete_faultmodel command is given below:
delete_faultmodel workdir=/local/dlx sdtsnt=<yes|no>

Delete Fault Status for All Existing Test Modes

Use command build_faultmodel overwrite=no to rebuild the fault status information for the design without rebuilding the fault model. This effectively resets the status for all faults in all testmodes and invalidates any existing fault oriented test data. This enables you to build multiple test modes in parallel without contention.

Delete Committed Fault Status for a Test Mode

Use command delete_committed_tests to delete all the committed data for a testmode and reset the fault status back to its initial state from build_faultmodel.

Delete Alternate Fault Model

Use the delete_alternate_faultmodel command to delete a faultmodel built using build_alternate_faultmodel. The alternate fault model to be deleted is identified with the ALTFAULT keyword.

Delete Package Test Objectives

Use the delete_sdtsnt_objectives command to delete the objective model and status built using build_sdtsnt_objectives. The objective model and status can also be deleted using delete_faultmodel sdtsnt=yes. This will delete both the fault model and the objective model.

Note: The alternate fault model named ##TB_SDT that is built along with the objective model is not deleted through either of the above mentioned commands. Use delete_alternate_faultmodel altfault=##TB_SDT to delete this model.

Delete Fault Model Analysis Data

Use the delete_fautmodel_analysis_data command to delete the register array and random resistant analysis data built using prepare_faultmodel_analysis or build_faultmodel registerarray=yes and randomresistant=yes. Refer to Building Register Array and Random Resistant Fault List Files for Pattern Compaction for more information.
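For example, assuming a working directory of /local/dlx and a test mode named FULLSCAN (both placeholders), the deletions described above might be invoked as follows; see the Encounter Test: Reference: Commands for the complete keyword list of each command:

delete_committed_tests workdir=/local/dlx testmode=FULLSCAN
delete_alternate_faultmodel workdir=/local/dlx altfault=##TB_SDT
delete_sdtsnt_objectives workdir=/local/dlx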

6 Concepts

To generate tests for a design, the first thing that must be done is identify the behaviors that require testing and the targets of those tests. In the digital logic test industry, the most common targets for testing are called stuck-at and transition faults. In Encounter Test we refer to static and dynamic faults. Static faults, such as stuck-at faults, affect the behavior of the design without regard to timing. Dynamic faults, also called transition or delay faults, affect the time-dependent behavior of the design. Since these static and dynamic faults are the logic model equivalent of physical defects, you will sometimes see these referenced as logic faults. Encounter Test supports several different fault types and other test objectives. Other faults or objectives are identified for special testing purposes such as ensuring that a driver in the physical design is working prior to applying tests to the entire design.

These faults are analyzed and some of them are classified with fault attributes that allow test generation and fault simulation to know that they cannot be tested so they will not be selected for processing. Once the faults are included in a fault model, the ones that are selected for processing may be used in test generation or fault simulation processes which mark them with a fault status, or they may be used in a diagnostics process to identify a failing location on the physical device.

Fault Types

The following sections describe the types of faults created by Encounter Test.

Static (Stuck-at)

Static Pin Faults

A static pin fault is stuck-at logic zero or logic one. This can be on any pin in the model. By default Encounter Test generates the following static pin faults:
■ Inputs of standard primitives stuck-at non-controlling
■ Inputs of other primitives stuck-at both zero and one

■ Outputs of primitives stuck-at both zero and one

A stuck-at fault assumes a logic gate input or output is fixed to either a logic 0 or 1. Figure 6-1 shows an AND gate with input A Stuck-At-1 (S-A-1).

Figure 6-1 S-A-1 AND gate

The faulty AND gate perceives the "A" input as a logic 1 value regardless of the logic value placed on the input. The pattern shown in Figure 6-1 is a valid test of the input "A" S-A-1 because there is a difference between the faulty gate (fault machine) and the good gate (good machine). The pattern applied to the fault-free AND gate has an output value of 0. The pattern applied to the faulty AND gate has an output value of 1. When this condition occurs, this set of input patterns detects the fault. If they had responded with the same output value, the pattern applied would not have constituted a test for that fault (S-A-1). A good machine has no faults and a fault machine has one, and only one, of the stuck faults (S-A-1 or S-A-0). In other words, multiple faults are not considered in Encounter Test.
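The behavior of the two-input AND gate of Figure 6-1 can be summarized as follows (good output versus output with input A stuck-at-1); only the pattern A=0, B=1 produces a difference between the two machines and therefore detects the fault:

A  B   good output   output with A S-A-1
0  0   0             0
0  1   0             1
1  0   0             0
1  1   1             1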

Static Pattern Faults

A pattern fault is defined by a pattern specified on a set of inputs and the resulting good machine and fault machine values to expect on the outputs in order to detect the fault. Encounter Test automatically generates pattern faults for the following primitive logic functions:
■ XOR (exclusive OR - ODD)
■ XORN (exclusive OR - EVEN)
■ TSD (Three-State Driver)
■ LATCH (Latch Primitive)
■ MUX (MUXes)
■ __DFF (Flip-Flops)

Pattern faults ensure that virtually all possible defects associated with these logic functions can be effectively modeled, guaranteeing that all potential static defects will be covered. For example, a two-input XOR gate generates four static pattern faults, which represent all four possible input value combinations (00, 01, 10, 11).

__MUX2 primitive static pattern faults are also implemented in Encounter Test. Static pattern faults for a MUX require each input to be selected and observed at both a 1 and 0 while all other data inputs are at the opposite state. S-A-1 or S-A-0 faults on the I/O pins of a mux would not force these robust test patterns. The purpose for pattern faults on MUXes is to guarantee that adequate tests have been applied to test the defects that can occur on these designs. MUXes are generally constructed from transistor-level designs that can exhibit Z or Contention states when faulty.

Additional static pattern faults may be provided by the technology supplier or designer in a fault rule. Externally defined pattern faults may be specified in addition to those automatically generated by Encounter Test. This is done by creating fault rules and placing them in a file which is referenced by the Cell Library or netlist. An option during Build Fault Model will automatically look for fault rules that are named the same as the cells. If the filename is the same as the name of the cell to which it applies, it does not have to be referenced in the design source. See Build Fault Model Examples with Fault Rule Files for more information.

Automatically Generated Static Pattern Faults

The automatically generated static pattern faults are detailed in Figure 6-2 on page 95 through Figure 6-5 on page 97. The values in these figures are shown as (good machine)/(fault machine) requirements, except where both machine requirements are the same, in which case a single value is shown. For cases where the value does not matter, a dash (-) is shown.

Until a fault's effect has propagated to a node, the faulty design value for any node is assumed to be the same as the good design value for that node. Until the fault has been excited, the faulty design value for that node is the same as the good design value. Test generators will usually attempt to meet all good design required values to excite the fault, except in the one case where the good design value is a don't care, in which case the test generator attempts to obtain the value required for the faulty design. Fault simulators will always be checking the faulty design values against the required faulty design values for fault excitation. A good machine required value of (-) (don't care) is only of significance for sequential designs where the fault's effect could feed back to the required value node.

You may notice that for the automatically generated pattern faults, the faulty design "don't care" values are on nodes where even if the faulty design value was opposite the good design required value, the output pin value for the faulty design would still match the faulty design's behavior - thus making it redundant to require the faulty design to be at the value needed to excite the defect, although the fault model will represent these with an unknown (x) value.

Automatic Static Pattern Faults for XOR/XNOR

In Figure 6-2 on page 95, a two-input XOR or XNOR gate has four static pattern faults automatically generated for it. There are output pin stuck-at faults automatically defined for an XOR gate in addition to the automatically generated pattern faults.

Figure 6-2 XOR Static Pattern Faults Automatically Generated by Encounter Test

All possible input patterns are considered, yielding 2**n pattern faults for an n-input XOR. Encounter Test allows an XOR primitive to have up to 256 inputs. Output values are shown as good value/faulty value. XNOR pattern faults have the opposite output values.

Automatic Static Pattern Faults for TSD

The following figure shows the static faults automatically generated for a TSD primitive. The six static faults cover four input pin defects: data stuck zero or one and enable stuck zero or one. They also cover four defects "inside" the driver which affect the behavior of the driver only when a specific data value is present. These internal defects are associated with transistors which are used to pull the output net to zero or one.

Figure 6-3 TSD Static Pattern Faults Automatically Generated by Encounter Test

Enable stuck-ON is modeled by two pattern faults with the enable OFF and data input 0 and 1 respectively. If either of these pattern faults is detected, the enable stuck-ON defect is detected. However, these two faults also model defects on separate transistors for pulling the output net to ground or Vdd respectively, and so are tracked independently.

However, these two faults also model defects on separate transistors for pulling the output net to ground or Vdd respectively, and so are tracked independently.

Enable stuck-OFF is modeled by two pattern faults with the enable ON and data input at 0 and 1 respectively. If either of these pattern faults is detected, the enable stuck-OFF defect is detected. However, these two faults also model defects on separate transistors for pulling the output net to ground or Vdd respectively, and so are tracked independently.

Stuck data input or output faults are modeled with pattern faults requiring the enable to be ON; one pattern fault requires a data value of zero while the other requires a data value of one.

Automatic Static Pattern Faults for Latch Primitives

For Latch primitives, Encounter Test automatically generates a set of pattern faults for each port of the latch. There are static pattern faults to model defects on the input pins of the port, as well as internal latch feed-back loop defects. The pattern faults for one port of a multi-port latch always include requirements that the clock inputs to the other ports be off. The static faults are shown in the following figure, which is for a two-port latch.

Figure 6-4 Latch Static Pattern Faults Automatically Generated by Encounter Test

There are eight static faults generated per port to essentially cover the eight possible values for clock, data and current state, except that when the clock input is ON, the requirement to initialize the latch is removed for clock-stuck-off pattern fault modeling. Note that for a single port latch, it is not always required to have a known previous state for the latch.

That is, the clock-stuck-off defect is modeled by two pattern faults which require writing different data values through the latch. Normally the faulty design would assume the latch content stays at unknown (X) when the clock is stuck OFF. Thus, when both of these pattern faults have been detected, the clock-stuck-off defect is guaranteed detected since we would have verified that both a logic one and a logic zero were written through the latch.

In Figure 6-4, the * denotes that single port latches have a modified required values list for clock SA0 faults. The parenthetic "don't care" (/-) values on C1 are not included in the required values list for single port latches; the single port C1 values for good machine/fault machine will be 1/1. The values in parentheses () on the output pin 0 are also not included in the required values list for single port latches. These modifications for the Clock SA0 faults on a single port latch result in their being described exactly the same as the Data SA0 and Data SA1 faults for the latch, with the Data SA1 fault really representing Data SA1 OR Clock SA0, and the Data SA0 fault representing Data SA0 OR Clock SA0. The two Clock SA0 faults are therefore redundant and are not present as distinct pattern faults for a single port latch. There are then just six pattern faults for a single port latch.

Automatic Static Pattern Faults for __MUX2

The static faults for the __MUX2 Primitive are shown in Figure 6-5 on page 97. Static pattern faults for a MUX require each input to be selected and observed at both a 1 and 0 while all other data inputs are at the opposite state. MUXes are usually constructed from transistor-level designs that can exhibit Z or Contention states when faulty. Stuck-at 1/0 faults on the I/O pins of a MUX would not force these robust test patterns. The reason for pattern faults on MUXes is to guarantee that adequate tests have been applied to test the defects that can occur in these designs. Encounter Test pattern faults try to provide thorough tests to expose such faulty behavior.

Figure 6-5 Static Faults for __MUX2 Primitive
[The figure shows a __MUX2 primitive with inputs SEL, DATA0, and DATA1 and output DOUT, and the pattern fault conditions that detect defects such as SEL stuck 1 or data0 stuck 0, SEL stuck 1 or data0 stuck 1, and SEL stuck 0 or data1 stuck 0, with output values shown as good value/faulty value.]

Shorted Nets Pattern Faults

A shorthand method to represent two nets that are shorted together. The shorted nets pattern faults are a form of static pattern faults. Additional shorted nets pattern faults may be provided by the technology supplier or designer in a fault rule.

Dynamic (Transition)

Dynamic Pin Faults

Pin transitions from zero to one (slow-to-rise) or from one to zero (slow-to-fall). This can be on any pin in the model. By default Encounter Test generates the following dynamic pin faults:
■ Inputs of primitives slow-to-rise and slow-to-fall
■ Outputs of primitives slow-to-rise and slow-to-fall

More information about dynamic faults is included in "Delay Defects" in Encounter Test: Guide 5: ATPG.

Dynamic Pattern Faults

A sequence of patterns to set up the inputs and the resulting good machine/fault machine values to expect on the outputs in order to detect the fault. By default Encounter Test generates dynamic pattern faults for the following primitives: XOR, TSD, LATCH, RAM, ROM. If dynamic faults are requested when building the fault model, these dynamic pattern faults are automatically generated unless you use autopatternfaults=no on the build_faultmodel command line. Additional dynamic pattern faults may be provided by the technology supplier or designer in a fault rule. More information about dynamic faults is included in "Delay Defects" in Encounter Test: Guide 5: ATPG.

Automatically Generated Dynamic Pattern Faults

The automatically generated dynamic pattern faults are detailed in Figure 6-6 on page 99 through Figure 6-8 on page 101.
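Where a static pattern fault needs only REQUIRED and PROPAGATION values, a dynamic pattern fault adds an INITIAL state that must exist immediately before the clocking event. The following hypothetical sketch shows the idea for an invented buffer cell MYBUF with input D and output Q; the names and values are illustrative only, and the exact grammar is defined in Figure A-1 of "Pattern Faults and Fault Rules" on page 125.

/* Hypothetical dynamic pattern fault modeling a slow-to-rise defect through MYBUF */
ENTITY = MYBUF {
DYNAMIC {
INITIAL { NET D 0 }            /* state required immediately before the transition */
REQUIRED { NET D 1/X }         /* final value needed to excite the defect          */
PROPAGATION { NET Q 1/0 }      /* good design reaches 1, faulty design lags at 0   */
}
}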

Automatic Dynamic Pattern Faults for XOR

An XOR gate automatically receives the dynamic pattern faults shown in Figure 6-6 on page 99. This figure shows that each input pin has four dynamic faults associated with it: a slow-rising input to rising output, a slow-rising input to falling output, a slow-falling input to rising output, and a slow-falling input to falling output. Pin transitions are shown as initial value > final value. Output values are shown as good value/faulty value.

Note: There are output pin transition faults automatically defined for an XOR gate in addition to the automatically generated pattern faults.

Figure 6-6 XOR Dynamic Pattern Faults Automatically Generated by Encounter Test

All possible input pin to output pin transitions are considered, yielding 4*n dynamic pattern faults for an n-input XOR if more than two inputs are specified. Encounter Test allows an XOR primitive to have up to 256 inputs. In two-input XOR cases, significantly fewer faults are created because the same pattern can be defined by fully specifying A and B as by specifying B and O; these patterns are equivalent.

Automatic Dynamic Pattern Faults for TSD

The dynamic pattern faults automatically generated for a TSD primitive are shown in the following figure. These six dynamic pattern faults model four potential defects: Data slow-to-rise (DSR), Data slow-to-fall (DSF), Enable slow-to-rise (ESR) and Enable slow-to-fall (ESF).

Figure 6-7 TSD Dynamic Pattern Faults Automatically Generated by Encounter Test

Data slow-to-rise (DSR) and data slow-to-fall (DSF) are modeled with the enable required ON in the final state. Enable slow-to-rise (ESR) and enable slow-to-fall (ESF) are modeled with ORed pattern faults to show two possible ways they might be detected.

Automatic Dynamic Pattern Faults for Latch Primitives

For Latch primitives, Encounter Test automatically generates a set of pattern faults for each port of the latch. As shown in the following figure, there are four dynamic pattern faults generated per port. They cover the port being slow to change in either direction and the clock being slow to turn OFF. These pattern faults are in addition to the slow-to-rise and slow-to-fall transition faults automatically generated for each input pin of the Latch, which cover the clock or data pins slow-to-rise and slow-to-fall defects. This means that each port of a Latch has a total of eight dynamic faults automatically generated for it by Encounter Test.

Note: There are clock and data input pin transition faults automatically defined for a Latch in addition to the automatically generated pattern faults.

Figure 6-8 Latch Dynamic Pattern Faults Automatically Generated by Encounter Test

Parametric (Driver/Receiver)

A set of pattern faults to force all drivers to drive zero and one (and Z for three-state drivers) and to force all receivers to receive zero and one. These are not really faults in the sense of modeling a physical defect. Instead they are used to facilitate the testing of the drivers and receivers under worst-case conditions. The technology supplier or designer cannot provide pattern faults for driver/receiver objectives.

IDDq

IDDq faults represent defects that cause a CMOS device to use excessive current (IDD) compared with the normal current into the Vdd bus. In Encounter Test, the static faults are used for IDDq Tests. When an IDDq fault report is requested, the faults are identified as IDDq with a status of T (tested) or U (untested).

Bridge

A set of faults that define the defect that occurs when there is a short (bridge) between adjacent nets. Encounter Test provides the capability of including static bridge faults or dynamic bridge faults.

For a single pair of nets, there are 4 static bridge faults defined in an AND group. This means all four of these faults must be tested in order to consider a bridge between the two corresponding nets as fully tested. However, each individual fault in the group is marked tested as it is detected, so you can get partial credit for testing the bridge between the two nets. The static faults are shown in Table 6-1. Also, for the same single pair of nets, there are four dynamic bridge faults defined in an AND group. The dynamic faults are shown in Table 6-2.

Table 6-1 Static Bridge Faults - BRG_S { NET NetA NET NetB }

Description      Required Values                   Propagation Values
                 (good machine / fault machine)    (good machine / fault machine)
SBRG1 on NetA    NetA=0/X, NetB=1/1                NetA=0/1
SBRG1 on NetB    NetA=1/1, NetB=0/X                NetB=0/1
SBRG0 on NetA    NetA=1/X, NetB=0/0                NetA=1/0
SBRG0 on NetB    NetA=0/0, NetB=1/X                NetB=1/0
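The table title above also shows the form in which a pair of bridged nets is named in a fault rule. A hypothetical declaration, which would create the four static bridge faults of Table 6-1 for two invented nets in an invented module, might look like the following; see "Pattern Faults and Fault Rules" on page 125 for the full syntax.

/* Hypothetical static bridge declaration; module and net names are examples only */
ENTITY = MY_CORE {
BRG_S { NET alu_sum_3 NET bus_data_7 }
}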

Table 6-2 Dynamic Bridge Faults - BRG_D { NET NetA NET NetB }

Description      Initial Values                    Required Values                   Propagation Values
                 (good machine / fault machine)    (good machine / fault machine)    (good machine / fault machine)
DBRG1 on NetA    NetA=1/1, NetB=1/1                NetA=0/X, NetB=1/1                NetA=0/1
DBRG1 on NetB    NetA=1/1, NetB=1/1                NetA=1/1, NetB=0/X                NetB=0/1
DBRG0 on NetA    NetA=0/0, NetB=0/0                NetA=1/X, NetB=0/0                NetA=1/0
DBRG0 on NetB    NetA=0/0, NetB=0/0                NetA=0/0, NetB=1/X                NetB=1/0

Fault Attributes

These classes of attributes are determined during the process of building the fault model or initializing the fault status for a testmode.

Ignored Faults (I, Iu, Ib, It)

When a fault is marked as Ignored, it means that there is no way to test the fault; it has no effect on the design operation. After the fault is placed in one of these categories, it is not changed. The following categories of faults are omitted from the fault model by default; these faults are omitted from the fault model entirely unless includeignore=yes is specified during "build_faultmodel":

■ Ignored - Dangling/unconnected/unused logic (Iu)
Faults on logic that does not feed any primary outputs nor observable latches and cannot be observed.
Note: If all paths from the logic to observation points pass through blocks whose simulation function is unknown to Encounter Test (Blackboxes), the logic is treated as if it is dangling.

■ Ignored - Blocked logic (Ib)

These are faults that cannot be observed because all paths from the logic to observation points are blocked by values propagated from TIE blocks.

■ Ignored - Tied Logic (It)
These are faults that cannot be activated (excited) because the logic values required from control points are blocked by TIE values.

Note: Outputs of blackboxes are treated as TIE blocks. They are tied to X unless blackboxoutputs was specified with a different tie value on build_model.

The following categories of faults are included in the fault model but are marked with an Ignored attribute:
■ Faults from previous categories when includeignore=yes is specified.
■ Faults that are identified as Ignored in a fault rule. See "Preparing an Ignore Faults File" on page 40 for information on automatically creating a fault rule file with Ignores for a level of hierarchy. See "Pattern Faults and Fault Rules" on page 125 for information on manually creating a fault rule.

Ignored faults are not processed by test generation applications and are not included in the divisor when computing fault coverage unless ignorefaultstatistics=yes is specified on build_faultmodel. See Build Fault Model Examples with Special Handling of Ignored Faults and Figure 6-10 on page 115 for more information.

Example 1: Ignore Faults for Unconnected Logic

Example 2: Ignore faults that are blocked:

Example 3: Ignore faults for tied logic

Example 4: Ignore Faults feeding into/out of Blackboxes

Collapsed (C)

Faults that are logically equivalent are collapsed to a single representative fault. The representative fault is called an independent fault or an Equivalence Class Representative (ECR). The faults that are logically equivalent to an independent fault are called collapsed (or reduced) faults. A collapsed fault always has the same status as its representative fault. Faults are collapsed from left to right except faults on primary inputs (which are retained). Generating tests only for independent faults is a commonly used time-saving technique. This optimizes test generation/fault simulation by focusing their efforts on a single fault even though it actually represents multiple potential defects in the hardware.

Pre-Collapsed

By default, the fault model includes only the following faults on single AND/NAND, OR/NOR, and BUF/INV gates: stuck-at non-controlling on the inputs and both faults on the output. Encounter Test terms these faults as pre-collapsed since they are collapsed by definition rather than through an analysis of the design. The faults on these blocks are:

■ AND/NAND gates have s-a-1 faults on the inputs and both s-a-1 and s-a-0 on the output; s-a-0 faults on the inputs are pre-collapsed.

■ OR/NOR gates have s-a-0 faults on the inputs and both s-a-1 and s-a-0 on the output; s-a-1 faults on the inputs are pre-collapsed.
■ BUF/INV gates have no faults on the input and both s-a-0 and s-a-1 on the output; s-a-0 and s-a-1 faults on the input are pre-collapsed.

Grouping (&, |)

Fault grouping is used for modeling defects whose behavior is complex. A fault group is identified by the & (AND) or | (OR) symbol next to each fault in the group. All faults within a group have consecutive indexes, and so will appear consecutively in a fault list.

■ AND Group (&) - All faults in the group must be tested to expose the defect. One effect of this is to give partial credit for an AND group where some, but not all, of the faults in the group are tested.
■ OR Group (|) - If any one of the faults is tested, they are all tested. When any fault in an OR group is detected, all faults within that group are marked as detected. This eliminates any further test pattern generation or fault simulation effort on that group.

While Encounter Test supports fault grouping for better modeling of defects, it does not provide any test coverage statistics for the groups (defects). All coverage reporting is in terms of the individual faults.

Possibly testable at best faults (PTAB)

This is a characteristic of the fault that means there is no way Encounter Test can generate a test that is guaranteed to test the fault, but it may be able to be Possibly Tested (P). Possibly testable at best faults are not processed by test generation applications (by default), to help reduce the run time. There are separate statistics maintained for these faults. Possibly testable at best faults may be identified in a fault rule; refer to Preparing an Ignore Faults File on page 40.

The possibly testable at best faults can be identified as:
■ (3) - PTAB faults that cause three-state contention on one or more nets or that cause High-Z on one or more three-state nets without Termination or Keepers.
■ (X) - PTAB faults that cause TIE-X signals to propagate to observation points (POs or scannable latches).

■ (C) - PTAB faults that cause all clocks (or the only clock) on one or more memory elements to be Stuck Off.

If a possible test is generated for a PTAB fault, the status is marked as P3, PX, or PC. If, during ATPG, you request the possibly tested faults marked as Tested after some number of possible tests have been generated, the status is marked as T3, TX, or TC.

Active and Inactive (i)

When the testmode fault status is initialized, every fault that is not ignored is either "active" in the testmode or "inactive".

Active faults

An active fault is any fault that propagates to a PO or scannable flop while the circuit is in the scan state or test constraint state for the testmode. It is only the active faults that are given a Fault Test Status. Active faults can be testable, untestable, redundant, or aborted.

Inactive faults (i)

An inactive fault is one that cannot be detected in this test mode due to being blocked by a TI (Test Inhibit) primary input or fixed-value latch. Inactive faults are not processed by test generation or simulation run in this testmode. Unlike Ignored faults, Inactive faults may be active, and therefore tested, in a different testmode. See Identifying Inactive Logic in the Encounter Test: Guide 2: Testmodes for more explanation of inactive logic.

Fault Test Status

Logic Faults, IDDq Faults, Driver/Receiver Objectives, and Stuck Driver and Shorted Nets Objectives may all be given a Test Status based on the success or failure of the test generation/fault simulation process to generate test patterns for them. An active fault is classified in exactly one of these categories:

Tested (T)
Some test has been generated in this test mode that will fail in the presence of this fault.

Thus, application of the test stimulus will cause a difference (0/1 or 1/0) at a tester-observable point (primary output or scannable reg) in the expected good response vs. the response in the presence of the fault. A fault can be marked tested in one of several ways:

Tested by Simulation (T)
The fault simulation determined that the fault is detected by the patterns (may be patterns from ATPG or manual patterns).

Tested by Implication (Ti)
The fault is expected to be detected when patterns are applied at the tester, but no fault simulation was performed to ensure it. One example is scan chain faults that are marked tested by apriori fault markoff. Another example is faults that cause single port, non-scannable memory elements to have their clocks "stuck off". If such memory elements are seen to write both zero and one during the ATPG fault imply process, then the "clock stuck off" faults are detected by "implication".

Tested User Specified (Tus)
The fault has not been processed with ATPG or fault simulation in this design. A fault rule with the DETECTED statement was used as input to mark the fault tested during the building of the fault model.

Tested in Another Mode (Tm)
The fault is active in more than one testmode that are participating in Cross-Mode Markoff (MARKOFF) and it was marked Tested in another mode. It is not known whether patterns created in this testmode also would have detected the fault, since it is removed from processing as soon as the other testmode's results were committed.

Tested by Possibly Detected Limit (Tpdl)
The fault is really possibly tested, but you have requested that it be marked tested after a specific number of patterns possibly tested it. If a fault is possibly tested many times, it is possible that there is a difference that could be observed between the good value and the value in the presence of the fault. Encounter Test fault simulation provides a keyword, pdl=n, to specify how many times a possibly detected fault is going to be simulated against additional patterns, and another keyword, markpdlfaultstested=yes, to specify that if the pdl limit is reached the fault should be marked tested rather than possibly tested.

Possibly Tested (P)
The fault may or may not be tested by the patterns. A test has been generated in this test mode where the good value is known (0 or 1) but the value in the presence of the fault is unknown by the simulator (X). When the patterns are applied at the tester the value will be seen as a 0 or 1; if it differs from the good value then the fault is tested.

Untested - Not Processed (u)
This is the status of all active faults before they are processed by test generation and/or fault simulation. Faults can remain untested - not processed after ATPG if some run stopping criterion (set by keywords starting with "max") was reached before these faults were considered for test generation or fault analysis, and no patterns generated for other faults happened to detect these too.

Aborted (A)
ATPG processed a fault but was unable to generate a test or prove it untestable. This happens when some part of the ATPG process exceeds an internal limit or there is a problem that prevents the test from being completed. In some cases increasing the setting of the effort keyword in ATPG will allow the fault status to be resolved.

Redundant (R)
Test generation or Fault Analysis determined that the fault is untestable due to logical redundancy or tied logic that have nothing to do with the design state caused by any test mode. Faults marked redundant in the master fault status are not processed in any test mode.

Redundant: User Specified (Rus)
The fault was not processed by test generation or fault analysis in this design. A fault rule with the REDUNDANT statement was used as input to mark the fault redundant during the building of the fault model.

Untestable
The test generator was unable to create a test for the fault in this test mode. The following untestable reasons are identified and can be reported. Note that if you run ATPG multiple times it is possible for the untestable reason to be different due to a variety of reasons,

including, but not limited to, different lineholds, constraints, test sequence, timings, ignoremeasures, or keyword settings.

Untestable: Undetermined (Ud)
No test can be generated for this fault, but no single reason to explain why could be determined.

Untestable: User Specified (Uus)
The fault was not processed in this experiment. A fault subset was created with the fault marked as untestable, and that fault subset was used for the experiment that resulted in this status.

Untestable: Linehold inhibits fault control/observe (Ulh)
The test generator encountered logic value(s) originating from linehold(s) (LH flagged test function pin, or specified via a user file), which are inconsistent with logic value(s) required to test the fault. The linehold file used during ATPG prevents testing of this fault.

Untestable: X-sourced or sinked (Uxs)
A test for this fault requires values on one or more nets that cannot be set to known values (the nets are at X). Non-terminated three-states and clock-stuck-off are specific types of X-sources that are categorized separately. This general category of other X-sources includes the following:
❑ TIEX
❑ sourceless logic
❑ feedback loop (not broken by a latch)
❑ ROM with unknown contents
❑ RAM contents if the Read_enable is off
❑ Faults that cause more than one port of a latch to turn on

As an example, if a TIEX block feeds an AND gate, this makes it impossible to drive a 1 on the output of the AND gate. This, in turn, will prevent testing the OSA0 fault on the AND gate.

Untestable: Testmode inhibits fault control/observe (Utm)
A test for this fault cannot be produced due to conflicts with pins that are already at value from setting the design into the state set up by the testmode. A test for this fault conflicts with the TC (and perhaps the TI) pins or fixed value regs. Faults conflicting with a TI pin are normally identified as inactive in the testmode.

Untestable: Seq or control not specified to target fault (Unio, Unil, Unlo, Unra, Uner)
The fault is in a portion of the logic that cannot be tested due to user control (test mode constraints (TCs) or lineholds), a test sequence, or because the sequential test generator was not used. For example, if create_logic_delay_tests is set to process only intra-domain faults that require repeating clocks, then faults that are not in intra-domain are untestable. The user can control the selected sequence filter. Refer to Delay and Timed Test in Encounter Test: Guide 5: ATPG for more information. This status is further categorized by the path along which the fault needs to be detected:
■ Untestable fault between Primary Inputs (PIs) and Primary Outputs (POs) (Unio)
■ Untestable fault between Primary Inputs (PIs) and Latches or Flip-Flops (Unil)
■ Untestable fault between Latches or Flip-Flops and Primary Outputs (POs) (Unlo)
■ Untestable fault within intra-domain logic (Unra)
■ Untestable fault within inter-domain logic (Uner)

Untestable: Sequential depth (Usd)
This fault was discovered to be untestable during multiple time image processing.

Untestable: Global termination (Ugt)
A test for this fault requires termination on some three-state primary output, but the tester-supplied termination does not permit termination to the state required by the test. If your IC manufacturer can support a different termination value, it may be able to test the fault. You can change the global tester termination value during test generation with the globalterm keyword.

Note that changing the tester termination may cause the fault to become testable or to become untestable for a different reason.

Untestable: Constraint (Ucn)
A test for this fault cannot be produced without violating the clocking constraints specified by the automatic or user specified clock sequence(s).

Fault/Test Coverage Calculations

The normal static test coverage is calculated as follows:

TCov = (# Tested static faults / # globally active static faults) X 100     (Test Coverage)

The ignore fault coverage is calculated as follows:

Fault Coverage = (# Tested static faults / (# globally active static faults + # globally ignored static faults)) X 100

The calculations for dynamic faults are done similarly, and the ignored counts are added to the denominator in the test coverage calculation. For ATCov (adjusted test coverage), the globally ignored faults are subtracted from the denominator. The following figures show the complete set of coverage calculations. For more details, refer to "Report Fault Coverage Statistics" on page 54.
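As a quick worked illustration with invented counts: if a design has 10,000 globally active static faults, of which 9,000 are tested, and 500 globally ignored static faults, then the test coverage is 9,000 / 10,000 X 100 = 90.0%, while the fault coverage that includes the ignored faults in the denominator is 9,000 / (10,000 + 500) X 100, or approximately 85.7%.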

Figure 6-9 Test Coverage Formulas Used By Encounter Test

TCov = (# faults Tested / # faults) X 100     (Test Coverage)

ATCov = (# faults Tested / (# faults - # faults Untestable due to redundancies)) X 100     (Redundant Test Coverage)

PCov = ((# faults Tested + # faults possibly tested) / # faults) X 100     (Possibly Detected Coverage)

APCov = ((# faults Tested + # faults possibly tested) / (# faults - # faults Untestable due to redundancies)) X 100     (Tested, Redundant, Possibly Detected Coverage)

The following testmode coverage calculations are used when build_faultmodel option ignorefaultstatistics=yes is specified:

Figure 6-10 Test Coverage Formulas with Ignore Faults

TCov = (# Tested global faults / (# global faults + # globally ignored static faults)) X 100     (Fault Coverage)

ATCov = (# Tested in mode / (# test mode faults - # faults Untestable due to redundancies - # globally ignored faults)) X 100     (Test Mode Adjusted Test Coverage)

ATPG Effectiveness = ((# Tested faults + # faults Untestable due to redundancies + # globally ignored faults + # ATPG untestable faults + # faults possibly tested) / (# faults + # globally ignored static faults)) X 100

Fault Modeling

The faults defined in the previous section are related to the design model in what is termed a Fault Model.

■ For most test generation tasks, the standard Encounter Test fault model is created and used as input. This fault model includes all the static and dynamic pin and pattern faults as well as the driver/receiver objectives and Iddq faults. Refer to Figure 1-1 on page 15 for a depiction of the flow.
■ Stuck-Driver and Shorted Nets testing (Interconnect/IO Wrap Testing) requires a different fault model that includes only the objectives for these tests. This Objective Model can be created at the same time as the standard fault model, or may be created separately.
■ The Path Delay fault model can be used as an input to Path Delay Test Generation. See "Path Delay" on page 118 for additional information.
■ Any number of alternate fault models may also be created. These fault models can be an exact copy of the regular fault model if desired. These unique fault models were initially supported for use with diagnostics, to allow for better modeling of some types of manufacturing defects such as bridging faults. However, their support has been generalized so they may be used for test generation if the need arises.

Note: Patterns generated using one fault model do not cause faults to be marked off in any of the other fault models. If that is desired, the patterns would need to be resimulated against the other fault model.

Pin Faults and Pattern Faults

These two basic fault types are modeled in Encounter Test as:
■ Pin Faults
❑ Static (Stuck-At) Faults
❑ Dynamic (Transition) Faults
■ Pattern Faults
❑ Static Pattern Faults
❑ Dynamic Pattern Faults
❑ Shorted net faults

Refer to "Fault Types" on page 91 for additional information.

Pin faults do not always accurately model certain static defects. For better defect modeling, Encounter Test provides support for pattern faults. A static pattern fault represents a static defect by identifying a set of values required on specific logic nets to excite the particular defect being modeled. Once excited, the defect's effect appears at the output of a logic gate or on some specific net and must be propagated to an observable point just like any other fault.

Pattern faults can be grouped such that two or more pattern faults represent a single defect. The group can be identified as either an ORing group or an ANDing group. An ORing group implies the defect is considered detected if any faults in the group are detected. An ANDing group implies the defect is considered detected only if all faults in the group are detected. Partial credit is given for those faults in an ANDing group which have been detected.

As an example, a short between two nets can be represented by two pattern faults in an ORing group. The first fault requires net A to be zero and net B to be a one, with the fault effect propagating at net B as a good design 1/faulty design zero (this assumes logic zero dominates a logic one). The second fault in the group requires net A to be a one and net B to be a zero, with the fault effect propagating from net A as a 1/0. Detecting either of these faults automatically gives credit to the other fault. Encounter Test provides a shorthand means of defining a two-net short defect within a fault rule file for an entity in the hierarchy of the design.

Cross-Mode Markoff (MARKOFF)

When you create a testmode, you can specify that the testmode belongs to one or more groups called comets. Comets are used to group testmodes for the purpose of reporting fault coverage. There are two types of comets:

■ Cross-Mode Markoff (MARKOFF) Comet
The cross-mode markoff comet allows you to obtain cumulative fault coverage across multiple testmodes and reduces test generation time by not reprocessing the fault in several different testmodes. It also reduces tester time by only generating a single test for each fault. This ensures that if a fault is tested in one testmode, it will not have another test generated in another testmode targeted for the same tester. By default, a cross-mode markoff comet is created for each tester description rule name, and all testmodes that use that tester description rule are automatically included in the same comet.

When an experiment is committed, the tested status of the faults for that testmode is saved. If a tested fault is active in other testmodes, then commit_tests also processes the other testmodes that have the same markoff comet defined and marks the fault as tested in another mode (Tm). When you report the tested faults in a testmode belonging to a markoff comet, the status indicates which faults are tested in that testmode and which faults are tested in another mode (Tm).

■ Statistics-Only (STATS_ONLY) Comet
The statistics-only comet allows you to obtain fault statistics for a group of testmodes without having cross-mode markoff.

If a single cross-mode markoff comet is defined for the design, or if there are multiple markoff comets but they do not have any testmodes in common, then the rules are simple. The rules get a little more complicated if a testmode belongs to more than one markoff comet, so this is not recommended. Here is an example of how it works when testmodes are included in more than one markoff comet.

Consider a design with three testmodes, A, B, and C, with three faults f1, f2, and f3 active in all three testmodes. Assume two comets have been defined: Comet1 consisting of modes A and C, and Comet2 consisting of modes B and C. Now, suppose a test generation run is committed for testmode A that tested both faults f1 and f2. These faults are marked tested in mode A, but are not marked in modes B and C, because no faults have yet been tested by modes belonging to Comet2. Next, suppose a test generation is run in mode B, which tests fault f1. When this experiment for mode B is committed, fault f1 is marked tested (T) in mode B and tested in another mode (Tm) in mode C; note that fault f1 has now been tested for each comet. Next, a test generation is run in mode C that tests fault f3. When the experiment for mode C is committed, fault f3 is marked tested (T) in mode C and tested in another mode (Tm) in both modes A and B, because each of those modes shares a comet with mode C and does not belong to any other markoff comets.

Note: As stated previously, Encounter Test automatically creates a mark-off comet for the Tester Description Rule (TDR). Therefore, if you have multiple testmodes that use the same TDR, they will automatically get the benefit of cross-mode markoff.

If you choose to define your own comets, it is recommended that you define the TDR comet as stats_only to avoid the complexity of having a testmode in more than one comet. This must be done in the mode definition file for each testmode that uses that TDR.

As an example, assume you are using a tester named HALLEY and you have four testmodes named MODE1, MODE2, MODE3, and MODE4. You want to define two comets, BORELLY to include MODE1 and MODE2, and HARTLEY to include MODE3 and MODE4. By default, MODE1 and MODE2 would be in BORELLY and HALLEY, and MODE3 and MODE4 would be in HARTLEY and HALLEY, which would make the cross-mode markoff more complex. To avoid this, include the following comets statement in the mode definition files for MODE1 and MODE2:

COMETS = HALLEY STATS_ONLY, BORELLY MARKOFF ;

❑ This makes HALLEY a stats_only comet so it will not be considered during cross-mode markoff for this testmode.
❑ BORELLY will be used for cross-mode markoff, so once a fault is tested in MODE1 it will be marked off in MODE2 and vice versa.

In the mode definition files for MODE3 and MODE4, include the following comets statement:

COMETS = HALLEY STATS_ONLY, HARTLEY MARKOFF ;

❑ This makes HALLEY a stats_only comet so it will not be considered during cross-mode markoff for these testmodes.
❑ HARTLEY will be used for cross-mode markoff, so once a fault is tested in MODE3 it will be marked off in MODE4 and vice versa.

If you want to disable cross-mode markoff altogether, simply define the TDR as a stats_only comet in every testmode. However, this is not recommended as there is significant benefit to tester time, test generation time, and the volume of vector data when using cross-mode markoff.

Other Test Objectives

Path Delay

A signal path with a total delay exceeding the clock interval is a path delay fault. A path delay fault consists of the combinational network between a latch/primary input and a latch/primary output for which a transition in the specified direction does not arrive at the path output in time for proper capture.

A fault model may be built for use in Path Delay Test Generation using prepare_path_delay with a pathfile input. Figure 6-11 shows an example of a path delay test objective. The transition from 0 to 1 is released from DFF1 along path A, B, C, D and captured at DFF2.

Figure 6-11 Path Delay Fault Example

Package Test Objectives

There are three types of test objectives for Package Test:
■ Stuck Driver Test (SDT)
■ Shorted Nets Test (SNT)
■ Slow-to-Disable

Objective Types

Stuck Driver and Shorted Net Objectives

Stuck driver and shorted net objectives are defined for I/O wrap and interconnect test purposes. Both of these objectives are based upon target nets. For a chip, the target nets are those connected to bidirectional package pins. For higher level packages, the target nets are the package pin nets and the nets that interconnect the components.

Stuck Driver Objectives

Stuck driver objectives are defined to check the integrity of the path from a driver to each of its receivers. A driver is any combinational primitive logic function which is not a buffer, resistor, or mux. When the net is a multi-source net, the drivers are the primitives that source the net. A receiver is any combinational primitive logic function which is not a buffer or resistor. Typical examples of receivers are ANDs/ORs, and POs.

Shorted Nets Objectives

Shorted nets objectives are defined to check the integrity of the paths for a set of target nets. Each net participating in a shorted nets test will have one of its drivers supplying a value to the net at the same time as the other nets participating in the test. The values driven onto the nets are determined by a logarithmic counting pattern known as O=2*logN (or counting and complement), where N is the number of target nets and O is the number of shorted nets objectives. For example, six shorted nets objectives are defined for eight target nets (N=8) as follows:

Target    Objective Number
Net       1 2 3 4 5 6
------------------------------
n1        0 0 0 1 1 1
n2        0 0 1 1 1 0
n3        0 1 0 1 0 1
n4        0 1 1 1 0 0
n5        1 0 0 0 1 1
n6        1 0 1 0 1 0
n7        1 1 0 0 0 1
n8        1 1 1 0 0 0

Slow-To-Disable Objectives

A slow-to-disable objective is a stuck driver objective defined to detect a driver which is slow to transition to its inactive (disable) state. Definition of, and test generation for, slow-to-disable objectives is optional and only applies to interconnect testing. You must provide an objectivefile to the Build Package Test Objectives process for slow-to-disable objectives to be defined.

Objective Status

^ = inactive: Objective is inactive.

u = untested: Objective is untested.

T = tested: Objective is Tested.

c = precondition failed: The receiver latch was unable to be preconditioned to the opposite of the final state of the objective. So, for example, for a driver-stuck-at-1 objective, the receiver will be preconditioned to 0. This does not affect the ability to generate a test for the objective, but it means that the test is not as high quality because the receiver latch could have been at the final state prior to applying the test. This status will be shown in addition to the testability status.

Tc = Tested and precondition failed: The objective was tested even though preconditioning failed.

!mc = multi-clock: The objective was not tested and multiple clocks were required.

!nt = non-terminated 3 state: The objective was not tested and a likely cause is that a non-terminated three-state causes an unknown (X) value that prevents the test.

!3s = 3-state contention: The objective was not tested because the test would cause three-state contention. This is determined by analysis done when the objective model is built; it is not a condition caused by the experiment environment. If the design can tolerate soft contention, run with contentionreport=hard to see if the test can be generated.

!ti = test inhibit: The objective was not tested and a likely cause is a Test Inhibit value that prevents the test.

!so = SOS conflict: The objective was not tested because the simultaneous output switching requirement would be violated. If the SOS constraints are not required for the test, use sos=no to allow simultaneous output switching.

!lh = user linehold: The objective was not tested and a likely cause is that the specified lineholds prevent the test. Try running without lineholds or with a different set of lineholds. Note: Lineholds specified with the LH test function may be overridden in a linehold file.

!tg = Test gen constraint: The objective was not tested and a likely cause is that the test cannot be generated without violating pattern constraints specified in the design source or in the constraints file input to build_model.

!nc = non-contacted pin: The objective was not tested and a likely cause is that the test requires one or more pins that are not contacted in this boundary=internal testmode.

!pm = PMU conflict: The objective was not tested and a likely cause is the constraints of testing with parametric measuring unit(s). When the boundary=internal testmode limits the number of pins that the tester can contact at one time, the process is limited to using the tester contacted pins plus additional pins that can be contacted using the available parametric measuring units (PMUs). The test for this objective may require more pins, or a different combination of pins, than are available during the test.

!ud = undetermined reason: The objective was not tested but the reason cannot be determined.

$A = aborted: The generation of the test was aborted.

$3s = 3-state contention protection: The generation of the test was aborted while trying to globally protect the design from 3-state contention. If the specific test for the objective causes contention, the objective will be marked with !3s. Try running with contentionprevent=no to get past this issue.

#sc = scan conflict: The test for the objective could not be completed due to conflicting requirements with the scan latches.

#uc = Incomplete Test: User clock seq: The test for the objective could not be completed because it does not match the specified testsequence.

#tr = Incomplete Test: Timing Reject: The test for the objective could not be completed because it violated timing.

#sb = Incomplete Test: Single burnout: The test for the objective could not be completed due to 3-state contention.

!nr = not controllable (apriori): The objective was not tried because it is not controllable.

!no = not observable (apriori): The objective was not tried because it is not observable.

!tl = tester load (apriori): The objective was not tried because tester loading inhibits observability.

!nd = no delay test path (apriori): The slow-to-disable objective was not tried because there is no delay test path.


A
Pattern Faults and Fault Rules

Fault Rule File Syntax

Following is the syntax specification for the fault rule file, which always starts by identifying the entity to which it pertains.

Figure A-1 Fault Rule File Syntax
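The formal grammar is given in Figure A-1. As a rough, hypothetical orientation before the element descriptions that follow, a fault rule file generally consists of optional NOFAULT or FAULTONLY statements outside any entity, followed by one or more ENTITY blocks containing fault statements. All names in the sketch below are invented for illustration only.

/* Hypothetical overall shape of a fault rule file; names are examples only */
NOFAULT DYNAMIC Instance spare* in CELL FILLER4
ENTITY = MYCELL {
IGNORE { SA0 PIN "u1.01" }
BRG_S { NET n1 NET n2 }
}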



Syntax Notes:

1. Each element of the syntax is a keyword, variable, or quoted string. Keywords may be specified in upper, lower, or mixed case within the fault rule file. The characters that appear in uppercase indicate the abbreviated form of the keyword; for example, the keyword "entity" is recognized when specified as either ENTITY or ENT. Lower case characters indicate variables.
2. entityname, netname, and pinname can be any character string that does not contain blanks, tabs, newlines, /* */ delimiters, or the { } characters. If the name matches any of the allowed fault rule keywords, it must be enclosed in quotes (for example, "z" or "short0").
3. "text" (a quoted string delineated with " ") may not span lines. It can contain any characters except newline and the double quote delimiter (").
4. Newline characters may be inserted anywhere in the fault rule file, except within a comment.
5. Comments delineated with /* */ can be included wherever a space can exist. However, a comment may not span lines; that is, a comment cannot contain any newline characters. A single line comment delineated with // is also supported. A line containing only a comment is valid.
6. NOFAULT and FAULTONLY are the only two fault rule statements that are allowed outside an ENTITY. NOFAULT and FAULTONLY cannot be specified within the context of an ENTITY statement.
7. It is acceptable to have more than one model entity included in the same fault rule file.

Note: A curly brace is required in the syntax if specifying multiple entities as shown in Example A-1.

Fault Rule File Element Descriptions

The following describes the elements of the fault rule file syntax:

■ ENTITY
This defines the name of the containing block (a Verilog module) to which this fault rule file applies.

Example A-1 Multiple ENTITY Specification
############################################################
ENTITY = CLKINVX1 {
IGNORE { SA0 PIN "Pin.f.l.nl.CLKINVX1.I0.A0" }

IGNORE { SA1 PIN "Pin.f.l.nl.CLKINVX1.I0.01" }
}
ENTITY = PDUDGZ {
IGNORE { SA0 PIN "Pin.f.l.nl.tsd.PDUDGZ.__i0.ENABLE" }
IGNORE { SA1 PIN "Pin.f.l.nl.tsd.PDUDGZ.__i0.DOUT" }
}
############################################################

■ TIME
This provides a mechanism for auditing the fault rule.
■ DATE
This provides a mechanism for auditing the fault rule.
■ STATIC
Begins the definition of a static pattern fault. A static pattern fault is used to model a defect which can be detected regardless of the speed at which the test patterns are applied. A static pattern fault specifies a set of REQUIRED net or pin values to excite the defect and a single PROPAGATION net or pin value to identify where the effect of the defect first appears.
■ DYNAMIC
Begins the definition of a dynamic pattern fault. A dynamic pattern fault is used to model a defect which requires a sequence of patterns to be applied to the design within a specific period of time. A dynamic pattern fault specifies both a set of INITIAL net or pin values and a set of REQUIRED net or pin values, which together identify the nets which must go through a specific value change (transition), and a PROPAGATION net value to identify where the effect of the defect first appears.
■ AND
Provides for the grouping of two or more pattern faults that model a single defect. An AND group implies the defect is considered detected only if all faults in the group are detected. Refer to Grouping (&, |) for more information.
■ OR
Provides for the grouping of two or more pattern faults that model a single defect. An OR group implies the defect is considered detected if any fault in the group is detected. Refer to Grouping (&, |) for more information.

■ PATH
Identifies a fault that is to be marked as a "pathfault" in the fault model that is to be built with this fault rule.
■ SHORT0
A shorthand means of specifying two pattern faults which represent two nets shorted together with the effect that whichever net has a logical value of zero pulls the other net to zero as well.
■ SHORT1
A shorthand means of specifying two pattern faults which represent two nets shorted together with the effect that whichever net has a logical value of one pulls the other net to one as well.
■ BRG_S
Static bridge fault between the specified nets or pins.
■ BRG_D
Dynamic bridge fault between the specified nets or pins.
■ BRG_SD
Both static and dynamic bridge faults between the specified nets or pins.
■ INITIAL
A list of nets or pins and their values that represent the initial conditions required to excite a dynamic pattern fault. In order for the defect to be considered "excited", the INITIAL net values must exist immediately before a clocking event that produces the REQUIRED net values. These nets and values are used in conjunction with the REQUIRED net value pairs to fully specify the excitation requirements for a dynamic pattern fault or a path pattern fault.
■ REQUIRED
A list of nets or pins and their values which are required to excite the defect.
■ PROPAGATION
Specifies the net or pin where the defect's effect first appears. Also specifies the good machine/faulty machine values for that net or pin in the presence of the fault excitation conditions.
■ NET
The net must be within the context of the module on the ENTITY statement. To specify a net defined in the module, use the simple name of the net, for example "net01". To specify a net within a lower level entity, use the hierarchical name of the net.

A hierarchical name is comprised of instance names for each level in the hierarchy down to the block containing the net or pin, and the simple name of the net or pin within the block which defines it. The instance names and simple name are joined with periods (.) between them. For example, "usage1.usage2.net1" would refer to usage-block "usage1" within the definition block, and within that block, the "usage2" usage-block, and the net "net1" within it. See the example in Figure A-2.

Figure A-2 Example of NET Statement

■ PIN
A pin name can be used instead of a net name when it is more convenient to do so. Pin names may be either simple, for example "pin01", or they may be hierarchical, for example "usage1.pin01". However, the pins being specified should not be pins on primitive gates, since the Encounter Test Build Model process may rename the pins on primitive gates. The only pin on a logic gate which could be reliably specified in a fault rule file is the single output pin, whose pin name should always be "01".
■ IGNORE
Identifies a fault that is to be ignored. These faults will be marked ignore in the fault model that is built with this fault rule. Unlike faults that are removed with a PFLT pin attribute, all faults that are found to be equivalent to an ignored fault (during fault collapsing) are also marked ignore.

This element supports the use of wildcards as shown in the example below. The wildcard is the * character.

IGNORE { SA1 PIN rlm.usage*.pinZ }

Note: IGNORE NET and IGNORE PATTERN do not allow wildcards.

The Write Ignore Faults application uses this syntax for faults that are identified as:
❑ untestable due to unknown (X) source
❑ untestable due to three-state contention
❑ untestable due to non-terminated three-state
❑ possibly testable at best

■ PTAB
Identifies a fault that is to be marked "Possibly Testable at Best" in the fault model that is built with this fault rule.
■ DETECTED
Identifies a fault that is to be marked "detected" in the fault model that is to be built with this fault rule.
■ REDUNDANT
Identifies a fault that is to be marked "redundant" in the fault model that is to be built with this fault rule.
■ OMIT
Identifies the fault that is to be omitted from the fault model. The fault will not exist and, therefore, will not be included in any fault reports or statistics for the fault model built with this rule.
■ SA0
A pin stuck-at-zero fault.
■ SA1
A pin stuck-at-one fault.

■ SR
A pin slow-to-rise fault.
■ SF
A pin slow-to-fall fault.
■ DRV0 - All drivers drive 0.
■ DRV1 - All drivers drive 1.
■ RCV0 - All receivers receive 0.
■ RCV1 - All receivers receive 1.
■ PATTERN
A pattern fault. Each pattern fault owned by a block is assigned a unique identifier. The pattern fault identification is the unique identifier that is printed in the fault listing for this fault, may be viewed on the GUI, and is printed in the diagnostics callout report.
❑ BLOCK - The block containing the pattern fault. This is the hierarchical name of the block that contains the pattern fault. Either the short or proper form of the name is allowed.
❑ PIN - The pin containing the fault. For Ignore and PTAB faults, the hierarchical pin name refers to the specific pin (input or output) that contains the specified fault.
■ Text
Text is a comment that is not discarded by the Fault Model Builder, but is carried forward into the Fault Model File. Text is used only for informational purposes; it is printed in Fault Model listings, but is ignored by all other Encounter Test applications and tools. No syntax checking is done on the text inside the quotation marks, which may include any characters except carriage return or newline. The maximum length is 1024 characters.
■ RequiredValues
A logic value or value pair used to represent the logic value(s) required for a net or pin in order to excite the defect. The basic syntax allows for specifying both the good design and the faulty design logic values required to excite the defect, with a slash (/) used to separate the good and faulty design values. If only one value is specified, it is assumed that the same value is required in both good and faulty designs to excite the defect. When both are specified, the good design value is first.

In some cases, the logic value on the net or pin may not matter (a "don't care" condition), but the fault requires the value in order to excite the defect (cause the fault simulator to inject it). In these cases a logic value of X should be specified. A value of X is achieved in the good machine by the absence of a pin specification. Encounter Test allows a required value to be "don't care" in the faulty design, such as 1/X or 0/X. For example, to model an input pin stuck-at-1 defect, the pattern fault required value should be specified as 0/X for the pin with the fault, since the defect itself will cause the faulty design to assume a logic 1 value if all other requirements are met. It is also possible to specify that the good design is a "don't care", such as X/0 or X/1. The former can be used to specify additional requirements to enable test generation to get the propagation node to its proper state so that there will be a difference between the good design and the faulty design. The latter can be used when the propagation node will have the correct value regardless of what the node has on it at the time.

A value of P is used only in the specification of a path pattern fault. It is placed at other points along the path, including the master latch or primary output where the result of the transition is to be observed. V means a value of 0 or 1, and ~V is used in conjunction with V to specify the opposite value of V; that is, 1 if V ends up being a 0, or 0 if V ends up being a 1. Path pattern faults are normally defined only by the path test applications based on user specification of paths. Refer to Path File in Encounter Test: Guide 5: ATPG for additional information.
■ PropagationValues
A logic value pair used to represent the logic values that will propagate from the specified net or pin whenever the defect has been excited. The values are specified for both the good design and the faulty design respectively, with a slash (/) used to separate them. It is important that the good design propagation value that is specified should be the value obtained whenever the good design required values are obtained. If the listed required values are not sufficient to produce the specified good design propagation value, it is unpredictable whether the fault will be detected by any tests that are automatically generated.
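As a worked illustration of the required/propagation notation, the following minimal sketch models an input pin stuck-at-1 defect in the way just described. The gate, pin, and net names are invented for the example, and the statement layout follows the fault rule file examples later in this chapter:

STATIC {
   "input pin A1 of gate and2 stuck at 1"   /* optional text */
   REQ  { Pin and2.A1 0/X     /* 0 in the good design; don't care in the faulty design, since the defect forces a 1 */
          Pin and2.A2 1 }     /* side input value needed to expose a difference at the output */
   PROP { Net and2_out 0/1 }  /* good design propagates 0, faulty design propagates 1 */
}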

■ NOFAULT
Identifies faults to be excluded by identifying logic that is not to have faults. The type of faults to be excluded can be static, dynamic, or both. A NOFAULT statement with no STATIC/DYNAMIC designation excludes both STATIC and DYNAMIC faults. The logic that is not to be faulted is specified with Block, Instance, Cell, or Module.
❑ Instance - The name of the instance whose faults are to be excluded. Wildcards can be specified for the Instance name. For example, NOFAULT STATIC Instance a*c in CELL xyz (wildcard is allowed only for the instance name, not the cell name).
❑ Block - The short or proper form of the hierarchical name of the block whose faults are to be excluded. Wildcards can be specified for the block name. For example, NOFAULT DYNAMIC Block *Z.
❑ Cell or Module - The name of the module whose faults are to be excluded. Wildcards can be specified for the Cell or Module name, for example, CELL x*z or Module ab*. Note: MODULE is a synonym for cell in this statement.
The identifier can be specified in mixed case. Wildcards represent one or more alphanumeric characters and are not case sensitive. Names containing special characters must be enclosed in double quotes.
Note: When using a "*" wildcard character, if square brackets are part of the search string, it is necessary to enclose each bracket character in a set of brackets "[]" in order for each bracket character to be treated as part of the name. For example, if a string with the wildcard pattern "*rry[0]" is specified while searching for a block name "carry[0]" in the fault model, the search is successful on AIX but not on Linux; a syntax error is generated on the Linux platform. This is a known problem. Therefore, while specifying patterns with "[" and "]" brackets, the pattern "*rry[0]" needs to be specified as "*rry[[]0[]]".
■ FAULTONLY
Identifies specific logic to fault while excluding faults on all other blocks. All blocks specified will be faulted; faults are placed on all appropriate blocks within this block, all the way down the hierarchy. The type of faults included only on this logic can be static, dynamic, or both. A FAULTONLY statement with no STATIC/DYNAMIC designation affects both STATIC and DYNAMIC faults. You can specify more than one FaultOnly statement in a fault rule file. It is recommended not to use wildcards for NOFAULT within FAULTONLY blocks.
Following is an explanation of the keywords used in the FAULTONLY statement:
❑ Instance - The name of the instance for the logic to fault. You may also use wildcards such as asterisk (*) for the Instance name, for example, INSTANCE <name / wildcard name> IN CELL name. For example, specifying FAULTONLY STATIC Instance a*c in CELL xyz will match FAULTONLY STATIC Instance abc in CELL xyz (wildcard is allowed only for the instance name, not the cell name in this statement). Cell is a synonym of Module.

❑ Block - The short or proper name of the hierarchical name of the block for the logic to fault. This can be specified in both the short and proper form of the block name. Wildcards may be specified for the block name, such as BLOCK <name/wildcard name>. For example, BLOCK abc* will match BLOCK abcde, abcdef, etc. If a name contains special characters, it must be specified within double quotes.
❑ Cell or Module - The name of the module with the logic for faults. You may specify wildcards for the Cell name, such as CELL <name/wildcard name>. For example, CELL *z will match CELL xyz and CELL 123xyz. If a name contains special characters, it must be specified within double quotes. Note: Cell and module are synonyms.
Note: Wildcards represent one or more alphanumeric characters and are not case sensitive. The identifier can be specified in mixed case. If a name contains special characters, it must be specified within double quotes.
The following terms identify the logic:
❑ Block name - Identifies a specific hierarchical block in the Encounter Test hierarchical model.
❑ Cell name - Identifies a specific library cell name.
❑ Instance name - Identifies a specific instance within a cell or module.
❑ Module - Statement keyword that precedes a cell name.
NOFAULT can be used on modules and instances inside a FAULTONLY block to remove faults inside the FAULTONLY block. It is recommended not to use wildcards for NOFAULT within FAULTONLY blocks. If NOFAULT and FAULTONLY are specified on the same block:
❑ For CELL and INSTANCE in CELL, the last statement specified in the cell will win.
❑ For an instance specific BLOCK, NOFAULT will override FAULTONLY, irrespective of the order in which the statements are specified.
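To illustrate the precedence rules above, consider the following hypothetical statements. The instance and cell names a*c, abc, and xyz are the ones used earlier in this section; the block names are invented for the example:

FAULTONLY STATIC Instance a*c in CELL xyz
NOFAULT STATIC Instance abc in CELL xyz      /* same cell: the statement specified last wins */

FAULTONLY Block top.core1                    /* hypothetical block names */
NOFAULT Block top.core1.spare_logic          /* instance-specific BLOCK: NOFAULT overrides FAULTONLY regardless of order */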

.12 © 1999-2015 All Rights Reserved.. */ Short1 { NET netone NET nettwo } /* end SHORT 1 */ October 2015 137 Product Version 15. . Encounter Test: Guide 4: Faults Pattern Faults and Fault Rules Static { "internal resistive short to ground" /* optional text */ Required /* required values */ {Net netone 0 Net nettwo 1 net three 1 } Propagation { /* propagation value */ net three 1/0 /* fault effect */ } } STATIC { "internal pin zz01 not connected (open)" REQ { Pin a20 0 Net net2 1 } PROP { net three 0/1 } /* comment */ /* comment */ } /* comment */ /* Here are all the dynamic faults. */ Or { /* the following two pattern faults represent one defect */ Dynamic { "bridging fault between internal nodes X and Y" Initial { PIN a01 1 PIN a02 0 } REQ { PIN a01 1 PIN a02 1 PIN 02 1 } PROP{ net abc 0/1 } } DYNAMIC { INIT { PIN a01 0 PIN a02 1 } REQ { PIN a01 1 PIN a02 1 PIN 02 1 } PROP{ net abc 0/1 } } } /* end OR */ /* shorted nets...

Short0 { NET net3 NET net4 } /* end SHORT 0 */

IGNORE { SA0 PIN "block1.pin3" }
IGNORE { PATT fault10 BLOCK "block2.latch" }
PTAB { SA1 PIN "block2.nl.pin10" }

Example 2: Removing Faults

NoFault Instance reset_reg_0 In Module sys_reset_block
NoFault Block dig_top.u_sys_reset_block.reset_reg_0
NoFault Cell SDFFRX1
NoFault Module SDFFRX1

Note: An entity name is not required at the top of the file.

Example 3: Using a Fault Rule File

create_diag_tests_transition faultrulefile=<file with listed Instances and blocks>

Example 4: Combining Nofault and Individual Pattern Fault Specifications

The following example shows how to combine a user-defined fault specification and a nofault statement for a specific block. Note that the section for Entity=top should have both braces { and } before the nofault statement.

Entity=top
{
IGNORE { SR PIN LOG.top.l.Dmux.SEL }
}
Nofault BLOCK Block.COMPRESSION
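For Example 3 above, the fault rule file passed with faultrulefile= would typically contain only instance and block statements of the kind shown in Example 2. A hypothetical file of that sort (all names invented for illustration) might be:

NoFault Instance spare_reg_1 In Module sys_reset_block
NoFault Block dig_top.u_unused_decode
NoFault Cell SDFFRX1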

Composite Fault Support

Composite faults can be simulated to correctly model complex faulty behavior. A composite fault is a fault that has multiple behaviors. The behaviors are grouped such that the components are dependent on each other. An illustration of a dependent composite fault is depicted in the partial design diagram in the following figure.

Figure A-3 Dependent Composite Fault

In the preceding figure, Stuck Fault 1 is tested because Net D is at 1/0. Using the same argument, Stuck Fault 2 is also tested. However, if you consider the composite fault where both Stuck Fault 1 and Stuck Fault 2 are present at the same time, the composite fault is not tested when A=B=1. When both faults are present, Net B is at 1/1 and the output of the XOR gate is 0/1, so Net E becomes 0/0 and the composite fault is not detected.

Typically, the composite fault would be represented as a pattern fault:

OR {
   STATIC{ ACTIVATION { NET B = 1 }
           PROPAGATION { PIN OR.A1 = 1/0 } }
   STATIC{ ACTIVATION { NET B = 1 }
           PROPAGATION { PIN XOR.A1 = 1/0 } }
}

In Diagnostic Fault Isolation, specifying the General Purpose Simulation option composite=yes creates OR'ed pattern groups and then simulates the groups as composite faults. Refer to “Fault Compositing” in the Encounter Test: Guide 7: Diagnostics for additional information.

Creating Bridge Fault Definitions

A bridge fault model can be created using the build_bridge_faultmodel command.

Specifying Bridge Faults in a Fault Rule File

Static and dynamic bridge faults can be specified in the fault rule file as shown in the Fault Rule File Syntax section above. BRG_ indicates a bridge fault is to be created; the S, D, or SD after the underscore indicates whether Static, Dynamic, or both Static and Dynamic bridge faults are to be created between the specified nets or pins. Refer to Table 6-1 on page 102 and Table 6-2 on page 103 for the faults created using the given definitions.

Example:

ENTITY = module2
{
BRG_S { NET net01 NET net02 }
}
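The other two prefixes described above follow the same pattern. The sketch below is illustrative only (net names invented), showing a dynamic-only and a combined static/dynamic bridge fault definition:

ENTITY = module2
{
BRG_D { NET net03 NET net04 }
BRG_SD { NET net05 NET net06 }
}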

Creating Shorted Net Fault Definitions

Shorted Net Faults can be created using the create_shorted_net_faults command.

Specifying a Shorted Net Fault in a Fault Rule File

A shorted net fault can be specified in an Encounter Test fault rule file using one of the following methods:
1. The first is "SHORT0 NET=xyz NET=asdf". This specification in the fault rule file creates two pattern faults in the Encounter Test fault model. The SHORT0 specification describes a bridged net where the bridge acts like an AND gate. In effect, the "SHORT0" specification is a short way of writing a full two-pattern ORed pattern fault, with the specification of an OR to group the patterns together.
2. A "SHORT1" specification is also a short way of writing two ORed pattern faults in the Encounter Test fault model. A SHORT1 specification describes a bridged net where the bridge acts like an OR gate.
3. Other kinds of behavior caused by shorted nets, such as a dominant net short, which cannot be described with the SHORT specification, require that you write the full specification of the two pattern faults in the Encounter Test fault rule file. The following table demonstrates this point for shorting two nets, A and B.
4. Use the command create_shorted_net_faults. Refer to Creating Shorted Net Fault Definitions.

Fault Rule File: SHORT0{NET A NET B}
Equivalent Pattern Fault:
OR{
   STATIC { "Net A=0 forces Net B=0"
      REQUIRED {NET A=0 NET B=1}
      PROPAGATION {NET B=1/0} }
   STATIC {"Net B=0 forces Net A=0"
      REQUIRED {NET A=1 NET B=0}
      PROPAGATION {NET A=1/0} }
}
Fault Model: Fault Index 1, SPAT at net B; Fault Index 2, SPAT at net A

Fault Rule File: SHORT1{NET A NET B}
Equivalent Pattern Fault:
OR {
   STATIC {"Net A=1 forces Net B=1"
      REQUIRED {NET A=1 NET B=0}
      PROPAGATION {NET B=0/1} }
   STATIC {"Net B=1 forces Net A=1"
      REQUIRED {NET A=0 NET B=1}
      PROPAGATION {NET A=0/1} }
}
Fault Model: Fault Index 3, SPAT at net B; Fault Index 4, SPAT at net A

Fault Rule File:
OR{
   STATIC {"Net A=1 Dominates Net B=0"
      REQUIRED {NET A=1 NET B=0}
      PROPAGATION { NET B=0/1} }
   STATIC {"Net A=0 Dominates Net B=1"
      REQUIRED {NET A=0 NET B=1}
      PROPAGATION { NET B=1/0} }
}
Equivalent Pattern Fault: Same as fault rule file specification
Fault Model: Fault Index 5, SPAT at net B; Fault Index 6, SPAT at net B

To simulate the effects of the "SHORT1" from a fault rule file, you must specify "gmach=3.4" when resimulating the patterns in order to get the full effect of the shorted net. You must specify both components of the ORed pattern fault.

Using Create Shorted Net Command

Faults may be automatically defined to model shorted net defects using the command create_shorted_net_faults. This command uses an input list of net pairs and generates a set of shorted net faults. The resulting output file can then be used as fault rule input to Build Fault Model.

■ To view the command parameters, refer to “create_shorted_net_faults” in the Encounter Test: Reference: Commands.

An example of the create_shorted_net_faults command is shown below:

create_shorted_net_faults cellname=moduleA outputfile=moduleA_flts \
filename=netpairfile

The following is a sample of a simple net pair file:

THE_REG_FILE.n_2403 THE_REG_FILE.n_4156
THE_REG_FILE.n_2403 THE_REG_FILE.n_44
THE_REG_FILE.n_88 THE_REG_FILE.n_2403
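The generated output file can then be supplied to Build Fault Model as fault rule input. A minimal sketch, reusing the output file name from the example above and the faultrulefile keyword that appears with build_faultmodel in Appendix B (any other options would be whatever you normally specify):

build_faultmodel faultrulefile=moduleA_flts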

ORed. . Encounter Test supports the following dynamic faults: delayOr net1 is slow to fall if net2 is 1 and vice versa . The specified keyword values for create_shorted_net_faults indicate a static or dynamic fault spec for a net pair not associated with any specific fault spec. This fault type is the default for dynamic faults. staticAll staticBoth and staticDomBoth between net1 and net2 split This optional modifier indicates individual components are not defined as ORed pattern faults. This fault type can specify more than two nets.2 sets of ORed faults. staticDomBoth net1 dominates net2 and net2 dominates net1 . The static and dynamic fault specs are optional..2 fault definitions. This fault type only cannot specify more than two nets. delayDomBoth Both net1 dominates net2 and net2 dominates net1 delay Creates the delay version of the defined static fault spec. delayDom net1 makes net2 slow to get to the opposite value . This fault type can specify more than two nets. October 2015 143 Product Version 15. staticDom net1 dominates net2 .ORed.12 © 1999-2015 All Rights Reserved. Encounter Test supports the following static faults: staticOr Short1 of net1 with net2 staticAnd Short0 of net1 with net2 staticBoth Short1 and Short0 of net1 with net2 . This fault type can specify more than two nets. delayAll delayBoth and delayDomBoth between net1 and net2 delayTrans Transition on net1 slows net2 for opposite trans .ORed delayBoth Both delayOr and delayAnd. This fault type cannot specify more than two nets.ORed pattern faults.. Encounter Test: Guide 4: Faults Pattern Faults and Fault Rules Net-Pair File Syntax The following is an example of the syntax for each line of the net-pair file: static fault spec dynamic fault spec netname1 netname2 . This fault type is the default for static faults.ORed delayAnd net1 is slow to rise if net2 is 0 and vice versa .

delayTransBoth - Both net1 transitions slow net2 and net2 transitions slow net1.
split - This optional modifier indicates individual components are not defined as ORed pattern faults.

Fault Rule File Output

The following table shows examples of lines from an input net-pair file and the associated output fault rule definitions.

Note: Net names may be coded with single or double quotes to avoid confusion with a fault specification.

Input File Line: staticOr netA netB
Resulting Output:
Short1 { Net "netA" Net "netB" }

Input File Line: staticAnd netA netB
Resulting Output:
Short0 { Net "netA" Net "netB" }

Input File Line: staticBoth netA netB
Resulting Output:
Short0 { Net "netA" Net "netB" }
Short1 { Net "netA" Net "netB" }

Input File Line: staticDom netA netB
Resulting Output:
OR{
   Static{ "dominant short"
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}

Input File Line: staticDomBoth netA netB
Resulting Output:
AND{
   Static{ "dominant short"
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
   Static{ "dominant short"
      REQ { Net "netB"=0 Net "netA"=1/X }
      PROP { Net "netA"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netB"=1 Net "netA"=0/X }
      PROP { Net "netA"=0/1 } }
}

Input File Line: staticAll netA netB
Resulting Output:
Short0 { Net "netA" Net "netB" }
Short1 { Net "netA" Net "netB" }
AND{
   Static{ "dominant short"
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
   Static{ "dominant short"
      REQ { Net "netB"=0 Net "netA"=1/X }
      PROP { Net "netA"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netB"=1 Net "netA"=0/X }
      PROP { Net "netA"=0/1 } }
}

Input File Line: staticDom Split netA netB
Resulting Output:
Static{ "dominant short"
   REQ { Net "netA"=0 Net "netB"=1/X }
   PROP { Net "netB"=1/0 } }
Static{ "dominant short"
   REQ { Net "netA"=1 Net "netB"=0/X }
   PROP { Net "netB"=0/1 } }

Input File Line: delayAnd netA netB
Resulting Output:
OR {
   Dynamic { "Delay short 0"
      INIT { Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay short 0"
      INIT { Net "netA"=0 }
      REQ { Net "netA"=1/X Net "netB"=0 }
      PROP { Net "netA"=1/0 } }
}

Input File Line: delayOr netA netB
Resulting Output:
OR {
   Dynamic { "Delay short 1"
      INIT { Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
   Dynamic { "Delay short 1"
      INIT { Net "netA"=1 }
      REQ { Net "netA"=0/X Net "netB"=1 }
      PROP { Net "netA"=0/1 } }
}

Input File Line: delayBoth netA netB
Resulting Output:
OR {
   Dynamic { "Delay short 0"
      INIT { Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay short 0"
      INIT { Net "netA"=0 }
      REQ { Net "netA"=1/X Net "netB"=0 }
      PROP { Net "netA"=1/0 } }
}
OR {
   Dynamic { "Delay short 1"
      INIT { Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
   Dynamic { "Delay short 1"
      INIT { Net "netA"=1 }
      REQ { Net "netA"=0/X Net "netB"=1 }
      PROP { Net "netA"=0/1 } }
}

Input File Line: delayDom netA netB
Resulting Output:
OR {
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}

Input File Line: delayDomBoth netA netB
Resulting Output:
OR {
   Dynamic { "Delay dominant short"
      INIT { Net "netA"=0 Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netA"=1 Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}
OR {
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=0 Net "netA"=0 }
      REQ { Net "netB"=0 Net "netA"=1/X }
      PROP { Net "netA"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=0 Net "netA"=1 }
      REQ { Net "netB"=1 Net "netA"=0/X }
      PROP { Net "netA"=0/1 } }
}

Input File Line: delayDom Split netA netB
Resulting Output:
Dynamic { "Delay dominant short"
   INIT { Net "netB"=0 }
   REQ { Net "netA"=0 Net "netB"=1/X }
   PROP { Net "netB"=1/0 } }
Dynamic { "Delay dominant short"
   INIT { Net "netB"=1 }
   REQ { Net "netA"=1 Net "netB"=0/X }
   PROP { Net "netB"=0/1 } }

Input File Line: delayAll netA netB
Resulting Output:
OR {
   Dynamic { "Delay short 0"
      INIT { Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay short 0"
      INIT { Net "netA"=0 }
      REQ { Net "netA"=1/X Net "netB"=0 }
      PROP { Net "netA"=1/0 } }
}
OR {
   Dynamic { "Delay short 1"
      INIT { Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
   Dynamic { "Delay short 1"
      INIT { Net "netA"=1 }
      REQ { Net "netA"=0/X Net "netB"=1 }
      PROP { Net "netA"=0/1 } }
}
AND {
   Dynamic { "Delay dominant short"
      INIT { Net "netA"=0 Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netA"=1 Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}
OR {
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=0 Net "netA"=0 }
      REQ { Net "netB"=0 Net "netA"=1/X }
      PROP { Net "netA"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=1 Net "netA"=1 }
      REQ { Net "netB"=1 Net "netA"=0/X }
      PROP { Net "netA"=0/1 } }
}

Input File Line: delayTrans netA netB
Resulting Output:
OR {
   Dynamic { "Delay transition interference"
      INIT { Net "netA"=1 Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay transition interference"
      INIT { Net "netA"=0 Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}

Input File Line: delayTransBoth netA netB
Resulting Output:
OR {
   Dynamic { "Delay transition interference"
      INIT { Net "netA"=1 Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay transition interference"
      INIT { Net "netA"=0 Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}
OR {
   Dynamic { "Delay transition interference"
      INIT { Net "netB"=1 Net "netA"=0 }
      REQ { Net "netB"=0 Net "netA"=1/X }
      PROP { Net "netA"=1/0 } }
   Dynamic { "Delay transition interference"
      INIT { Net "netB"=0 Net "netA"=1 }
      REQ { Net "netB"=1 Net "netA"=0/X }
      PROP { Net "netA"=0/1 } }
}

Input File Line: delayTrans netA netB netC
Resulting Output:
OR {
   Dynamic { "Delay transition interference"
      INIT { Net "netA"=1 Net "netB"=1 Net "netC"=0 }
      REQ { Net "netA"=0 Net "netB"=0 Net "netC"=1/X }
      PROP { Net "netC"=1/0 } }
   Dynamic { "Delay transition interference"
      INIT { Net "netA"=0 Net "netB"=0 Net "netC"=1 }
      REQ { Net "netA"=1 Net "netB"=1 Net "netC"=0/X }
      PROP { Net "netC"=0/1 } }
}

Input File Line: staticDom delay netA netB
Resulting Output:
OR{
   Static{ "dominant short"
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}
OR {
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}

Input File Line: netA netB
Command line options: defaultStatic=staticDomBoth defaultDelay=delay split=no
Resulting Output:
AND{
   Static{ "dominant short"
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
   Static{ "dominant short"
      REQ { Net "netB"=0 Net "netA"=1/X }
      PROP { Net "netA"=1/0 } }
   Static{ "dominant short"
      REQ { Net "netB"=1 Net "netA"=0/X }
      PROP { Net "netA"=0/1 } }
}
AND {
   Dynamic { "Delay dominant short"
      INIT { Net "netA"=0 Net "netB"=0 }
      REQ { Net "netA"=0 Net "netB"=1/X }
      PROP { Net "netB"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netA"=1 Net "netB"=1 }
      REQ { Net "netA"=1 Net "netB"=0/X }
      PROP { Net "netB"=0/1 } }
}
OR {
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=0 Net "netA"=0 }
      REQ { Net "netB"=0 Net "netA"=1/X }
      PROP { Net "netA"=1/0 } }
   Dynamic { "Delay dominant short"
      INIT { Net "netB"=1 Net "netA"=1 }
      REQ { Net "netB"=1 Net "netA"=0/X }
      PROP { Net "netA"=0/1 } }
}
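Putting the net-pair syntax and the command line defaults together, a hypothetical net-pair file might mix explicit specs with lines that rely on the defaults. The net names below are invented; the keywords are the ones defined under Net-Pair File Syntax:

staticBoth delayTrans core1.netX core1.netY
staticDom Split core1.netP core1.netQ
core1.netM core1.netN

The first line requests both static shorts plus transition-interference dynamic faults, the second requests dominance faults emitted as separate (Split) pattern faults, and the last line, which has no fault spec, would fall back to the defaultStatic and defaultDelay values given on the create_shorted_net_faults command line, as in the last table entry above.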

B
Hierarchical Fault Processing Flow

This is a flow for taking faults tested at the core level out of consideration when the 1500 wrapped cores are used on an SOC.

Core Level Flow

At the core level, the following steps are run for each 1500 wrapped core to be placed on the SOC.

Figure B-1 Core Level Flow

1. Build Model – run build_model with the options you normally use.

2. Build and Verify INTEST Testmode – run build_testmode with a mode definition file that includes Scan_type gsd boundary=internal.
3. Build the fault model, including ignore faults in the fault statistics – run build_faultmodel ignorefaultstatistics=yes includeignore=yes.
4. Create and Commit scanchain and logic tests using the following command lines:
create_scanchain_tests testmode=INTEST experiment=srt
commit_tests testmode=INTEST inexperiment=srt
create_logic_tests testmode=INTEST experiment=tg1
commit_tests testmode=INTEST inexperiment=tg1
5. Prepare Detected Faults – run prepare_detected_faults testmode=INTEST outfaultfile=corefile# (where # is an arbitrary number to make the corefiles unique).
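Taken together, the core-level steps amount to a short command sequence per core. The following sketch simply strings together the commands listed above; the placeholder values in angle brackets and the corefile number are illustrative assumptions, not values from this appendix:

build_model <your normal options>
build_testmode <mode definition file containing: Scan_type gsd boundary=internal>
verify_test_structures <your normal options>
build_faultmodel ignorefaultstatistics=yes includeignore=yes
create_scanchain_tests testmode=INTEST experiment=srt
commit_tests testmode=INTEST inexperiment=srt
create_logic_tests testmode=INTEST experiment=tg1
commit_tests testmode=INTEST inexperiment=tg1
prepare_detected_faults testmode=INTEST outfaultfile=corefile1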

SOC Level Flow

At the SOC level, the following steps are run to include each 1500 wrapped core.

Figure B-2 SOC Level Flow

1. Build Model – run build_model with your normal options. Instead of pointing to the Verilog for the core, you may point to the tbdata/hierModel2 for the core. This is somewhat faster than parsing the Verilog and ensures that the source is exactly what was used for ATPG.
2. Build and Verify SOC testmode with cores configured in EXTEST mode – run build_testmode and then verify_test_structures using your normal options and an assignfile that configures the testmode into EXTEST mode.
3. Build Fault Model – create a file named allcores.files that lists the location of the corefile# from prepare_detected_faults for each testmode, one per line with no punctuation (a sketch of such a file appears after this list). Then run build_faultmodel faultrulefile=allcores.files ignorefaultstatistics=yes includeignore=yes.
4. Create and Commit Scanchain and Logic Tests for SOC:
create_scanchain_tests testmode=SOC experiment=srt
commit_tests testmode=SOC inexperiment=srt
create_logic_tests testmode=SOC experiment=tg1
commit_tests testmode=SOC inexperiment=tg1

5. Report Fault Statistics – run report_fault_statistics testmode=SOC hierstart=top hierend=n (where n is the number of levels down from the top required to print the core level statistics).
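As referenced in step 3, allcores.files is simply a list of the per-core output files written by prepare_detected_faults, one per line with no punctuation. A hypothetical example (paths invented for illustration) would be:

/projects/soc1/coreA/tbdata/corefile1
/projects/soc1/coreB/tbdata/corefile2
/projects/soc1/coreC/tbdata/corefile3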

C
Building Register Array and Random Resistant Fault List Files for Pattern Compaction

As designs grow larger, various techniques are employed by Encounter Test to reduce pattern counts. The command prepare_faultmodel_analysis can be used to identify large register arrays, also known as low-power registers, and to identify special random pattern resistant structures. The benefit of the analysis is that faults that can be tested using the same write or read port address can be merged into a single test. When combined with test generation compaction options, a reduction in test pattern counts for large designs with arrays is possible.

The register array and random fault list files can also be produced as part of the Build Fault Model process by selecting the screen options to Identify random resistant faults and Identify register array faults, or via the command line by specifying the build_faultmodel keywords randomresistant=yes and registerarray=yes.
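For the command line route just mentioned, a minimal sketch would be the following; randomresistant and registerarray are the keywords named in this appendix, and any other options are whatever you normally pass to build_faultmodel:

build_faultmodel randomresistant=yes registerarray=yes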

The syntax for the command is shown below:

prepare_faultmodel_analysis WORKDIR=<directory>

where workdir is the name of the working directory. To prepare fault model analysis, refer to “prepare_faultmodel_analysis” in the Encounter Test: Reference: Commands.

Input Files

■ Encounter Test model from build_model.
■ If using prepare_faultmodel_analysis to generate these lists of faults, a fault model must exist from build_faultmodel.

Output Files

Two sets of data are produced in the tbdata directory:
■ registerArrayList is produced if analysis is performed for register arrays.
■ resistantFaultList is produced if analysis is performed for random resistant faults.

These are optional inputs to Logic Test Generation and IDDq Test Generation. Use the following command line syntax to use this data:

create_logic_tests workdir=mywd testmode=FULLSCAN experiment=stg1 randomresistant=yes registerarray=yes
create_logic_delay_tests workdir=mywd testmode=FULLSCAN experiment=dtg1 randomresistant=yes registerarray=yes
create_iddq_tests workdir=mywd testmode=FULLSCAN experiment=itg1 randomresistant=yes registerarray=yes
