
Steam Generator Management Program: Pressurized Water Reactor Steam Generator Examination Guidelines, Revision 7

WARNING: Please read the Export Control Agreement on the back cover.


Effective December 6, 2006, this report has been made publicly available in accordance with Section 734.3(b)(3) and published in accordance with Section 734.7 of the U.S. Export Administration Regulations. As a result of this publication, this report is subject to only copyright protection and does not require any license agreement from EPRI. This notice supersedes the export control restrictions and any proprietary licensed material notices embedded in the document prior to publication.

Steam Generator Management Program: Pressurized Water Reactor Steam Generator Examination Guidelines: Revision 7
1013706

Final Report, October 2007

EPRI Project Manager S. Swilley

ELECTRIC POWER RESEARCH INSTITUTE 3420 Hillview Avenue, Palo Alto, California 94304-1338 PO Box 10412, Palo Alto, California 94303-0813 USA 800.313.3774 650.855.2121 askepri@epri.com www.epri.com

DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITIES


THIS DOCUMENT WAS PREPARED BY THE ORGANIZATION(S) NAMED BELOW AS AN ACCOUNT OF WORK SPONSORED OR COSPONSORED BY THE ELECTRIC POWER RESEARCH INSTITUTE, INC. (EPRI). NEITHER EPRI, ANY MEMBER OF EPRI, ANY COSPONSOR, THE ORGANIZATION(S) BELOW, NOR ANY PERSON ACTING ON BEHALF OF ANY OF THEM: (A) MAKES ANY WARRANTY OR REPRESENTATION WHATSOEVER, EXPRESS OR IMPLIED, (I) WITH RESPECT TO THE USE OF ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT, INCLUDING MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, OR (II) THAT SUCH USE DOES NOT INFRINGE ON OR INTERFERE WITH PRIVATELY OWNED RIGHTS, INCLUDING ANY PARTY'S INTELLECTUAL PROPERTY, OR (III) THAT THIS DOCUMENT IS SUITABLE TO ANY PARTICULAR USER'S CIRCUMSTANCE; OR (B) ASSUMES RESPONSIBILITY FOR ANY DAMAGES OR OTHER LIABILITY WHATSOEVER (INCLUDING ANY CONSEQUENTIAL DAMAGES, EVEN IF EPRI OR ANY EPRI REPRESENTATIVE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES) RESULTING FROM YOUR SELECTION OR USE OF THIS DOCUMENT OR ANY INFORMATION, APPARATUS, METHOD, PROCESS, OR SIMILAR ITEM DISCLOSED IN THIS DOCUMENT.

ORGANIZATION(S) THAT PREPARED THIS DOCUMENT

Electric Power Research Institute (EPRI)

NOTE
For further information about EPRI, call the EPRI Customer Assistance Center at 800.313.3774 or e-mail askepri@epri.com.

Electric Power Research Institute, EPRI, and TOGETHER...SHAPING THE FUTURE OF ELECTRICITY are registered service marks of the Electric Power Research Institute, Inc.

Copyright © 2007 Electric Power Research Institute, Inc. All rights reserved.

CITATIONS
This report was prepared by

Electric Power Research Institute (EPRI)
1300 West W.T. Harris Boulevard
Charlotte, NC 28262

Principal Investigator
S. Swilley

This report describes research sponsored by EPRI.

The report is a corporate document that should be cited in the literature in the following manner:

Steam Generator Management Program: Pressurized Water Reactor Steam Generator Examination Guidelines: Revision 7. EPRI, Palo Alto, CA: 2007. 1013706.


PRODUCT DESCRIPTION

This report provides requirements for examination plans and processes that are necessary to meet the performance criteria set forth in the Nuclear Energy Institute (NEI) 97-06, Steam Generator Program.

Results and Findings
This document continues to provide recommendations and requirements to the industry for sampling and inspection of steam generator (SG) tubing. The qualification of techniques and personnel is described in detail. Additionally, this document recognizes the improved performance of new SG tubing materials in the absence of foreign objects and in the presence of good chemistry controls. Data quality parameters continue to be an important attribute of an effective SG inspection. Inspection frequencies as described in Technical Specification Task Force (TSTF) report TSTF 449, Steam Generator Tube Integrity, are contained within this document. With these recommendations and requirements in place and implemented by the licensee, this document makes a significant contribution to meeting the requirements of NEI 97-06.

Challenges and Objectives
SG tubing, when degraded, can challenge the required safety functions in terms of structural stability and leakage. It is imperative that the condition of the SG be known so that appropriate repairs and operating parameters can be applied. Nondestructive examination (NDE) is the first step in determining the condition of the SG. This document is intended to be used by utility engineers, program managers, NDE personnel, and plant management in support of:
- Implementation of NEI 97-06
- Meeting the sampling and inspection requirements of TSTF 449
- Technique and personnel qualification for input to condition monitoring and operational assessments

Applications, Value, and Use
The fundamental elements of NEI 97-06 represent a balance of prevention, inspection, evaluation, repair, and leakage monitoring measures. Implementation of these elements requires maintenance of this document to ensure applicability to the licensee for management of SG issues.

EPRI Perspective
NEI 97-06 requires condition monitoring and operational assessment of the SG tubing during and after an outage. These guidelines provide the recommendations and requirements for inspection, technique validation, data quality, and qualification of techniques and personnel. This document reflects current industry practices, with enhancements where the industry deemed them necessary. Data management requirements were addressed for the first time in this document. The damage mechanisms in Appendix G were updated as a result of SG replacement projects. A new appendix on qualification of ultrasonic testing (UT) personnel was included to separate UT personnel qualification from Appendix G; Appendix J was separated from Appendix H for UT technique qualifications in the same manner. Appendix I was added to address the determination of NDE system uncertainties for tube integrity assessments (performance demonstration). The inspection frequencies, as described in TSTF 449, were included to ensure compliance with the licensee's technical specifications. The use of the terms mandatory, shall, and recommend (should) was addressed in accordance with the EPRI Steam Generator Management Program Administrative Procedure (1011274). Additionally, the performance-based requirements previously contained in Section 4 have been deleted.

Since Revision 6 was published, interim guidance was issued four times:
- Interim Guidance on Steam Generator Tube Leak at Comanche Peak Unit 1, April 2003
- Interim Guidance for Implementation of EPRI PWR Steam Generator Examination Guidelines: Revision 6, April 2003
- Interim Guidance for EPRI PWR Steam Generator Examination Guidelines: Revision 6, Section 6.3.3.3, September 2003
- Interim Guidance for EPRI PWR Steam Generator Examination Guidelines: Revision 6, Sections 6.2.4, 6.3.3.3, 6.5, and Appendix H supplements H1 and H2, March 2004

This revision took into account the interim guidance documents issued since the last revision. Upon issuance of this document, all previous interim guidance is superseded.

Approach
An ad hoc utility committee under the direction of the NDE Issue Resolution Group (IRG) was convened to complete this revision. The main committee was organized with utility members chairing individual subcommittees for each report section and appendix. Each subcommittee contained participants from both utility and vendor organizations. The objective of this approach was to ensure that input was received from across the industry and that the most qualified personnel were available to work on the applicable sections. The subcommittees developed draft material to be reviewed by the main committee. Numerous teleconferences, web conferences, and meetings were conducted over an 18-month period to develop the final draft.

Keywords
Nuclear steam generators
In-service inspection
Eddy current
PWR
SG tubing
SGMP


ACKNOWLEDGMENTS
This report was prepared by the PWR Steam Generator Examination Guidelines Ad Hoc Committee.

Ad Hoc Main Committee membership principal contributors:
T. Pettus, AmerenUE
D. Mayes, Duke Energy
G. Navratil, Exelon
H. Smith, Exelon
G. Boyers, Florida Power & Light
S. Redner, Nuclear Management Company
T. Bipes, Progress Energy
C. Connor, PSEG Nuclear
A. Matheny, Southern California Edison
C. Webber, Tennessee Valley Authority
S. Swilley, EPRI

Ad Hoc Sub-Committee membership principal contributors:
M. Boudreaux, AREVA NP
M. Harris, AREVA NP
B. Miranda, AREVA NP
D. Hansen, Arizona Public Service
C. Visconti, Corestar
T. Mayer, Dominion
E. Korkowski, Florida Power & Light
R. Lieder, Florida Power & Light
T. Smith, Southern Company
D. Ellis, TXU Power
T. Weyandt, TXU Power

R. Procratsky, Westinghouse
R. Maurer, Westinghouse
J. Siegel, Zetec
N. Farenbaugh, Zetec
R. Guill, EPRI
S. Kenefick, EPRI


CONTENTS

1 INTRODUCTION AND BACKGROUND ................................................................................1-1
    1.1 Purpose .......................................................................................................................1-1
    1.2 Scope ..........................................................................................................................1-2
    1.3 Background .................................................................................................................1-2

2 COMPLIANCE RESPONSIBILITIES .....................................................................................2-1
    2.1 Introduction .................................................................................................................2-1
    2.2 Management Responsibilities .....................................................................................2-1
    2.3 Examination and Engineering Responsibilities ...........................................................2-2

3 EXAMINATION REQUIREMENTS .........................................................................................3-1
    3.1 Introduction .................................................................................................................3-1
    3.2 SG PSI Requirements .................................................................................................3-1
        3.2.1 Inspection of Tubes ............................................................................................3-1
        3.2.2 Inspection of Tube Sleeves ................................................................................3-2
        3.2.3 Inspection of Tube Plugs ....................................................................................3-2
        3.2.4 Inspection of Other Tube Repairs ......................................................................3-2
    3.3 SG In-Service Inspection (ISI) Requirements .............................................................3-2
    3.4 First ISI ........................................................................................................................3-4
        3.4.1 First ISI of Tubes ................................................................................................3-4
        3.4.2 First ISI of Sleeves .............................................................................................3-4
        3.4.3 First ISI Examination of Plugs ............................................................................3-4
        3.4.4 First ISI of Other Tube Repairs ..........................................................................3-4
    3.5 Subsequent ISIs ..........................................................................................................3-4
        3.5.1 Subsequent Inspections of Tubing .....................................................................3-4
        3.5.2 Subsequent Inspection of Sleeves .....................................................................3-5
        3.5.3 Subsequent Inspection of Plugs .........................................................................3-5
        3.5.4 Subsequent Inspection of Other Repairs ...........................................................3-5
    3.6 Tube Examination Scope and Sampling Plans ...........................................................3-5
    3.7 Classification of Sample Plan Results ........................................................................3-6
    3.8 Expansion of the Examination ....................................................................................3-7
        3.8.1 Expansion Requirements for Sample Plans ......................................................3-7
        3.8.2 Expansion Requirements for Foreign Objects and Possible Loose Parts .........3-7
    3.9 Secondary Side Visual Examination ...........................................................................3-9
    3.10 Forced Outage Guidance ..........................................................................................3-9

4 SAMPLING REQUIREMENTS FOR PERFORMANCE-BASED EXAMINATIONS (DELETED) ...............................................................................................................................4-1

5 STEAM GENERATOR ASSESSMENTS ...............................................................................5-1
    5.1 Introduction .................................................................................................................5-1

6 SYSTEM PERFORMANCE ....................................................................................................6-1
    6.1 Introduction .................................................................................................................6-1
    6.2 Technique Performance Requirements ......................................................................6-1
        6.2.1 Site-Validated Techniques .................................................................................6-2
        6.2.2 Diagnostic Techniques .......................................................................................6-2
        6.2.3 Calibration Requirements ..................................................................................6-3
            6.2.3.1 Bobbin Probe Calibration Standards ........................................................6-3
            6.2.3.2 Rotating and Array Probe Calibration Standards .....................................6-3
            6.2.3.3 Other Calibration Standards .....................................................................6-3
            6.2.3.4 Voltage Normalization Requirements ......................................................6-4
            6.2.3.5 Sample Rate Requirements .....................................................................6-5
    6.3 Analysis Performance Requirements .........................................................................6-5
        6.3.1 QDA ...................................................................................................................6-5
        6.3.2 Site-Specific Performance Demonstration (SSPD) Requirements ....................6-5
        6.3.3 Data Analysis .....................................................................................................6-5
            6.3.3.1 Site-Specific Analysis Guidelines .............................................................6-6
            6.3.3.2 Independent Analysis Teams ...................................................................6-8
            6.3.3.3 Automated Data Analysis .........................................................................6-8
            6.3.3.4 Resolution .................................................................................................6-9
            6.3.3.5 IQDA Oversight ......................................................................................6-10
            6.3.3.6 Data Analysis Feedback ........................................................................6-10
    6.4 Data Management ....................................................................................................6-11
        6.4.1 Data Management Personnel Requirements ...................................................6-11
        6.4.2 System Process Requirements ........................................................................6-11
            6.4.2.1 Inspection Planning and Sample Plan Expansion .................................6-11
            6.4.2.2 Uploading Analysis Results ...................................................................6-12
            6.4.2.3 Rescheduling .........................................................................................6-12
            6.4.2.4 Inspection Progress ...............................................................................6-12
            6.4.2.5 Scope Completion Accountability ..........................................................6-12
            6.4.2.6 Final Closeout ........................................................................................6-12
            6.4.2.7 Final Archiving .......................................................................................6-13
    6.5 Data Quality Requirements ......................................................................................6-13
        6.5.1 Probe Quality Parameters ...............................................................................6-16
    6.6 Human Performance Requirements .........................................................................6-17
    6.7 Verification Responsibilities ......................................................................................6-18
    6.8 Contractor Oversight ................................................................................................6-19
    6.9 Requirements for Visual Inspection ..........................................................................6-19
        6.9.1 Mechanical Plugs .............................................................................................6-19
        6.9.2 Welded Plugs ...................................................................................................6-19
    6.10 Standardized Analysis Report Format ....................................................................6-20
    6.11 Standardized Raw Data Format .............................................................................6-20

7 SUMMARY OF REQUIREMENTS .........................................................................................7-1
    7.1 Introduction and Background .....................................................................................7-1
    7.2 Compliance Responsibilities ......................................................................................7-1
    7.3 Sampling Requirements for Prescriptive-Based Examinations ..................................7-1
    7.4 Sampling Requirements for Performance-Based Examinations ................................7-6
    7.5 SG Assessments ........................................................................................................7-6
    7.6 System Performance ..................................................................................................7-7

8 REFERENCES .......................................................................................................................8-1

A SAMPLING BASIS ............................................................................................................... A-1
    A.1 Sample Size .............................................................................................................. A-1
        A.1.1 Level of Confidence Relative to Sample Size and the Number of Tubes Affected by Degradation ................................................................................................. A-1
        A.1.2 Statistical Basis for Level of Confidence .......................................................... A-2


B APPENDIX ............................................................................................................................ B-1

C APPENDIX ............................................................................................................................ C-1

D APPENDIX ............................................................................................................................ D-1

E APPENDIX ............................................................................................................................ E-1

F TERMINOLOGY .....................................................................................................................F-1
    F.1 Definitions ...................................................................................................................F-1
    F.2 Acronyms ....................................................................................................................F-3
    F.3 Three-Letter Codes .....................................................................................................F-6

G QUALIFICATION OF EDDY CURRENT EXAMINATION PERSONNEL FOR ANALYSIS OF EDDY CURRENT EXAMINATION DATA ........................................................ G-1
    G.1 Scope ....................................................................................................................... G-1
    G.2 Qualification Level .................................................................................................... G-1
        G.2.1 General Requirements ..................................................................................... G-1
    G.3 Written Practice ........................................................................................................ G-2
        G.3.1 General Requirements ..................................................................................... G-2
            G.3.1.1 Training .................................................................................................. G-2
            G.3.1.2 Examinations ......................................................................................... G-2
        G.3.2 Responsibilities ................................................................................................ G-2
        G.3.3 Use of an Outside Agency ............................................................................... G-2
        G.3.4 Transfer of QDA Qualifications ........................................................................ G-2
        G.3.5 Confidentiality .................................................................................................. G-3
        G.3.6 Availability of Training Course Materials ......................................................... G-3
    G.4 Qualification Requirements ...................................................................................... G-3
        G.4.1 Training ............................................................................................................ G-3
            G.4.1.1 Program, Facilities, and Materials ......................................................... G-3
            G.4.1.2 QDA Training Course Content and Duration ......................................... G-3
            G.4.1.3 SSPD Training ....................................................................................... G-3
        G.4.2 QDA Examinations .......................................................................................... G-4
            G.4.2.1 Qualification Examinations and Data Sets ............................................ G-4
                G.4.2.1.1 Written Examinations ................................................................. G-4
                G.4.2.1.2 Practical Examination ................................................................ G-4
            G.4.2.2 Analyst Qualification Criteria ................................................................. G-7
            G.4.2.3 Re-examination ..................................................................................... G-9
                G.4.2.3.1 Written Examination .................................................................. G-9
                G.4.2.3.2 Practical Examination ................................................................ G-9
            G.4.2.4 Interrupted Service ................................................................................ G-9
            G.4.2.5 QDA Requalification .............................................................................. G-9
            G.4.2.6 QDA Qualification Records ................................................................. G-10
        G.4.3 SSPD Examinations ...................................................................................... G-11
            G.4.3.1 Examinations and Data Sets ............................................................... G-11
                G.4.3.1.1 Written Examinations ............................................................... G-11
                G.4.3.1.2 Practical Examinations ............................................................ G-11
            G.4.3.2 Examination Grading ........................................................................... G-11
                G.4.3.2.1 Written Examination ................................................................ G-11
                G.4.3.2.2 Practical Examination .............................................................. G-12
            G.4.3.3 Re-Examination .................................................................................. G-12
            G.4.3.4 SSPD Performance Feedback ............................................................ G-12
    G.5 Re-Grading QDA and SSPD Examinations ............................................................ G-12

Attachment G-1 QDA Training And Laboratory Program Contents .................................. G-13

H PERFORMANCE DEMONSTRATION FOR EDDY CURRENT EXAMINATION ................ H-1
    H.1 Scope ....................................................................................................................... H-1
    H.2 General Examination System Requirements ............................................................ H-1
        H.2.1 Technique Requirements ................................................................................ H-1
            H.2.1.1 Examination Technique Specification Sheet ......................................... H-1
                H.2.1.1.1 General Variables All Types of Probes (Bobbin, Rotating and Array) ......................................................................................... H-2
                H.2.1.1.2 Specific Essential Variables for Bobbin Coil .............................. H-2
                H.2.1.1.3 Specific Variables for Rotating Probes ...................................... H-3
                H.2.1.1.4 Specific Variables for Array Probes .......................................... H-3
    H.3 Performance Demonstration .................................................................................... H-3
        H.3.1 General ............................................................................................................ H-3
        H.3.2 Essential Variable Ranges .............................................................................. H-3
        H.3.3 Requalification ................................................................................................. H-3
        H.3.4 Disqualification of Techniques ........................................................................ H-4
    H.4 Essential Variable Tolerances .................................................................................. H-4
        H.4.1 Instruments and Probes .................................................................................. H-4
            H.4.1.1 Cable ..................................................................................................... H-4
            H.4.1.2 Frequency/Material/Wall Thickness ...................................................... H-5
            H.4.1.3 Array Probe Sensing Area Coverage .................................................... H-6
        H.4.2 Computerized System Algorithms .................................................................. H-6
        H.4.3 Calibration Methods ........................................................................................ H-6
    H.5 Record of Qualification ............................................................................................ H-6

Supplement H1 Equipment Characterization ...................................................................... H-7
    H1.1 Scope ..................................................................................................................... H-7
    H1.2 Eddy Current Instrument ........................................................................................ H-7
        H1.2.1 Signal Generation ......................................................................................... H-7
            H1.2.1.1 Total Harmonic Distortion ................................................................... H-7
            H1.2.1.2 Output Impedance .............................................................................. H-7
        H1.2.2 Amplification, Demodulation and Filtering .................................................... H-8
            H1.2.2.1 Input Impedance ................................................................................. H-8
            H1.2.2.2 Amplifier Linearity and Stability .......................................................... H-8
            H1.2.2.3 Phase Linearity ................................................................................... H-9
            H1.2.2.4 Bandwidth and Demodulation Filter Response ................................ H-10
        H1.2.3 A/D Conversion ........................................................................................... H-10
            H1.2.3.1 A/D Resolution .................................................................................. H-10
            H1.2.3.2 Dynamic Range ................................................................................ H-11
            H1.2.3.3 Digitizing Rate .................................................................................. H-11
        H1.2.4 Channel Crosstalk ....................................................................................... H-11
    H1.3 Probe Characterization ......................................................................................... H-12
        H1.3.1 Impedance .................................................................................................. H-12
        H1.3.2 Center Frequency ....................................................................................... H-12
        H1.3.3 Coil Configuration Sensing Area ................................................................. H-12
            H1.3.3.1 Bobbin Probes .................................................................................. H-12
                H1.3.3.1.1 Effective Scan Field Width .................................................... H-12
                H1.3.3.1.2 Fill Factor Coefficient ............................................................ H-13
                H1.3.3.1.3 Depth Coefficient .................................................................. H-13
                H1.3.3.1.4 Axial Length Coefficient (ALC) ............................................. H-14
                H1.3.3.1.5 Transverse Width Coefficient (TWC) .................................... H-14
                H1.3.3.1.6 Phase-to-Depth Curve (PDC) ............................................... H-15
                H1.3.3.1.7 Direct Current Saturation Strength (DCSS) .......................... H-15
            H1.3.3.2 Rotating Probes ................................................................................ H-15
                H1.3.3.2.1 Effective Scan Field Width .................................................... H-15
                H1.3.3.2.2 Effective Track Field Width (ETFW) ...................................... H-15
                H1.3.3.2.3 Lift-Off Value (LOV) ............................................................... H-16
                H1.3.3.2.4 Depth Coefficient .................................................................. H-16
                H1.3.3.2.5 ALC ....................................................................................... H-16
                H1.3.3.2.6 TWC ..................................................................................... H-16
                H1.3.3.2.7 PDC ...................................................................................... H-17
            H1.3.3.3 Array Probes ..................................................................................... H-17
                H1.3.3.3.1 Effective Scan Field Width .................................................... H-17
                H1.3.3.3.2 Diametral Offset Value (DOV) ............................................... H-17
                H1.3.3.3.3 Depth Coefficient .................................................................. H-17
                H1.3.3.3.4 ALC ....................................................................................... H-17
                H1.3.3.3.5 TWC ..................................................................................... H-17
                H1.3.3.3.6 PDC ...................................................................................... H-18
                H1.3.3.3.7 Tube Circumference Coverage ............................................. H-18

Supplement H2 Qualification Requirements for Examination of SG Tubing .................... H-19
    H2.1 General ................................................................................................................. H-19
    H2.2 Qualification Data Set Requirements ................................................................... H-19
        H2.2.1 General ........................................................................................................ H-19
        H2.2.2 Detection Data Set ...................................................................................... H-21
        H2.2.3 Sizing Data Set ............................................................................................ H-21
    H2.3 Acceptance Criteria .............................................................................................. H-22
        H2.3.1 Detection Acceptance Criteria ..................................................................... H-22
        H2.3.2 Technique Sizing Performance ................................................................... H-24

Supplement H3 Protocol for PDD Revision ....................................................................... H-25
    H3.1 Scope ................................................................................................................... H-25
    H3.2 Administrative Responsibilities ............................................................................. H-25
        H3.2.1 New Techniques .......................................................................................... H-25
        H3.2.2 Existing Techniques .................................................................................... H-25
    H3.3 Technique Originator Responsibilities .................................................................. H-25
    H3.4 PDD Curator Responsibilities ............................................................................... H-26
    H3.5 PDD Peer Review Procedure ............................................................................... H-26

xv

H3.5.1 H3.5.2 H4.1 H4.2 H4.3 H4.4 H4.5

Team Structure ............................................................................................. H-26 Conduct of the Review.................................................................................. H-26 Protocol for Technique Peer Review ..................................................... H-27

Supplement H4

Scope ................................................................................................................... H-27 Administrative Responsibilities............................................................................. H-27 Technique Originator Responsibilities.................................................................. H-27 Examination Technique Specification Sheet Curator Responsibilities................. H-27 Peer Review Procedure ....................................................................................... H-27 Team Structure ............................................................................................. H-27 Conduct of the Review.................................................................................. H-28

H4.5.1 H4.5.2

I NDE SYSTEM MEASUREMENT UNCERTAINTIES FOR TUBE INTEGRITY
ASSESSMENTS ...............................................................................................I-1
    I.1 Scope ......................................................................................................I-1
    I.2 Requirements ..........................................................................................I-1
        I.2.1 General ............................................................................................I-1
        I.2.2 Personnel Requirements .................................................................I-1
        I.2.3 Technique Requirements ................................................................I-1
    I.3 Performance Demonstration ...................................................................I-2
        I.3.1 Data Set Requirements ...................................................................I-2
        I.3.2 Protocol ...........................................................................................I-2
        I.3.3 Process Overview ...........................................................................I-2
        I.3.4 Acceptance Criteria .........................................................................I-2
        I.3.5 Determining Noise-Dependent NDE System Performance .............I-3
    I.4 Record of Qualification ............................................................................I-3
Supplement I1 ETSS Data Set Requirements ..................................................I-4
    I1.1 Data Acceptability Guidelines ...............................................................I-4
    I1.2 Data Set Requirements ........................................................................I-4
Supplement I2 Performance Demonstration ....................................................I-8
    I2.1 General Requirements .........................................................................I-8
    I2.2 Probability of Detection ........................................................................I-8
    I2.3 Sizing ....................................................................................................I-8
Supplement I3 NDE System Performance Measures for Tube Integrity
Assessments ....................................................................................................I-9
    I3.1 NDE System Performance Measures ...................................................I-9
        I3.1.1 Probability of Detection .................................................................I-9
        I3.1.2 Sizing ..........................................................................................I-11
Supplement I4 Protocol for NDE System Peer Review ..................................I-13
    I4.1 Scope ..................................................................................................I-13
    I4.2 Administrative Responsibilities ............................................................I-13
    I4.3 Technique Originator Responsibilities .................................................I-13
    I4.4 Curator Responsibilities ......................................................................I-13
    I4.5 Peer Review Procedure ......................................................................I-13
        I4.5.1 Team Structure ...........................................................................I-13
        I4.5.2 Conduct of the Review ................................................................I-14

J PERFORMANCE DEMONSTRATION FOR ULTRASONIC EXAMINATION ...... J-1
    J.1 Scope ................................................................................................... J-1
    J.2 Performance Demonstration ................................................................. J-1
        J.2.1 General .......................................................................................... J-1
        J.2.2 Essential Variable Ranges ............................................................ J-1
        J.2.3 Requalification ............................................................................... J-2
        J.2.4 Disqualification of Techniques ....................................................... J-2
    J.3 Essential Variable Tolerances ............................................................... J-2
        J.3.1 Instruments and Search Units ....................................................... J-2
        J.3.2 Computerized System Algorithms ................................................. J-2
        J.3.3 Calibration Methods ...................................................................... J-3
    J.4 Record of Qualification ......................................................................... J-3
        J.4.1 Examination Technique Specification Sheet ................................. J-3
            J.4.1.1 Equipment Variables .............................................................. J-3
            J.4.1.2 Technique Variables .............................................................. J-4
Supplement J1 Equipment Characterization ................................................... J-5
    J1.1 Scope ................................................................................................. J-5
    J1.2 Pulsers, Receivers, Search Unit, and Cable ....................................... J-5
        J1.2.1 Search Unit ................................................................................. J-5
            J1.2.1.1 Transducer Wave Propagation Mode ................................... J-5
            J1.2.1.2 Transducer Measured Angle ................................................ J-5
            J1.2.1.3 Search Unit Transducer Evaluation ...................................... J-6
            J1.2.1.4 Focused Search Units ......................................................... J-6
            J1.2.1.5 S/N Ratio .............................................................................. J-7
            J1.2.1.6 Confirmation ........................................................................ J-7
            J1.2.1.7 Requirements for Pulse Echo Tip Diffraction Sizing
                     Techniques ............................................................................ J-7
        J1.2.2 Substituting a Pulser/Receiver or Ultrasonic Instrument ............. J-8
        J1.2.3 Cable ........................................................................................... J-8
    J1.3 Documentation and Review of Changes ............................................. J-9
Supplement J2 Qualification Requirements for Examination of SG Tubing .. J-10
    J2.1 General ............................................................................................. J-10
    J2.2 Qualification Data Set Requirements ................................................ J-10
        J2.2.1 General ...................................................................................... J-10
        J2.2.2 Detection Data Set .................................................................... J-12
        J2.2.3 Sizing Data Set .......................................................................... J-12
    J2.3 Acceptance Criteria ........................................................................... J-13
        J2.3.1 Detection Acceptance Criteria ................................................... J-13
        J2.3.2 Technique Sizing Performance .................................................. J-15

K QUALIFICATION OF PERSONNEL FOR ANALYSIS OF ULTRASONIC
EXAMINATION DATA ..................................................................................... K-1
    K.1 Scope .................................................................................................. K-1
    K.2 Responsibilities and Records .............................................................. K-1
    K.3 Written Practice Requirements ............................................................ K-2
        K.3.1 Training and Examination ............................................................ K-2
        K.3.2 Responsibilities ............................................................................ K-2
        K.3.3 Use of an Outside Agency ........................................................... K-2
        K.3.4 Transfer Qualifications ................................................................. K-2
        K.3.5 Confidentiality .............................................................................. K-2
        K.3.6 Availability of Training Course Materials ...................................... K-2
    K.4 Qualification Requirements .................................................................. K-3
        K.4.1 Training ........................................................................................ K-3
        K.4.2 Written Examination ..................................................................... K-3
        K.4.3 Practical Examination .................................................................. K-4
            K.4.3.1 General ................................................................................. K-4
            K.4.3.2 Practical Data Set ................................................................. K-4
            K.4.3.3 Analysis Qualification Criteria ............................................... K-4
    K.5 Re-examination .................................................................................... K-6
        K.5.1 Written Examination ..................................................................... K-7
        K.5.2 Practical Examination .................................................................. K-7
    K.6 Interrupted Service .............................................................................. K-7
    K.7 SSPD ................................................................................................... K-7
        K.7.1 SSPD Training ............................................................................. K-7
        K.7.2 Written Examinations ................................................................... K-8
        K.7.3 Practical Examination .................................................................. K-8
            K.7.3.1 Detection and Orientation ..................................................... K-8
            K.7.3.2 Sizing .................................................................................... K-8
        K.7.4 Retesting ...................................................................................... K-8
Attachment K-1 UQDA Training Program ....................................................... K-9

LIST OF FIGURES
Figure G-1 Qualification Flow Chart......................................................................................... G-8
Figure G-2 Requalification Flow Chart................................................................................... G-10
Figure H-1 Instrument Linearity ............................................................................................... H-9
Figure I-1 Correlation Coefficient, r, at 95% confidence level for a positive correlation.............I-6
Figure I-2 Minimum Number of Required NDD Grading Units as a Function of Population
    false call Rate (Plot assumes zero misses, i.e., all the grading units are reported as
    NDD) ................................................................................................................................I-7
Figure I-3 Calculation of POD and Measurement Uncertainties Using GLM Excel
    Spreadsheet......................................................................................................................I-10
Figure I-4 Using GLM Weighted Average to Describe Multiple Analysis Team POD
    Results ..............................................................................................................................I-11
Figure I-5 NDE System Sizing Error for Ten Analysis Teams .................................................I-12

LIST OF TABLES
Table 3-1 Expansion of Sample Plans for Non-Cracks (Thinning and Wear) ...........................3-8
Table 3-2 Expansion of Sample Plans for Cracks (PWSCC, Outside Diameter Stress-
    Corrosion Cracking [ODSCC], and Intergranular Attack [IGA])..........................................3-9
Table 6-1 Site-Specific Data Analysis Guidelines and/or Site-Specific ETSSs Contents..........6-7
Table 6-2 Generic Data Quality Parameters ............................................................................6-14
Table 6-3 Bobbin Probe Data Quality Parameters...................................................................6-15
Table 6-4 Rotating Probe Data Quality Parameters ................................................................6-15
Table 6-5 Array Probe Data Quality Parameters .....................................................................6-16
Table 6-6 Probe Manufacturing Quality Parameters................................................................6-17
Table A-1 Tube Sample Sizes and Associated Numbers of Affected Tubes in SGs
    Required to Provide a 0.90 Level of Confidence That At Least One Affected Tube Is
    Sampled from the SG....................................................................................................... A-2
Table F-1 Three-Letter Codes ..................................................................................................F-7
Table G-1 Damage Mechanisms Included in the Practical Examination ................................. G-4
Table G-2 Example Performance Demonstration Test Matrices for Flaw Detection................ G-5
Table G-3 Example Performance Demonstration Test Matrices for Flaw Detection and
    Sizing ................................................................................................................................ G-6
Table H-1 Sample Size Correlation Coefficient ..................................................................... H-20
Table H-2 Minimum Number of Flawed Grading Units (60% TW) That Must Be Detected
    to Meet the Technique Acceptance Criterion of POD 0.80 at a 90% Confidence
    Level ............................................................................................................................... H-21
Table I-1 Data Sufficiency Requirements for POD ....................................................................I-5
Table I-2 Data Sufficiency Requirements for Sizing ..................................................................I-5
Table J-1 Sample Size Correlation Coefficient ....................................................................... J-11
Table J-2 Minimum Number of Flawed Grading Units (60% TW) That Must Be Detected
    to Meet the Technique Acceptance Criterion of POD 0.80 at a 90% Confidence
    Level ................................................................................................................................ J-12
Table K-1 Damage Mechanisms Categories ........................................................................... K-3
Table K-2 Example Performance Demonstration Test Matrices for Flaw Detection ................ K-5
Table K-3 Example Performance Demonstration Test Matrices for Flaw Detection and
    Sizing ................................................................................................................................ K-6

1 INTRODUCTION AND BACKGROUND

1.1 Purpose

This document is the seventh revision of the PWR Steam Generator Examination Guidelines [1], originally published in 1981 and revised in 1984, 1988, 1992, 1996, 1997, and 2002. The purpose of this document is to provide requirements and guidance for meeting the following objectives:

- Identification of degradation that affects SG tube integrity
- Qualification of NDE systems used to detect and/or size degradation

The requirements provided within this document promote accountability for the examination process and provide for subsequent verification of compliance by internal or external audit. The protocol for deviating from specific requirements of this document is in the EPRI Steam Generator Management Program (SGMP) Administrative Procedures [2].

The requirements provided by this document assist in meeting the objectives of NEI 97-06, Steam Generator Program Guidelines [3]. NEI 97-06 brings consistency to the development and implementation of a program that monitors the condition of SG tubes against acceptable performance criteria in a manner sufficient to provide reasonable assurance that the SGs remain capable of fulfilling their intended safety function.

The EPRI Steam Generator Management Program Administrative Procedures [2] include the protocol that is to be followed by guideline revision committees with regard to establishing the level of implementation expected by a licensee. In particular, three categories or elements have been established: mandatory requirements, shall requirements, and recommendations. These categories are clearly defined in [2] and are summarized below:

1. Mandatory requirements are important to steam generator tube integrity and should not be deviated from by any licensee. Steam generator tube integrity is defined as meeting the performance criteria as specified in NEI 97-06 [3].

2. Shall requirements are important to long-term steam generator reliability but could be subject to legitimate deviations due to plant differences and special situations. Shall requirements appear throughout the document and are listed in Section 7.

3. Recommendations are good or best practices that licensees should try to implement when practical. Written documentation is not required when a recommendation is not implemented. This document uses the term "should" interchangeably with the phrase "it is recommended."


1.2 Scope

This revision adheres to the guidance of NEI 97-06 and the SGMP Administrative Procedure. This document provides the requirements for performing SG tube inspections, including selection of sample plans, validation of NDE systems to detect and size tubing flaws, and specification of requirements for qualification of examination techniques and data analysis personnel. The development and revision protocol of this guideline and subsequent revisions are governed by the EPRI Steam Generator Management Program Administrative Procedures [2].

1.3 Background

Tube degradation in pressurized water reactor (PWR) SGs has resulted from both corrosion and mechanical mechanisms. The main contributors to tube degradation are the following:

- Intergranular corrosion
- Stress-corrosion cracking (SCC)
- Thinning
- Denting
- Mechanical wear
- Fatigue

Alone or in combination, the above mechanisms have been responsible for the early replacement of SGs in a significant number of plants worldwide. Tube ruptures and leakage events have resulted in numerous forced outages or have had an adverse effect on the schedule of planned outages. The prevention of these events continues to be a major goal of the industry. Improvements in operations and maintenance practices, inspection programs, and structural analyses have been successful in reducing lost capacity due to SG-related forced outages.

As SGs accrue service life, the degradation mechanisms cited above become more likely to challenge their operability. NEI 97-06, a major industry initiative, has been developed to manage the effects of these mechanisms. NEI 97-06 is the product of the efforts of the utilities, nuclear steam supply system (NSSS) vendors, inservice inspection (ISI) vendors, and various consultants to establish safe and technically sound practices to maximize the service life of nuclear SGs. Additional guidance is found in Sections V and XI of the American Society of Mechanical Engineers (ASME) Code (requirements for eddy current testing [ECT]) [4], the American Society for Nondestructive Testing (ASNT) SNT-TC-1A [5], and American National Standards Institute (ANSI) CP-189 (minimum standards for qualification of NDE personnel) [6].


2 COMPLIANCE RESPONSIBILITIES

2.1 Introduction

The objective of this section is to identify organizational responsibilities necessary to ensure examination activities are effectively implemented for safe and reliable SG operation. The Code of Federal Regulations, 10 CFR 50, Appendix B [7], requires each licensee to assume responsibility for those planned and systematic actions, such as inspection, necessary to provide adequate confidence that a structure, system, or component performs satisfactorily in service. Because the responsibility for SG examination resides with the licensee operating the unit, a consistently successful SG program requires the commitment of licensee management and clear communication of that commitment to those responsible for the SG program. When the licensee contracts portions of the SG program, the responsibility for program implementation and compliance with requirements always remains with the licensee. It is recommended that the authority and duties of persons and organizations planning and performing inspections be clearly established in written procedures and policies.

Although licensee organizational structures vary, there are common programmatic functions that are necessary to effectively implement the SG examination process. This section addresses the general role of those functions and is not intended to specify any particular organizational structure. It emphasizes the desirability of operating in a proactive rather than a reactive mode and the timely application of examination programs to enhance nuclear safety, reliability, longevity, and overall economic viability of SGs.

2.2 Management Responsibilities

Nuclear station management is responsible for providing sufficient resources and attention to the SG program. It is recommended that specific management responsibilities include the following:

- Ensure that areas of SG technical responsibility have been clearly assigned either to a single organization with coordination authority or to multiple organizations with an effective coordination and interface mechanism.
- Support establishment of appropriate levels of training in NDE and structural mechanics for SG engineers to enable them to evaluate inspection results associated with unusual conditions and assess their impact on tube integrity.



- Provide resources for NDE training and experience of selected licensee personnel with the goal of achieving formal Level III certification and Qualified Data Analyst (QDA) qualification.
- Ensure plant modification and configuration control programs have the flexibility to accommodate SG repair activities such as plugging, sleeving, and tube pulls.
- Provide independent and knowledgeable auditing organizations to periodically verify compliance with SG examination requirements.
- Designate a licensee individual, preferably a QDA, to be responsible for NDE within the SG program. This individual is knowledgeable in SG tubing NDE and provides ongoing, routine direction for the inspection element of the SG program. The expectation is reasonable continuity in this designation.

2.3 Examination and Engineering Responsibilities

The responsibilities of the SG examination and engineering function include planning, directing, and evaluating SG examination activities. This includes technical oversight of all contracted work associated with SG examinations and repairs. Engineering personnel provide a central control of SG examination activities and participate on the licensee's SG management committee to ensure integration of examination requirements. Examination and engineering responsibilities include the following:

- Maintain open communication with the plant's NSSS engineering staff regarding potential degradation mechanisms.
- Ensure that each organization (for example, licensee or vendor) that conducts SG NDE inspections has a written program for conducting examinations.
- Develop eddy current examination specifications (detailed work scope) for inclusion in the bid package.
- Develop written guidelines and a documented plant-specific performance demonstration program for all data analysts.
- Prepare for supplemental examination methods and techniques to be utilized for indication resolution and characterization.
- Develop a plan to ensure that examination results are dispositioned in accordance with established criteria.
- Prepare for tube plugging, stabilization, or repair, if applicable, including rerolling or sleeving. Ensure that tube repair methods have been thoroughly evaluated prior to generation of a repair list.
- Prepare for tube pulls and/or in situ pressure testing if condition monitoring or operational assessments so dictate.
- Establish and maintain a database management system to track inspection results and repairs.


- Maintain or have access to modification records, drawings, and the SG technical manual.
- Maintain management involvement by keeping them informed and seeking their endorsement.
- Communicate information with other responsible individuals involved with the plant's SG program.
- Following each examination, develop a final report that documents the results.
- Communicate examination results to other plants, vendors, and EPRI.
- Attend eddy current NDE training programs to enhance awareness of the significant aspects of examination techniques and processes.
- Prepare for supplemental examination methods and techniques to be utilized for tubes surrounding repair activities that could damage in-service tubes.


3 EXAMINATION REQUIREMENTS

3.1 Introduction

This section outlines pre-service inspection (PSI) and ISI requirements, and sample plan expansions, for SG tubing fabricated from Alloy 600 mill annealed (MA), Alloy 600 thermally treated (TT), Alloy 690 TT, and Alloy 800 materials. Additional inspections may be required based on the degradation and/or tube integrity assessments. It is recommended that the requirements contained within this section for Alloy 600 TT be applied to Alloy 800. Extended inspection periods for Alloy 600 TT and Alloy 690 TT are predicated on the premise that these materials will not experience in-service corrosion degradation within the first inspection period. Eddy current inspections shall be performed using techniques selected in accordance with Section 6 for degradation mechanism(s) identified as existing or potential in the degradation assessment (DA).
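The sample-size values tabulated in references such as Table A-1 (a 0.90 level of confidence that at least one affected tube is included in the sample) follow from hypergeometric sampling without replacement. As an illustrative sketch only, not part of this guideline and not a substitute for the tabulated values, the minimum random sample size for a given tube population and assumed number of affected tubes could be computed as follows (the function name and example numbers are hypothetical):

```python
from math import comb

def min_sample_size(total_tubes: int, affected: int, confidence: float = 0.90) -> int:
    """Smallest random sample size n such that the probability of including
    at least one of `affected` tubes out of `total_tubes` is >= confidence.

    Uses the hypergeometric miss probability for sampling without replacement:
        P(miss all) = C(N - m, n) / C(N, n)
    so P(at least one) = 1 - C(N - m, n) / C(N, n).
    """
    for n in range(1, total_tubes + 1):
        # comb(k, n) is 0 when n > k, so p_miss becomes 0 once the
        # sample is larger than the unaffected population
        p_miss = comb(total_tubes - affected, n) / comb(total_tubes, n)
        if 1.0 - p_miss >= confidence:
            return n
    return total_tubes

# Hypothetical example: a 4000-tube SG with 10 tubes assumed affected
print(min_sample_size(4000, 10))
```

As the assumed number of affected tubes shrinks, the required sample size grows rapidly toward the full bundle, which is one reason 100% examinations are specified where degradation susceptibility is uncertain.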

3.2 SG PSI Requirements

3.2.1 Inspection of Tubes

The PSI shall be performed after tube installation for replacement SGs, or after field hydrostatic tests for new plants, but in both cases prior to initial power operation, to provide a definitive baseline record against which future in-service inspections can be compared. A full-length examination of 100% of the tubing in each SG shall be performed using techniques that are expected to be employed during future in-service inspections. This requirement should not be construed to inhibit the use of new technology or to imply that the techniques used during the PSI will always remain acceptable for future use (i.e., different techniques may be appropriate based on the DA). In addition, when only the bobbin coil is employed during the PSI, techniques capable of signal characterization are recommended on a representative sample of abnormal conditions (for example, tubesheet expansion anomalies, manufacturing burnish marks, unusual signals, dings) observed during the PSI to provide a basis for signal characterization and historical comparisons.



3.2.2 Inspection of Tube Sleeves

A PSI, over the full length of all sleeves, shall be performed prior to the SG being returned to service. The PSI shall also include a licensee review, in addition to vendor reviews, of the installation process parameters for each installed sleeve to ensure that the sleeve installation process was performed as intended.

3.2.3 Inspection of Tube Plugs

The PSI of tube plugs shall be performed after installation. The PSI shall verify that each plug was installed in the proper location and in accordance with pre-established acceptance criteria. Methods for location verification include (but are not limited to) visual examination and/or the use of an independent tube identification system. A licensee review (in addition to any required vendor reviews) of the pre-established acceptance criteria (for example, weld process parameters, torque traces for rolled plugs, expander travel distance for mechanical plugs) for each plug shall be performed prior to service. Refer to Section 6.9.2 for additional requirements for the PSI of welded plugs.

3.2.4 Inspection of Other Tube Repairs

For all other types of tube repairs, the PSI shall be performed after installation. As a minimum, a licensee review (in addition to any required vendor reviews) of pre-established acceptance criteria (for example, torque trace, hydraulic time-pressure curve) for each repair shall be performed prior to service.

3.3 SG In-Service Inspection (ISI) Requirements

Per the NEI 97-06 required implementation of TSTF-449, the following minimum inspection requirements apply:
Provisions for SG tube inspections. Periodic SG tube inspections shall be performed. The number and portions of the tubes inspected and methods of inspection shall be performed with the objective of detecting flaws of any type (e.g., volumetric flaws, axial and circumferential cracks) that may be present along the length of the tube, from the tube-to-tubesheet weld at the tube inlet to the tube-to-tubesheet weld at the tube outlet, and that may satisfy the applicable tube repair criteria. The tube-to-tubesheet weld is not part of the tube. In addition to meeting the requirements of d.1, d.2, and d.3 below, the inspection scope, inspection methods, and inspection intervals shall be such as to ensure that SG tube integrity is maintained until the next SG inspection. An assessment of degradation shall be performed to determine the type and location of flaws to which the tubes may be susceptible and, based on this assessment, to determine which inspection methods need to be employed and at what locations.



Reviewer's Note: Plants are to include the appropriate frequency (i.e., select the appropriate Item 2) for their SG design. The first Item 2 is applicable to SGs with Alloy 600 mill annealed tubing. The second Item 2 is applicable to SGs with Alloy 600 thermally treated tubing. The third Item 2 is applicable to SGs with Alloy 690 thermally treated tubing.

d.1. Inspect 100% of the tubes in each SG during the first refueling outage following SG replacement.

[d.2. Inspect 100% of the tubes at sequential periods of 60 effective full power months. The first sequential period shall be considered to begin after the first inservice inspection of the SGs. No SG shall operate for more than 24 effective full power months or one refueling outage (whichever is less) without being inspected.]

[d.2. Inspect 100% of the tubes at sequential periods of 120, 90, and, thereafter, 60 effective full power months. The first sequential period shall be considered to begin after the first inservice inspection of the SGs. In addition, inspect 50% of the tubes by the refueling outage nearest the midpoint of the period and the remaining 50% by the refueling outage nearest the end of the period. No SG shall operate for more than 48 effective full power months or two refueling outages (whichever is less) without being inspected.]

[d.2. Inspect 100% of the tubes at sequential periods of 144, 108, 72, and, thereafter, 60 effective full power months. The first sequential period shall be considered to begin after the first inservice inspection of the SGs. In addition, inspect 50% of the tubes by the refueling outage nearest the midpoint of the period and the remaining 50% by the refueling outage nearest the end of the period. No SG shall operate for more than 72 effective full power months or three refueling outages (whichever is less) without being inspected.]

d.3. If crack indications are found in any SG tube, then the next inspection for each SG for the degradation mechanism that caused the crack indication shall not exceed 24 effective full power months or one refueling outage (whichever is less). If definitive information, such as from examination of a pulled tube, diagnostic non-destructive testing, or engineering evaluation, indicates that a crack-like indication is not associated with a crack(s), then the indication need not be treated as a crack.

Initial examination scope and sample plan requirements are provided in Section 3.6, requirements for the classification of sample plan results are provided in Section 3.7, and requirements for sample plan expansions are provided in Section 3.8.



3.4 First ISI

3.4.1 First ISI of Tubes

The first ISI establishes the operational baseline to determine if degradation has initiated, whether there are differences in SG behavior, and for future historical comparisons. As a minimum, a full-length bobbin coil examination (or a technique(s) that is expected to be employed during future inservice inspections) of 100% of the tubing in each SG shall be performed. In addition, when only the bobbin coil is employed during the first ISI, techniques capable of signal characterization are recommended on a representative sample of abnormal conditions (for example, tubesheet expansion anomalies, manufacturing burnish marks, unusual signals, dings) observed during the inspection.

3.4.2 First ISI of Sleeves

The first ISI establishes the operational baseline to determine if degradation has initiated, to establish potential differences between the PSI and the first ISI, and for future historical comparisons; therefore, all sleeves shall be examined after their first cycle of operation.

3.4.3 First ISI Examination of Plugs

A visual inspection of 100% of the installed plugs shall be performed in accordance with Section 6.9 at the next scheduled primary side inspection of the applicable SG.

3.4.4 First ISI of Other Tube Repairs

The first ISI establishes the operational baseline to determine if degradation has initiated, to provide a basis for signal characterization (when applicable), and to establish potential differences between the baseline and the first ISI for future historical comparisons (when applicable). Therefore, all other types of tube repairs shall be inspected after their first cycle of operation.

3.5 Subsequent ISIs

3.5.1 Subsequent Inspections of Tubing

Subsequent inspection guidance is provided in Sections 3.3, 3.6, 3.7, and 3.8. Further adjustments to the inspection scope or periodicity may be warranted based upon the DA, condition monitoring, and/or operational assessment conducted in accordance with the EPRI Steam Generator Integrity Assessment Guidelines [8].



3.5.2 Subsequent Inspection of Sleeves

For subsequent inspection guidance, apply the applicable tubing requirement provided in Sections 3.3 (material specific), 3.6, 3.7, and 3.8. The parent tube pressure boundary behind the sleeve shall be inspected to the parent tube requirement.

3.5.3 Subsequent Inspection of Plugs

Alloy 600 Mill Annealed Plug Material - During subsequent examinations, all plugs in all SGs shall be examined at the end of each fuel cycle. When plug designs preclude a volumetric examination, all installed plugs shall be examined visually in accordance with Section 6.

Alloy 600 TT and 690 TT Plug Material - Visual examination shall be performed on all installed plugs in accordance with Section 6.9 when the SG is opened for scheduled primary side examinations.

Leaking or defective plugs shall be addressed in the licensee's corrective action program.

3.5.4 Subsequent Inspection of Other Repairs

Alloy 600 Mill Annealed Material - During subsequent examinations, all other repairs in all SGs shall be examined at the end of each fuel cycle.

Alloy 600 TT and 690 TT Materials - For subsequent inspection guidance, apply the tubing requirements provided in Sections 3.3 (material specific), 3.6, 3.7, and 3.8. If an inspection is required to be performed on tubes for reasons unrelated to the repaired sections of the tubes, then the repair(s) may be considered a separate population.

3.6 Tube Examination Scope and Sampling Plans

a. Planning the SG examination scope begins with establishing sampling plans for each existing or potential degradation mechanism in accordance with the requirements in Section 3.3. The examination scope may include several discrete pre-defined sample plans based on the degradation mechanisms, the examination techniques to be used, and the regions of the tube to be inspected. For example, a full-length bobbin coil inspection may be one sample plan, while rotating coil examinations may include three discrete sample plans: (a) hot leg expansion transitions, (b) low-row U-bends, and (c) dents and dings. The same tube(s) may be selected for multiple sample plans. For each sampling plan in the examination scope, a minimum sample size of 20% shall be examined. The basis for the minimum sample size is provided in Appendix A. Sampling serves to monitor the general condition of the SG by detecting the onset of new degradation or the recurrence of previously experienced degradation. The classification of results in accordance with Section 3.7 for each discrete sample plan drives expansion in accordance with Section 3.8 independently, using the technique that detected the degradation and applied to the region affected.



b. Sampling shall be performed such that all tubing is examined within the periods established in Section 3.3. Tube selection within a sample plan shall be governed by the following requirements:

1. The tubes are selected on a random or systematic basis and evenly distributed across the tube bundle to the extent practical (e.g., As Low As Reasonably Achievable (ALARA) considerations).

2. All inservice tubes that have prior indications of degradation and/or historical possible loose parts (PLP) are examined through the area of interest when the applicable SGs are open for examination.

3. Peripheral tubes, including tubes adjacent to no-tube lane regions, are included or added to a sample plan or considered a separate sample plan when there is reason to expect that loose parts are present or were introduced into the SG secondary side. A secondary-side foreign object search and retrieval (FOSAR) examination may be used to meet this requirement.

c. When cracking and volumetric indications occur in the same region, it is recognized that the bobbin coil technique is not capable of distinguishing between the two mechanisms. Therefore, upon initial detection of cracking, the licensee shall inspect 100% of the current bobbin indications in that region with a technique capable of characterization to correctly characterize cracking and volumetric indications. During subsequent examinations, inspect all new indications and sample historical indications with a technique capable of characterization.
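The systematic, evenly distributed selection described in item b.1 above can be sketched as a small helper. This is an illustrative aid only; `systematic_sample` is a hypothetical function, not part of the guideline, that draws an evenly spread sample covering at least the required fraction of a sample plan's tube list.

```python
import math

def systematic_sample(tube_ids, fraction=0.20):
    """Evenly distributed systematic sample covering at least `fraction`
    of the tubes in a sample plan (illustrative sketch only)."""
    n_required = math.ceil(len(tube_ids) * fraction)
    if n_required == 0:
        return []
    # A fractional stride spreads the selections across the whole bundle.
    step = len(tube_ids) / n_required
    return [tube_ids[int(i * step)] for i in range(n_required)]

# Example: a 20% sample of a hypothetical 100-tube plan (row/column ids)
# selects every fifth tube across the bundle.
plan = [f"R{r}C{c}" for r in range(10) for c in range(10)]
sample = systematic_sample(plan)
```

In practice, tube selection must also honor items b.2 and b.3 (prior indications and peripheral tubes), which are added to or layered on top of any random or systematic base sample.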

3.7 Classification of Sample Plan Results

The results of each sample plan shall be classified into one of the following categories by detection technique and location/region per SG examined. The sample plan result categories listed in this section serve as the basis for expanding the examinations beyond the initial sample, as discussed in the next section.

a. C-1: Less than 5% of the total tubes inspected are degraded tubes and none of the inspected tubes are defective.

b. C-2: Between 5% and 10% of the total tubes inspected are degraded tubes, or one or more tubes, but not more than 1% of the total tubes inspected, are defective.

c. C-3: More than 10% of the total tubes inspected are degraded tubes or more than 1% of the inspected tubes are defective.

Note: In all inspections, previously degraded tubes must exhibit significant (>10%) further wall penetration or growth greater than 25% of the repair limit to be included in the above percentage calculations. Example: When a previously degraded tube, 39% through-wall (TW), grows to 49% TW, exceeding the repair limit of 40% TW, it need not be included in the above calculations as either a degraded or defective tube. Growth rates are calculated on a per-fuel-cycle basis.
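The category definitions above reduce to simple threshold checks. The following sketch (a hypothetical helper, not from the guideline) assumes the degraded and defective counts passed in already reflect the growth-screening note, i.e., previously degraded tubes without significant further growth have been excluded before counting.

```python
def classify_sample_plan(n_inspected, n_degraded, n_defective):
    """Return the Section 3.7 category for one sample plan.

    n_degraded / n_defective are counts after applying the growth
    screening note (previously degraded tubes are excluded unless they
    grew >10% TW or more than 25% of the repair limit per fuel cycle).
    """
    pct_degraded = 100.0 * n_degraded / n_inspected
    pct_defective = 100.0 * n_defective / n_inspected
    if pct_degraded > 10.0 or pct_defective > 1.0:
        return "C-3"  # heavy degradation, or more than 1% defective
    if pct_degraded >= 5.0 or n_defective >= 1:
        return "C-2"  # moderate degradation, or any defective tube
    return "C-1"      # <5% degraded and no defective tubes
```

For example, 60 degraded tubes out of 1,000 inspected (6%) with no defective tubes yields C-2, while a single defective tube out of 1,000 also yields C-2 because any defective tube precludes C-1.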



3.8 Expansion of the Examination

3.8.1 Expansion Requirements for Sample Plans

Based on the classification results for each sample plan examined, additional inspections may be required. Sample plan expansions shall be performed in accordance with Table 3-1 and Table 3-2.

3.8.2 Expansion Requirements for Foreign Objects and Possible Loose Parts

Requirements are provided in Chapter 10 of the EPRI Steam Generator Integrity Assessment Guidelines. The expansion requirements of Table 3-1 do not apply.


Table 3-1
Expansion of Sample Plans for Non-Cracks (Thinning and Wear): Expansion of a Sample Plan or Expansion Into an Unscheduled SG

Initial Results - Action Required:
- C-1: None.
- C-2: Inspect an additional 20% of all remaining tubes or tube sections in the affected sample plan in this SG; proceed to the first expansion.
- C-3: Inspect all remaining tubes or tube sections in the affected sample plan in this SG and a 20% sample of the affected sample plan in each unscheduled SG.

First Expansion Results - Action Required:
- C-1: None.
- C-2: Inspect an additional 20% of all remaining tubes or tube sections in the affected sample plan in this SG; proceed to the second expansion.
- C-3: Inspect all remaining tubes or tube sections in the affected sample plan in this SG and a 20% sample of the affected sample plan in each unscheduled SG.

Second Expansion Results - Action Required:
- C-1: None.
- C-2: Inspect all remaining tubes or tube sections in the affected sample plan in this SG.
- C-3: Inspect all remaining tubes or tube sections in the affected sample plan in this SG and a 20% sample of the affected sample plan in each unscheduled SG.
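Under the author's reading of Table 3-1, the expansion sequence can be sketched as a lookup that walks the stage results (initial, first expansion, second expansion). `table_3_1_actions` is a hypothetical helper, not part of the guideline, and the stage-to-action mapping reflects the reconstruction above rather than an authoritative statement of the table.

```python
SAMPLE_20 = ("inspect an additional 20% of remaining tubes in the "
             "affected sample plan in this SG")
ALL_THIS_SG = ("inspect all remaining tubes in the affected sample "
               "plan in this SG")
ALL_PLUS_UNSCHEDULED = (ALL_THIS_SG + " and a 20% sample of the "
                        "affected sample plan in each unscheduled SG")

def table_3_1_actions(stage_results):
    """Actions for a non-crack sample plan, given classification results
    at successive stages: [initial, first expansion, second expansion].

    C-1 ends the sequence with no further action; C-3 at any stage
    requires full inspection plus unscheduled-SG sampling.
    """
    actions = []
    for stage, result in enumerate(stage_results):
        if result == "C-1":
            actions.append("none")
            break
        if result == "C-3":
            actions.append(ALL_PLUS_UNSCHEDULED)
            break
        # C-2: sample 20% more, except at the second expansion, where
        # all remaining tubes in this SG are inspected.
        actions.append(SAMPLE_20 if stage < 2 else ALL_THIS_SG)
    return actions
```

For example, a C-2 initial result followed by a C-1 first-expansion result requires only the initial 20% expansion, whereas three consecutive C-2 results escalate to inspecting all remaining tubes in the affected sample plan.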


Table 3-2
Expansion of Sample Plans for Cracks (PWSCC, Outside Diameter Stress-Corrosion Cracking [ODSCC], and Intergranular Attack [IGA]): Expansion of a Sample Plan or Expansion Into an Unscheduled SG

Results - Action Required:
- C-1: None.
- C-2 or C-3: Inspect all remaining tubes or tube sections in the affected sample plan in this SG and a 20% sample of the affected sample plan in each unscheduled SG.

3.9 Secondary Side Visual Examination

Guidance for this topic has been relocated to the EPRI Steam Generator Integrity Assessment Guidelines.

3.10 Forced Outage Guidance


Forced outage examinations shall be performed during plant shutdown subsequent to any of the following conditions:

a. SG primary-to-secondary leakage leading to plant shutdown
b. Seismic occurrence greater than the Operating Basis Earthquake
c. Loss-of-coolant accident requiring actuation of the engineered safeguards
d. Main steam line or feedwater line break

For unscheduled examinations following primary-to-secondary leakage leading to plant shutdown, follow the Operational Leakage guidance in the EPRI Steam Generator Integrity Assessment Guidelines. For the other unscheduled examinations identified above leading to plant shutdown, follow the guidance in Section 3.6.


4 SAMPLING REQUIREMENTS FOR PERFORMANCE-BASED EXAMINATIONS (DELETED)

This section has been deleted.


5 STEAM GENERATOR ASSESSMENTS

5.1 Introduction

The following assessments are important elements of a written comprehensive SG program:

- Degradation assessment
- Condition monitoring assessment
- Operational assessment
- Primary-to-secondary leakage assessment
- Self-assessment

Detailed requirements are provided in the EPRI Steam Generator Integrity Assessment Guidelines and the NEI 97-06 Steam Generator Program Guidelines.


6 SYSTEM PERFORMANCE

6.1 Introduction

This section provides requirements for technique performance, analysis performance, data management performance, data quality, human performance, verification, contractor oversight, and visual inspection. In combination, these are referred to as system performance, and each part adds value in ensuring the tube examination results are useful in condition monitoring and operational assessments. Under field conditions, system performance is influenced by the capabilities of the process, procedures, examination equipment, techniques, and personnel. Ultimately, the licensee is responsible for the quality and accuracy of the inspection, providing oversight, review, and approval of all processes and procedures being used, work activities, and the integrity of the final results. Ultrasonic testing is not generally used as a production examination technique, and therefore the specific requirements in this section do not apply to it. However, when ultrasonic testing is used as an input to tube integrity assessments, it is recommended that personnel and techniques be qualified in accordance with Appendices J and K and that the intent of this section's requirements be applied to the extent practical.

6.2 Technique Performance Requirements

Inspection of SG tubes shall be conducted using qualified techniques capable of detecting and characterizing the degradation mechanisms identified in the DA. The qualification of NDE techniques shall comply with the requirements specified in Appendix H or Appendix I. It is recommended that the DA consider the Examination Technique Specification Sheets (ETSSs) in effect six months prior to the examination. Incorporating new or revised ETSSs in effect for less than six months prior to the examination is optional. When implementing a change in technology, it is recommended that sufficient data also be collected for comparison purposes using the technique previously documented in the last condition monitoring and operational assessment. Sufficient data is defined as the amount of data required to establish a relationship between the techniques used in the past and present inspections. This bridging of technology is essential for maintaining consistent condition monitoring and operational assessments.



6.2.1 Site-Validated Techniques

The purpose of the site validation of examination techniques is to ensure that the detection and sizing capabilities developed in accordance with Appendix H or Appendix I are applicable to site-specific conditions for each damage mechanism identified in the DA. This shall be accomplished through a documented pre-inspection review of:

a. All essential variables on the qualified ETSS as compared to the site-specific ETSS to determine equivalency. This includes dent or ding analysis and reporting when addressed in the qualified ETSS (i.e., are the site dent or ding calibration and sizing criteria [voltage, channel, and normalization] equivalent to the ETSS?).

b. Site-specific signals (for existing degradation mechanisms only) compared to ETSS signals to determine if the degradation mechanisms are characterized correctly.

c. A qualified technique's tubing essential variables, to ensure that the application is consistent with site-specific SG conditions. This review establishes that the tubing extraneous test variables (for example, denting, deposits, tube geometry, noise) of the tubes in the qualification data set are comparable in voltage, phase, and signal characteristics to in-generator signals, or determines if the potential exists to degrade the probability of detection (POD).

It is recommended that the review document be prepared or reviewed by a qualified data analyst (QDA) and approved by the individual(s) responsible for SG tube integrity for use in the DA. If the review determines that a technique cannot be site-validated, then it is recommended that one of the following actions be taken:

a. Explore other techniques.

b. Develop or augment the ETSS with representative data (for example, pulled tubes, lab samples, flaw/noise injection).

c. Determine and document an alternate POD.

6.2.2 Diagnostic Techniques

Some techniques may be useful for diagnostic purposes to aid in the evaluation of a specific condition or degradation mechanism within a SG. Diagnostic techniques do not require specific Appendix H or Appendix I qualification. However, if non-qualified diagnostic techniques are to be used, it is recommended that the specific conditions being evaluated be mocked up to demonstrate the technique's capability prior to use. The results from a diagnostic technique may be used as input to the tube integrity assessments.



6.2.3 Calibration Requirements

Standardization around uniform calibration standard designs, signal voltage normalization, and minimum sample rate requirements facilitates meaningful comparisons between data produced at different plants, by different NDE vendors, using variations of qualified techniques.

6.2.3.1 Bobbin Probe Calibration Standards

ASME defines the bobbin probe calibration standard requirements.

6.2.3.2 Rotating and Array Probe Calibration Standards

a. The calibration standard design shall meet the following criteria:

1. Rotating and array probe calibration standards - electro-discharge machining (EDM) notches with a minimum length of 0.375 in. (9.525 mm) and a width of 0.005 in. (+0.001 in., -0.002 in.) (0.127 mm [+0.025 mm, -0.051 mm]):
   - 100% TW axial and circumferential EDM notches
   - 60% TW inside diameter (ID) and outside diameter (OD) axial and circumferential EDM notches
   - 40% TW ID and OD axial and circumferential EDM notches

2. Additional array probe calibration standard artifacts:
   - 30% TW OD circumferential groove, 360°, with a minimum width of 0.500 in. (12.700 mm) (+0.002 in. [+0.051 mm] tolerance on depth)
   - An artifact sufficient to establish channel ordering, such as a 30% TW OD spiral groove
   - 0.008 in. (+0.001 in., -0.002 in.) (0.203 mm [+0.025 mm, -0.051 mm]) radial variation (360°) for an axial distance sufficient to cover the entire array

b. Where paragraph a. above cannot be met, alternative notches may be used. Equivalency shall be established for voltage, phase, and span settings in relation to ETSS calibration requirements.

c. It is recommended that standards include a liftoff reference signal to facilitate evaluation of signals influenced by local profile changes.

d. Any additional requirements are specified in the plant-specific procedure or ETSS.

6.2.3.3 Other Calibration Standards

The design of special calibration standards required for other plant-specific ETSSs, which cover unique degradation mechanisms or repair processes such as plugs and sleeves, shall be defined in the plant-specific procedure or ETSS.


6.2.3.4 Voltage Normalization Requirements

A standardized method for voltage normalization of data provides the industry with comparable results, which can be used in a database to perform comparisons and trending, or to help justify result transients or anomalies. The following voltage normalization values shall supersede ETSS values, provided the normalization does not invalidate voltage-based sizing techniques or in situ pressure testing screening criteria.

a. For the bobbin coil technique, the voltage normalization shall be accomplished as follows:

1. Using the four 20% TW flat-bottom holes located on the ASME standard, set the peak-to-peak voltage to 4 volts for each channel and to 2.75 volts for the prime/quarter mix channel, or

2. When using a repair criterion based on the correlation of a technique parameter to a structural parameter (for example, voltage versus burst pressure) normalized to a single reference standard, the transfer standard method of voltage correction, as defined by the alternate repair criteria, may be applied for NDE parameter normalization in the field to maintain consistency to that correlation, or

3. Normalizing all standards to a single site-reference standard (the transfer standard method of correction, similar to the alternate repair criteria method) is acceptable, provided the single site-reference standard is normalized to 4 volts peak-to-peak on the four 20% TW flat-bottom holes to develop the transfer voltage.

b. For the rotating probe technique, the voltage values shall be set to 20 volts peak-to-peak on the appropriate 100% TW notch, at the maximum amplitude response near the center of the flaw, individually for each channel. "Appropriate notch" means a circumferential notch for a circumferentially sensitive channel and an axial notch for axially sensitive and nondirectional channels.

c. For the array probe technique, the voltage value shall be set to 5 volts peak-to-peak using the 30% TW 360-degree groove, individually for each channel.

Acceptable exceptions to the voltage normalizations above are:

- To maintain consistency with past practice or for another specific purpose (for example, AVB wear sizing, sludge height measurement, profilometry), alternate voltage normalization methodologies and values can be applied.

- If the 100% TW EDM notch signal is saturated, it is acceptable to normalize on an alternate notch with a voltage setting that correlates with the 100% TW notch.

The need may arise for changing voltage normalization practices to comply with those identified in this document. A correction factor for previous examination results may need to be developed and applied to historical data.
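In practice, normalization amounts to applying a per-channel gain so the reference artifact reads the required peak-to-peak voltage; every other amplitude on that channel is rescaled by the same factor. A minimal sketch (the function names are the author's, not from the guideline):

```python
def normalization_gain(measured_vpp, target_vpp):
    """Gain applied to a channel so its reference signal reads the
    required peak-to-peak value, e.g., target_vpp = 4.0 V for bobbin
    (four 20% TW flat-bottom holes), 20.0 V for rotating probes
    (100% TW notch), or 5.0 V for array probes (30% TW groove)."""
    return target_vpp / measured_vpp

def apply_gain(voltages_vpp, gain):
    """Rescale every measured amplitude on the channel by the gain."""
    return [v * gain for v in voltages_vpp]

# Example: the 20% TW holes measured 3.2 V peak-to-peak before
# normalization, so a gain of 1.25 brings them to the required 4.0 V.
gain = normalization_gain(3.2, 4.0)
calibrated = apply_gain([3.2, 1.6, 0.8], gain)
```

This is also why a correction factor may be needed for historical data: if a past inspection used a different normalization value, comparing amplitudes requires rescaling one data set by the ratio of the two normalizations.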



6.2.3.5 Sample Rate Requirements

For bobbin probe examinations, instrument digitization rate and pull speed shall be adjusted to obtain a minimum sample rate of 30 samples/inch (30 samples/25.4 mm) as established in the ASME code. For rotating and array probe techniques, the minimum sample rates shall be in accordance with the ETSS or as site validated.
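Because axial sample density equals the instrument digitization rate divided by the probe pull speed, the 30 samples/inch requirement bounds the pull speed for a given instrument setting. A small illustrative calculation (the helper names are the author's, not from the ASME code):

```python
def samples_per_inch(digitization_rate_hz, pull_speed_ips):
    """Axial sample density achieved at a given pull speed (inch/s)."""
    return digitization_rate_hz / pull_speed_ips

def max_pull_speed(digitization_rate_hz, min_density=30.0):
    """Fastest pull speed (inch/s) that still meets the minimum
    30 samples/inch bobbin requirement."""
    return digitization_rate_hz / min_density

# Example: a hypothetical 600 samples/s digitization rate limits the
# pull speed to 20 inch/s; pulling slower only increases the density.
```

The same relationship explains why rotating and array probe sample rates are controlled through the ETSS: their effective density depends on additional motion variables (rotation rate, axial pitch), not on pull speed alone.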

6.3 Analysis Performance Requirements

This section provides requirements for data analyst qualification, data analysis, site-specific analysis guidelines, independent analysis teams, automated data analysis, resolution, and independent QDA oversight. As such, there are many individual elements to a successful SG inspection. It is recommended that the licensee monitor each of these areas individually and collectively to ascertain whether corrections or adjustments are necessary.

6.3.1 QDA

Appendix G specifies personnel training and qualification requirements for NDE personnel who analyze eddy current data for PWR SG tubing. All analysis personnel shall be QDAs in accordance with Appendix G.

6.3.2 Site-Specific Performance Demonstration (SSPD) Requirements

Appendix G specifies personnel training and qualification requirements for a SSPD. All personnel involved in the analysis of degradation shall successfully complete an SSPD for the current outage in accordance with Appendix G. It is recommended that these examinations be administered immediately prior to analysis activities.

6.3.3 Data Analysis

The use of multiple-frequency instrumentation, with its numerous possible analysis channels, and the diversity of flaw types that might be encountered during a SG inspection have necessitated a highly structured approach to data analysis. Such an approach is necessary to ensure that the most appropriate analysis practices are used for a given flaw type and that data are analyzed in a consistent and reliable manner. Important elements of the structured approach to data analysis include:

- Written site-specific analysis guidelines
- Independent analysis teams
- Automated analysis systems (if applicable)



- Resolution
- IQDA oversight
- Data analysis feedback

6.3.3.1 Site-Specific Analysis Guidelines

The purposes of analysis guidelines are to provide structure to the analysis process and to promote consistency and reliability in performing data analysis. The site-specific analysis guidelines shall be based on the ETSS for all techniques planned for use. However, site-specific data analysis guidelines shall be prepared in a manner that will encourage an analyst to identify possible flaw signals where new conditions are encountered or in situations where the analyst is uncertain about the correct classification of a signal. In these circumstances, the analyst should alert the lead analyst about the possible need for analysis guideline modification. It is recommended that the material provided in Table 6-1 be included within the site-specific data analysis guidelines and/or site-specific ETSSs. Additional process controls and analysis guidance are to be established as follows:

- It is recommended that the roles and responsibilities of all personnel (for example, engineering, analysis, acquisition, resolution, system administration, data management) involved in the inspection be documented and communicated.

- The licensee shall implement a process to track changes made in the instructions, guidelines, and/or procedures used by acquisition operators, analysts, and data managers to document that program changes have been acknowledged and understood.

- When historical reviews are performed to determine if supplemental testing is warranted, the licensee shall define what constitutes change in terms of voltage and/or phase angle from the Lissajous signal in the current inspection as compared to the same Lissajous signal in the first ISI recorded on optical disk. During the first ISI, the history review comparison is to the PSI data. When cracking mechanisms are present, historical review requires reviewing the actual raw or processed eddy current data; review of database report entries is not acceptable. When cracking mechanisms are not present, review of database report entries is acceptable.

- Additional precautionary guidance shall be provided to data analysts on reporting indications that read zero percent TW if the analyst believes the indication is real degradation being influenced, or not, by extraneous test variables.


Table 6-1
Site-Specific Data Analysis Guidelines and/or Site-Specific ETSSs Contents

- Scope/Purpose: Defines the scope and purpose of the guidelines.
- Applicability: Defines the method and technique for which the analysis guidelines are applicable.
- References: Specifies all referenced documents.
- S/G Design: Specifies S/G design configuration.
- Definitions: Defines all three-letter codes.
- Prerequisites: Specifies any additional site-specific analyst training requirements.
- Responsibilities: Specifies the roles and responsibilities of all personnel.
- Personnel Qualifications: Defines minimum analyst qualification and certification requirements, and SSPD requirements.
- Calibration: Establishes mixes, process channels, voltage normalization settings, rotation and span settings, and calibration curve generation.
- Flaw Identification Criteria: Defines normal and exceptional signal behavior characteristics required to be identified as flaws.
- Reporting Requirements: Provides test extent and minimum reporting levels for tube wall degradation and damage precursors. Appropriate three-letter codes are to be used to convey information about observed degradation indications and SG conditions.
- Evaluation: Determines analysis strip chart settings and Lissajous viewing window channel requirements for initial data review. Specifies channels to be used for evaluation in different regions of the tube and the evaluation method; how phase angles are to be assigned, how signal amplitudes are to be measured, and how lengths are determined; how location is to be determined; data quality requirements; and precautions for complex signals and the potential for mixed-mode indications.
- Recording: Defines requirements for which signals are to be recorded, hard copy printouts for reportable indications, and information to be included in the final report.
- Resolution: Specifies discrepancy conditions and resolution procedures.
- History Review: Specifies historical review process requirements.
- Data Modifications: Specifies the post-resolution data modification process that may be required to correct discrepancies found by data management, tube integrity engineering, or others.


6.3.3.2 Independent Analysis Teams

Each of the signals encountered during the SG inspection needs to be recognized and correctly classified. A single missed or incorrectly classified defect indication can lead to a plant shutdown or a tube rupture event. To reduce the likelihood of these consequences, analysis of tube degradation data shall be completed by two independent analysis teams (designated as primary and secondary). To maintain independence, the analyses shall be done separately (without knowledge of the other team's results) and not as a joint effort. It is recommended that the two teams be from different vendor organizations. Reporting of anomalous signals not caused by degradation (such as bulges, dents, dings, possible loose parts, tube support anomalies, and sludge) need not be subjected to two-party analysis. Detailed crack profiling/sizing for tube integrity analysis may be performed by a single analyst, with an independent technical review. The independent technical review shall be conducted by a QDA and, as a minimum, include review of the analysis setup, maximum depth, maximum voltage, length, and orientation, when provided.

6.3.3.3 Automated Data Analysis

a. Automated analysis of eddy current data is achieved by systems incorporating software that allows the processing of eddy current data and its presentation in the form of an analyzed output. These systems are typically applied in one of the following ways:
   1. Detection only mode: The software detects signals and the analyst applies manual analysis to provide results.
   2. Interactive mode: The software detects and analyzes the signals, and the analyst reviews the results identified by the software and compares them with his/her own analysis of the signals before the results are accepted, modified, or rejected.
   3. Fully automated mode: The software detects and analyzes the signals, and the analysis results are accepted with no human intervention.
b. Application of automated analysis systems shall be limited to the mode that was qualified.
c. Automated analysis systems used for degradation analysis are to be qualified through performance demonstration as follows:
   1. The initial generic qualification shall be demonstrated on the applicable EPRI Automated Analysis Performance Demonstration Database (AAPDD), which validates detection and sizing/characterization algorithms for each applicable known damage mechanism found in Table G-1. Different algorithms may be required based on variations in AAPDD essential variables (for example, instrument types, drive voltages, tubing sizes, coil excitation frequencies).
   2. It is recommended to use the initial generic performance algorithms as a source of information for help in establishing the site-specific algorithms.


   3. The site-specific performance capability of an automated analysis system shall be demonstrated and documented in accordance with the SSPD practical examination requirements in Appendix G.
   4. Where appropriate, the SSPD-qualified algorithms may be adjusted without requalification provided the adjustments are conservative or equivalent, and documented.
   5. When non-conservative adjustments are made to the qualified algorithms, re-qualification of the adjusted algorithms on the SSPD shall be performed.
   6. When adjustments are made to the qualified algorithms during an outage, an assessment of previously analyzed data from that outage shall be conducted to determine the need for reanalysis.
   7. A process shall be established to maintain control of the qualified algorithm revision(s).
   8. An analyst with experience in the automated system being utilized shall verify the following prior to each inspection:
      o Appropriate tube or region coverage exists for detection of degradation.
      o Equivalent or conservative algorithm rules exist when compared to the site-specific guidelines.
      o Damage mechanisms, for which the automated data analysis system is being used, are properly screened.
d. SSPD qualifications of automated analysis systems shall be demonstrated independent of human intervention. Human intervention is defined as an analyst operating the automated analysis system deleting, adding, or changing a result.
e. Key points to be observed in the use of dual automated analysis systems for degradation analysis are the following:
   1. Both teams may use automated analysis systems for detection provided they are independent systems. However, if the detection algorithms are the same, they are not considered independent, and at least one team shall analyze all data manually to ensure the detection algorithms are not missing degradation.
   2. Both teams may use automated analysis systems for sizing/characterization provided they are independent systems. However, at least one of the two analysis teams or the resolution team shall review all the analysis results manually to verify the sizing/characterization algorithm.

6.3.3.4 Resolution

A discrepancy resolution team (representing primary and secondary) shall review and resolve discrepancies between the results of the two independent analysis teams. It is important that the site-specific data analysis guidelines contain an appropriate delineation of analyst responsibilities, clear definitions of discrepancy conditions, and a well-defined process for discrepancy resolution.
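The two-team comparison that feeds resolution can be sketched as a simple set comparison of reported calls. This is an illustrative sketch only; the record fields and the exact-match rule (same tube, location, and code) are assumptions for the example, not a prescribed record format.

```python
# Illustrative sketch: screen primary vs. secondary team results for
# discrepancies that must go to the resolution team. Record fields and
# the exact-match rule are hypothetical assumptions for this example.
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Call:
    tube: str       # tube identifier, e.g., "R10C45" (hypothetical format)
    location: str   # axial location label, e.g., "TSP3 +0.20"
    code: str       # three-letter indication code

def screen_discrepancies(primary, secondary):
    """Return calls reported by only one of the two teams."""
    # Symmetric difference: any call not made identically by both teams
    # is a discrepancy for the resolution team to review.
    return sorted(set(primary) ^ set(secondary))

primary = [Call("R10C45", "TSP3 +0.20", "SAI"), Call("R2C8", "TSH +1.10", "VOL")]
secondary = [Call("R10C45", "TSP3 +0.20", "SAI")]
assert screen_discrepancies(primary, secondary) == [Call("R2C8", "TSH +1.10", "VOL")]
```

Matching on exact location is deliberately strict here; a site procedure would normally define a position tolerance for declaring two calls the same.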


An extremely important consideration is the deletion of a reported indication of degradation made by either of the two analysis teams. If the primary or secondary analyst reports degradation, including I codes as defined in Appendix F, then at least two resolution analysts (not from the same team) shall concur and consider review of prior history and/or other techniques before the analysis result is overruled. In addition, the resolution team shall consider feedback from the primary and secondary analysts as required in Section 6.7 and feedback from the IQDA. Administrative discrepancies may be corrected by a single resolution analyst.

6.3.3.5 IQDA Oversight

a. The licensee shall designate one or more IQDA(s), who is not part of the resolution team, to perform analysis oversight. It is recommended that the following primary duties be prioritized by the licensee:
   1. Review all repairable calls rejected by resolution.
   2. Randomly sample data to determine if the resolution analysts are resolving calls in a consistent and conservative manner and that calls are being dispositioned properly.
   3. Provide any necessary feedback to the resolution team (see Section 6.3.3.4).
   4. In the event that the two resolution analysts cannot agree, mediate to reach concurrence; otherwise the most conservative result shall prevail.
   5. Monitor the data analysis feedback process.
   6. Randomly sample NDD and NDF calls by production analysts.

6.3.3.6 Data Analysis Feedback

Each licensee shall ensure that a data analysis feedback process that monitors analyst performance is implemented. It is recommended that the following aspects be identified in the feedback process:
a. Overall process control requirements (i.e., who is responsible for ensuring the process is performed as intended, roles, responsibilities, etc.)
b. Missed indication review requirements
c. Overcall review requirements
d. End-of-job review requirements
e. Analyst comment resolution process
f. Auto analysis review requirements (when applicable)
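The missed-indication and overcall reviews above lend themselves to simple rate metrics. The sketch below is illustrative only; the outcome labels are assumptions, not terms defined in these guidelines.

```python
# Hypothetical sketch: summarize resolution outcomes so the feedback
# process can monitor analyst performance trends. Labels are assumed.
def feedback_metrics(outcomes):
    """outcomes: list of 'confirmed', 'missed', or 'overcall' dispositions."""
    n = len(outcomes)
    if n == 0:
        return {"missed_rate": 0.0, "overcall_rate": 0.0}
    return {
        "missed_rate": outcomes.count("missed") / n,
        "overcall_rate": outcomes.count("overcall") / n,
    }

m = feedback_metrics(["confirmed", "missed", "overcall", "confirmed"])
assert m == {"missed_rate": 0.25, "overcall_rate": 0.25}
```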


6.4 Data Management

Data management systems are used to manage the inspection, account for the examination scope, and provide accurate reporting of the results. The purpose of this section is to define the requirements for data management personnel and system process requirements.

6.4.1 Data Management Personnel Requirements

It is recommended that organizations performing data management provide training and experience for data managers in the following categories:
o Data management processes and practices as related to SG eddy current inspections.
o The operation of the data management software program(s) utilized.

6.4.2 System Process Requirements

Data management includes the initial inspection planning, sample plan expansion, uploading of analysis results, rescheduling, inspection progress, scope completion accountability, final closeout, and final archiving. Historically, some plants (typically those with Alloy 600 mill annealed tubing) have had complex inspection and expansion plans due to multiple degradation mechanisms at multiple locations that rely on multiple tube entries in each tube with different techniques. Due to this complexity, it is recommended that dual data management systems be utilized on these types of inspections to minimize the potential for human or software errors.

6.4.2.1 Inspection Planning and Sample Plan Expansion

a. The licensee shall provide the following site-specific information:
   1. DA including inspection scope
   2. Data analysis guidelines and ETSSs
   3. Data management requirements
b. Data management personnel shall develop tube inspection plans and subsequent sample plan expansions in a database management system in accordance with a. above.
c. The tube inspection plans and subsequent sample plan expansions shall be documented and provided to the licensee prior to implementation.
d. It is recommended that inspection plans be electronically transferable between one data management system and the data acquisition system to minimize human input errors.
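Item d above asks for electronically transferable plans. One possible (purely illustrative) representation is a JSON round trip between systems; the field names are assumptions, not a standard schema.

```python
# Illustrative sketch: serialize an inspection plan so it can move between
# the data management and data acquisition systems without manual re-entry.
# Field names ("tube", "extent", "probe", "etss") are hypothetical.
import json

def export_plan(entries):
    """entries: list of per-tube plan records."""
    return json.dumps({"inspection_plan": entries}, indent=2)

def import_plan(text):
    """Parse a plan produced by export_plan()."""
    return json.loads(text)["inspection_plan"]

entries = [{"tube": "R10C45", "extent": "full length", "probe": "bobbin", "etss": "ETSS-A"}]
assert import_plan(export_plan(entries)) == entries
```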


6.4.2.2 Uploading Analysis Results

a. The required analysis results shall be uploaded into the data management system after completion of such analysis.
b. The analysis results shall be subject to input quality checks to verify all analysis results fields are valid inputs consistent with site-specific requirements.
c. It is recommended that the analysis results be subject to quality checks to verify all historical results are addressed as required by site-specific requirements.

6.4.2.3 Rescheduling

The data management system shall be capable of rescheduling examinations for retests and supplemental testing.

6.4.2.4 Inspection Progress

The data management system shall be capable of tracking individual inspection plans and the entire inspection to its completion. This includes the ability to report on data acquisition progress and analysis progress. When dual data management systems are utilized, only the system that interfaces with the data acquisition system is required to report on data acquisition progress.

6.4.2.5 Scope Completion Accountability

The data management system shall be capable of accounting for the completeness of each scheduled examination, validated through queries and/or compiled software.

6.4.2.6 Final Closeout

a. Final closeout begins when the data manager has uploaded all completed analysis results into the data management system. Final closeout is the process data management personnel perform to ensure all required examinations are completed, the database results are accurate, and all site-specific requirements have been met.
b. In the absence of a licensee closeout procedure or checklist, the data manager shall provide a written closeout procedure or checklist to the licensee. Prior to entering the final closeout process, the data manager should log the starting date and time on the checklist.
c. The closeout checklist shall include the following queries and tasks as a minimum:
   1. Completion of all initial inspection plans.
   2. Completion of all retests and supplemental test plans.
   3. Completion of sample plan expansions as applicable.
   4. Proper final disposition of all tubes with addressable indication codes.


   5. Required addressing of previous history is complete.
   6. Positive identification (PID) requirements have been met.
   7. Comparison of data management records to analysis source records showing no discrepancies.
d. If two independent data management systems are used, it is recommended that the final databases from both systems be compared line-by-line electronically. Any discrepancies found between the two systems will be either resolved or documented and approved by the licensee if left unresolved.
e. The closeout process may uncover discrepancies that require resolution by additional data acquisition and/or analysis processes. Each addressed discrepancy shall be evaluated against all previously completed steps in the closeout process before the closeout process resumes at the step that discovered the discrepancy. When all discrepancies are resolved or approved and the closeout checklist is complete, the data manager should log the end date and time on the checklist and make official notification to the licensee that the inspection is complete.

6.4.2.7 Final Archiving

After closeout is complete, all the results in the analysis system shall be merged and linked back to the appropriate individual raw data files. It is recommended that a procedure be provided for the process of properly archiving the results and the raw data files, including quality checks that validate the integrity of the archived results.
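The recommended integrity checks on archived results could take many forms; one common approach, sketched here as an assumption rather than a requirement, is a checksum manifest linking each raw data file to a digest recorded at archive time.

```python
# Illustrative sketch: verify archived raw data files against a SHA-256
# manifest built at archive time. File names and layout are hypothetical.
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def build_manifest(files):
    """files: dict of file name -> raw bytes captured at archive time."""
    return {name: digest(blob) for name, blob in files.items()}

def verify_archive(files, manifest):
    """Return the names of files whose contents no longer match the manifest."""
    return sorted(name for name, blob in files.items()
                  if digest(blob) != manifest.get(name))

files = {"sg1_cal01.raw": b"\x01\x02", "sg1_cal02.raw": b"\x03\x04"}
manifest = build_manifest(files)
assert verify_archive(files, manifest) == []
files["sg1_cal02.raw"] = b"corrupted"
assert verify_archive(files, manifest) == ["sg1_cal02.raw"]
```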

6.5 Data Quality Requirements

The purpose of this section is to provide data quality requirements for SG eddy current examinations. Implementing the following requirements is expected to improve accuracy in detecting and sizing tube degradation. In addition, when automated processes are utilized, implementation of these requirements is expected to identify retests during the acquisition process rather than the analysis process. Data quality parameters are divided into four tables: generic (Table 6-2), bobbin (Table 6-3), rotating (Table 6-4), and array probe (Table 6-5) parameters. The tables provide a frequency, location, acceptance criteria, and corrective action for each of the listed quality parameters. The data quality parameters provided in this section shall be verified. While some of these data quality monitoring parameters may prove to be too restrictive, the basis for a reduced frequency or less stringent criteria shall be documented.


Table 6-2
Generic Data Quality Parameters

Quality Parameter: Summary verification
  Frequency: Once per calibration
  Location: Summary file
  Acceptance Criteria: Correct site, unit, SG, owner, identification, model, calibration standard(s), probe(s) type, manufacturer, length and serial number, motor unit length and serial number, cable length and type, procedure with revision level, instrument serial number, ETSS number with revision number, software version with revision level, operator with certification level, frequencies, mode of operation, date, scan direction (may be performed using tube header information), data collected from inlet/outlet, or other parameters essential to the documentation associated with this calibration set.
  Corrective Actions: Correct information, retest tubes with correct summary, or document errors through a corrective action program.

Quality Parameter: Tube identification number
  Frequency: Each tube
  Location: Tube
  Acceptance Criteria: The data file present reflects the correct tube identification with independent verification by an additional method.
  Corrective Actions: Retest tube with correct tube identification.

Quality Parameter: Extent tested
  Frequency: Each tube
  Location: Tube
  Acceptance Criteria: Planned beginning to planned end for each portion of tube or entire tube.
  Corrective Actions: Retest all or portion of tube to complete test.

Quality Parameter: Presence of an initial calibration standard data file
  Frequency: Once per calibration group
  Location: Data file
  Acceptance Criteria: Data file present at beginning of calibration group. Calibration standard encoded properly.
  Corrective Actions: Reject data, retest calibration standard, correct data file, or document error in a message.

Quality Parameter: Saturation
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: The eddy current signal(s) of channel(s) required by the site ETSS are within the dynamic range of the instrument.
  Corrective Actions: Reject data or document evaluation.

Quality Parameter: Presence of eddy current signals
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: Frequencies required by the site ETSS to be monitored are functional.
  Corrective Actions: Reject data.

Quality Parameter: Drive voltage
  Frequency: Once per data file
  Location: Data file
  Acceptance Criteria: Setting equal to site ETSS.
  Corrective Actions: Reject data.

Quality Parameter: Gain setting
  Frequency: Once per data file
  Location: Data file
  Acceptance Criteria: Setting equal to site ETSS.
  Corrective Actions: Reject data.
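The summary verification row of Table 6-2 amounts to a field-by-field comparison between the recorded summary file and the expected acquisition setup. A minimal sketch, with hypothetical field names:

```python
# Illustrative sketch of Table 6-2 summary verification: flag any summary
# field that disagrees with the expected acquisition setup. Field names
# are assumptions for the example.
def verify_summary(summary, expected):
    """Return the fields whose recorded values differ from the expected setup."""
    return sorted(k for k, v in expected.items() if summary.get(k) != v)

expected = {"site": "Plant X", "unit": "1", "sg": "A", "probe_type": "bobbin"}
summary = {"site": "Plant X", "unit": "1", "sg": "B", "probe_type": "bobbin"}
assert verify_summary(summary, expected) == ["sg"]
```

Any field returned would be corrected, the affected tubes retested, or the error documented through the corrective action program, per the table.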


Table 6-3
Bobbin Probe Data Quality Parameters

Quality Parameter: Measurement on reference defects
  Frequency: Once at end of calibration group
  Location: Calibration standard
  Acceptance Criteria: Phase changes on normalized reference signal (Section 6.2.3.4) ≤ 5°. Amplitude changes on normalized reference signal (Section 6.2.3.4) ≤ 20%. (If there is no end calibration due to equipment failure, no check is required.)
  Corrective Actions: Reject data or document evaluation of alternate acceptance criteria.

Quality Parameter: Sampling rate
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: ≥ site ETSS.
  Corrective Actions: Reject data if < site ETSS.

Quality Parameter: Presence of parasitic noise
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: ≤ 1 spike per 12 in. (304.8 mm) and on <2 frequencies required by the site ETSS for detection and/or detection and sizing. A spike is defined as a signal less than 5 data points in duration with an included angle less than 5° and an amplitude >1 volt.
  Corrective Actions: Reject data or document evaluation of alternate acceptance criteria.
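The spike definition and density limit in Table 6-3 can be checked mechanically. The sketch below assumes a particular reading of the criterion (spike density within limit on every required frequency, and spikes present on fewer than two required frequencies) and a simplified data representation; it is not a qualified implementation.

```python
# Illustrative sketch of the bobbin parasitic-noise check in Table 6-3.
# The interpretation of the acceptance criterion is an assumption.
def is_spike(duration_pts, included_angle_deg, amplitude_volts):
    """Qualifying spike: <5 data points, included angle <5 deg, >1 volt."""
    return duration_pts < 5 and included_angle_deg < 5 and amplitude_volts > 1

def noise_acceptable(spikes_per_freq, tested_length_in):
    """spikes_per_freq: dict of required frequency -> qualifying spike count."""
    limit = tested_length_in / 12.0          # at most 1 spike per 12 in.
    density_ok = all(n <= limit for n in spikes_per_freq.values())
    affected = sum(1 for n in spikes_per_freq.values() if n > 0)
    return density_ok and affected < 2       # spikes on <2 required frequencies

assert is_spike(3, 4.0, 1.5)
assert not is_spike(6, 4.0, 1.5)             # too long to qualify as a spike
assert noise_acceptable({"f1": 1, "f2": 0}, tested_length_in=24.0)
assert not noise_acceptable({"f1": 1, "f2": 1}, tested_length_in=24.0)
```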

Table 6-4
Rotating Probe Data Quality Parameters

Quality Parameter: Trigger pulse
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: Data terrain plots correctly.
  Corrective Actions: Reject data.

Quality Parameter: Measurement on reference defects
  Frequency: Once at end of calibration group
  Location: Calibration standard
  Acceptance Criteria: Phase changes on normalized reference signal (Section 6.2.3.4) ≤ 5°. Amplitude changes on normalized reference signal (Section 6.2.3.4) ≤ 20%. (If there is no end calibration due to equipment failure, no check is required.)
  Corrective Actions: Reject data or document evaluation of alternate acceptance criteria.

Quality Parameter: Axial sampling rate
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: ≥ site ETSS.
  Corrective Actions: Reject data if < site ETSS.

Quality Parameter: Presence of parasitic noise
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: ≤ 1 spike per 10 consecutive revolutions on <2 frequencies required by the site ETSS for detection and/or detection and sizing. A spike is defined as a signal less than 5 data points in duration with an included angle less than 5° and an amplitude >1 volt.
  Corrective Actions: Reject data or document evaluation of alternate acceptance criteria.

Quality Parameter: Circumferential sample rate
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: ≥ site ETSS.
  Corrective Actions: Reject data if < site ETSS.


Table 6-5
Array Probe Data Quality Parameters

Quality Parameter: Presence of all channels in the array
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: The presence of all channels in the array.
  Corrective Actions: Reject data.

Quality Parameter: Measurement on reference defects
  Frequency: Once at end of calibration group
  Location: Calibration standard
  Acceptance Criteria: Phase changes on normalized reference signal (Section 6.2.3.4) ≤ 5°. Amplitude changes on normalized reference signal (Section 6.2.3.4) ≤ 20%. (If there is no end calibration due to equipment failures, no check is required.)
  Corrective Actions: Reject data or document evaluation of alternate acceptance criteria.

Quality Parameter: Coil response and centering
  Frequency: Once per calibration
  Location: Calibration standard
  Acceptance Criteria: Amplitude ≤ 35% difference between maximum and minimum response to an OD groove.
  Corrective Actions: Reject data or document evaluation of alternate acceptance criteria.

Quality Parameter: Channel ordering
  Frequency: Once per calibration
  Location: Calibration standard
  Acceptance Criteria: Array channel representation in the correct circumferential and axial positions.
  Corrective Actions: Reject data.

Quality Parameter: Sampling rate
  Frequency: Continuous
  Location: Structure-to-structure
  Acceptance Criteria: ≥ site ETSS.
  Corrective Actions: Reject data if < site ETSS.

Quality Parameter: Presence of parasitic noise
  Frequency: Continuous
  Location: Area of interest
  Acceptance Criteria: ≤ 1 spike per 12 in. (304.8 mm) and on <2 frequencies required by the site ETSS for detection and/or detection and sizing. A spike is defined as a signal less than 5 data points in duration with an included angle less than 5° and an amplitude >1 volt.
  Corrective Actions: Reject data or document evaluation of alternate acceptance criteria.

6.5.1 Probe Quality Parameters

Probe manufacturing tolerances can have a profound effect on data quality. Continuous monitoring of probe manufacturing tolerances during data acquisition would not be practical. Therefore, utilities shall request that probe manufacturers verify the applicable critical probe manufacturing parameters that can affect data quality and require the manufacturer to provide a certificate of conformance to the applicable portion of Table 6-6.


Table 6-6
Probe Manufacturing Quality Parameters

Quality Parameter: Center frequency
  Coil Type(s): All coils
  Acceptance Criteria: The center frequency is within 10% of design.

Quality Parameter: Dissymmetry
  Coil Type(s): Differential coils
  Acceptance Criteria: Within 10% amplitude difference between the average of the two lobes of the 100% TW hole.

Quality Parameter: Coil winding alignment
  Coil Type(s): Rotating and array
  Acceptance Criteria: ≤ 10% secondary lobe on a 100% TW axial notch.

Quality Parameter: Coil winding perpendicularity
  Coil Type(s): Plus point coils
  Acceptance Criteria: 160° to 200° between 100% TW axial and 100% TW circumferential notches.

Quality Parameter: Probe coil circumferential position
  Coil Type(s): Multiple coil, solid body probes
  Acceptance Criteria: ≤ 10° between nominal and measured circumferential position.

Quality Parameter: Probe flux
  Coil Type(s): Bobbin (non-external reference)
  Acceptance Criteria: Main lobe ≥ 90% of total amplitude.

Quality Parameter: Probe coil axial position
  Coil Type(s): Multiple coil, solid body probes
  Acceptance Criteria: ≤ 0.1 in. (2.54 mm) between nominal and measured axial position.

Quality Parameter: 360° Sensing area
  Coil Type(s): Array probes
  Acceptance Criteria: A polar plot with measurements taken at ≤ 1° increments around the tube of a 60% through-wall 0.080 in. (2 mm) long EDM notch artifact oriented in the preferred direction to achieve the highest amplitude response from the sensor. Acceptance is a measure of the amplitude crossover at ≥ 70%.
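Several of the Table 6-6 tolerances are simple numeric bounds, so a certificate-of-conformance check can be scripted. The sketch below covers two rows only; the symmetric "within" reading of each tolerance is an assumption.

```python
# Illustrative sketch: check two Table 6-6 probe manufacturing parameters.
# The symmetric "within" interpretation of each tolerance is an assumption.
def center_frequency_ok(measured_khz, design_khz):
    """Center frequency within 10% of design (all coils)."""
    return abs(measured_khz - design_khz) <= 0.10 * design_khz

def axial_position_ok(measured_in, nominal_in):
    """Coil axial position within 0.1 in. of nominal (solid body probes)."""
    return abs(measured_in - nominal_in) <= 0.1

assert center_frequency_ok(430.0, 400.0)      # 7.5% deviation: acceptable
assert not center_frequency_ok(450.0, 400.0)  # 12.5% deviation: reject
assert axial_position_ok(1.05, 1.0)
assert not axial_position_ok(1.25, 1.0)
```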

6.6 Human Performance Requirements

a. Human performance is one of the most unpredictable elements in the inspection process. Human performance can be improved through the use of training, procedures, observation, barriers, self-checking, and communication.
b. Data acquisition is considered a skill-based activity. The process is governed by procedural controls, but the user is faced with numerous critical decisions involving plan implementation, tester configuration, probe selection, data quality, platform activities, repair processes, program closeout, etc. The user has numerous tools to facilitate the acquisition task but is still heavily reliant upon training and experience.
c. Data analysis is also considered a skill-based activity. The activity is highly practiced (i.e., thousands of signals are reviewed each outage). This review is typically performed based on memory through a repetitive process (though the analyst may and often does refer to the analysis guideline and ETSS). These skills are developed through training and experience.


d. The errors associated with skill-based performance are referred to as execution errors. These errors can be characterized as involving inadvertent slips and unintentional omissions; failing to recognize changes in task requirements, system responses, or conditions associated with the task due to some preoccupation; or being so intent on the task that pertinent information is not detected.
e. Effective human performance requires working conditions that maximize the potential contribution of human capabilities and minimize the potential negative impact of human limitations. Working conditions that have contributed to human errors include the working environment, personnel scheduling, mental and physical fatigue, extensive travel, and non-work-related distractions.
f. The work environment plays an important role in the worker's ability to operate at peak performance. Room temperature, background noise, music, conversation, ventilation, table heights, chairs, and lighting can all affect performance to some degree. While providing a proper working environment is no guarantee that error-free performance will be achieved, each of the factors cited above contributes cumulatively to performance degradation.
g. Fitness for duty onsite and at remote locations is an important consideration. Management is aware of the potential for negative fitness-for-duty factors, encourages individuals to determine if they are in the proper state of mind to perform their job satisfactorily, and modifies schedules and assignments to overcome potential negative effects. A behavior observation program continually monitors worker behavior that may affect human performance capabilities.
h. It is recommended that the licensee provide adequate time and an environment conducive to performance of the SG inspection.

6.7 Verification Responsibilities

a. SG inspection and repair processes are major program elements that encompass requirements from several sources, including the Code of Federal Regulations, Plant Technical Specifications, NEI 97-06, EPRI Guidelines, and the ASME Boiler and Pressure Vessel Code.
b. The licensee shall verify critical aspects of the SG inspection and tube repair or plugging processes before, during, or after the activity. The specific observations shall be documented in a procedure(s) and/or checklist(s). It is recommended that aspects include:
   1. Personnel, probe, and equipment certification compliance with purchase order requirements and this document
   2. SSPD results
   3. Acquisition/analysis system (software/hardware) setup in accordance with site ETSS
   4. Repair or plugging of tubes identified as exceeding the applicable acceptance criteria
   5. Proper performance of repair or plugging methods
   6. Repair or plugging installation process parameters


6.8 Contractor Oversight

Additional guidance beyond that found in NEI 97-06 is not needed.

6.9 Requirements for Visual Inspection

The requirements for performing visual inspections and verification activities need to meet the minimum standards specified below. Items that can affect the results include tubesheet moisture, ventilation, SG temperature, boron crystal growth, time elapsed since primary-side drain down, secondary-side conditions, and pressurization. Therefore, consideration should be given to when the visual inspection is performed. It is recommended that all visual examinations be permanently recorded on an appropriate medium to allow follow-up review by licensee engineering personnel, development of training material, sharing of information, and historical comparisons. If a leaking plug is discovered, an evaluation shall be conducted to determine the reason and the possible impact on other installed plugs.

6.9.1 Mechanical Plugs

The in-service visual examination of previously installed plugs shall be performed by an individual with specific training in this area. This inspection should verify the location and presence of existing in-service plugs and look for excessive boron buildup or wetness around the tube plug (see Figure D-2, PWR Steam Generator Tube Plug Assessment Document).

6.9.2 Welded Plugs

The pre-service visual examination of the final weld after installation shall be performed by an individual certified as a visual examination (VT-1) examiner in accordance with ASME Section XI. The in-service visual examination of previously installed plugs shall be performed by an individual with specific training in this area. This inspection should verify the location and presence of existing in-service plugs and look for evidence of plug leakage, which may include excessive boron buildup or wetness around the tube plug.


6.10 Standardized Analysis Report Format


A proposed industry standardized analysis report format is available at EPRIQ.com.

6.11 Standardized Raw Data Format


A proposed industry standardized raw data format is available at EPRIQ.com.


7 SUMMARY OF REQUIREMENTS

7.1 Introduction and Background

Requirement: None. Section: N/A.

7.2 Compliance Responsibilities

Requirement: None. Section: N/A.

7.3 Sampling Requirements for Prescriptive-Based Examinations


Section 3.1: Eddy current inspections shall be performed using techniques selected in accordance with Section 6 for degradation mechanism(s) identified as existing or potential in the degradation assessment (DA).

Section 3.2.1: The PSI shall be performed after tube installation for replacement SGs or after field hydrostatic tests for new plants, but in both cases prior to initial power operation, to provide a definitive baseline record against which future in-service inspections can be compared. A full-length examination on 100% of the tubing in each SG shall be performed using techniques that are expected to be employed during future in-service inspections. This requirement should not be construed to inhibit the use of new technology or to imply that the techniques used during the PSI will always remain acceptable for future use (i.e., different techniques may be appropriate based on the DA). In addition, when only the bobbin coil is employed during the PSI, techniques capable of signal characterization are recommended on a representative sample of abnormal conditions (for example, tubesheet expansion anomalies, manufacturing burnish marks, unusual signals, dings) observed during the PSI to provide a basis for signal characterization and historical comparisons.

Section 3.2.2: A PSI, over the full length of all sleeves, shall be performed prior to the SG being returned to service. The PSI shall also include a licensee review, in addition to vendor reviews, of the installation process parameters for each installed sleeve to ensure that the sleeve installation process was performed as intended.

Section 3.2.3: The PSI of tube plugs shall be performed after installation. The PSI shall verify each plug was installed in the proper location and in accordance with pre-established acceptance criteria. Methods for location verification include (but are not limited to) visual examination and/or the use of an independent tube identification system. A licensee review (in addition to any required vendor reviews) of the pre-established acceptance criteria (for example, weld process parameters, torque traces for rolled plugs, expander travel distance for mechanical plugs) for each plug shall be performed prior to service. Refer to Section 6.9.2 for additional requirements for the PSI of welded plugs.

Section 3.2.4: For all other types of tube repairs, the PSI shall be performed after installation. As a minimum, a licensee review (in addition to any required vendor reviews) of pre-established acceptance criteria (for example, torque trace, hydraulic time pressure curve) for each repair shall be performed prior to service.

Section 3.3: Provisions for SG tube inspections. Periodic SG tube inspections shall be performed. The number and portions of the tubes inspected and methods of inspection shall be performed with the objective of detecting flaws of any type (e.g., volumetric flaws, axial and circumferential cracks) that may be present along the length of the tube, from the tube-to-tubesheet weld at the tube inlet to the tube-to-tubesheet weld at the tube outlet, and that may satisfy the applicable tube repair criteria. The tube-to-tubesheet weld is not part of the tube. In addition to meeting the requirements of d.1, d.2, and d.3 below, the inspection scope, inspection methods, and inspection intervals shall be such as to ensure that SG tube integrity is maintained until the next SG inspection. An assessment of degradation shall be performed to determine the type and location of flaws to which the tubes may be susceptible and, based on this assessment, to determine which inspection methods need to be employed and at what locations.

Reviewer's Note: Plants are to include the appropriate frequency (e.g., select the appropriate Item 2) for their SG design. The first Item 2 is applicable to SGs with Alloy 600 mill annealed tubing. The


second Item 2 is applicable to SGs with Alloy 600 thermally treated tubing. The third Item 2 is applicable to SGs with Alloy 690 thermally treated tubing. d.1. Inspect 100% of the tubes in each SG during the first refueling outage following SG replacement. [d.2. Inspect 100% of the tubes at sequential periods of 60 effective full power months. The first sequential period shall be considered to begin after the first inservice inspection of the SGs. No SG shall operate for more than 24 effective full power months or one refueling outage (whichever is less) without being inspected.] [d.2. Inspect 100% of the tubes at sequential periods of 120, 90, and, thereafter, 60 effective full power months. The first sequential period shall be considered to begin after the first inservice inspection of the SGs. In addition, inspect 50% of the tubes by the refueling outage nearest the midpoint of the period and the remaining 50% by the refueling outage nearest the end of the period. No SG shall operate for more than 48 effective full power months or two refueling outages (whichever is less) without being inspected.] [d.2. Inspect 100% of the tubes at sequential periods of 144, 108, 72, and, thereafter, 60 effective full power months. The first sequential period shall be considered to begin after the first inservice inspection of the SGs. In addition, inspect 50% of the tubes by the refueling outage nearest the midpoint of the period and the remaining 50% by the refueling outage nearest the end of the period. No SG shall operate for more than 72 effective full power months or three refueling outages (whichever is less) without being inspected.] d.3. If crack indications are found in any SG tube, then the next inspection for each SG for the degradation mechanism that caused the crack indication shall not exceed 24 effective full power months or one refueling outage (whichever is less). 
If definitive information, such as from examination of a pulled tube, diagnostic non-destructive testing, or engineering evaluation, indicates that a crack-like indication is not associated with a crack(s), then the indication need not be treated as a crack.
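The material-dependent interval options in Items d.2 above can be captured in a small lookup. A minimal Python sketch follows; the dictionary keys, field names, and function name are illustrative assumptions, not guideline text:

```python
# Sequential inspection periods (effective full power months, EFPM) and the
# ceiling on operation between inspections, keyed by tubing material.
# Structure and names are illustrative, not from the guideline.
INSPECTION_PROGRAM = {
    "Alloy 600 MA": {"periods_efpm": [60],
                     "max_efpm_between": 24, "max_outages_between": 1},
    "Alloy 600 TT": {"periods_efpm": [120, 90, 60],
                     "max_efpm_between": 48, "max_outages_between": 2},
    "Alloy 690 TT": {"periods_efpm": [144, 108, 72, 60],
                     "max_efpm_between": 72, "max_outages_between": 3},
}

def inspection_overdue(tubing, efpm_since_last, outages_since_last):
    """True when either the EFPM or the refueling-outage limit has been
    exceeded (the guideline applies whichever limit is reached first)."""
    limits = INSPECTION_PROGRAM[tubing]
    return (efpm_since_last > limits["max_efpm_between"]
            or outages_since_last > limits["max_outages_between"])
```

For Alloy 600 TT and 690 TT tubing, the 50%-by-midpoint requirement within each period would be tracked separately.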

The first ISI establishes the operational baseline to determine if degradation has initiated, whether there are differences in SG behavior, and for future historical comparisons. As a minimum, a full-length bobbin coil (or a technique(s) that is expected to be employed during future inservice inspections) examination on 100% of the tubing in each SG shall be performed. In addition, when only the bobbin coil is employed during the


first ISI, techniques capable of signal characterization are recommended on a representative sample of abnormal conditions (for example, tubesheet expansion anomalies, manufacturing burnish marks, unusual signals, dings) observed during the inspection.

The first ISI establishes the operational baseline to determine if degradation has initiated, to establish potential differences between the PSI and the first ISI, and for future historical comparisons; therefore, all sleeves shall be examined after their first cycle of operation. A visual inspection on 100% of the installed plugs shall be performed in accordance with Section 6.9 at the next scheduled primary side inspection of the applicable SG.

The first ISI establishes the operational baseline to determine if degradation has initiated, to provide a basis for signal characterization (when applicable), and to establish potential differences between the baseline and the first ISI for future historical comparisons (when applicable). Therefore, all other types of tube repairs shall be inspected after their first cycle of operation. For subsequent inspection guidance, apply the applicable tubing requirement provided in Sections 3.3 (material specific), 3.6, 3.7, and 3.8. The parent tube pressure boundary behind the sleeve shall be inspected to the parent tube requirement.

Alloy 600 Mill Annealed Plug Material - During subsequent examinations, all plugs in all SGs shall be examined at the end of each fuel cycle. When plug designs preclude a volumetric examination, all installed plugs shall be examined visually in accordance with Section 6.

Alloy 600 TT and 690 TT Plug Material - Visual examination shall be performed on all installed plugs in accordance with Section 6.9 when the SG is opened for scheduled primary side examinations. Leaking or defective plugs shall be addressed in the Licensee's corrective action program.
Alloy 600 Mill Annealed Material - During subsequent examinations, all other repairs in all SGs shall be examined at the end of each fuel cycle.


a. Planning the SG examination scope begins with establishing sampling plans for each existing or potential degradation mechanism in accordance with the requirements in Section 3.3. The examination scope may include several discrete pre-defined sample plans based on the degradation mechanisms, the examination techniques to be used, and the regions of the tube to be inspected. For example, a full-length bobbin


coil inspection may be one sample plan. Rotating coil examinations may include three discrete sample plans: (a) hot leg expansion transitions, (b) low row U-bends, and (c) dents and dings. The same tube(s) may be selected for multiple sample plans. For each sampling plan in the examination scope, a minimum sample size of 20% shall be examined. The basis for the minimum sample size is provided in Appendix A. Sampling serves to monitor the general condition of the SG by detecting the onset of new degradation or the recurrence of previously experienced degradation. The classification of results in accordance with Section 3.7 for each discrete sample plan would drive expansion in accordance with Section 3.8 independently, using the technique that detected the degradation and applied to the region affected.

b. Sampling shall be performed such that all tubing is examined within the periods established in Section 3.3. Tube selection within a sample plan shall be governed by the following requirements:

1. The tubes are selected on a random or systematic basis, and evenly distributed across the tube bundle to the extent practical (e.g., As Low As Reasonably Achievable (ALARA) considerations).

2. All inservice tubes that have prior indications of degradation and/or historical possible loose parts (PLP) are examined through the area of interest when the applicable SGs are open for examination.

3. Peripheral tubes, including tubes adjacent to no-tube lane regions, are included or added to a sample plan or considered a separate sample plan when there is reason to expect that loose parts are present or were introduced into the SG secondary side. A secondary-side foreign object search and retrieval (FOSAR) examination may be used to meet this requirement.

c. When cracking and volumetric indications occur in the same region, it is recognized that the bobbin coil technique is not capable of distinguishing between the two mechanisms.
Therefore, upon initial detection of cracking, the licensee shall inspect 100% of the current bobbin indications in that region to correctly characterize cracking and volumetric indications with a technique capable of characterization. During subsequent examinations, inspect all new indications and sample historical indications with a technique capable of characterization.

The results of each sample plan shall be classified into one of the following categories by detection technique and location/region per SG examined. Sample plan result categories listed in this section serve as the basis for expanding the examinations beyond that initially sampled, as discussed in the next section.

a. C-1: Less than 5% of total tubes inspected are degraded tubes and none of the inspected tubes are defective.


b. C-2: Between 5% and 10% of the total tubes inspected are degraded tubes, or one or more tubes, but not more than 1% of the total tubes inspected, are defective.

c. C-3: More than 10% of the total tubes inspected are degraded tubes, or more than 1% of the inspected tubes are defective.

Note: In all inspections, previously degraded tubes must exhibit significant (>10%) further wall penetration or growth greater than 25% of the repair limit to be included in the above percentage calculations. Example: When a previously degraded tube, 39% through-wall (TW), grows to 49% TW, exceeding the repair limit of 40% TW, it need not be included in the above calculations as either a degraded or defective tube. Growth rates are calculated on a per fuel cycle basis.

Based on the classification results for each sample plan examined, additional inspections may be required. Sample plan expansions shall be performed in accordance with Table 3-1 and Table 3-2.

Forced outage examinations shall be performed during plant shutdown subsequent to any of the following conditions:

a. SG primary-to-secondary leakage leading to plant shutdown
b. Seismic occurrence greater than the Operating Basis Earthquake
c. Loss-of-coolant accident requiring actuation of the engineered safeguards
d. Main steam line or feedwater line break

For unscheduled examinations following primary-to-secondary leakage leading to plant shutdown, follow the Operational Leakage guidance in the EPRI Steam Generator Integrity Assessment Guidelines. For the other unscheduled examinations identified above leading to plant shutdown, follow the guidance in Section 3.6.
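The C-1/C-2/C-3 result categories and the inclusion rule in the Note above are simple enough to express directly. A hedged Python sketch follows; function names are illustrative, and the boundary handling follows the plain reading of the definitions:

```python
def classify_sample_plan(total_inspected, degraded, defective):
    """Return the result category (C-1, C-2, C-3) for one sample plan.
    `degraded` and `defective` are tube counts after the inclusion rule."""
    pct_degraded = 100.0 * degraded / total_inspected
    pct_defective = 100.0 * defective / total_inspected
    if pct_degraded > 10.0 or pct_defective > 1.0:
        return "C-3"
    if pct_degraded >= 5.0 or defective >= 1:
        return "C-2"
    return "C-1"

def counts_in_percentages(prev_tw, curr_tw, repair_limit_tw):
    """Inclusion rule from the Note: a previously degraded tube enters the
    percentage calculations only with >10% TW further penetration or growth
    greater than 25% of the repair limit (per fuel cycle)."""
    growth = curr_tw - prev_tw
    return growth > 10.0 or growth > 0.25 * repair_limit_tw
```

With the Note's example (39% TW growing to 49% TW against a 40% TW repair limit), the growth is exactly 10% TW and exactly 25% of the limit, so the tube is excluded from the percentage calculations.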


7.4 Sampling Requirements for Performance-Based Examinations

Requirement: None. Section: N/A.

7.5 SG Assessments

Requirement: None. Section: N/A.


7.6 System Performance
Inspection of SG tubes shall be conducted using qualified techniques capable of detecting and characterizing the degradation mechanisms identified in the DA. The qualification of NDE techniques shall comply with the requirements specified in Appendix H or Appendix I. It is recommended that the DA consider Examination Technique Specification Sheets (ETSSs) in effect six months prior to the examination. Incorporating new or revised ETSSs in effect for less than six months prior to examination is optional.

The purpose of the site validation of examination techniques is to ensure that the detection and sizing capabilities developed in accordance with Appendix H or Appendix I are applicable to site-specific conditions for each damage mechanism identified in the DA. This shall be accomplished through a documented pre-inspection review of:

a. All essential variables on the qualified ETSS as compared to the site-specific ETSS to determine equivalency. This includes dent or ding analysis and reporting when addressed in the qualified ETSS (i.e., are the site dent or ding calibration and sizing criteria [voltage, channel, and normalization] equivalent to the ETSS?).

b. Site-specific signals (for existing degradation mechanisms only) compared to ETSS signals to determine if the degradation mechanisms are characterized correctly.

c. A qualified technique's tubing essential variables, to ensure that the application is consistent with site-specific SG conditions. This review establishes that the tubing extraneous test variables (for example, denting, deposits, tube geometry, noise) of the tubes in the qualification data set are comparable in voltage, phase, and signal characteristics to in-generator signals, or determines if the potential exists to degrade the probability of detection (POD).

a. The calibration standard design shall meet the following criteria:

1.
Rotating and array probe calibration standards - Electro-discharge machining (EDM) notches with a minimum length of 0.375 in. (9.525 mm) and a width of 0.005 in. (+0.001 in., -0.002 in.) (0.127 mm [+0.025 mm, -0.051 mm]):

o 100% TW axial and circumferential EDM notches
o 60% TW inside diameter (ID) and outside diameter (OD) axial and circumferential EDM notches


o 40% TW ID and OD axial and circumferential EDM notches

2. Additional array probe calibration standard artifacts:

o 30% TW OD circumferential groove, 360°, with a minimum width of 0.500 in. (12.700 mm) (+0.002 in. [+0.051 mm] tolerance on depth)
o An artifact sufficient to establish channel ordering, such as a 30% TW OD spiral groove
o 0.008 in. (+0.001 in., -0.002 in.) (0.203 mm [+0.025 mm, -0.051 mm]) radial variation (360°) for an axial distance sufficient to cover the entire array

b. Where paragraph a. above cannot be met, alternative notches may be used. Equivalency shall be established for voltage, phase, and span settings in relation to ETSS calibration requirements.

c. It is recommended that standards include a liftoff reference signal to facilitate evaluation of signals influenced by local profile changes.

d. Any additional requirements are specified in the plant-specific procedure or ETSS.

The design of special calibration standards required for other plant-specific ETSSs, which cover unique degradation mechanisms or repair processes such as plugs and sleeves, shall be defined in the plant-specific procedure or ETSS.

A standardized method for voltage normalization of data provides the industry with comparable results, which can be used in a database to perform comparisons or trending, or to help justify results, transients, or anomalies. The following voltage normalization values shall supersede ETSS values, provided the normalization does not invalidate voltage-based sizing techniques or in situ pressure testing screening criteria.

a. For the bobbin coil technique, the voltage normalization shall be accomplished as follows:

1. Using the four 20% TW flat-bottom holes located on the ASME standard, set the peak-to-peak voltage to 4 volts for each channel and to 2.75 volts for the prime/quarter mix channel, or

2.
When using a repair criterion based on the correlation of a technique parameter to a structural parameter (for example, voltage versus burst pressure) normalized to a single reference standard, the transfer standard method of voltage correction, as defined by the alternate repair criteria, may be applied for NDE parameter normalization in the field to maintain consistency with that correlation, or


3. Normalizing all standards to a single site-reference standard (the transfer standard method of correction, similar to the alternate repair criteria method) is acceptable, provided the single site-reference standard is normalized to 4 volts peak-to-peak on the four 20% TW flat-bottom holes to develop the transfer voltage.

b. For the rotating probe technique, the voltage values shall be set to 20 volts peak-to-peak on the appropriate 100% TW notch, at the maximum amplitude response near the center of the flaw, individually for each channel. Appropriate notch means a circumferential notch for a circumferentially sensitive channel and an axial notch for axially sensitive and nondirectional channels.

c. For the array probe technique, the voltage value shall be set to 5 volts peak-to-peak using the 30% TW 360-degree groove, individually for each channel.

For bobbin probe examinations, instrument digitization rate and pull speed shall be adjusted to obtain a minimum sample rate of 30 samples/inch (30 samples/25.4 mm), as established in the ASME Code. For rotating and array probe techniques, the minimum sample rates shall be in accordance with the ETSS or as site validated.

Appendix G specifies personnel training and qualification requirements for NDE personnel who analyze eddy current data for PWR SG tubing. All analysis personnel shall be a QDA in accordance with Appendix G.

Appendix G specifies personnel training and qualification requirements for an SSPD. All personnel involved in the analysis of degradation shall successfully complete an SSPD for the current outage in accordance with Appendix G. It is recommended that these examinations be administered immediately prior to analysis activities.

The purposes of analysis guidelines are to provide structure to the analysis process and to promote consistency and reliability in performing data analysis. The site-specific analysis guidelines shall be based on the ETSS for all techniques planned for use.
However, site-specific data analysis guidelines shall be prepared in a manner that will encourage an analyst to identify possible flaw signals where new conditions are encountered or in situations where the analyst is uncertain about the correct classification of a signal. In these circumstances, the analyst should alert the lead analyst about the possible need for analysis guideline modification. It is recommended that the material provided in Table 6-1 be included within the site-specific data analysis guidelines and/or site-specific ETSSs.
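The voltage normalization targets and the sample-rate floor above reduce to simple ratios. A minimal Python sketch follows; the constant and function names are illustrative, not from the guideline:

```python
# Voltage normalization targets from the requirements above (peak-to-peak volts).
BOBBIN_TARGET_V = 4.0        # four 20% TW flat-bottom holes, each channel
BOBBIN_MIX_TARGET_V = 2.75   # prime/quarter mix channel
ROTATING_TARGET_V = 20.0     # appropriate 100% TW notch, each channel
ARRAY_TARGET_V = 5.0         # 30% TW 360-degree groove, each channel

def normalization_gain(target_v, measured_v):
    """Multiplicative gain that scales the measured peak-to-peak reference
    signal to the required normalization voltage."""
    return target_v / measured_v

def max_pull_speed_in_per_s(digitization_rate_sps, min_samples_per_inch=30.0):
    """Fastest bobbin pull speed (inches/second) that still yields the
    minimum axial sample density of 30 samples/inch."""
    return digitization_rate_sps / min_samples_per_inch
```

For example, a reference signal measured at 2.0 volts on a channel with a 4-volt target needs a gain of 2.0, and a 900 samples/second digitization rate supports pull speeds up to 30 inches/second.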


Additional process controls and analysis guidance are to be established as follows:

It is recommended that the roles and responsibilities of all personnel (for example, engineering, analysis, acquisition, resolution, system administration, data management) involved in the inspection be documented and communicated.

The licensee shall implement a process to track changes made in the instructions, guidelines, and/or procedures used by acquisition operators, analysts, and data managers to document that program changes have been acknowledged and understood.

When historical reviews are performed to determine if supplemental testing is warranted, the licensee shall define what constitutes a change in terms of voltage and/or phase angle from the Lissajous signal in the current inspection as compared to the same Lissajous signal in the first ISI recorded on optical disk. During the first ISI, the history review comparison is to the PSI data. When cracking mechanisms are present, the historical review shall consist of reviewing the actual raw or processed eddy current data; review of database report entries is not acceptable. When cracking mechanisms are not present, review of database report entries is acceptable.

Additional precautionary guidance shall be provided to data analysts on reporting indications that read zero percent TW when the analyst believes the indication is real degradation, whether or not it is influenced by extraneous test variables.
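A change screen of the kind described above (licensee-defined voltage and phase deltas against the baseline Lissajous signal) might look like the following sketch; the threshold values and names are placeholders, not guideline numbers:

```python
def signal_changed(curr_v, prev_v, curr_phase_deg, prev_phase_deg,
                   dv_limit=0.5, dphase_limit_deg=10.0):
    """Flag a Lissajous signal whose peak-to-peak voltage or phase angle
    differs from the first-ISI baseline by more than licensee-defined limits.
    The default limits are illustrative placeholders only."""
    return (abs(curr_v - prev_v) > dv_limit
            or abs(curr_phase_deg - prev_phase_deg) > dphase_limit_deg)
```

Signals flagged by such a screen would then be candidates for supplemental testing per the licensee's defined criteria.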

Each of the signals encountered during the SG inspection needs to be recognized and correctly classified. A single missed or incorrectly classified defect indication can lead to a plant shutdown or a tube rupture event. To reduce the likelihood of these consequences, analysis of tube degradation data shall be completed by two independent analysis teams (designated as primary and secondary). To maintain independence, the analyses shall be done separately (without knowledge of the other team's results) and not as a joint effort. It is recommended that the two teams be from different vendor organizations. Reporting of anomalous signals not caused by degradation (such as bulges, dents, dings, possible loose parts, tube support anomalies, sludge, etc.) need not be subjected to two-party analysis.

Detailed crack profiling/sizing for tube integrity analysis may be performed by a single analyst, with an independent technical review. The independent technical review shall be conducted by a QDA and, as a minimum, include review of the analysis setup, maximum depth, maximum voltage, length, and orientation, when provided.


a. Automated analysis of eddy current data is achieved by systems incorporating software that allows the processing of eddy current data and its presentation in the form of an analyzed output. These systems are typically applied in one of the following ways:

1. Detection-only mode: The software detects signals and the analyst applies manual analysis to provide results.

2. Interactive mode: The software detects and analyzes the signals, and the analyst reviews the results identified by the software and compares them with his/her own analysis of the signals before the results are accepted, modified, or rejected.

3. Fully automated mode: The software detects and analyzes the signals, and the analysis results are accepted with no human intervention.

b. Application of automated analysis systems shall be limited to the mode that was qualified.

c. Automated analysis systems used for degradation analysis are to be qualified through performance demonstration as follows:

1. The initial generic qualification shall be demonstrated on the applicable EPRI Automated Analysis Performance Demonstration Database (AAPDD), which validates detection and sizing/characterization algorithms for each applicable known damage mechanism found in Table G-1. Different algorithms may be required based on variations in AAPDD essential variables (for example, instrument types, drive voltages, tubing sizes, coil excitation frequencies).

2. It is recommended that the initial generic performance algorithms be used as a source of information to help establish the site-specific algorithms.

3. The site-specific performance capability of an automated analysis system shall be demonstrated and documented in accordance with the SSPD practical examination requirements in Appendix G.

4. Where appropriate, the SSPD-qualified algorithms may be adjusted without requalification, provided the adjustments are conservative or equivalent, and documented.

5.
When non-conservative adjustments are made to the qualified algorithms, requalification of the adjusted algorithms on the SSPD shall be performed.

6. When adjustments are made to the qualified algorithms during an outage, an assessment of previously analyzed data from that outage shall be conducted to determine the need for reanalysis.


7. A process shall be established to maintain control of the qualified algorithm revision(s).

8. An analyst with experience in the automated system being utilized shall verify the following prior to each inspection:

o Appropriate tube or region coverage exists for detection of degradation.
o Equivalent or conservative algorithm rules exist when compared to the site-specific guidelines.
o Damage mechanisms, for which the automated data analysis system is being used, are properly screened.

d. SSPD qualifications of automated analysis systems shall be demonstrated independent of human intervention. Human intervention is defined as an analyst operating the automated analysis system deleting, adding, or changing a result.

e. Key points to be observed in the use of dual automated analysis systems for degradation analysis are the following:

1. Both teams may use automated analysis systems for detection provided they are independent systems. However, if the detection algorithms are the same, they are not considered independent, and at least one team shall analyze all data manually to ensure the detection algorithms are not missing degradation.

2. Both teams may use automated analysis systems for sizing/characterization provided they are independent systems. However, at least one of the two analysis teams or the resolution team shall review all the analysis results manually to verify the sizing/characterization algorithm.

A discrepancy resolution team (representing primary and secondary) shall review and resolve discrepancies between the results of the two independent analysis teams. It is important that the site-specific data analysis guidelines contain an appropriate delineation of analyst responsibilities, clear definitions of discrepancy conditions, and a well-defined process for discrepancy resolution. An extremely important consideration is the deletion of a reported indication of degradation made by either of the two analysis teams.
If the primary or secondary analyst reports degradation, including I-codes as defined in Appendix F, then at least two resolution analysts (not from the same team) shall concur, and shall consider review of prior history and/or other techniques, before the analysis result is overruled.
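The overrule rule can be checked mechanically. A Python sketch follows; the `(analyst, team)` pair representation is an illustrative assumption:

```python
def may_overrule_degradation_call(concurring):
    """True when a primary/secondary degradation call (including I-codes) may
    be overruled: at least two concurring resolution analysts who are not
    all from the same team. `concurring` is a list of (analyst, team) pairs."""
    teams = {team for _analyst, team in concurring}
    return len(concurring) >= 2 and len(teams) >= 2
```

The review of prior history and/or other techniques required before overruling would be documented separately; this check only enforces the concurrence condition.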


In addition, the resolution team shall consider feedback from the primary and secondary analysts as required in Section 6.7 and feedback from the IQDA. Administrative discrepancies may be corrected by a single resolution analyst.

a. The licensee shall designate one or more IQDAs, who are not part of the resolution team, to perform analysis oversight. It is recommended that the following primary duties be prioritized by the licensee:

1. Review all repairable calls rejected by resolution.
2. Randomly sample data to determine if the resolution analysts are resolving calls in a consistent and conservative manner and that calls are being dispositioned properly.
3. Provide any necessary feedback to the resolution team (see Section 6.3.3.4).
4. In the event that the two resolution analysts cannot agree, mediate concurrence; otherwise the most conservative result will prevail.
5. Monitor the data analysis feedback process.
6. Randomly sample NDD and NDF calls by production analysts.

Each licensee shall ensure that a data analysis feedback process that monitors analyst performance is implemented. It is recommended that the following aspects be identified in the feedback process:

a. Overall process control requirements (i.e., who is responsible for ensuring the process is performed as intended, roles, responsibilities, etc.)
b. Missed indication review requirements
c. Overcall review requirements
d. End-of-job review requirements
e. Analyst comment resolution process
f. Auto-analysis review requirements (when applicable)

a. The licensee shall provide the following site-specific information:

1. DA, including inspection scope
2. Data analysis guidelines and ETSSs
3. Data management requirements

b. Data management personnel shall develop tube inspection plans and subsequent sample plan expansions in a database management system in accordance with a. above.


c. The tube inspection plans and subsequent sample plan expansions shall be documented and provided to the licensee prior to implementation.

d. It is recommended that inspection plans be electronically transferable between one data management system and the data acquisition system to minimize human input errors.

a. The required analysis results shall be uploaded into the data management system after completion of such analysis.

b. The analysis results shall be subject to input quality checks to verify all analysis results fields are valid inputs consistent with site-specific requirements.

c. It is recommended that the analysis results be subject to quality checks to verify all historical results are addressed as required by site-specific requirements.

The data management system shall be capable of rescheduling examinations for retests and supplemental testing.

The data management system shall be capable of tracking individual inspection plans and the entire inspection to its completion. This includes the ability to report on data acquisition progress and analysis progress. When dual data management systems are utilized, only the system that interfaces with the data acquisition system is required to report on data acquisition progress.

The data management system shall be capable of accounting for the completeness of each scheduled examination, validated through queries and/or compiled software.

a. Final closeout begins when the data manager has uploaded all completed analysis results into the data management system. Final closeout is the process data management personnel perform to ensure all required examinations are completed, the database results are accurate, and all site-specific requirements have been met.

b. In the absence of a licensee closeout procedure or checklist, the data manager shall provide a written closeout procedure or checklist to the licensee.
Prior to entering the final closeout process, the data manager should log the starting date and time on the checklist.

c. The closeout checklist shall include the following queries and tasks as a minimum:

1. Completion of all initial inspection plans.
2. Completion of all retests and supplemental test plans.
3. Completion of sample plan expansions as applicable.



4. Proper final disposition of all tubes with addressable indication codes.
5. Required addressing of previous history is complete.
6. Positive identification (PID) requirements have been met.
7. Comparison of data management records to analysis source records showing no discrepancies.

d. If two independent data management systems are used, it is recommended that the final databases from both systems be compared line-by-line electronically. Any discrepancies found between the two systems will be either resolved or documented and approved by the licensee if left unresolved.

e. The closeout process may uncover discrepancies that require resolution by additional data acquisition and/or analysis processes. Each addressed discrepancy shall be evaluated against all previously completed steps in the closeout process before the closeout process resumes at the step that discovered the discrepancy. When all discrepancies are resolved or approved and the closeout checklist is complete, the data manager should log the end date and time on the checklist and make official notification to the licensee that the inspection is complete.

After closeout is complete, all the results in the analysis system shall be merged and linked back to the appropriate individual raw data files. It is recommended that a procedure be provided for the process of properly archiving the results and the raw data files, including quality checks that validate the integrity of the archived results.

The data quality parameters provided in this section shall be verified. While some of these data quality monitoring parameters may prove to be too restrictive, the basis for reduced frequency or less stringent criteria shall be documented.

Probe manufacturing tolerances can have a profound effect on data quality. Continuous monitoring of probe manufacturing tolerances during data acquisition would not be practical.
Therefore, utilities shall request probe manufacturers to verify the applicable critical probe manufacturing parameters that can affect data quality and require the manufacturer to provide a certificate of conformance to the applicable portion of Table 6-6.

a. SG inspection and repair processes are major program elements that encompass requirements from several sources, including the Code of Federal Regulations, Plant Technical Specifications, NEI 97-06, EPRI Guidelines, and the ASME Boiler and Pressure Vessel Code.

b. The licensee shall verify critical aspects of the SG inspection and tube repair or plugging processes before, during, or after the activity. The


specific observations shall be documented in a procedure(s) and/or checklist(s). It is recommended that aspects include:

1. Personnel, probe, and equipment certification compliance with purchase order requirements and this document
2. SSPD results
3. Acquisition/analysis system (software/hardware) setup in accordance with site ETSS
4. Repair or plugging of tubes identified as exceeding the applicable acceptance criteria
5. Proper performance of repair or plugging methods
6. Repair or plugging installation process parameters

If a leaking plug is discovered, an evaluation shall be conducted to determine the reason and the possible impact on other plugs installed.

The in-service visual examination of the previously installed plugs shall be performed by an individual with specific training in this area. This inspection should verify the location and presence of existing in-service plugs, and look for excessive boron buildup or wetness around the tube plug (see Figure D-2 of the PWR Steam Generator Tube Plug Assessment Document).

The pre-service visual examination of the final weld after installation shall be performed by an individual certified as a visual examination (VT-1) examiner in accordance with ASME Section XI.

The in-service visual examination of the previously installed plugs shall be performed by an individual with specific training in this area. This inspection should verify the location and presence of existing in-service plugs, and look for evidence of plug leakage that may include excessive boron buildup or wetness around the tube plug.


REFERENCES
1. PWR Steam Generator Examination Guidelines, originally published in 1981 and revised in 1984, 1988, 1992, 1996, 1997, and 2002. EPRI, Palo Alto, CA: 2002. 1003138.
2. Steam Generator Management Program Administrative Procedures, Revision 1. EPRI, Palo Alto, CA: 2004. 1011274.
3. Steam Generator Program Guidelines. Nuclear Energy Institute (NEI) 97-06. 2005.
4. The American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Sections V and XI.
5. American Society for Nondestructive Testing (ASNT) SNT-TC-1A.
6. American National Standards Institute (ANSI) CP-189.
7. The Code of Federal Regulations, 10 CFR 50, Appendix B.
8. Steam Generator Integrity Assessment Guidelines, Revision 2. EPRI, Palo Alto, CA: 2006. 1012987.
9. PWR Steam Generator Tube Plug Assessment Document. EPRI, Palo Alto, CA: 1997. TR-109495.
10. PWR Primary-to-Secondary Leakage Guidelines, Revision 3. EPRI, Palo Alto, CA: 2004. 1008219.
11. Tools for Integrity Assessment Project Technical Report. EPRI, Palo Alto, CA: 2006. 1014567.
12. Update on the Tools for Integrity Assessment Project. EPRI, Palo Alto, CA: 2007. 1014756.
13. User Manual: Crystal Ball Monte Carlo POD Simulator. EPRI, Palo Alto, CA: 2006. 1012988.
14. Technical Specification Task Force letter to the NRC (TSTF-05-05), dated April 14, 2005, TSTF-449, Revision 4, Steam Generator Tube Integrity.


A
SAMPLING BASIS

A.1 Sample Size

The objective of conducting inspections of SG tubes is to monitor the condition of the tubes against acceptable performance criteria in a manner sufficient to provide reasonable assurance that SGs remain capable of fulfilling their intended safety function. The purpose of examining samples of tubes rather than all tubes in SGs is to reduce the inspection burden of meeting this objective. Consequently, the principal consideration in determining the appropriate sample size for tube examinations is the level of confidence that can be placed in the information obtained about the condition of SG tubes from examining a tube sample of a given size.

A.1.1 Level of Confidence Relative to Sample Size and the Number of Tubes Affected by Degradation

SG tubes are examined to determine whether or not the SG contains one or more tubes affected by degradation. In this context, degradation does not imply a specific severity or percent penetration, only that the tube is affected by degradation. The level of confidence is then defined by the probability of sampling at least one affected tube in a SG that contains one or more affected tubes. The level of confidence is a function of the size of the tube sample and of the actual number of tubes affected by degradation in the SG. To meet the safety objectives for SGs in nuclear power plants, the sample size needs to be sufficient to provide a relatively high level of confidence, 0.90 or greater, that at least one affected tube is sampled if the SG is affected by degradation. Table A-1 lists the tube sample sizes and the associated numbers of actual affected tubes required to provide a 0.90 level of confidence.


Table A-1
Tube Sample Sizes and Associated Numbers of Affected Tubes in SGs Required to Provide a 0.90 Level of Confidence That At Least One Affected Tube Is Sampled from the SG

Percentage of Tubes Sampled    Number of Affected Tubes
1%                             229
3%                             74
10%                            23
20%                            12
30%                            8
40%                            6
50%                            5

The data in Table A-1 suggest that a 20% sample represents a practical point of diminishing returns relative to the level of confidence that can be placed in information obtained from tube examinations. Substantial gains in the probability of sampling an affected tube can be realized by increasing the sample size up to about 20%. With a 20% sample, only 12 or more affected tubes need to be present in the SG for one to be sampled at the 0.90 level of confidence. However, increasing the sample size beyond 20% would result in relatively little further reduction in the number of affected tubes that need to be present for one to be sampled. This is independent of the number of tubes in the SG.

A.1.2 Statistical Basis for Level of Confidence

The probability P of sampling at least one affected tube from a population of N tubes containing a fraction p of affected tubes is determined approximately by the binomial distribution, as expressed in Equation A-1:

P = 1 - (1 - p)^n    Equation A-1

Where:
P is the probability of sampling at least one affected tube (confidence level [CL]),
p is the probability that a single sampled tube is affected (the number of affected tubes divided by the total number of tubes in the SG, n'/N), and
n is the number of tubes examined (the sample size).


Note: The values provided by the binomial distribution are approximate because, while the binomial distribution assumes sampling with replacement, actual practice is to sample without replacement. However, because of the relatively large numbers of tubes in SGs, the violation of this assumption has an insignificant effect on the resulting values.

Solving Equation A-1 for n provides the following expression for the sample size in terms of P and p:

n = log(1 - P) / log(1 - p)    Equation A-2

Where:
P is the probability of sampling at least one affected tube, and
p is defined as n'/N.

Equation A-2 served as the basis for developing Table A-1, which relates the percentage of tubes sampled, the number of affected tubes present in the total tube population, and the probability (CL) that at least one affected tube is sampled during the examination.
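The relationship expressed in Equations A-1 and A-2 can be sketched in code. The following Python fragment (an illustrative sketch, not part of the guideline; the 3000-tube SG used in the example is hypothetical) computes the confidence level for a given sample and the minimum sample size for a target confidence:

```python
import math

def confidence(n_sampled, n_affected, n_total):
    """Probability of sampling at least one affected tube (Equation A-1).

    Uses the binomial (with-replacement) approximation, where
    p = n_affected / n_total is the chance a single sampled tube is affected.
    """
    p = n_affected / n_total
    return 1.0 - (1.0 - p) ** n_sampled

def min_sample_size(target_cl, n_affected, n_total):
    """Smallest sample size n giving at least target_cl confidence (Equation A-2)."""
    p = n_affected / n_total
    return math.ceil(math.log(1.0 - target_cl) / math.log(1.0 - p))

# Hypothetical 3000-tube SG: a 20% sample (600 tubes) gives better than
# 0.90 confidence when 12 or more tubes are affected, consistent with Table A-1.
print(confidence(600, 12, 3000))  # about 0.91
```

Note that the sample size returned by Equation A-2 scales with the total tube count only through p = n'/N, which is why the Table A-1 percentages are essentially independent of SG size.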


APPENDIX B

Intentionally Left Blank

APPENDIX C

Intentionally Left Blank

APPENDIX D

Intentionally Left Blank

APPENDIX E

Intentionally Left Blank

F
TERMINOLOGY

F.1 Definitions

The following definitions are provided to ensure a uniform understanding of terms used in this guideline.

Active Damage Mechanism
Historical term that is now synonymous with the term "existing degradation" found in the EPRI Steam Generator Integrity Assessment Guidelines [8].

Defective Tube
A tube that contains an indication that exceeds the repair limit.

Degradation
A reportable indication 20% TW or greater, or 50% of the repair limit for length-based or voltage-based criteria.

Degraded Tube
A tube with a reportable indication 20% TW or greater, or 50% of the repair limit for length-based or voltage-based criteria.

Dent
A local reduction (plastic deformation) in the tube diameter due to a buildup of corrosion products (magnetite).

Ding
A local reduction (plastic deformation) in the tube diameter caused by manufacturing, support plate shifting, vibration, or other mechanical means.


Essential Variables
The physical, mechanical, electrical, or chemical properties of the entire examination system that may vary or be varied and must be controlled or kept constant to ensure an expected examination response.

Extraneous Test Variables
Signal sources, such as denting, deposits, support structures, geometry changes, and tubing essential variables, that influence the test results.

Fatigue
Material failure resulting from the initiation and/or propagation of cracks due to cyclic loads.

Fill Factor
A measure of the degree to which a bobbin coil fills a tube; specifically, the ratio of the square of the bobbin coil OD to the square of the tube inner diameter.

Foreign Object (Loose Part)
A piece of material located within the SG where it does not belong. The material could have originated from outside of the SG or from within the SG and could be metallic or non-metallic.

Intergranular Attack
Corrosive attack of grain boundaries in materials with no preferential (stress-related) orientation.

Lane and Wedge Region
Tubes in a once-through SG associated with the untubed inspection lane.

Manufacturing Burnish Mark
A tubing condition where localized tubing imperfections were removed in the tubing mill or fabrication shop by buffing and are detectable due to the effects of cold working and localized wall thinning.

Permeability
A condition where the test coil impedance changes due to a change in the tubing material's inherent ability to conduct magnetic flux lines.


Primary Water Stress-Corrosion Cracking (PWSCC)
SCC on the reactor coolant side (inside) of SG tubes.

Sludge
An accumulation of particulate matter found on the secondary side of the SG in low-flow areas.

Stress-Corrosion Cracking (SCC)
Intergranular cracking of tubes resulting from complex interactions between stress, environment, and material.

Volumetric Wall Loss
The loss of tube material caused by general or localized thinning, wear, or IGA.

Wear
The loss of tube material caused by excessive rubbing of the tube against its support structure, a loose part, or another tube.
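The fill factor defined above is a simple ratio of squared diameters. The following Python fragment (an illustrative sketch with hypothetical coil and tube dimensions, not part of the guideline) shows the calculation:

```python
def fill_factor(coil_od, tube_id):
    """Fill factor: square of the bobbin coil OD over the square of the tube ID."""
    return (coil_od / tube_id) ** 2

# Hypothetical example: a 0.540 in. bobbin coil in a tube with a 0.608 in. ID.
print(round(fill_factor(0.540, 0.608), 3))  # 0.789
```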

F.2 Acronyms

AAPDD  Automated Analysis Performance Demonstration Database
A/D    Analog-to-Digital
ALARA  As Low As Reasonably Achievable
ALC    Axial Length Coefficient
ANSI   American National Standards Institute
ASME   American Society of Mechanical Engineers
ASNT   American Society for Nondestructive Testing
AVB    Anti-Vibration Bar
CFR    Code of Federal Regulations
CL     Confidence Level
dB     Decibel

DA     Degradation Assessment
DC     Depth Coefficient
DCSS   Direct Current Saturation Strength
DOV    Diametral Offset Value
ECT    Eddy Current Testing
EDM    Electro Discharge Machining
ESFW   Effective Scan Field Width
ETFW   Effective Track Field Width
ETSS   Examination Technique Specification Sheet
EPRI   Electric Power Research Institute
FFC    Fill Factor Coefficient
FOSAR  Foreign Object Search and Retrieval
Hz     Hertz
ID     Inside Diameter
IGA    Intergranular Attack
IGSCC  Intergranular Stress-Corrosion Cracking
IQDA   Independent Qualified Data Analyst
IRG    Issue Resolution Group
ISI    In-Service Inspection
LOV    Lift-Off Value
NDD    No Detectable Degradation
NDE    Nondestructive Examination
NEI    Nuclear Energy Institute
NSSS   Nuclear Steam Supply System

OD     Outside Diameter
ODSCC  Outside Diameter Stress-Corrosion Cracking
ORNL   Oak Ridge National Laboratory
OTSG   Once-Through Steam Generator
PDC    Phase-to-Depth Curve
PDD    Performance Demonstration Database
PID    Positive Identification
PLP    Possible Loose Parts
POD    Probability of Detection
PSI    Pre-Service Inspection
PWR    Pressurized Water Reactor
PWSCC  Primary Water Stress-Corrosion Cracking
QDA    Qualified Data Analyst
r      Correlation Coefficient
RMSE   Root Mean Square Error
SCC    Stress-Corrosion Cracking
SDP    Standard Depth of Penetration
SG     Steam Generator
SGMP   Steam Generator Management Program
S/N    Signal-to-Noise
SSPD   Site-Specific Performance Demonstration
TAG    Technical Advisory Group
TSTF   Technical Specification Task Force
TT     Thermally Treated

TW     Through-Wall
TWC    Transverse Width Coefficient
UQDA   Ultrasonic Qualified Data Analyst
USNRC  United States Nuclear Regulatory Commission
UT     Ultrasonic Testing
VT-1   Visual Examination

F.3 Three-Letter Codes

The purpose of defining three-letter codes is to standardize their use in the industry. The use of the Table F-1 codes on an industry-wide basis is recommended. A three-letter designation is used to describe eddy current inspection results. The codes are grouped into seven categories that describe the actions required to assist in the correct disposition of the results. If new three-letter codes are developed, it is recommended that the user assign the applicable category.

CATEGORY I: No Further Action Required. These codes represent a non-flaw tube condition.

CATEGORY II: Retest Required. These codes represent a condition that requires retesting with a same or smaller diameter probe, a magnetically biased probe, or a tuned-frequency probe.

CATEGORY III: Supplemental Test Required. These "I" codes represent possible flaw signals for which no qualified sizing technique exists and that require supplemental testing.

CATEGORY IV: Post-Supplemental Test Result. These "S" codes are assigned when an indication previously reported as an "I" code receives a supplemental test that confirms a non-flaw condition, or when a historical "S" code has been compared to a current signal and the signal has not changed.

CATEGORY V: Repair, ARC, or Engineering Evaluation Required. These codes designate a repairable condition except when an approved alternate repair criterion is used.

CATEGORY VI: Repeat Repair May Be Required. These codes designate incomplete repairs and may require rework depending upon the acceptance criteria.

CATEGORY VII: Review Indication, Review History, Supplemental Sampling, or Engineering Evaluation Required. These codes require additional reviews (lead analyst or historical), additional diagnostic sampling, or engineering evaluations.

Table F-1
Three-Letter Codes

CATEGORY I: No Further Action Required
Copper Deposit  CUD
Deposit  DEP
No Detectable Degradation  NDD
Plugged  PLG
Positive Identification  PID
Sleeved  SLV
Sludge  SLG

CATEGORY II: Retest Required
Retest Analyst Discretion  RAD
Retest Bad Data  RBD
Retest Fixture  RFX
Retest Identification  RID
Retest Incomplete  RIC
Retest No Data  RND
Retest High Frequency  RHF
Retest Magnetically Biased  RMB
Retest Restricted Tube  RRT

Table F-1 (continued)
Three-Letter Codes

CATEGORY III: Supplemental Test Required
Absolute Drift Indication  ADI
Differential Freespan Indication  DFI
Distorted Dent Indication  DDI
Distorted Expansion Indication  DEI
Distorted Support Indication  DSI
Distorted Tubesheet Indication  DTI
Loose Part With Indication of Tube Wall Degradation  LPI
Non-Quantifiable Indication  NQI
Plate Ligament Indication  PLI

CATEGORY IV: Post-Supplemental Test Result
Absolute Drift Signal  ADS
Differential Freespan Signal  DFS
Distorted Dent Signal  DDS
Distorted Expansion Signal  DES
Distorted Support Signal  DSS
Distorted Tubesheet Signal  DTS
Loose Part Signal  LPS
No Degradation Found  NDF
Non-Quantifiable Signal  NQS
Plate Ligament Signal  PLS

CATEGORY V: Repair, ARC, or Engineering Evaluation Required
Ligament Crack Indication  LCI
Mixed Mode Indication  MMI
Multiple Axial Indication  MAI
Multiple Circumferential Indication  MCI
Multiple Volumetric Indication  MVI
Obstructed  OBS

Table F-1 (continued)
Three-Letter Codes

CATEGORY V: Repair, ARC, or Engineering Evaluation Required (continued)
Single Axial Indication  SAI
Single Circumferential Indication  SCI
Single Volumetric Indication  SVI
To Be Plugged  TBP

CATEGORY VI: Repeat Repair May Be Required
No Heat Treatment  NHT
No Hydraulic Expansion  NHE
No Roll Expansion  NRE
No Support Expansion  NSE
No Weld Signal  NWS

CATEGORY VII: Review Indication, Review History, Supplemental Sampling, or Engineering Evaluation Required
Bulge  BLG
Ding  DNG
Dent  DNT
Indication Not Found  INF
Indication Not Reportable  INR
ID Chatter  IDC
Lead Analyst Review  LAR
Lift-Off Signal  LOS
Manufacturing Burnish Mark  MBM
No Tubesheet Expansion  NTE
Noisy Tube  NSY
Over Expansion  OXP
Partial Tubesheet Expansion  PTE
Permeability Variation  PVN
Possible Loose Part without Indication of Tube Wall Degradation  PLP
Skip Rolled  SKR
Volumetric  VOL
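A disposition workflow can treat Table F-1 as a simple lookup from code to category. The following Python fragment (an illustrative sketch using a small subset of the codes, not part of the guideline) shows one way to map a reported code to its category and required action:

```python
# Subset of Table F-1: three-letter code -> (category, action description)
CODE_CATEGORIES = {
    "NDD": ("I", "No further action required"),
    "RBD": ("II", "Retest required"),
    "NQI": ("III", "Supplemental test required"),
    "NQS": ("IV", "Post-supplemental test result"),
    "SAI": ("V", "Repair, ARC, or engineering evaluation required"),
    "NWS": ("VI", "Repeat repair may be required"),
    "MBM": ("VII", "Review, supplemental sampling, or engineering evaluation"),
}

def disposition(code):
    """Return the Table F-1 category and action for a reported three-letter code."""
    category, action = CODE_CATEGORIES[code.upper()]
    return category, action

print(disposition("sai"))  # ('V', 'Repair, ARC, or engineering evaluation required')
```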

G
QUALIFICATION OF EDDY CURRENT EXAMINATION PERSONNEL FOR ANALYSIS OF EDDY CURRENT EXAMINATION DATA

G.1 Scope

This appendix specifies personnel training and qualification requirements for eddy current examination personnel who analyze eddy current examination data for PWR SG tubing. Its purpose is to ensure a continuing uniform knowledge base and skill level for data analysts. The appropriate knowledge base is imparted through a comprehensive classroom and laboratory training program. This appendix uses depth as the basis for establishing flawed grading units, based on the premise that repair criteria are expressed in terms of percent TW. Other NDE parameters (for example, amplitude) can be substituted as grading units to demonstrate performance in accordance with this appendix when the applicable repair limit is expressed in terms of other NDE parameters. It is recommended that grading units be established over the full range of the parameter. Analysts are qualified by successful completion of the written and practical examinations defined in this appendix.

G.2 Qualification Level

G.2.1 General Requirements

a. An individual who successfully completes the requirements described herein is recognized as a QDA. The individual's certification record, as compiled and maintained by the employer, should clearly indicate compliance with this document.
b. An individual seeking qualification as a QDA shall be certified Level II or III in eddy current examination in accordance with the employer's written practice.
c. Formal certification of personnel and compliance with the requirements of this document are the exclusive responsibility of the employer.
d. The licensee is responsible for SSPD training and testing of QDA analysts in accordance with the requirements of this document.


G.3 Written Practice

G.3.1 General Requirements

Organizations performing QDA qualification activities shall prepare a written program for their control and administration.

G.3.1.1 Training

It is recommended that the written practice specify the following:

a. Classroom and laboratory training requirements in accordance with Section G.4.1.
b. A course outline, including the number of instruction hours.

G.3.1.2 Examinations

It is recommended that the written practice specify examination requirements in accordance with Section G.4.

G.3.2 Responsibilities

It is recommended that the written practice specify the responsibilities, duties, and qualifications required of personnel who implement the personnel qualification program, including classroom and laboratory training.

G.3.3 Use of an Outside Agency

An outside agency is an organization or an individual that provides instructor or qualification services and whose qualifications have been accepted by the organization that engages the outside agency. It is recommended that the written practice of the engaging organization specify requirements for ensuring that the outside agency meets the applicable requirements of this appendix. If an outside agency is used, the employer remains responsible for the content of the training and qualification materials.

G.3.4 Transfer of QDA Qualifications

It is recommended that the employer's written practice address accepting QDA qualification documentation transferred from other employers. This could be invoked if a QDA seeks employment with an employer other than the one that performed the training and qualification and is within the interrupted service timeframe of Section G.4.2.4.


G.3.5 Confidentiality Provisions to ensure the confidentiality of the qualification materials (for example, test questions, answer sheets, and test data) shall be included in the written practice. Access to such materials should be limited and the examination material should be maintained in secure files. G.3.6 Availability of Training Course Materials It is recommended that training course materials be available for review or audit by user organizations and cognizant authorities. Training course materials should not be subject to any confidentiality requirements other than the normally applicable copyright laws.

G.4 Qualification Requirements

G.4.1 Training

G.4.1.1 Program, Facilities, and Materials

a. It is recommended that the training program include formal classroom and structured practical laboratory exercises.
b. Training course material shall be prepared and made available to the individual.

G.4.1.2 QDA Training Course Content and Duration

a. It is recommended that the training course content be in accordance with Attachment G-1.
b. It is recommended that the training consist of a minimum of 40 hours, including classroom and laboratory exercises.

G.4.1.3 SSPD Training

It is recommended that analysis personnel be provided with site-specific training prior to evaluation of data for flaw detection or sizing/characterization. It is recommended that the training be provided in accordance with a site-approved document and include the following subject matter:

a. Required reading
b. SG design
c. SG examination and operating history
d. Damage mechanisms (existing and potential, as defined in the DA)
e. Planned examination scope
f. Responsibilities


g. Analysis guidelines and ETSS overview
h. Expected flaw responses (graphics and/or raw data with evaluated results)
i. Unique examination process/evaluation challenges
j. Applicable industry operating experience/events

G.4.2 QDA Examinations

G.4.2.1 Qualification Examinations and Data Sets

Revisions to the Performance Demonstration Database (PDD), including additions/deletions of existing data, inclusion of new techniques or deletion of existing techniques, are conducted in accordance with Supplement H3. It is recommended that PDD Version 3.0 be used until Version 4.0 (EPRI Product 1014495) becomes available.
G.4.2.1.1 Written Examinations

a. It is recommended that a question bank containing at least twice the minimum number of questions be used for generating the test, based on a random selection process.
b. It is recommended that the written examination contain a minimum of forty (40) questions covering the lecture material outlined in Attachment G-1.
c. The written examination may be conducted in an open-book format.
G.4.2.1.2 Practical Examination

a. It is recommended that the practical examination consist of eddy current data sets that are randomly selected and contain indications indicative of all current damage mechanisms covering SG operating experience. Each damage mechanism is represented by a data set. The Appendix H requirement for physical measurement of flaw dimensions is not applicable to this appendix, and expert opinion is used to establish eddy current truth for grading purposes. Damage mechanism categories included in the practical examination are identified in Table G-1.
Table G-1
Damage Mechanisms Included in the Practical Examination

Damage Mechanism Categories
Thinning
Support Structure Wear
OD IGA/SCC
PWSCC
Loose Part Wear


b. It is recommended that grading units for each practical examination data set be established using data acquired with techniques qualified in accordance with Appendix H. c. It is recommended that adequate numbers of flawed and unflawed grading units be used to meet the POD, statistical CL, and false-call requirements of Table G-2.
Table G-2
Example Performance Demonstration Test Matrices for Flaw Detection

Flawed Grading Units    Minimum Detections      Minimum Number of         Maximum Number
(≥40% TW)               (80% POD, 90% CL)       Unflawed Grading Units    of False Calls
11                      11                      22                        2
17                      17                      34                        3
18                      17                      36                        3
24                      23                      48                        4
25                      23                      50                        5
31                      29                      62                        6
32                      29                      64                        6

d. It is recommended that the practical examination contain a minimum of 11 flawed grading units for each damage mechanism category (see Table G-1) where only detection is being applied. Larger numbers of flawed grading units may be included in the practical examination. Table G-2 provides examples of various test matrices and their corresponding acceptance criteria. It is recommended that the number of unflawed grading units selected for the practical examination be equal to at least twice the number of flawed grading units. e. It is recommended that the practical examination contain a minimum of 16 flawed grading units for each damage mechanism category (see Table G-1) where both detection and sizing are being applied. Larger numbers of flawed grading units may be included in the practical examination. Table G-3 provides examples of various test matrices and their corresponding acceptance criteria. It is recommended that the number of unflawed grading units selected for the practical examination be equal to at least twice the number of flawed grading units.


Table G-3
Example Performance Demonstration Test Matrices for Flaw Detection and Sizing

Flawed Grading Units          Total     Minimum Detections        Minimum      Maximum
<40% TW      ≥40% TW          Flawed    <40% TW      ≥40% TW*     Unflawed     False Calls
5            11               16        4            11           32           3
5            12               17        4            12           34           3
6            12               18        5            12           36           4
8            17               25        7            17           50           5
8            18               26        7            17           52           5
12           24               36        10           23           72           7
12           25               37        10           23           74           7
15           31               46        12           29           92           9
15           32               47        12           29           94           9

*Minimum detections for ≥40% TW grading units correspond to detection at 80% POD with 90% CL; minimum detections for <40% TW grading units correspond to detection of at least 80% of the flawed grading units in that bin.

f. It is recommended that extraneous test variables (for example, denting, deposits, tube geometry changes) be included for each damage mechanism category, where applicable, based on SG operating experience. g. It is recommended that the SG type, tube identification, SG leg, and other pertinent information typically available to a data analyst during a field examination be identified for each data set. h. It is recommended that for each practical examination data set, the individual be provided with a description of the examination techniques performed along with a set of analysis guidelines for each technique. The grading criteria to be applied should also be provided.
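The minimum-detection acceptance values in Tables G-2 and G-3 follow from a one-sided binomial demonstration: the analyst must detect enough flawed grading units that a true POD below 80% would produce such a score less than 10% of the time. The following Python fragment (an illustrative sketch, not part of the guideline) reproduces the ≥40% TW minimum-detection values of Table G-2:

```python
from math import comb

def min_detections(n_flawed, pod=0.80, confidence=0.90):
    """Smallest number of detections d such that an analyst whose true POD is
    only `pod` would score d or better with probability <= 1 - confidence."""
    alpha = 1.0 - confidence
    for d in range(n_flawed + 1):
        # Upper tail P(X >= d) for X ~ Binomial(n_flawed, pod)
        tail = sum(comb(n_flawed, k) * pod**k * (1 - pod)**(n_flawed - k)
                   for k in range(d, n_flawed + 1))
        if tail <= alpha:
            return d
    return None  # sample too small to demonstrate POD at this confidence

# Matches the minimum-detection column of Table G-2:
for n in (11, 17, 18, 24, 25, 31, 32):
    print(n, min_detections(n))
```

Note that 11 flawed grading units is the smallest sample for which the demonstration is possible at all (a perfect 11-of-11 score just satisfies 80% POD at 90% CL), which is why the tables start there.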


G.4.2.2 Analyst Qualification Criteria

a. To be considered a QDA, an analyst shall successfully pass both written and practical examinations for all damage mechanisms, including technique parameters (for example, detection, sizing, and orientation). See Figure G-1 for the flow diagram.

b. A grade of at least 80% shall be required to pass the written examination.

c. Practical examinations for each data set shall be graded by one or more of the following methods, depending on the technique applicability: detection, sizing, and orientation.

1. Personnel shall be considered qualified for detection of a specific damage mechanism if all of the following requirements are met:
o A POD of at least 80%, at a 90% CL, for flawed grading units ≥40% TW.
o Detection of at least 80% of the flawed grading units <40% TW.
o The number of reported false calls is no more than 10% of the total number of unflawed grading units.

2. Personnel shall be considered qualified for performing sizing measurements on a specific damage mechanism if a root mean square error (RMSE) of less than or equal to 10% is demonstrated. The sample set RMSE is calculated using Equation G-1:
RMSE = sqrt[ (1/n) * Σ_{i=1}^{n} (M_i - T_i)^2 ]    Equation G-1

Where:
M_i is the eddy current measured flaw parameter assigned by the individual analyst for the i-th indication,
T_i is the eddy current measured flaw parameter for the i-th indication determined by expert opinion, and
n is the number of measured grading units in the data set.

Only those flaws that are detected are considered in the RMSE calculation (a zero is not assigned as the measured flaw parameter for flaws not detected).

3. Personnel shall be considered qualified for determining orientation of a specific damage mechanism if the correct orientation is reported on at least 80% of the flawed grading units. Only those flaws that are detected are considered in the orientation calculation (a zero is not assigned as the determined orientation for flaws not detected).
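The RMSE acceptance check in Equation G-1 can be sketched as follows in Python (illustrative only; the depth calls are hypothetical percent-TW values, and undetected flaws are simply omitted from the input, as the calculation directs):

```python
import math

def rmse(measured, truth):
    """Equation G-1: RMSE between analyst-measured (M_i) and expert-determined
    (T_i) flaw parameters. Inputs include only the flaws the analyst detected."""
    if len(measured) != len(truth) or not measured:
        raise ValueError("need paired, non-empty measurement/truth lists")
    n = len(measured)
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured, truth)) / n)

# Hypothetical depth calls (percent TW) versus expert-opinion truth:
analyst = [45, 62, 30, 55]
truth   = [50, 60, 35, 50]
print(round(rmse(analyst, truth), 2))  # 4.44, meeting the <= 10% criterion
```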


Figure G-1 Qualification Flow Chart


G.4.2.3 Re-examination

If an individual fails to pass either the written examination or any applicable technique parameter of the practical examination, then the individual should have the option of undergoing a re-examination following the required reviews, training, or waiting period as defined herein. See Figure G-1 for the flow diagram.
G.4.2.3.1 Written Examination

Prior to re-examination, an individual failing the initial written examination should receive additional training on topics or subjects in which the individual did not demonstrate an acceptable level of understanding. If the individual fails to attain a passing score on a written retest, a five-day wait is required before a subsequent retest may be administered. Each written re-examination contains a minimum of 40 questions and is assembled by a random selection process, which may be augmented by the inclusion of questions specific to the areas of deficiency.
G.4.2.3.2 Practical Examination

A practical re-examination shall be given for each practical examination data set in which the individual received less than a passing grade. Missed indications should be reviewed, and no waiting period is required prior to the initial retest. It is recommended that, prior to a second retest, the individual complete additional training as deemed necessary by the proctor. This retest shall include each practical examination data set in which the individual received less than a passing grade. If the individual fails to attain a passing score after the second retest, a 30-day waiting period, with 40 hours of additional training, shall be required before a new test can be administered. This new test shall include the entire written and practical examination described in Section G.4.2.1. No credit shall be given for the previous three practical examinations or for the written examination.

G.4.2.4 Interrupted Service

A QDA who has not analyzed data within a consecutive 15-month period shall be required to successfully complete the practical examination requirements specified in Section G.4.2.

G.4.2.5 QDA Requalification

QDA requalification shall be performed within one year of the EPRI addition of a new technique/damage mechanism to the QDA qualification process. The individual should be considered requalified if the practical examination requirements of G.4.2.2 are met for the new technique/damage mechanism. The individual's records are to be updated to reflect requalification on new techniques/damage mechanisms. See Figure G-2 for the flow diagram.

Figure G-2 Requalification Flow Chart

G.4.2.6 QDA Qualification Records

It is recommended that the required documentation identified herein supplement, not replace, any documentation requirements identified by the employer's written practice. Records of certification to Level II or III, as required in Section G.2.1, can be separately maintained at the discretion of the employer. A written testimony of qualification should include:

a. Name of the individual
b. Statement indicating record of completion of training, including training hours, dates attended, and training institution
c. Dates and pass/fail results assigned to the written examination and to each practical examination data set


It is recommended that the employer maintain examinations with grades assigned and records of material covered.

G.4.3 SSPD Examinations

G.4.3.1 Examinations and Data Sets

The licensee is responsible for preparing, or contracting for preparation of, SSPD examinations and data sets.
G.4.3.1.1 Written Examinations

A written test evaluates knowledge of the specific plant SG history and conditions, examination techniques, expected damage mechanisms, and unique challenges to the examination process. The test should contain a minimum of twenty questions and should be conducted in an open-book format in order to simulate field practices.
G.4.3.1.2 Practical Examinations

a. The practical examination for flaw detection and orientation contains raw data for each examination technique/damage mechanism known or expected to exist based on the DA for the applicable SG. Sufficient data should be available during the test to allow grading of examinations on a POD/CL basis, and the data should be representative of typical expected responses and conditions applicable to the SG. For examination techniques/damage mechanisms for which the analyst was evaluated on a QDA or other statistically valid SSPD, the test may consist of a reduced data set evaluated on an all-inclusive basis. For examination techniques/damage mechanisms for which the analyst was not evaluated on a QDA or other statistically valid SSPD, the analyst should be evaluated on a technique-specific basis that can be used to measure analyst POD/CL.

b. The practical examination for flaw sizing/characterization (for example, depth sizing or length sizing) contains raw data for damage mechanisms known or expected to exist based on the DA for the applicable SG. This examination is only applicable to those personnel performing flaw sizing/characterization activities during a specific outage.

G.4.3.2 Examination Grading

G.4.3.2.1 Written Examination

Each analyst shall score at least 80% on the written test.


G.4.3.2.2 Practical Examination

a. Detection and Orientation
1. It is recommended that each analyst demonstrate a detection capability of at least that provided in Section G.4.2.2.
2. It is recommended that each analyst demonstrate an orientation identification capability that is correct on at least 80% of the flawed grading units (if orientation activities are performed).
b. Flaw Sizing/Characterization
1. It is recommended that, where applicable, each analyst performing flaw sizing demonstrate an RMSE of ≤ 10% TW (e.g., anti-vibration bar (AVB) wear, thinning). Alternative sizing errors are acceptable provided they are considered in the applicable integrity assessments.
2. It is recommended that flaw sizing/characterization performance for techniques not evaluated on a QDA test or statistically valid SSPD, if applied, be demonstrated and documented.
c. It is recommended that false call performance be at the discretion of the licensee.

G.4.3.3 Re-Examination

It is recommended that retesting be at the discretion of the licensee.

G.4.3.4 SSPD Performance Feedback

The analyst shall be allowed to review missed indications and overcalls on the practical examination.

G.5 Re-Grading QDA and SSPD Examinations

Re-grading of examinations by the proctor is acceptable, with documentation.

Attachment G-1 QDA Training and Laboratory Program Contents


LECTURE

INTRODUCTION
Purpose of Course
Inspection Requirements Overview

STEAM GENERATORS
Design and Operation
Operating Experience

INDUSTRY EXPERIENCE
Thinning
PWSCC
Wear
Denting
IGA/SCC
Fatigue
Manufacturing Anomalies
Loose Parts/Loose Part Damage

INSTRUMENTATION/SOFTWARE OVERVIEW
Data Acquisition
Data Analysis

EDDY CURRENT PROBES
Bobbin Coil Probes
Mechanically Rotated Probes
+Point™ Coil
Pancake Coil
Axial/Circumferential Wound Coils
Transmit/Receive Coils

Fixed Array Probes
Transmit/Receive Electronically Rotated Coils
Profilometry Probes

CALIBRATION STANDARDS
ASME
Wear Scar
Notch Array
Profile

DATA ANALYSIS PRINCIPLES
Signal Rotation Versus Frequency
Factors Affecting Frequency Selection
Signal-to-Noise (S/N)
Detection
Measurement
Probe Type
Multiparameter Signal Suppression
Flaw Specific Detection/Analysis Principles
Probe Specific Detection/Analysis Principles
Flaw Specific Sizing Principles
Data Quality

LABORATORY
Calibration Procedures
Analysis Techniques
Practical Examination Data Sets
Application of Analysis Techniques
Grading Criteria
Self-Study

PERFORMANCE DEMONSTRATION FOR EDDY CURRENT EXAMINATION

H.1 Scope

a. This appendix provides performance demonstration requirements for eddy current examination techniques and equipment.
b. NDE technique qualification and documentation, including both destructive and NDE results, shall be performed under a 10CFR50 Appendix B Quality Assurance Program. Tubes pulled prior to April 15, 1999, may be excluded from 10CFR50 Appendix B requirements and may be grandfathered.
c. The performance demonstration requirements specified in this appendix apply to the acquisition process but do not apply to personnel involved in the data acquisition process. It is recommended that data acquisition personnel be trained and qualified by their employer in a manner commensurate with the tasks they perform. It is recommended that the requirements for training and qualification of such personnel be described in the employer's written practice.
d. Techniques that have been qualified in accordance with this appendix may be used in procedures without regard to the organization that qualified the technique.
e. All work associated with the NDE technique qualification, with the exception of the peer review, may be performed by competent technical personnel who are not QDAs.

H.2 General Examination System Requirements

H.2.1 Technique Requirements

The technique specification contains a statement of scope that specifically defines the qualification applicability in the context of damage mechanism/extraneous test variable combinations, material, and geometry.

H.2.1.1 Examination Technique Specification Sheet

The ETSS defines the following essential variables as a single value, range of values, or formula:

H.2.1.1.1 General Variables: All Types of Probes (Bobbin, Rotating and Array)

a. Equipment Variables
1. Instrument or system name and model
2. Analog cable type and length, including:
o Probe cable type (wiring used within sheath), manufacturer, length and part number
o Extension cable type, manufacturer, length and part number
3. Probe manufacturer and part number
b. Technique Variables
1. Examination frequencies
2. Drive voltage and gain settings
3. Coil excitation modes (e.g., absolute, differential or transmit/receive)
4. Tubing material and wall thickness
c. Analysis Variables
1. Method of calibration (e.g., voltage setting, span, phase, sizing curves, filtering and mixes)
2. Data analysis requirements (e.g., typical analysis window display, C-Scan display if applicable and filtering)
3. Reporting requirements
4. Software used for acquisition and analysis, including manufacturer's name and specific revision level
H.2.1.1.2 Specific Essential Variables for Bobbin Coil

a. Equipment Variables
1. Probe type, coil width size, coil spacing and coil swept peak frequency
2. Fill factor
b. Technique Variables
1. Minimum axial samples per inch

H.2.1.1.3 Specific Variables for Rotating Probes

a. Equipment Variables
1. Coil type, coil dimension, coil part number and coil swept peak frequency
2. Maximum diametral offset (applicable to non-surface-riding coils)
b. Technique Variables
1. Minimum axial samples per inch and circumferential samples per inch
H.2.1.1.4 Specific Variables for Array Probes

a. Equipment Variables
1. Coil sensing area, spacing, coil part number and coil swept peak frequency
2. Maximum diametral offset
b. Technique Variables
1. Minimum axial samples per inch

H.3 Performance Demonstration

H.3.1 General

Successful completion of Supplement H2 constitutes qualification of an acquisition technique and an analysis method.

H.3.2 Essential Variable Ranges

a. Any two techniques with the same essential variables are considered equivalent. Equipment with essential variables that vary within the tolerances specified in Section H.4 is considered equivalent. When variations in the technique allow more than one value or range for an essential variable, repeat the technique qualification at the minimum and maximum value of each essential variable, with all other variables remaining at their nominal values.
b. When the method does not specify a range for essential variables but establishes criteria for selecting values, the criteria should be demonstrated for essential variables such as sample rates.

H.3.3 Requalification

When a change in an acquisition technique or analysis technique causes an essential variable to exceed a qualified range, requalify the acquisition or analysis technique for the revised range.

H.3.4 Disqualification of Techniques

The qualification of techniques depends on the quality of the data sets used in the qualification. Therefore, to improve data set credibility, additional pulled tube and laboratory specimen data that become available should be added to the qualification data sets on an annual basis. A technique that initially met the requirements of this appendix may then fail the acceptance criteria after data are appended. For any technique believed not to meet the detection requirements stated in Supplement H2, immediate notification to the industry will be provided by the SGMP NDE IRG chairperson or equivalent. An industry peer review group will be convened within 100 days to review this assessment. This peer review will be performed consistent with the requirements of Supplement H4. The results of the peer review will be made available to the industry by the SGMP NDE IRG chairperson or equivalent. Any technique determined to be disqualified for not meeting the requirements of this appendix will, if included in Appendix G, also be removed from Appendix G for qualification purposes.

H.4 Essential Variable Tolerances

H.4.1 Instruments and Probes

The qualified acquisition technique may be modified to substitute or replace instruments or probes without requalification when the ranges of essential variables defined in the ETSS (when applicable) are met, provided both the original and replacement instruments or probes are characterized utilizing Supplement H1.

H.4.1.1 Cable

Changing the probe or extension cable within the analog portion of the system may change the system resistance and capacitance, which may have an undesired effect on the operating characteristics of the system. Two approaches are available to determine the extent of the change:

a. A swept frequency plot of the originally qualified system configuration is compared to the swept frequency plot of the proposed system configuration. Acceptance is based on the center frequency of the plot being within +/- 10% of the qualified value.
b. A demonstrative approach in which data from the same indication are compared using both system configurations. Acceptance is based on the amplitude response being within +/- 10% of the qualified cable amplitude response.
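Both acceptance checks reduce to comparing a measured value against a ±10% band around the qualified value. A minimal sketch; the function name and the 400/430 kHz figures are illustrative, not from the guideline:

```python
def within_tolerance(qualified_value: float, proposed_value: float, tol: float = 0.10) -> bool:
    """True if the proposed value is within +/- tol (fractional) of the qualified value."""
    return abs(proposed_value - qualified_value) <= tol * abs(qualified_value)

# Approach (a): center frequency of the proposed cable configuration
# (430 kHz, illustrative) vs. the qualified value (400 kHz, illustrative).
ok = within_tolerance(400_000.0, 430_000.0)  # 30 kHz deviation, 40 kHz allowed -> True
```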

H.4.1.2 Frequency/Material/Wall Thickness

a. The relative current density at a given depth within the tube wall can be calculated for different combinations of frequency and material. The technique is considered equivalent if the relative current density is within +/- 10% of the qualified technique. The tolerance is applied as a calculated value. The relative current density value should be calculated at the OD surface of the tube wall.
b. If the same technique has been qualified at two different frequencies, then all frequencies between the qualified frequencies are considered qualified.
c. Calculation of Relative Current Density: Relative current density is a measure of the eddy currents generated within the tube wall. The value is expressed as the percentage of eddy current density remaining relative to that available at the tube wall ID. First, solve for the standard depth of penetration (SDP). Equation H-1 gives the SDP in units of inches:

SDP (δ) = 1.98 √(ρ/f)    (Equation H-1)

Where:
ρ is the material resistivity (μΩ·cm)
f is the frequency (hertz)

The relative current density (Jx/Jo) is then approximated by:

Jx/Jo = e^(−x/δ)    (Equation H-2)

Where:
Jx/Jo is the approximate ratio of the current density at the relative distance of x versus o, expressed as a percentage value
x is the relative position within the tube wall
o is at the surface of the coil

For this equation, x is the distance (in inches) between x and o when SDP (δ) is in inches.
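Equations H-1 and H-2 can be evaluated directly. A minimal sketch, assuming Alloy 600 tubing with a nominal resistivity of about 103 μΩ·cm (an assumed value) examined at 400 kHz:

```python
import math

def standard_depth_of_penetration(resistivity_uohm_cm: float, freq_hz: float) -> float:
    """Equation H-1: SDP (inches), resistivity in micro-ohm-cm, frequency in Hz."""
    return 1.98 * math.sqrt(resistivity_uohm_cm / freq_hz)

def relative_current_density(distance_in: float, sdp_in: float) -> float:
    """Equation H-2: Jx/Jo = e^(-x/delta), returned as a percentage."""
    return 100.0 * math.exp(-distance_in / sdp_in)

# Assumed values for illustration: resistivity ~103 micro-ohm-cm, 400 kHz,
# relative current density evaluated at the OD of a 0.050 in. tube wall.
sdp = standard_depth_of_penetration(103.0, 400_000.0)   # ~0.032 in.
jx_jo = relative_current_density(0.050, sdp)            # ~21%
```

For this case the SDP is about 0.032 in., so the current density at the OD of a 0.050 in. wall is roughly 21% of the ID value.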

H.4.1.3 Array Probe Sensing Area Coverage

The dimensions, or footprint, of the sensing area sensitivity for an array probe can be characterized as amplitude crossover. The amplitude crossover is the point below the peak amplitude response from adjacent individual sensing areas in an array where the response signals cross. This sensing area attribute is essential to flaw resolution, signal repeatability, and the minimum detection capability of the probe. The amplitude crossover should not exceed (-10%) of the originally qualified technique between any two adjacent sensing areas.

H.4.2 Computerized System Algorithms

Algorithms may be altered provided the modified algorithms are demonstrated to be equivalent or better than those initially qualified.

H.4.3 Calibration Methods

Alternative calibration methods may be used without requalification if it is demonstrated that the proposed calibration method is equivalent to or more conservative than the methods described in the qualified acquisition technique or qualified analysis method.

H.5 Record of Qualification

Technique qualification documentation includes identification of acquisition techniques, analysis methods, essential variables, data sets and performance indices.

Supplement H1 Equipment Characterization

H1.1 Scope


This supplement specifies essential variables associated with eddy current data acquisition instrumentation and establishes a protocol for essential variable measurement.

H1.2 Eddy Current Instrument


It is recommended that all instruments meet the requirements of the ASME Boiler and Pressure Vessel Code, Section XI. The essential variables for the eddy current instrument are related to:
Signal generation
Amplification, demodulation, and filtering
Analog-to-digital (A/D) conversion
Channel crosstalk

H1.2.1 Signal Generation

H1.2.1.1 Total Harmonic Distortion

Harmonic distortion is due to nonlinearities in the amplitude transfer characteristics of the instrument. The output contains not only the fundamental frequency but also integral multiples of the fundamental frequency. For eddy current instruments, harmonic distortion is a measure of the quality of the sinusoidal signal injected into the coil(s). The total harmonic distortion is expressed either as percent distortion relative to the fundamental sinusoidal frequency or as the ratio, in decibels (dB), of the amplitude of the fundamental frequency to the amplitude of the largest sidelobe as displayed on a frequency spectrum plot. Measure it for each frequency specified. When used as an essential variable, specify the maximum harmonic distortion.

H1.2.1.2 Output Impedance

The output impedance is measured for each test frequency at the output connector of the instrument. Both the magnitude and phase are measured for each specified frequency. When used as an essential variable, specify the tolerance of the ratio of the output (transmitter) to input (receiver) impedance.

H1.2.2 Amplification, Demodulation and Filtering

H1.2.2.1 Input Impedance

The input impedance is to be measured independently of the output impedance if the transmitter and receiver are not wired to the same coil(s), as is the case for reflection (driver/pickup) arrangements. Measure both the magnitude and phase at each specified frequency. When used as an essential variable, specify the tolerance of the ratio of the output (transmitter) to input (receiver) impedance.

H1.2.2.2 Amplifier Linearity and Stability

a. Amplifier linearity and stability of each channel used for inspection are measured as the ratio of the signal injected at the instrument input to the magnitude of the signal measured at the data analysis screen. It is a measurement of the similarity between the eddy current signal sensed at the coil and the signal observed on the analysis screen after signal amplification and filtering. The measurement is performed:
1. For five different gain settings equally spaced between the smallest and largest gain values available on the instrument
2. For five different signals injected at the instrument input at each gain setting, equally spaced between the smallest detectable signal and the largest signal that can be obtained without saturation
b. Linearity is expressed in terms of percentage deviation from a best-fit linear relationship between corresponding input and output values when plotted on a graph. The percentage is determined by dividing the maximum deviation from the line by the full-scale value.
c. When used as essential variables, express linearity and/or stability as minimum requirements. The output/input graph shown in Figure H1-1 illustrates the curve-fitting method used to determine amplifier linearity.
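The best-fit deviation in item b can be computed with an ordinary least-squares line. A minimal sketch with illustrative input/output voltages (not guideline data); full scale is taken here as the largest observed output:

```python
def linearity_percent(inputs, outputs):
    """Percent nonlinearity: maximum deviation of the measured outputs from a
    least-squares best-fit line, divided by the full-scale (largest) output."""
    n = len(inputs)
    mx, my = sum(inputs) / n, sum(outputs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(inputs, outputs))
             / sum((x - mx) ** 2 for x in inputs))
    intercept = my - slope * mx
    max_dev = max(abs(y - (slope * x + intercept)) for x, y in zip(inputs, outputs))
    return 100.0 * max_dev / max(outputs)

# Illustrative injected/observed amplitudes (volts), not guideline data:
pct = linearity_percent([1, 2, 3, 4, 5], [1.0, 2.1, 2.9, 4.0, 5.0])  # 2.0%
```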

Figure H1-1 Instrument Linearity

H1.2.2.3 Phase Linearity

Phase linearity of each channel used for inspection is measured as the maximum phase angle deviation between a best-fit line and the real measured data, represented on a graphic plot of the phase measurement. The measurement is performed as follows:
a. A signal is injected into the input of each channel used for the inspection at the corresponding ECT frequency.
b. The signal amplitude is adjusted to produce an output close to the maximum amplitude without saturation.
c. The phase angle of the output is measured. The phase angle of the input is varied over at least 16 different values equally spaced from 0 to 360 degrees.
d. The output phase angle is plotted on the vertical axis against the input phase angle on the horizontal axis. A linear best fit with a slope of 1 degree/degree is performed.
e. A separate phase linearity curve and best fit is produced for each channel used for the inspection.
f. The phase linearity is expressed in terms of the maximum deviation, in degrees, between the output phase measurement and the best-fit line. When used as an essential variable, express the maximum phase linearity deviation that is retained.

H1.2.2.4 Bandwidth and Demodulation Filter Response

Bandwidth of each channel used for inspection is measured as the frequency where the demodulation filter amplitude frequency response reaches the maximum accepted deviation. The measurement is performed as follows:
a. A signal is injected at each input used for the inspection.
b. The signal frequency is adjusted to the ECT frequency of the corresponding channel.
c. The signal amplitude is adjusted to get close to the maximum output range without saturation.
d. The output amplitude is measured for each channel at the frequency injected.
e. The input frequency is increased in steps of less than 5% of the bandwidth, and the corresponding output amplitudes are measured.
f. From the previous measurements, a graphic is plotted for each channel. The vertical axis displays the ratio of the signal amplitude at each injected frequency to the signal amplitude at the corresponding ECT frequency. The horizontal axis displays the corresponding frequency difference.
g. The bandwidth is expressed in hertz as the value on the graphic where the ratio goes outside the acceptable range. For example, the acceptable range might be 100% +/- 2%. When used as an essential variable, express the bandwidth as the minimum bandwidth frequency.

H1.2.3 A/D Conversion

H1.2.3.1 A/D Resolution

The resolution of the A/D conversion is the value of the input voltage that corresponds to a change of one bit. It is a measurement of the smallest change in the eddy current signal that can be observed after digitization. If applicable, it is measured for five equally spaced gain settings between the smallest and the largest gain values available on the instrument. When used as an essential variable, express the resolution of the A/D conversion as a minimum value.
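The one-bit resolution follows directly from the full-scale voltage and the converter bit count. A minimal sketch; the 10 V full scale and 12-bit depth are assumed values for illustration:

```python
def adc_resolution_volts(full_scale_volts: float, bits: int) -> float:
    """Input voltage corresponding to a one-bit change: full scale / 2^bits."""
    return full_scale_volts / (2 ** bits)

# Assumed values: 10 V full-scale input range, 12-bit converter.
lsb = adc_resolution_volts(10.0, 12)  # ~2.44 mV per bit
```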

H1.2.3.2 Dynamic Range

The number of bits for full-scale input determines the dynamic range of the A/D conversion. It is a measure of the maximum eddy current signal that can be recorded without distortion after digitization. When used as an essential variable, express the number of bits for full-scale input as a minimum value.

H1.2.3.3 Digitizing Rate

The digitizing rate is the frequency in hertz (Hz) at which the A/D conversions are made. It is recommended that the minimum digitizing rate of the A/D conversion be capable of providing the specified sample rate at the probe speeds to be used. When used as an essential variable, specify the minimum digitizing rate.

H1.2.4 Channel Crosstalk

Channel crosstalk is measured as the signal present at the output of a channel when no signal is present at the corresponding input (i.e., it comes from a signal present at another input). The measurement is performed as follows:
a. The gain is set at the minimum value of the instrument range.
b. A signal is injected at each input used for the inspection, applied to one channel at a time.
c. The signal frequency is adjusted to the ECT frequency of the corresponding channel.
d. The signal amplitude is adjusted to get close to the maximum output range without saturation.
e. The output amplitude is measured at each other channel output.
f. The crosstalk is expressed as a percentage: the ratio of the output amplitude of the channel having no signal input to the output amplitude of the channel having the signal injected at its input, multiplied by 100%. When used as an essential variable, express the channel crosstalk as the maximum percentage of crosstalk.
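Step f is a simple amplitude ratio. A minimal sketch with illustrative amplitudes:

```python
def crosstalk_percent(idle_output: float, driven_output: float) -> float:
    """Step f: idle-channel output amplitude as a percentage of the driven channel's output."""
    return 100.0 * idle_output / driven_output

# Illustrative amplitudes (volts): 0.02 V appears on an idle channel while the
# driven channel reads 4.0 V near full scale.
xt = crosstalk_percent(0.02, 4.0)  # 0.5%
```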

H1.3 Probe Characterization


H1.3.1 Impedance

Measure the impedance (magnitude and phase) for each test coil at the test frequencies selected for the examination. This is considered to be the input impedance of the instrument (see Section H1.2.2.1).

H1.3.2 Center Frequency

The center frequency is measured with the full cable length between the coil and the instrument input connector. When used as an essential variable, specify the allowable range of the center frequency.

H1.3.3 Coil Configuration Sensing Area

Measurements are performed with the eddy current system hardware (e.g., tester, probes, extension cables) configured as required for actual implementation. Essential variables are defined for bobbin, rotating and array probes.

H1.3.3.1 Bobbin Probes

H1.3.3.1.1 Effective Scan Field Width

The effective scan field width (ESFW) is a measure of the extent of the effective sensing area in the scanning direction. It is also a measure of the spatial resolution. This resolution determines the minimum spacing between three successive notches that can be distinguished, compared to a single flaw of equal depth. Measurements are performed for each eddy current examination frequency and mode of coil operation, for example, absolute, differential, or transmit/receive. A 100% TW notch 0.008 in. (0.2 mm) wide, by a minimum length equal to the sensing area's cross-sectional dimension in the scanning direction plus 1.0 in. (25 mm), is scanned perpendicular to the induced current flow. A curve is plotted from the signal amplitude generated by the scan of the probe past the notch. The ESFW, in in. (mm), is determined by subtracting the notch length from the measured distance between corresponding signal amplitude points for a given attenuation below the maximum amplitude (typically half amplitude). The ESFW can be a negative value for one or all of the four points measured on the curve.

When used as an essential variable, specify the ESFW for each mode of excitation as the sensing area measured at a point on the curve used to determine the value. A minimum of four points, equally spaced, are to be selected so as to define the curve on either side of the minimum and maximum signal amplitudes. Example:

ESFW -12 dB = -0.08 in. (2.03 mm)


H1.3.3.1.2 Fill Factor Coefficient

The fill factor coefficient (FFC) is a measure of the drop in effective sensitivity due to the increase in distance between the source of the eddy current inducing magnetic field and the inner surface of the tubing. The measurement is performed for each ETSS examination frequency, coil configuration, and excitation mode. A 100% TW notch of 0.008 in. (0.2 mm) width and of a minimum length equal to the sensing area's cross-sectional dimension in the scanning direction plus 1.0 in. (25 mm) is scanned perpendicular to the induced current flow. The gain setting is adjusted for an 80% full-scale peak signal for the signal having the largest amplitude. The measurements are performed for three or more fill factors (the square of the ratio of probe OD to tube ID) between the largest and smallest to be encountered in the SG tubes. When used as an essential variable, specify the fill factor, for absolute and differential modes, as the amplitude attenuation from the largest fill factor to the smallest fill factor. Example:

FFC 0.85 to 0.70 = -5 dB
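The fill factor and the dB attenuation quoted in the FFC example can be computed as follows; the probe and tube diameters and the amplitudes are hypothetical values for illustration:

```python
import math

def fill_factor(probe_od: float, tube_id: float) -> float:
    """Fill factor: square of the probe OD to tube ID ratio (dimensionless)."""
    return (probe_od / tube_id) ** 2

def attenuation_db(amplitude: float, reference: float) -> float:
    """Amplitude change in dB relative to the reference (negative = attenuation)."""
    return 20.0 * math.log10(amplitude / reference)

# Hypothetical dimensions (inches) and amplitudes (volts) for illustration:
ff = fill_factor(0.610, 0.664)   # ~0.84
db = attenuation_db(2.25, 4.0)   # ~-5 dB, close to the FFC example value
```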


H1.3.3.1.3 Depth Coefficient

The depth coefficient (DC) is a measure of the depth of penetration and current density within the wall thickness of the tube. The measurement is performed for each eddy current examination frequency, coil configuration, and mode of operation at the nominal fill factor used in the SG tube. The following notches of 0.008 in. (0.2 mm) width and a minimum length equal to the coil width plus 1.0 in. (25 mm) are fabricated from the OD surface and the ID surface of the tube:
One notch of 100% TW depth
One notch of 80% TW depth
Two notches of 60% TW depth in the same cross section, 180° apart
Four notches of 40% TW depth in the same cross section, 90° apart
Six notches of 20% TW depth in the same cross section, 60° apart

The notches are scanned perpendicular to the sensing area direction. The gain setting is adjusted for an 80% full-scale peak signal for the signal having the largest amplitude (100% TW depth). The amplitude of each eddy current signal is divided by the number of notches in the same cross section. When used as an essential variable, specify the defect DC, for each mode, as a maximum amplitude attenuation for each depth relative to 100% TW. Examples:

DC 80% = -2 dB
DC 60% = -5 dB
H1.3.3.1.4 Axial Length Coefficient (ALC)

The ALC is a measure of the influence of the axial crack length on the amplitude of the eddy current signal. The measurement is performed for each of the examination frequencies and modes, at the nominal fill factor to be used in the SG tube. A 100% TW notch of 0.008 in. (0.2 mm) width and of varying length, from a minimum length equal to the coil width up to the coil width plus 0.5 in. (13 mm) in increments of 0.1 in. (2.5 mm), is scanned perpendicular to the coil's preferred direction. The gain setting is adjusted for an 80% full-scale peak signal for the signal having the largest amplitude. When used as an essential variable, specify the ALC, for each mode, as a maximum amplitude attenuation for each length relative to the longest one. Examples:

ALC -0.098 in. (2.5 mm) = 0 dB
ALC -0.197 in. (5.0 mm) = -2 dB
H1.3.3.1.5 Transverse Width Coefficient (TWC)

The TWC is a measure of the dependency of the eddy current signal amplitude on the transverse crack width. The measurement is performed for each eddy current examination frequency, for each mode and nominal fill factor. A 100% TW notch of the same length as the sensing area width, and of width from 0.008 in. (0.2 mm) to 0.02 in. (0.6 mm) in increments of 0.004 in. (0.1 mm), is scanned parallel to the direction of induced eddy current flow. The gain setting is adjusted for an 80% full-scale peak signal for the signal having the largest amplitude. When used as an essential variable, specify the TWC, for each mode, as a maximum amplitude attenuation for each defect width relative to the largest one. Examples:

TWC -0.004 in. (0.1 mm) = -0.5 dB
TWC -0.008 in. (0.2 mm) = -1.0 dB

H1.3.3.1.6 Phase-to-Depth Curve (PDC)

The PDC is derived from the phase measurement of the eddy current signal response to artifacts of varying depth in a calibration tube. The measurement is performed for each examination frequency, mode, and nominal fill factor. Notches from 20% TW ID to 100% TW and 20% TW OD to 100% TW, of the same length as the total coil(s) width plus 0.5 in. (13 mm) and of 0.008 in. (0.2 mm) width, are scanned perpendicular to the induced eddy currents. The gain setting is adjusted for an 80% full-scale peak signal for the signal having the largest amplitude. When used as an essential variable, it is recommended that the PDCs be given, for each mode with, as a minimum, the phase spread between each consecutive defect depth. Examples:

PDC (20% TW to 40% TW OD) = 30°
PDC (40% TW to 60% TW OD) = 20°

PDCs may be given for complementary defect types such as holes, laboratory-induced defects, or site-specific defects. Unless specified, consider the phase spread between consecutive defect depths as a minimum requirement.
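In analysis software, a PDC is typically applied by interpolating a measured phase angle between calibration points. A minimal linear-interpolation sketch; the (phase, depth) pairs are invented calibration values, not guideline data:

```python
def depth_from_phase(phase_deg, curve):
    """Linearly interpolate flaw depth (%TW) from signal phase (degrees)
    using (phase, depth) calibration pairs from a phase-to-depth curve."""
    pts = sorted(curve)
    for (p0, d0), (p1, d1) in zip(pts, pts[1:]):
        if p0 <= phase_deg <= p1:
            return d0 + (d1 - d0) * (phase_deg - p0) / (p1 - p0)
    raise ValueError("phase angle outside the calibration curve")

# Invented OD calibration pairs (phase in degrees, depth in %TW):
curve = [(40.0, 100.0), (90.0, 60.0), (110.0, 40.0), (140.0, 20.0)]
depth = depth_from_phase(100.0, curve)  # 50.0 %TW
```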
H1.3.3.1.7 Direct Current Saturation Strength (DCSS)

The DCSS applies only to probes delivered with a supplemental coil or a magnet designed to suppress the influence of possible magnetic variations. The DCSS is measured in air with a gaussmeter located at the center of the coil at a nominal distance from the tube's inner surface. It is expressed in milliTesla. When used as an essential variable, specify the DCSS coefficient and direction as a minimum requirement.

H1.3.3.2 Rotating Probes

H1.3.3.2.1 Effective Scan Field Width

See Section H1.3.3.1.1.


H1.3.3.2.2 Effective Track Field Width (ETFW)

The ETFW takes into account the combined influence of the coil configuration's sensing area and the scanning pitch for each examination frequency. It measures the drop in signal amplitude when the sensing area scans past the defect at increasing scanning distances. A 100% TW notch of 0.008 in. (0.2 mm) width, and of a minimum length equal to the sensing area width plus 1.0 in. (25 mm), is scanned perpendicular to the direction of the induced eddy currents for defect detection.

The gain setting is adjusted for an 80% full-scale peak signal for the signal having the largest amplitude. A curve is plotted for the signal amplitude as a function of the distance between the center of the sensing area and the center of the notch. When used as an essential variable, specify the ETFW coefficient, for each mode, as the maximum distance from the notch where a given signal attenuation is measured. A minimum of four points, equally spaced, are to be selected so as to define the curve on either side of the minimum and maximum signal amplitudes. Example:

ETFW -3 dB = 0.12 in. (3 mm)


H1.3.3.2.3 Lift-Off Value (LOV)

The LOV is a measure of the drop in effective sensitivity due to the increase in distance between the source of the eddy current inducing magnetic field and the inner surface of the tubing. The measurement is performed for each eddy current examination frequency, coil configuration, and mode of operation. A 100% TW notch of 0.008 in. (0.2 mm) width and of a minimum length equal to the sensing area width plus 1.0 in. (25 mm) is scanned perpendicular to the direction of induced eddy currents. The gain setting is adjusted for an 80% full-scale peak signal for the signal having the largest amplitude. The measurements are performed for three or more LOVs between the largest and smallest to be encountered in the SG tubes.

Lift-Off (inches) = Distance between the Sensing Area and the Tube ID Surface (includes coil recess)

When used as an essential variable, specify the LOV, for each mode, as the amplitude attenuation from the smallest to the largest LOV. Example:

LOV 0.85 to 0.70 = -5 dB


H1.3.3.2.4 Depth Coefficient

See Section H1.3.3.1.3 (and replace the term fill factor with the term lift-off)
H1.3.3.2.5 ALC

See Section H1.3.3.1.4 (and replace the term fill factor with the term lift-off)
H1.3.3.2.6 TWC

See Section H1.3.3.1.5 (and replace the term fill factor with the term lift-off)

H1.3.3.2.7 PDC

See Section H1.3.3.1.6 (and replace the term fill factor with the term lift-off)

H1.3.3.3 Array Probes

H1.3.3.3.1 Effective Scan Field Width

See Section H1.3.3.1.1.


H1.3.3.3.2 Diametral Offset Value (DOV)

The DOV is a measure of the drop in effective sensitivity due to the increase in distance between the source of the eddy current inducing magnetic fields and the inner surface of the tubing. The measurement is performed for each eddy current examination frequency, coil configuration, and mode of operation. A 100% TW notch of 0.008 in. (0.2 mm) width and of a minimum length equal to the sensing area width plus 1.0 in. (25 mm) is scanned perpendicular to the direction of induced eddy currents. The gain setting is normalized for an 80% full-scale peak signal for the signal having the largest amplitude due to the smallest offset value. The measurements are performed for three or more offsets between the largest and smallest to be encountered in the SG tubes.

Diametral Offset (inches) = Tube ID − Probe OD + Coil Recess    (Equation H-3)

When used as an essential variable, specify the DOV for each mode as the amplitude attenuation from the smallest to the largest signal amplitude. Example:

DOV 0.005" to 0.060" = -6 dB


Depth Coefficient

See Section H1.3.3.1.3 (and replace the term fill factor with the term diametral offset)
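The LOV and DOV essential variables above are expressed as amplitude attenuation in decibels (e.g., -5 dB, -6 dB). As a minimal illustrative sketch only, not part of the guideline (the function name is an assumption), the dB value is 20 times the base-10 logarithm of the amplitude ratio:

```python
from math import log10

def attenuation_db(a_reference, a_measured):
    """Amplitude change in dB relative to the reference signal.

    A measured amplitude smaller than the reference yields a negative
    value, i.e., an attenuation such as the -5 dB or -6 dB examples.
    """
    return 20.0 * log10(a_measured / a_reference)
```

For example, a signal that falls to roughly 56% of its reference amplitude corresponds to about -5 dB.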
H1.3.3.3.4 ALC

See Section H1.3.3.1.4 (and replace the term fill factor with the term diametral offset)
H1.3.3.3.5 TWC

See Section H1.3.3.1.5 (and replace the term fill factor with the term diametral offset)

H1.3.3.3.6 PDC

See Section H1.3.3.1.6 (and replace the term fill factor with the term diametral offset)
H1.3.3.3.7 Tube Circumference Coverage

The dimensions or footprint of the sensing area sensitivity for an array probe can be characterized as amplitude crossover. The amplitude crossover is the point below the peak amplitude response from adjacent individual sensing areas in an array where the response signals cross. This sensing area attribute is essential to flaw resolution, signal repeatability, and the minimum detection capability of the probe. The amplitude crossover for circumferential coverage can be measured by creating a polar plot with measurements taken at increments of 1 degree or less around the tube of a 60% TW, 0.080 in. (2 mm) long EDM notch artifact oriented in the preferred direction to achieve the highest amplitude response from the sensor. When used as an essential variable, the amplitude crossover is expressed as a percentage of peak amplitude. Example: Amplitude Crossover = 81%
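The crossover calculation can be sketched numerically. In this illustrative example the Gaussian lobe shapes, coil spacing, and width are assumptions, not probe data; only the method of taking the intersection of adjacent responses as a percentage of peak follows the text above:

```python
import math

# Hypothetical sensitivity lobes for two adjacent array coils, modeled as
# Gaussians centered 30 degrees apart (assumed geometry, not probe data).
def coil_response(angle_deg, center_deg, width_deg=18.0):
    return math.exp(-((angle_deg - center_deg) / width_deg) ** 2)

angles = range(0, 61)                               # 1-degree increments
coil_a = [coil_response(a, 15.0) for a in angles]
coil_b = [coil_response(a, 45.0) for a in angles]

# Crossover amplitude: the point where the two adjacent responses intersect,
# i.e., the largest amplitude seen by both coils, as a percentage of peak.
crossover = max(min(a, b) for a, b in zip(coil_a, coil_b))
peak = max(coil_a + coil_b)
crossover_pct = 100.0 * crossover / peak
```

With the assumed lobe width, the crossover falls near 50% of peak; a tighter coil pitch or wider lobes would raise it toward values like the 81% in the example above.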


Supplement H2 Qualification Requirements for Examination of SG Tubing

H2.1 General


SG tubing examination techniques and equipment used to detect and size flaws are qualified by performance demonstration. EPRIQ techniques shall meet the requirements of Supplement H4 for conduct of the peer review process. In lieu of the Supplement H4 peer review, non-EPRIQ techniques shall, as a minimum, have an IQDA verify and document compliance with Appendix H.

H2.2 Qualification Data Set Requirements


H2.2.1 General

a. It is recommended that test sample damage mechanism morphology and dimensions used to construct qualification data set grading units be based on SG operating experience as inferred from tube pulls.

b. Where applicable, it is recommended that the influence of extraneous test variables associated with each of the damage mechanisms (e.g., denting, dings, deposits, tube geometry changes) be assessed.

c. Test samples fabricated using mechanical or chemical methods may be used. However, the flaws should produce signals similar to those being observed in the field in terms of signal characteristics, signal amplitude, and S/N ratio.

d. Flaw dimensions for samples, used to construct grading units included in the qualification data set, shall be verified by physical measurements. Evidence of leakage may be used to verify 100% TW depth. Alternatively, flaw dimensions may be established as follows:

e. Supplemental NDE techniques may be used to determine the flaw dimensions for samples provided that the following steps are completed:

1. The supplemental NDE technique should be an alternative method to the technique being qualified, for example, ultrasonic examination being used to establish flaw dimensions for an eddy current technique.

2. Establish the uncertainties of the supplemental NDE technique by comparison to destructive analysis of flaw samples.

3. This comparison should include samples from the same type of damage mechanism utilizing a minimum of sixteen samples.

4. Establish the measurement error of the supplemental NDE technique in accordance with H2.3.2 for each structural parameter to be used in the technique qualification.

5. It is recommended that the correlation coefficient, as established in H2.3.2, meet the minimum requirements of Table H-1 for the supplemental NDE technique to be considered eligible for use:

Table H-1
Sample Size Correlation Coefficient

Sample Size    Correlation Coefficient (Minimum)
30             0.36
25             0.40
20             0.44
16             0.50

6. Future samples of the same type (damage mechanism/location) shall have a 20% sample of the flaws destructively analyzed to validate that the sample set is within the parameters of the original set from which the measurement error was determined. The samples being used for validation should represent the range that is being qualified. If the subsequent sample set destructive analysis results are not within the bounds of the original sample set, then the alternative flaw dimension profiles cannot be used, and the Chairman of the NDE IRG will be notified for corrective action.

7. Use the standard error of regression at a 95/50 confidence interval as the measurement error applied to flaw dimensions for samples used in technique qualifications utilizing supplemental NDE techniques for establishment of the flaw dimensions in lieu of physical measurements. The measurement error should be applied to the measurement in the most conservative manner and may be applied for specific variable ranges.

f. The effects of the coil field width and effective coil tracking, as defined in Supplement H1, are taken into account when comparing eddy current measured parameters with measured metallurgical results for crack-like indications. These data will be used to average and integrate the metallurgical measurements to establish a common basis for comparison to the qualification eddy current measurements. The ETSS should delineate whether average depth or maximum depth flaw measurements were used in the qualification.


H2.2.2 Detection Data Set

a. Select the data set according to the requirements of Table H-2 for each damage mechanism/extraneous test variable combination.
Table H-2
Minimum Number of Flawed Grading Units (60% TW) That Must Be Detected to Meet the Technique Acceptance Criterion of POD 0.80 at a 90% Confidence Level

Total Number of Flawed Grading Units      Minimum Number of Flawed Grading Units
in the Data Set for Technique             That Must Be Detected for
Acceptance Testing                        Technique Acceptance
11-17                                     11-17
18-24                                     17-23
25-31                                     23-29
32                                        29

b. The minimum detection data set is 11 flawed grading units.

c. It is recommended that the data set be uniformly distributed over the depth range of 60% to 100% TW. It is acceptable to use a lower percent TW criterion for detection rather than the 60% value.

d. POD calculations include the use of flawed locations reported as no detectable degradation (NDD).

H2.2.3 Sizing Data Set

a. Select the sizing data set for each damage mechanism/extraneous test variable combination.

b. The minimum sizing data set is 16 flawed grading units.

c. It is recommended that the sizing data set include the detection data set.

d. It is recommended that the remainder of the sizing data set have maximum measured depths less than 60% TW of the nominal tube wall thickness. It is recommended that the data set be uniformly distributed over the depth range of 20% TW to 59% TW, as is reasonably achievable. Specialized data sets are permissible for a targeted sizing range.

e. Sizing performance indices do not include NDD locations; however, locations that provide a 0% TW measurement are reported as 1% TW and are included in the performance indices.


The sizing data set is used to demonstrate depth sizing. Sizing performance may be supplemented by additional data sets as required to meet higher CLs as dictated by the needs of condition monitoring and operational assessments.

H2.3 Acceptance Criteria


Qualification is exclusively based on detection.

H2.3.1 Detection Acceptance Criteria

Techniques shall be considered qualified for detection if they demonstrate a POD of 0.80 or greater at a 90% lower-bound CL, using a data set of 11 or more flawed grading units. A flawed grading unit is defined, for this qualification, as a flaw that is equal to or greater than 60% TW. It is acceptable to use a lower percent TW criterion for detection rather than the 60% value. Table H-2 provides the minimum number of detections required to meet this acceptance criterion for data sets consisting of 11 to 32 flawed grading units. To meet the criterion of a POD of 0.80 or greater at a 90% CL, for example, in data sets numbering from 11 to 17 flawed grading units, the technique must detect every flawed grading unit. In data sets numbering from 18 to 24, the technique must detect all but one, and from 25 to 31, all but two.

In constructing Table H-2, calculations from the binomial distribution determined the minimum number of successes required in examining a flaw data set to ensure, at a CL of 90%, that the actual POD is 0.80 or greater. The binomial distribution provides the probability of each possible outcome over a specified number of trials when only two outcomes are possible on each trial, success or failure, and the likelihood of a success or failure is known or assumed. The probability of exactly x successes in n trials, when the probability of success on each trial is p, is calculated in Equation H-4:

P(x | n, p) = [n! / (x!(n - x)!)] p^x (1 - p)^(n - x)

Equation H-4

This expression can be used, then, to calculate the minimum number of detections required from a sample set of given size to be ensured, at a CL of 90%, that the actual POD is equal to or greater than 0.80. Starting with the largest number of detections possible (n), the probability of that number of detections is calculated and that of each lesser number of detections (n-1, n-2, etc.) so that when the probabilities are added together they do not exceed a combined probability of 0.10. These are the outcomes, then, that exceed 90% of all possible outcomes. So when observed they provide assurance, at a 90% CL, that the actual POD is equal to or greater than 0.80.


In the following example, the process described in this section is completed to determine the minimum number of detections required in a sample set of 18 flawed grading units to have 90% confidence that the actual POD is equal to or greater than 0.80. Using Equation H-4, the probability of detecting exactly 18 flaws out of the 18 in the sample set is:

P(18 | 18, 0.80) = [18! / (18! 0!)] (0.80)^18 (0.20)^0 = 0.0180

This means that the probability of detecting 18 out of 18 is 1.8%, which also means that if 18 flaws out of 18 in the sample set have been detected, then this outcome has exceeded 98.2% of all possible outcomes. Therefore, detection of 18 out of 18 represents a POD of 80% at 98.2% CL. The probability of detecting exactly 17 flaws out of the 18 in the sample set is:

P(17 | 18, 0.80) = [18! / (17! 1!)] (0.80)^17 (0.20)^1 = 0.0811

The probability of detecting exactly 16 flaws out of the 18 in the sample set is:

P(16 | 18, 0.80) = [18! / (16! 2!)] (0.80)^16 (0.20)^2 = 0.1723

The sum of the probabilities for 17 and 18 detections (the probability of 17 or more detections) is 0.0180 + 0.0811 = 0.0991, or 9.91%. This means that the probability of detecting 17 or more is 9.91%, and if 17 or more have been detected from a sample set of 18, then this outcome has exceeded 90.09% of all possible outcomes. So detection of 17 or more out of a sample set of 18 represents a POD of 80% at a 90.09% CL. Since this combined probability is less than 0.1000, it is at or beyond the 90% CL.

Adding the probability of 16 detections, however, results in a combined probability (the probability of 16 or more detections) of 0.0180 + 0.0811 + 0.1723 = 0.2714, or 27.1%. This means that the probability of detecting 16 or more out of 18 is 27.1%, which also means that if 16 or more out of 18 have been detected, then this outcome only exceeds 72.9% of all possible outcomes (80% POD at a 72.9% CL). This leads to the conclusion that detection of 17 or more out of 18 represents a POD of 80% at a CL of 90% or greater, while detection of 16 or better out of 18 leads to a POD of 80% at a CL of 72.9% or greater, which is not acceptable per established criteria.

The binomial distribution produces discrete rather than continuous values; as a consequence, the actual CLs vary among the minimum numbers of detections presented in Table H-2. However, the CLs associated with the acceptance criteria in the table are all equal to or greater than 90%.
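The tail-summing procedure used to construct Table H-2 can be sketched programmatically. The following is an illustrative aid, not part of the guideline; it accumulates the binomial probabilities from Equation H-4 exactly as in the worked example:

```python
from math import comb

def min_detections(n, pod=0.80, alpha=0.10):
    """Smallest number of detections k, out of n flawed grading units, such
    that P(X >= k | n, pod) <= alpha -- i.e., observing k or more detections
    demonstrates the POD at the (1 - alpha) confidence level."""
    tail = 0.0
    for x in range(n, -1, -1):
        # Binomial probability of exactly x detections (Equation H-4)
        tail += comb(n, x) * pod**x * (1.0 - pod)**(n - x)
        if tail > alpha:
            return x + 1
    return 0  # would only occur for alpha >= 1
```

For 18 grading units this returns 17, matching the worked example (the 17-or-more tail is 0.0991, below the 0.10 threshold), and it reproduces the Table H-2 requirements for 11 through 32 grading units.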


H2.3.2 Technique Sizing Performance

Technique performance for sizing is based on the standard error of regression and the correlation coefficient (r) of the eddy current measured parameters. The standard error of regression is defined by Equation H-5:

Syx = sqrt( [ nΣy² - (Σy)² - (nΣxy - ΣxΣy)² / (nΣx² - (Σx)²) ] / [ n(n - 2) ] )

Equation H-5

The correlation coefficient is defined by Equation H-6:

r = (nΣxy - ΣxΣy) / sqrt( [nΣx² - (Σx)²] [nΣy² - (Σy)²] )

Equation H-6

Where:
x is the NDE estimate
y is the structural variable (metallurgical value)
n is the number of values in the data set.
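Equations H-5 and H-6 can be computed directly from the summations. A minimal sketch (the function name is an assumption for illustration):

```python
from math import sqrt

def sizing_stats(x, y):
    """Correlation coefficient r (Equation H-6) and standard error of
    regression Syx (Equation H-5) for paired NDE estimates x and
    metallurgical values y."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(a * b for a, b in zip(x, y))
    # Equation H-6
    r = (n * sxy - sx * sy) / sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    # Equation H-5
    syx = sqrt((n * syy - sy * sy
                - (n * sxy - sx * sy) ** 2 / (n * sxx - sx * sx))
               / (n * (n - 2)))
    return r, syx
```

A perfectly linear data set yields r = 1 and Syx = 0; scatter about the regression line increases Syx, which is the quantity applied at 95/50 as the measurement error in H2.2.1.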


Supplement H3 Protocol for PDD Revision

H3.1 Scope


This supplement establishes the method to revise the Appendix G PDD. Revisions include additions to or deletions of existing data, inclusion of new techniques, or deletion of existing techniques.

H3.2 Administrative Responsibilities


H3.2.1 New Techniques

A documented request from a licensee for adding new eddy current techniques to the PDD, Appendix G, will be directed to the chairperson of the NDE IRG of the EPRI SGMP. The NDE IRG should conduct a review for consideration of inclusion. The new technique will consist of system components (instruments, probes, software, etc.) that are commercially available. The new technique will be substantially different in analysis methods as compared to the techniques already included in the PDD. The new technique will be a technique that is generally accepted and used by a majority of the utilities, to warrant industry-wide training and qualification of all data analysts. The NDE IRG will accept or reject the request, take the appropriate action as specified in this supplement, and inform the requester in writing.

H3.2.2 Existing Techniques

It is recommended that the QDA training, written test, and practical test content be reviewed by a group of at least five QDAs on a biennial basis or as directed by the NDE IRG. This review is intended to identify areas that are no longer appropriate, areas that are out of date and require revision, or areas that are not included but merit inclusion, based on recent industry findings. The recommendations of a simple majority will be documented for NDE IRG review and dispositioned within 6 months of the QDA review. The documentation of the peer review team will include ample justification for the NDE IRG to assess the recommendations. Required updates should be completed within the timeframe specified by the NDE IRG disposition.

H3.3 Technique Originator Responsibilities


The originator/sponsor of the technique is responsible for supplying sufficient information to the peer review team to permit adequate review and evaluation of the technique. The originator is also responsible for providing data of sufficient quantity and quality meeting the requirements of Appendix G for the development of training materials and practical tests for the qualification of personnel. The peer review group will be the judge of the data quantity and quality for review, training, and testing purposes.


H3.4 PDD Curator Responsibilities


The PDD curator (currently the EPRI NDE Center) is responsible for revising the PDD. This responsibility includes the preparation of training materials, revision of written test questions, and inclusion of peer-reviewed results for the practical examination.

H3.5 PDD Peer Review Procedure


H3.5.1 Team Structure

The peer review team will have at least five QDAs from vendor and utility organizations. For each peer review member, his or her QDA certification will be current and on file at the employer's facility, along with a copy on file at the EPRI NDE Center. For inclusion of new techniques, the technique qualification originator or its representative will be invited to participate and answer questions from the review panel. The sponsoring utility and/or vendor will not be considered part of the peer review team. It is recommended that the PDD Curator not be a participant of the peer review team.

H3.5.2 Conduct of the Review

a. The peer review team will verify that all data and information required to qualify the technique under the procedures of Appendix H are available for the review.

b. As a minimum, the following items will be reviewed and accepted:

1. Raw eddy current data
2. Essential variables
3. Analysis instructions
4. Analyzed results (expert opinion)
5. Appendix H compliance
6. Calibration standard drawings for new techniques
7. Training and testing materials

c. The team will accept or reject the data for inclusion in the PDD based upon a two-thirds majority acceptance by the peer review team.


Supplement H4 Protocol for Technique Peer Review

H4.1 Scope


This supplement establishes the method and protocol for conducting industry peer reviews of Appendix H or J techniques. The peer review process is an independent review intended to provide oversight and verification of the qualification process.

H4.2 Administrative Responsibilities


A documented request from a utility or vendor for conducting a peer review will be directed to the chairperson of the NDE IRG of the EPRI SGMP or the Manager of Steam Generator-NDE at the EPRI NDE Center.

H4.3 Technique Originator Responsibilities


The originator/sponsor of the technique is responsible for supplying sufficient information to the peer review team to permit adequate review and evaluation of the technique. The originator is also responsible for providing data of sufficient quantity and quality meeting the requirements of Appendix H and J as applicable. The peer review group will be the judge of the data quality for review.

H4.4 Examination Technique Specification Sheet Curator Responsibilities


The ETSS curator (currently the EPRI NDE Center) is responsible for incorporating the approved technique into the ETSS program. This responsibility includes populating the ETSS database, the formulation of performance numbers, requesting/assembling the peer review team, maintaining documentation of peer reviews, and distributing the approved ETSS to the industry.

H4.5 Peer Review Procedure


H4.5.1 Team Structure

An eddy current peer review team will have at least five QDAs from vendor and utility organizations and an ultrasonic peer review team will have at least three ultrasonic analysts. The technique qualification originator (who need not be a QDA) will be invited to participate and answer questions from the review panel. The ETSS Curator, sponsoring utility, and/or originator will not be considered part of the peer review team. For each peer review member, their certification will be current and on file at the EPRI NDE Center.


H4.5.2 Conduct of the Review

a. The peer review team will verify that all data and information required to qualify the technique under the procedures of Appendix H or J are available for the review.

b. An EPRI peer review procedure provides the requirements for peer review. As a minimum, the following items will be reviewed and accepted:

1. Review and acceptance of acquisition data
2. Review and acceptance of metallurgical data (metallurgical results and data, at a minimum, have a transmittal cover sheet identifying the company and the person transmitting the data, and attestation that the work was performed under a 10CFR50, Appendix B quality program)
3. Review and acceptance of essential variables
4. Review and acceptance that all available data were used or considered and that the reason for not using any data considered but not used is documented
5. Review and acceptance that the POD is correct
6. Review and acceptance that the correct standard error of regression value is provided
7. Review and acceptance that the ETSS is correct
8. Review and acceptance that the data listing is correct
9. Review and acceptance of analyzed results
10. Review and acceptance of NDD tube samples
11. Review and acceptance that the technique applicability is correct as stated on the ETSS
12. Review and acceptance that detection requirements per Appendix H or J have been met

c. The team will ensure that sufficient documentation exists for replication of the examination.

d. The team will accept or reject the technique. Techniques that receive a minimum of a two-thirds (66.67%) majority acceptance by the peer review team are considered Appendix H- or J-qualified. Additionally, techniques will be approved by a minimum of five peer team members. In the event that the technique is rejected, the team will prepare written justification for the rejection. Peer team members will have the opportunity to document dissenting opinions.
The technique originator will have an opportunity to provide additional information or withdraw the technique from the qualification process. In the event modifications are made to the submittal, the entire process should be repeated unless the original members are satisfied that the changes made do not affect the acceptable portions of the ETSS.


NDE SYSTEM MEASUREMENT UNCERTAINTIES FOR TUBE INTEGRITY ASSESSMENTS

I.1 Scope

a. This appendix provides guidance for determining NDE system measurement uncertainties for tube integrity assessments.

b. Two NDE system performance measures are needed: (1) probability of detection (POD) and (2) sizing accuracy. Both performance measures are quantified using functional measures over the expected range of the structural variable.

c. NDE technique qualification and documentation, including both destructive and NDE, shall be performed under a 10CFR50, Appendix B Quality Assurance Program. Tubes pulled prior to April 15, 1999, may be excluded from 10CFR50, Appendix B requirements and may be grandfathered.

I.2 Requirements

I.2.1 General

a. NDE system performance indices generated in accordance with this appendix supersede performance indices for techniques qualified in accordance with Appendix H.

b. An examination technique may be qualified in accordance with this appendix in lieu of Appendix H.

c. An examination technique qualified in accordance with this appendix may be used in procedures without regard to the organization that qualified the technique.

I.2.2 Personnel Requirements

a. Data analysts participating in a performance demonstration shall be qualified data analysts (QDAs) in accordance with Appendix G.

b. Data analysts that have participated in a peer review of the technique under evaluation are excluded from participating in the performance demonstration.

I.2.3 Technique Requirements

The requirements of Appendix H Sections H.2, H.3.2, H.3.3, H.4, and H2.2.1.f shall be met.


I.3 Performance Demonstration

I.3.1 Data Set Requirements

Supplement I1 describes performance demonstration data set assembly.

I.3.2 Protocol

Supplement I2 describes performance demonstration protocol.

I.3.3 Process Overview

a. The overall steps to developing POD and NDE sizing uncertainties to support plant integrity assessments include the following:

- Preliminary review of eddy current and destructive examination data to be presented to the peer review team for incorporation into the performance demonstration analyses.
- Preparation of eddy current analysis procedures to be used in the performance demonstration analyses.
- Peer review of the eddy current and destructive examination data and the eddy current analysis procedures to approve the databases and analysis procedures for the performance demonstration analyses.
- Formal testing of eddy current data analysts for detection and sizing using peer review approved data sets and analysis procedures.
- Processing of the eddy current analysis results to obtain probability of detection (POD) distributions and NDE sizing uncertainties, if applicable.
- ETSS preparation providing the eddy current analysis procedure together with the POD distributions and eddy current sizing uncertainties, if applicable, resulting from the performance testing of data analysts.
- Publication of the ETSS(s) and associated POD and NDE uncertainty distributions resulting from the performance testing and processing of the test results.

b. Peer reviews are required and shall be conducted in accordance with Supplement I4.

I.3.4 Acceptance Criteria

Qualification is based on generating NDE system performance data in accordance with Supplement I3. The detection acceptance criteria of Section H2.3.1 are not applicable.


I.3.5 Determining Noise-Dependent NDE System Performance

a. Although the POD(S) model in Supplement I3, Equation I3-1, uses the structural variable S as the sole explanatory or independent variable, the model is, in general, noise-dependent. Two methods have been developed for providing a noise-dependent structural POD. The EPRI Tools for Integrity Assessment Project Technical Report, 1014567, describes a method for adjusting an empirically derived POD and requires that noise associated with the ETSS data set be measured. The EPRI User Manual Crystal Ball Monte Carlo Simulator, 1012988, describes a predictive method and does not require ETSS noise measurement. Both methods require plant noise to be measured.

b. Applications of POD noise adjustment methods, as described in the EPRI Tools for Integrity Assessment Project Technical Report, 1014567, require performance demonstration test data generated in accordance with Supplement I2. For predictive model-assisted POD approaches, performance demonstration test data are not required.

I.4 Record of Qualification

Technique qualification documentation includes identification of acquisition techniques, analysis methods, essential variables, data sets and performance indices.


Supplement I1 ETSS Data Set Requirements

I1.1 Data Acceptability Guidelines

Data acceptability guidelines are addressed in the EPRI Tools for Integrity Assessment Project Technical Report, 1014567, which provides guidance on using pulled tube and laboratory data to construct ETSS data sets, destructive examination information requirements, combining data sets, data exclusion, and data sufficiency requirements.

I1.2 Data Set Requirements

a. The detection data set shall be selected in accordance with Table I-1 for applicable degradation mechanism/extraneous test variable combinations.

b. The sizing data set shall be selected according to the requirements of Table I-2 for applicable damage mechanism/extraneous test variable combinations. The column titled Supporting Tube Repair in Tables I-1 and I-2 is applicable when the sizing correlation is to be used to size flaws left in service per the licensed repair limits (e.g., 40% depth). Requirements for licensed Alternate Repair Criteria take precedence over the requirements of this supplement.

c. It is recommended that the data set used to establish sizing uncertainty be uniformly distributed between the minimum and maximum values of the structural variable S of interest. The maximum value should span the range typically applied in operational assessments (e.g., 100% for maximum depth, average depths in the range of structural limit applications).

d. Figure I-1 illustrates the minimum correlation coefficient for a given number of data points to achieve a 95% level of confidence. For the example shown in the figure, the correlation coefficient must be greater than 0.36 for a sizing data set of 30 points. If the correlation coefficient is less than this, technical justification shall be provided to use the data.

e. The number of analysis teams for detection and sizing shall be selected in accordance with Table I-1 and Table I-2, respectively. Analysis team participants for detection and sizing are described in Supplement I2.

f. Data analyst false call rate is measured using a sample of NDD grading units (based either on metallography or expert opinion). The outcomes of these trials are then used to provide an upper-bound estimate of the population false call rate. The population false call rate is estimated using a one-sided upper-bound confidence limit for a binomial distribution. Figure I-2 illustrates a 90% one-sided confidence limit.
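The zero-miss case plotted in Figure I-2 follows directly from the one-sided binomial bound: if all n NDD grading units are reported NDD, the upper confidence bound on the false call rate q satisfies (1 - q)^n <= 1 - confidence. A minimal sketch (the function name is an assumption for illustration):

```python
from math import ceil, log

def min_ndd_grading_units(fcr_upper_bound, confidence=0.90):
    """Minimum number of NDD grading units, assuming zero false calls are
    observed, such that the one-sided binomial upper confidence bound on
    the population false call rate does not exceed fcr_upper_bound.

    Solves (1 - q)^n <= 1 - confidence for n at q = fcr_upper_bound.
    """
    return ceil(log(1.0 - confidence) / log(1.0 - fcr_upper_bound))
```

For example, bounding the population false call rate at 10% with 90% confidence requires 22 NDD grading units with no false calls; tightening the bound to 5% raises the requirement to 45.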


Table I-1
Data Sufficiency Requirements for POD

                            Minimum                   Recommended
Number of Grading Units     20                        40
Detection Range             Undetected - Detected     Undetected - Above Largest of Applicable Structural Variable
Number of Analysis Teams    10% of Grading Units      15% of Grading Units

Table I-2
Data Sufficiency Requirements for Sizing

Maximum Depth Range, %TW    Supporting Tube Repair    Supporting Tube Integrity Assessments Only
0%-35%                      10                        7
36%-65%                     10                        6
66%-100%                    10                        7
Total                       30                        20
Number of Analysis Teams    10                        5


Figure I-1 Correlation Coefficient, r, at 95% Confidence Level for a Positive Correlation Note: Applies to linear first-order models only.


Figure I-2 Minimum Number of Required NDD Grading Units as a Function of Population False Call Rate (Plot assumes zero misses, i.e., all grading units are reported as NDD)


Supplement I2 Performance Demonstration

I2.1 General Requirements

a. The detection and sizing grading units shall be traceable to metallographic ground truth.

b. Prior to initiating a performance demonstration, written data analysis guidelines shall be developed. As a minimum, the analysis guidelines include information on calibration, analysis, and reporting.

c. Training shall include pre-analyzed (practice) eddy current data and analysis guidelines using a self-study protocol.

I2.2 Probability of Detection

a. All analysis shall be performed blind (without access to ground truth).

b. Each detection team shall consist of four individuals: two (primary and secondary) production analysts and two (primary and secondary) resolution analysts.

c. Production analysts shall analyze all data independently, with no communications between primary and secondary production or resolution analysts and no team-to-team communications.

d. Communication between resolution analysts from the same team is permitted; however, team-to-team communication is not permitted.

e. Primary and secondary results are reviewed using a compare process in which discrepancies are identified. These discrepancies are resolved by primary and secondary resolution analysts, resulting in a team final report. If agreement cannot be reached, the most conservative result will prevail.

f. Analyst team assignments for detection testing may differ from sizing team assignments.

I2.3 Sizing

a. It is recommended that two analysts participate in the sizing activity. One analyst would be designated as the sizing analyst and the other as an independent technical reviewer.

b. Sizing analysts will be provided the data point location for the degradation to be sized.

c. The independent technical review includes a review of the analysis setup, maximum depth, maximum voltage, length, and orientation, when provided.

d. The sizing final report reflects agreement between the two analysts. If agreement cannot be reached, the most conservative result will prevail.


NDE System Measurement Uncertainties for Tube Integrity Assessments

Supplement I3 NDE System Performance Measures for Tube Integrity Assessments

I3.1 NDE System Performance Measures

Two NDE system performance measures are defined: (1) probability of detection (POD) and (2) sizing. The POD metric, as defined in this section, provides an estimate of undetected tube degradation left in service after steam generator examination and is used as an input to deterministic or probabilistic tube bundle integrity models to establish tube bundle burst probabilities and leak rates. In a similar manner, the sizing metric, as defined in this section, provides an estimate of the state of degraded tubes for plants with repair-on-sizing acceptance criteria and is used as an input to deterministic or probabilistic tube bundle integrity models for quantitative tube bundle burst and leakage assessments.

I3.1.1 Probability of Detection

a. The recommended performance measure or model for POD is a non-linear log-logistic function defined as

POD(S) = 1 / (1 + e^(a0 + a1 log S))    Equation I-1

where S is the structural parameter, and a0 and a1 are model parameters to be determined. In general, these parameters are degradation mechanism and examination technique specific. This function is referred to as a structural POD. Alternate POD functional distributions may be applied when documented to show adequate conservatism for the tube integrity analysis model being applied for condition monitoring and operational assessments.

b. System POD(S) is determined using resolution data analyst binary hit-miss data which have been regressed using a Generalized Linear Model (GLM) log-logistic link function.

c. The uncertainties in the POD(S) parameters are described by a covariance matrix. For the case of a two-parameter log-logistic model, with intercept a0 and slope a1, the confidence bounds for the predicted probabilities are found using the following expression, where p is the probability associated with the one-sided confidence level desired, a0 and a1 are the intercept and slope values, Xi is the value of log S at which the bound is evaluated, and the Vij are the covariance matrix values:

a0 + a1 Xi + Zp σi, where σi = [V11 + (2 V12 + V22 Xi) Xi]^(1/2)    Equation I-2

d. Figure I-3 illustrates an example POD(S) calculation along with its measurement uncertainties as determined using GLM methodology.

e. NDE system POD(S) performance can be determined using multiple analysis team resolution results. Accordingly, multiple team performance shall be described using a GLM weighted average as shown in Figure I-4. Although Figure I-4 shows team fractional POD results, the GLM weighted average method uses hit/miss data weighted by the number of data points.
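Equations I-1 and I-2 can be evaluated directly once the fitted intercept, slope, and covariance matrix are available from the GLM regression. A minimal sketch in Python (function names and all numeric values are illustrative, not from any qualified ETSS; the log transform of S follows from Equation I-1):

```python
import math

def pod(s, a0, a1):
    """Log-logistic POD of Equation I-1: POD(S) = 1 / (1 + exp(a0 + a1*log(S)))."""
    return 1.0 / (1.0 + math.exp(a0 + a1 * math.log(s)))

def pod_bound(s, a0, a1, cov, z_p):
    """One-sided confidence bound per Equation I-2.

    cov is the 2x2 covariance matrix [[V11, V12], [V12, V22]] of (a0, a1);
    z_p is the standard normal deviate for the desired one-sided level.
    With the sign convention of Equation I-1, shifting the linear predictor
    upward by z_p*sigma lowers the POD, giving the conservative bound.
    """
    x = math.log(s)
    sigma = math.sqrt(cov[0][0] + (2.0 * cov[0][1] + cov[1][1] * x) * x)
    eta = a0 + a1 * x + z_p * sigma  # shifted linear predictor
    return 1.0 / (1.0 + math.exp(eta))

# Illustrative parameter values only: with a negative slope a1, a deeper
# flaw (larger S) yields a higher POD; the z_p = 1.645 bound is lower.
print(pod(5.0, 2.0, -3.0))
print(pod_bound(5.0, 2.0, -3.0, [[0.25, -0.05], [-0.05, 0.04]], 1.645))
```

The covariance terms inside the square root are simply the variance of the linear predictor a0 + a1·Xi expanded term by term, so the bound collapses to the point estimate when the covariance matrix is zero.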


Figure I-3 Calculation of POD and Measurement Uncertainties Using GLM Excel Spreadsheet



Figure I-4 Using GLM Weighted Average to Describe Multiple Analysis Team POD Results

I3.1.2 Sizing

a. The recommended performance measure or model for sizing is a linear function defined as

S = a + b NDEmeasured    Equation I-3

where S is the true value of the structural variable used for integrity assessment, a and b are model parameters to be determined, and NDEmeasured is the value predicted by NDE. As with detection, the parameters a and b are degradation mechanism and examination technique specific.

b. The standard error of regression, or standard deviation, provides a measure of the random error or sizing uncertainty, while the regression equation provides an estimate of the systematic error or bias.

c. NDE system sizing error is determined by regressing actual values of the structural variable against multiple data analyst sizing team measured values, as shown in Figure I-5, which shows results for ten teams. The plot in the figure includes values for the regression equation, Sy|x (the standard error of regression or standard deviation), the number of data points, and the correlation coefficient.

d. NDE system sizing error shall be determined via performance demonstration in accordance with Supplement I2.
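The linear sizing model of Equation I-3 and its two error measures (bias from the regression equation, random error from the standard error of regression) follow from ordinary least squares. A minimal sketch with synthetic data (function names and values are illustrative):

```python
import math

def fit_sizing(nde, true_s):
    """Least-squares fit of Equation I-3, S = a + b * NDEmeasured.

    Returns (a, b, s_yx), where s_yx is the standard error of regression,
    sqrt(SSE / (n - 2)), used as the random-error (sizing uncertainty) measure.
    """
    n = len(nde)
    mx = sum(nde) / n
    my = sum(true_s) / n
    sxx = sum((x - mx) ** 2 for x in nde)
    sxy = sum((x - mx) * (y - my) for x, y in zip(nde, true_s))
    b = sxy / sxx                       # slope
    a = my - b * mx                     # intercept
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(nde, true_s))
    s_yx = math.sqrt(sse / (n - 2))     # standard error of regression
    return a, b, s_yx

# Synthetic example: exactly linear data recovers the line with ~zero scatter.
a, b, s_yx = fit_sizing([10.0, 20.0, 40.0, 60.0, 80.0],
                        [16.0, 27.0, 49.0, 71.0, 93.0])
print(a, b, s_yx)
```

In practice the regressed pairs would be the destructively measured structural values against each sizing team's NDE measurements, as in Figure I-5.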


Figure I-5 NDE System Sizing Error for Ten Analysis Teams



Supplement I4 Protocol for NDE System Peer Review

I4.1 Scope

This supplement establishes the method and protocol for conducting industry peer reviews of Appendix I techniques. The peer review process is an independent review performed by materials and/or tube integrity (MTI) engineers and QDAs. The intent of the peer review is to ensure that the tube degradation for an ETSS dataset properly represents that found in the field, and to provide oversight and verification of the qualification process. The peer review process is also an independent review of eddy current data acquisition and analysis techniques, and the destructive examination results to be applied for the performance demonstration.

I4.2 Administrative Responsibilities

A documented request from a utility or vendor for conducting a peer review will be directed to the chairperson of the NDE IRG of the EPRI SGMP or the Manager of Steam Generator-NDE at the EPRI NDE Center.

I4.3 Technique Originator Responsibilities

The originator/sponsor of the technique is responsible for supplying sufficient information to the peer review team to permit adequate review and evaluation of the technique. The originator is also responsible for providing data of sufficient quantity and quality meeting the requirements of Appendix I as applicable. The peer review group will be the judge of the data quality for review.

I4.4 Curator Responsibilities

The ETSS curator is responsible for incorporating the performance demonstration results into the ETSS program. This responsibility includes maintaining documentation of peer reviews and distributing the approved ETSS to the industry.

I4.5 Peer Review Procedure

I4.5.1 Team Structure

a. A peer review team will have at least three MTI engineers and at least three QDAs. It is recommended that at least one of the MTI engineers have statistical analysis experience. It is recommended that the peer review team include both utility and vendor participants.

b. The technique qualification originator (who need not be a QDA) will be invited to participate and answer questions from the peer review teams.


c. The ETSS Curator, sponsoring utility, and/or originator will not be considered part of the peer review team.

d. For each peer review member, their certification and/or experience record will be current and on file at the EPRI NDE Center.

I4.5.2 Conduct of the Review

a. The initial peer review is conducted prior to the actual performance demonstration.

b. An EPRI peer review procedure provides the requirements for peer review.

c. As a minimum, the following items are to be addressed by the MTI team:

1. Review and acceptance of metallurgical data (metallurgical results and data, at a minimum, have a transmittal cover sheet identifying the company and the person transmitting the data, and attestation that the work was performed under a 10CFR50, Appendix B quality program).

2. Review and acceptance that all available data were used or considered and that data considered but not used are documented as to the reason for their non-use. Germane examples known to present a challenge to the performance of the proposed technique are to be included if possible.

3. Review and acceptance that the number of samples comprising the qualification is sufficient to satisfy the minimum statistical rigor of Appendix I for detection and/or sizing.

4. Review and acceptance that the data grouping is representative of in-service field results. This grouping may be based upon structural parameters, amplitude response, physical location, noise values, or any combination thereof.

5. The acceptance of the data sets for use in the performance demonstration shall be based on a majority vote of the MTI peer review team.

d. As a minimum, the following items are to be addressed by the QDA team:

1. Review and acceptance of acquisition data:

o Verify that the pulled tube data have a summary or are identified in the tube header as to the unit, row, and column.

o The NDE data are verified by summary and/or data header to have been collected during the outage in which the tube removal was performed.
o All laboratory data are documented and collected in a fashion consistent with the collection and documentation of field data.

o Calibration standard drawings or sketches of as-built conditions are available for both lab and field collected data.

2. Review and acceptance of essential variables.

3. Review and acceptance that the ETSS is correct and sufficient information exists for industry implementation.


4. Review and acceptance that the technique applicability is correct as stated on the ETSS.

5. Review and acceptance that the data listing is correct.

6. The acceptance of the data acquisition and analysis technique will be based on a majority vote of the QDA peer review team.

e. Combined MTI and QDA peer review of data and NDE techniques:

1. A combined meeting of the MTI and QDA peer review groups is required for final approval of the database and NDE techniques for performance demonstration testing.

2. The MTI team presents the results of the review of the ETSS, data sets, and supporting information.

3. The QDA team presents the results of the review of the NDE techniques and supporting information.

4. The MTI and QDA peer review teams jointly review the data and ETSS for acceptance. This review may result in recommendations for changes or acceptance.

5. The acceptance of the data sets and ETSS for use in performance demonstration testing will be based upon a two-thirds (66.67%) majority acceptance by the combined peer review team. Rejection of either a data set or an ETSS will require written justification by the peer review team and suggestions for correction if applicable.

f. The final peer review is conducted after the completion of the actual performance demonstration.

1. Initial peer review team members are not required to also serve as final peer review team members.

2. The final peer review will focus on the validity of the performance demonstration results and the clarity and correctness of the proposed final ETSS.

3. Techniques that receive a minimum of a two-thirds (66.67%) majority acceptance by the combined peer review team are considered Appendix I qualified. In the event that the technique is rejected, the team will prepare written justification for the rejection. Peer team members will have the opportunity to document dissenting opinions.

The technique originator will have an opportunity to provide additional information or withdraw the technique from the qualification process. In the event modifications are made to the submittal, the entire process should be repeated unless the original members are satisfied that the changes made do not affect the acceptable portions of the ETSS.


PERFORMANCE DEMONSTRATION FOR ULTRASONIC EXAMINATION

J.1 Scope

a. This appendix provides performance demonstration requirements for ultrasonic examination techniques and equipment for examination of SG tubing.

b. NDE technique qualification and documentation, including both destructive examination and NDE, shall be performed under a 10CFR50 Appendix B Quality Assurance Program. Tubes pulled prior to April 15, 1999, may be excluded from 10CFR50 Appendix B requirements and may be grandfathered.

c. The performance demonstration requirements specified in this appendix apply to the acquisition process but do not apply to personnel involved in the data acquisition process. It is recommended that data acquisition personnel be trained and qualified by their employer in a manner commensurate with the tasks they perform. It is recommended that the requirements for training and qualification of such personnel be described in the employer's written practice.

d. Techniques that have been qualified in accordance with this appendix may be used in procedures without regard to the organization that qualified the technique.

e. All work associated with the NDE technique qualification, with the exception of the peer review, may be performed by competent technical personnel who are not qualified ultrasonic data analysts.

J.2 Performance Demonstration

J.2.1 General

Successful completion of Supplement J2 constitutes qualification of an acquisition technique and an analysis method.

J.2.2 Essential Variable Ranges

a. Any two procedures with the same essential variables are considered equivalent. Pulsers, search units, and receivers that vary within the tolerances specified in Supplement J1 are considered equivalent. When the pulsers, search units, and receivers vary beyond the tolerances of Supplement J1, or when the examination procedure allows more than one value or range for an essential variable, repeat technique qualification at the minimum and maximum value for each essential variable with all other variables remaining at their nominal values.

b. When the method does not specify a range for essential variables and establishes criteria for selecting values, the criteria should be demonstrated for the essential variable(s).

J.2.3 Requalification

When a change in an acquisition or analysis technique causes an essential variable to exceed a qualified range, re-qualify the acquisition or analysis technique for the revised range.

J.2.4 Disqualification of Techniques

The qualification of techniques is dependent on the quality of the data sets used in the qualification. Therefore, to improve data set credibility, additional pulled tube and laboratory specimen data that become available should be added to the qualification data sets on an annual basis. A technique that initially met the requirements of this appendix may then fail the acceptance criteria after data are appended. For any technique believed not to meet the detection requirements as stated in Supplement J2, immediate notification to the industry will be provided by the SGMP NDE IRG chairperson or equivalent. An industry peer review group will be convened within 100 days to review this assessment. This peer review will be performed consistent with the requirements of Supplement H4. The results of the peer review will be made available to the industry by the SGMP NDE IRG chairperson or equivalent. Any technique determined to be disqualified for not meeting the requirements of this appendix will, if included in Appendix K, also be removed from Appendix K for qualification purposes.

J.3 Essential Variable Tolerances

J.3.1 Instruments and Search Units

The qualified acquisition technique may be modified to substitute or replace instruments or search units without requalification when the ranges of applicable essential variables documented in the ETSS are met, provided that both the original and replacement instruments or search units are characterized utilizing Supplement J1.

J.3.2 Computerized System Algorithms

Algorithms may be altered when the modified algorithms are demonstrated to be equivalent or better than those initially qualified.



J.3.3 Calibration Methods

Alternative calibration methods may be used without requalification if it is demonstrated that the proposed calibration method is equivalent to or more conservative than those described in the qualified acquisition technique or qualified analysis method.

J.4 Record of Qualification

Technique qualification documentation includes identification of acquisition techniques and analysis methods, equipment, and data sets used during qualification or performance demonstration of equivalency, and the performance indices.

J.4.1 Examination Technique Specification Sheet

The ETSS contains a statement of scope that specifically defines the limits of the technique's applicability in the context of damage mechanism/extraneous test variable combinations, material, and geometry. It also defines the following essential variables as a single value, range of values, or formula. It is recommended that other variables not listed but critical to the technique application also be recorded.

J.4.1.1 Equipment Variables

a. Data acquisition instrumentation brand and model for instrumentation used, including pulser/receiver, remote pulser/receiver, amplifier, and digitizer.

b. Search unit manufacturer, brand, and model, and the following nominal values for transducers utilized in the search unit:

1. Center frequency and either bandwidth or waveform duration as defined in Supplement J1
2. Mode of propagation and examination angle(s) as related to the inside surface
3. Size and shape of the active element
4. Number of elements
5. Configuration of mirrors (if used)
6. Documentation that focused transducers are used (if used)

c. Search unit cables (analog signal path), to include:

1. Cable and connector types
2. Maximum length(s)
3. Maximum number of interconnects

d. Software used for acquisition and analysis, including manufacturer's name and revision



J.4.1.2 Technique Variables

a. Applicable ultrasonic instrument settings:

1. Pulse repetition rate
2. Pulser voltage
3. Pulse width and polarity
4. Amplifier mode
5. Filtering
6. Distance amplitude curve
7. Reject used

b. Applicable digitizer settings:

1. Frequency
2. Real time processing (that is, interpolation, averaging, peak extraction, and compression)
3. Data resolution

c. Data acquisition variables:

1. Scan pattern and beam directions
2. Maximum scan speed
3. Maximum circumferential and axial data acquisition increments
4. Extent of scanning and action to be taken for access restrictions
5. Scanning limitations

d. Methods of calibration for detection and sizing (for example, actions required such that the sensitivity and accuracy of the signal amplitude and time outputs of the examination system, whether displayed, recorded, or automatically processed, are repeated from examination to examination)

e. Examination and calibration data to be recorded

f. Recording equipment

g. Method and criteria for the discrimination of indications (for example, geometric versus flaw indications) and for length and depth sizing of flaws

h. Tubing material and wall thickness



Supplement J1 Equipment Characterization

J1.1 Scope


This supplement specifies essential variables associated with ultrasonic data acquisition instrumentation and establishes a protocol for essential variable measurement.

J1.2 Pulsers, Receivers, Search Unit, and Cable


It is recommended that all instruments meet the requirements of the ASME Boiler and Pressure Vessel Code, Section XI. The qualified procedure may be modified to substitute or replace pulsers, receivers, search unit, or cable without re-qualification when the following conditions are met.

J1.2.1 Search Unit

Substituting a search unit with another manufacturer or model requires that the substituted model meet the following criteria as compared to the qualified search unit.

J1.2.1.1 Transducer Wave Propagation Mode

The wave propagation mode is the same.

J1.2.1.2 Transducer Measured Angle

The measured angle is within ±3° of the qualified search unit.



J1.2.1.3 Search Unit Transducer Evaluation

Perform an evaluation comparing the qualified search unit transducer(s) to the replacement transducer(s). Use the complete qualified examination system, set up to the requirements of the applicable ETSS, to measure the center frequency and bandwidth. For angle beam examination, use a signal obtained at 1/2 vee-path on an outside surface connected notch with a minimum depth of 2 wavelengths (as calculated for the material and transducer frequency). For a zero degree examination, use a backwall signal. The requirements of ASTM E 1065, Evaluation of the Characteristics of Ultrasonic Search Units, 1987, may be used as a guide in this process.

a. Center frequency: It is recommended that the center frequency (fc) of the replacement transducer(s) be within ±20% of the qualified transducer. The center frequency is determined in Equation J-1:

fc = (fl + fu) / 2    Equation J-1

Where:
fl is the lower frequency measured at -6 dB below the peak frequency
fu is the upper frequency measured at -6 dB below the peak frequency

b. It is recommended that the upper and lower frequencies be measured at -6 dB below the peak frequency and meet the requirements of Equations J-2 and J-3:

flr ≤ (1.05) (flq)    Equation J-2

fur ≥ (0.95) (fuq)    Equation J-3

Where:
flr is the lower frequency of the replacement
flq is the lower frequency of the qualified
fur is the upper frequency of the replacement
fuq is the upper frequency of the qualified

J1.2.1.4 Focused Search Units

It is recommended that for search units utilizing focused transducers, the focal point, focal length, depth of field, and beam size at the focal point be within ±15% of the qualified transducer.
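The center-frequency and bandwidth comparisons of J1.2.1.3 reduce to simple arithmetic on the -6 dB frequencies. A minimal sketch (function names are illustrative; the inequality directions for Equations J-2 and J-3 assume the replacement bandwidth must at least cover the qualified one):

```python
def center_frequency(f_lower, f_upper):
    """Equation J-1: fc = (fl + fu) / 2, with fl and fu taken at -6 dB."""
    return (f_lower + f_upper) / 2.0

def transducer_equivalent(q_lo, q_hi, r_lo, r_hi):
    """Screen a replacement transducer against the qualified one.

    q_lo/q_hi and r_lo/r_hi are the -6 dB lower/upper frequencies of the
    qualified and replacement transducers (any consistent unit, e.g. MHz).
    """
    fc_q = center_frequency(q_lo, q_hi)
    fc_r = center_frequency(r_lo, r_hi)
    fc_ok = abs(fc_r - fc_q) <= 0.20 * fc_q  # center frequency within +/-20%
    lo_ok = r_lo <= 1.05 * q_lo              # Equation J-2 (assumed direction)
    hi_ok = r_hi >= 0.95 * q_hi              # Equation J-3 (assumed direction)
    return fc_ok and lo_ok and hi_ok

# Example: a qualified 3-7 MHz transducer versus a 3.1-6.9 MHz replacement.
print(transducer_equivalent(3.0, 7.0, 3.1, 6.9))
```

A replacement whose center frequency drifts more than 20%, or whose passband pulls in past the 1.05/0.95 limits, fails the screen and would require requalification.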



J1.2.1.5 S/N Ratio

The S/N ratio of the replacement transducer is measured and compared to the qualified transducer. For angle beam examination, use a signal obtained at 1/2 vee-path on an outside surface connected notch with a minimum depth of 2 wavelengths (as calculated for the material and transducer frequency). For a zero degree examination, use the backwall signal. It is recommended that the S/N ratio meet Equation J-4:

S/Nr ≥ (0.95) (S/Nq)    Equation J-4

Where:
S/Nr is the S/N ratio of the replacement
S/Nq is the S/N ratio of the qualified

J1.2.1.6 Confirmation

It is recommended that a tube specimen containing the damage mechanism specified in the ETSS be used to confirm that the flaw signals have not been adversely affected.

J1.2.1.7 Requirements for Pulse Echo Tip Diffraction Sizing Techniques

It is recommended that the following performance requirements be met for pulse echo tip diffraction depth sizing techniques:

a. Timing accuracy: The sound path delta between the maximum corner trap and tip signals is within ±20% of that calculated for notch depths of 20% TW, 40% TW, 60% TW, and 80% TW in a calibration tube of the same nominal diameter and thickness as being examined. See Equation J-5.

SPD = 2D / (cos A × V)    Equation J-5

Where:
SPD is the sound path delta (microseconds)
D is the notch depth
A is the examination angle
V is the tube material velocity



b. Length of echodynamic: The corner echodynamic does not interfere with the -6 dB drop of the tip echodynamic of an outside surface connected notch with a depth of 2.5 wavelengths ±0.5 wavelength as calculated for the material and transducer frequency used.

c. Time resolution: The maximum tip and resultant corner signals do not interfere on an outside surface connected notch with a depth of 1.5 wavelengths ±0.5 wavelength as calculated for the material and transducer frequency used.

J1.2.2 Substituting a Pulser/Receiver or Ultrasonic Instrument

Substituting a qualified pulser/receiver or the ultrasonic instrument with another manufacturer or model requires an evaluation between the complete qualified system and the complete system containing the replacement components. The complete qualified examination system is set up to the requirements of the applicable ETSS. The complete system containing the replacement components is set up to the configuration being evaluated. It is recommended that the following criteria be met:

a. A frequency and bandwidth evaluation is performed. For angle beam examination, use a signal obtained at 1/2 vee-path on an outside surface connected notch with a minimum depth of 2 wavelengths (as calculated for the material and transducer frequency); for a zero degree examination, use a backwall signal. The center frequency and bandwidth of the replacement system are acceptable if the replacement system measurements are within the tolerances of the qualified system as calculated and specified in J1.2.1.3.

b. The S/N ratio of the replacement system is measured and compared to the qualified system. For angle beam examination, use a signal obtained at 1/2 vee-path on an outside surface connected notch with a minimum depth of 2 wavelengths (as calculated for the material and transducer frequency). For zero degree examination, use the backwall signal. The S/N ratio meets the requirements of Equation J-6:

S/Nr ≥ (0.95) (S/Nq)    Equation J-6

Where:
S/Nr is the S/N ratio of the replacement system
S/Nq is the S/N ratio of the qualified system

c. The requirements specified in J1.2.1.6 and J1.2.1.7 are met for the replacement system.

J1.2.3 Cable

Modifying cable properties such as changing cable type, adding cable length, or adding interconnects requires an evaluation between the complete qualified system and the same system with the modifications installed. It is recommended that the requirements of J1.2.2 be met.
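The S/N acceptance test of Equations J-4/J-6 and the timing-accuracy check built on Equation J-5 are likewise direct computations. A minimal sketch (function names and unit choices are illustrative: depth in inches, velocity in inches per microsecond, angle in degrees):

```python
import math

def sn_acceptable(sn_replacement, sn_qualified):
    """Equations J-4 / J-6: replacement S/N must be at least 95% of qualified."""
    return sn_replacement >= 0.95 * sn_qualified

def sound_path_delta(depth, angle_deg, velocity):
    """Equation J-5: SPD = 2D / (cos A x V). With depth in inches and
    velocity in inches/microsecond, SPD is in microseconds."""
    return 2.0 * depth / (math.cos(math.radians(angle_deg)) * velocity)

def timing_acceptable(measured_spd, depth, angle_deg, velocity, tol=0.20):
    """J1.2.1.7(a): measured corner-to-tip delta within +/-20% of calculated."""
    calc = sound_path_delta(depth, angle_deg, velocity)
    return abs(measured_spd - calc) <= tol * calc
```

In practice the check would be repeated at the 20%, 40%, 60%, and 80% TW notch depths called out in J1.2.1.7(a), with depth and velocity taken from the calibration tube.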



J1.3 Documentation and Review of Changes


When changes or modifications are made as specified in this supplement, an equivalency document shall be prepared. Documentation should include plots or pictures of the ultrasonic signals and frequency spectra used, measurement and calculation results, and equipment used.



Supplement J2 Qualification Requirements for Examination of SG Tubing

J2.1 General


SG tubing examination techniques and equipment used to detect and size flaws are qualified by performance demonstration. New industry qualified techniques shall meet the requirements of Supplement H4 for conduct of the peer review process. It is recommended that new site specific qualified techniques meet the intent of Supplement H4 for conduct of the peer review process by the developing organization.

J2.2 Qualification Data Set Requirements


J2.2.1 General

a. It is recommended that test sample damage mechanism morphology and dimensions used to construct qualification data set grading units be based on SG operating experience as inferred from tube pulls where practical.

b. Where applicable, it is recommended that the influence of extraneous test variables associated with each of the damage mechanisms (e.g., denting, deposits, tube geometry changes) be assessed.

c. Test samples fabricated using mechanical or chemical methods may be used. However, the flaws should produce signals similar to those being observed in the field in terms of signal characteristics, signal amplitude, and S/N ratio.

d. Flaw dimensions for samples used to construct grading units included in the qualification data set shall be verified by physical measurement. Evidence of leakage may be used to verify 100% TW depth. Alternatively, flaw dimensions may be established as described in item e.

e. Supplemental NDE techniques may be used to determine the flaw dimensions for samples provided that the following steps are completed:

1. The supplemental NDE technique should be an alternative method to the technique being qualified; for example, computed tomography being used to establish flaw dimensions for an ultrasonic technique.

2. Establish the uncertainties of the supplemental NDE technique by comparison to destructive analysis of flaw samples.

3. This comparison should include samples from the same type of damage mechanism, utilizing a minimum of sixteen samples.



4. Establish the measurement error of the supplemental NDE technique in accordance with J2.3.2 for each structural parameter to be used in the technique qualification.

5. It is recommended that the correlation coefficient, as established in J2.3.2, meet the minimum requirements of Table J-1 for the supplemental NDE technique to be considered eligible for use:
Table J-1
Sample Size Correlation Coefficient

Sample Size    Correlation Coefficient (minimum)
30             0.36
25             0.40
20             0.44
16             0.50

6. Future samples of the same type (damage mechanism/location) shall have a 20% sample of the flaws destructively analyzed to validate that the sample set is within the parameters of the original set from which the measurement error was determined. The samples being used for validation should represent the range that is being qualified. If the subsequent sample set destructive analysis results are not within the bounds of the original sample set, then the alternative flaw dimension profiles cannot be used and the Chairman of the NDE IRG will be notified for corrective action.

7. Use the standard error of regression at a 95/50 confidence interval as the measurement error applied to flaw dimensions for samples used in technique qualifications utilizing supplemental NDE techniques for establishment of the flaw dimensions in lieu of physical measurements. The measurement error should be applied to the measurement in the most conservative manner and may be applied for specific variable ranges.
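The Table J-1 eligibility screen amounts to a threshold lookup on sample size. A minimal sketch (function names are illustrative; the table's four rows are reproduced verbatim, and treating intermediate sizes by the next-lower listed threshold and sizes under 16 as ineligible are assumptions):

```python
# (sample-size threshold, minimum correlation coefficient) pairs from
# Table J-1, ordered from largest sample size to smallest.
_TABLE_J1 = [(30, 0.36), (25, 0.40), (20, 0.44), (16, 0.50)]

def min_correlation(sample_size):
    """Minimum correlation coefficient required for a supplemental NDE
    technique to be eligible (J2.2.1, Table J-1). Returns None when the
    sample is too small to qualify."""
    for threshold, r_min in _TABLE_J1:
        if sample_size >= threshold:
            return r_min
    return None

def supplemental_technique_eligible(sample_size, correlation):
    r_min = min_correlation(sample_size)
    return r_min is not None and correlation >= r_min
```

Note the trade-off the table encodes: smaller comparison sets must demonstrate a stronger correlation before the supplemental technique may substitute for physical measurement.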



J2.2.2 Detection Data Set

a. Select the data set according to the requirements of Table J-2 for each damage mechanism/extraneous test variable combination. The minimum detection data set is 11 flawed grading units.
Table J-2
Minimum Number of Flawed Grading Units (≥60% TW) That Must Be Detected to Meet the Technique Acceptance Criterion of POD ≥ 0.80 at a 90% Confidence Level

Total Number of Flawed Grading Units     Minimum Number of Flawed Grading Units
in the Data Set for Technique            that Must Be Detected for Technique
Acceptance Testing                       Acceptance
11                                       11
17                                       17
18                                       17
24                                       23
25                                       23
31                                       29
32                                       29

b. It is recommended that the data set be uniformly distributed over the depth range of 60% to 100% TW. It is acceptable to use a lower percent TW criterion for detection, rather than the 60% value.

c. POD calculations include the use of flawed locations reported as NDD.

J2.2.3 Sizing Data Set

a. Select the sizing data set for each damage mechanism/extraneous test variable combination.

b. The minimum sizing data set is 16 flawed grading units.

c. It is recommended that the sizing data set include the detection data set.

d. It is recommended that the remainder of the sizing data set have maximum measured depths less than 60% of the nominal tube wall thickness. It is recommended that the data set be uniformly distributed over the depth range of 20% TW to 59% TW, as is reasonably achievable. Specialized data sets are permissible for a targeted sizing range.

e. Sizing performance indices do not include NDD locations; however, locations that provide a 0% TW measurement are reported as 1% TW and are included in the performance indices.



The sizing data set is used to demonstrate depth sizing. Sizing performance may be supplemented by additional data sets as required to meet higher CLs as dictated by the needs of condition monitoring and operational assessments.

J2.3 Acceptance Criteria


Qualification is exclusively based on detection.

J2.3.1 Detection Acceptance Criteria

Techniques shall be considered qualified for detection if they demonstrate a POD of 0.80 or greater at a 90% lower-bound CL, using a data set of 11 or more flawed grading units. A flawed grading unit is defined, for this qualification, as a flaw that is either equal to or greater than 60% TW. It is acceptable to use a lower percent TW criteria for detection rather than the 60% value. Table J-2 provides the minimum number of detections required to meet this acceptance criterion for data sets consisting of 11 to 32 flawed grading units. To meet the criterion of a POD of 0.80 or greater at a 90% CL, for example, in data sets numbering from 11 to 17 flawed grading units, the technique must detect every flawed grading unit. In data sets numbering from 18 to 24, the technique must detect all but one, and from 25 to 31, all but two. In constructing Table J-2, calculations from the binomial distribution determined the minimum number of successes required in examining a flaw data set to ensure, at a CL of 90%, that the actual POD is 0.80 or greater. The binomial distribution provides the probability of each possible outcome over a specified number of trials when only two outcomes are possible on each trial, success or failure, and the likelihood of a success or failure is known or assumed. The probability of exactly x successes in n trials, when the probability of success on each trial is p, is calculated in Equation J-7:

P(x | n, p) = [n! / (x!(n - x)!)] p^x (1 - p)^(n - x)

Equation J-7

This expression can then be used to calculate the minimum number of detections required from a sample set of a given size to ensure, at a confidence level of 90%, that the actual POD is equal to or greater than 0.80. Starting with the largest possible number of detections (n), the probabilities of n, n-1, n-2, etc. detections are calculated and summed until adding the next term would push the combined probability above 0.10. The outcomes retained in this sum together account for no more than 10% of all possible outcomes; equivalently, each of them, when observed, exceeds at least 90% of all possible outcomes. When observed, they therefore provide assurance at a 90% CL that the actual POD is equal to or greater than 0.80.


In the example below, the process described above is carried out to determine the minimum number of detections required in a sample set of 18 flawed grading units to have 90% confidence that the actual POD is equal to or greater than 0.80. Using the equation above, the probability of detecting exactly 18 flaws out of the 18 in the sample set is:

P(18 | 18, 0.80) = [18! / (18! 0!)] (0.80)^18 (0.20)^0 = 0.0180

This means that the probability of detecting 18 out of 18 is 1.8%, which also means that if 18 flaws out of 18 in the sample set have been detected, then this outcome has exceeded 98.2% of all possible outcomes. Therefore, detection of 18 out of 18 represents a POD of 80% at a 98.2% CL. The probability of detecting exactly 17 flaws out of the 18 in the sample set is:

P(17 | 18, 0.80) = [18! / (17! 1!)] (0.80)^17 (0.20)^1 = 0.0811

The probability of detecting exactly 16 flaws out of the 18 in the sample set is:

P(16 | 18, 0.80) = [18! / (16! 2!)] (0.80)^16 (0.20)^2 = 0.1723

The sum of the probabilities for 17 and 18 detections (the probability of 17 or more detections) is 0.0180 + 0.0811 = 0.0991, or 9.91%. This means that the probability of detecting 17 or more is 9.91%; if 17 or more have been detected from a sample set of 18, this outcome has exceeded 90.09% of all possible outcomes. So detection of 17 or more out of a sample set of 18 represents a POD of 80% at a 90.09% CL. Since this combined probability is less than 0.1000, it is at or beyond the 90% CL. Adding the probability of 16 detections, however, results in a combined probability (the probability of 16 or more detections) of 0.0180 + 0.0811 + 0.1723 = 0.2714, or 27.1%. This means that the probability of detecting 16 or more out of 18 is 27.1%, which also means that if 16 or more out of 18 have been detected, this outcome exceeds only 72.9% of all possible outcomes (80% POD at a 72.9% CL). This leads to the conclusion that detection of 17 or more out of 18 represents a POD of 80% at a CL of 90% or greater, while detection of 16 or more out of 18 gives a POD of 80% only at a CL of 72.9% or greater, which is not acceptable per the established criteria. The binomial distribution produces discrete rather than continuous values; as a consequence, the actual CLs vary among the minimum numbers of detections presented in Table J-2. However, the CLs associated with the acceptance criteria in the table are all equal to or greater than 90%.
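The table construction described above can be reproduced in a few lines of code. The following Python sketch is illustrative (the function name is an assumption, not part of the guideline); it finds the smallest number of detections whose upper-tail binomial probability does not exceed 0.10:

```python
from math import comb

def min_detections(n, pod=0.80, cl=0.90):
    """Smallest number of detections k out of n flawed grading units such
    that P(k or more detections | true POD = pod) <= 1 - cl, i.e. the
    observed outcome exceeds at least cl of all possible outcomes."""
    tail = 0.0
    for k in range(n, -1, -1):          # start at n detections and work down
        tail += comb(n, k) * pod**k * (1 - pod)**(n - k)
        if tail > 1 - cl:               # including k would exceed the 10% tail
            return k + 1
    return 0

# For 18 flawed grading units, 17 detections are required, as derived above.
print(min_detections(18))  # -> 17
```

Running the function over sample sizes 11 through 32 reproduces the stepwise acceptance criteria of Table J-2 (detect all for 11-17, all but one for 18-24, and so on).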


J2.3.2 Technique Sizing Performance

Technique performance for sizing is based on the standard error of regression and the correlation coefficient (r) of the ultrasonically measured parameters. The standard error of regression is defined in Equation J-8:
Syx = sqrt{ [1 / (n(n - 2))] [ nΣy^2 - (Σy)^2 - (nΣxy - (Σx)(Σy))^2 / (nΣx^2 - (Σx)^2) ] }

Equation J-8

Correlation coefficient is defined in Equation J-9:


r = [nΣxy - (Σx)(Σy)] / sqrt{ [nΣx^2 - (Σx)^2][nΣy^2 - (Σy)^2] }

Equation J-9

Where:
x is the NDE estimate
y is the structural variable (metallurgical value)
n is the number of values in the data set.
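Both statistics can be computed directly from the raw sums. The following Python sketch is illustrative (the function name and interface are assumptions, not from the guideline):

```python
from math import sqrt

def regression_stats(x, y):
    """Standard error of regression (Equation J-8) and correlation
    coefficient (Equation J-9), with x the NDE estimates and y the
    structural (metallurgical) values."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = n * sum(v * v for v in x) - sx ** 2          # n*Sum(x^2) - (Sum x)^2
    syy = n * sum(v * v for v in y) - sy ** 2          # n*Sum(y^2) - (Sum y)^2
    sxy = n * sum(a * b for a, b in zip(x, y)) - sx * sy
    syx = sqrt((syy - sxy ** 2 / sxx) / (n * (n - 2)))
    r = sxy / sqrt(sxx * syy)
    return syx, r
```

For perfectly correlated data the standard error is zero and r is 1; for imperfect data, Syx computed this way agrees with the residual-based standard error of estimate, sqrt(SSR / (n - 2)), for the least-squares line of y on x.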


QUALIFICATION OF PERSONNEL FOR ANALYSIS OF ULTRASONIC EXAMINATION DATA

K.1 Scope

This appendix specifies training and qualification requirements for personnel who analyze SG tubing ultrasonic examination data. Its purpose is to ensure a uniform knowledge base and skill level for ultrasonic data analysts. Analysts are qualified by successful completion of written and practical examinations as defined in this appendix. An individual who successfully completes the requirements described herein is recognized as an Ultrasonic Qualified Data Analyst (UQDA). Note: The UQDA qualification can consist of detection only or detection and sizing in one or a combination of damage mechanism categories.

K.2 Responsibilities and Records

a. The licensee is responsible for SSPD training and testing in accordance with the requirements of this document.
b. Formal certification of personnel and compliance with the requirements of this document is the exclusive responsibility of the employer.
c. The individual's certification record, as compiled and maintained by the employer, should clearly indicate compliance with this document.
d. The documentation identified herein should supplement, not replace, documentation requirements identified by the employer's written practice. The employer should maintain examinations with grades assigned and records of material covered. Level II or III certification records can be maintained separately. A written testimony of UQDA qualification should include:
   1. Name of individual
   2. Statement indicating record of completion of training, including training hours, dates attended, and training institution
   3. Dates and pass/fail results assigned to the written examination and to each practical examination data set
   4. Damage mechanism(s)/ETSS(s) qualified to and detection and/or sizing qualification status


K.3 Written Practice Requirements

Organizations performing qualification activities shall prepare a written program for their control and administration.

K.3.1 Training and Examination

The written practice should specify classroom and laboratory training requirements, course outline, the number of instruction hours, and examination requirements as specified in this appendix.

K.3.2 Responsibilities

The written practice should specify the responsibilities, duties, and qualifications required for personnel who implement the personnel qualification program, including classroom and laboratory training.

K.3.3 Use of an Outside Agency

If an outside agency is used, the employer remains responsible for the content of the training and qualification materials. An outside agency is an organization or an individual that provides instructor or qualification services and whose qualifications have been accepted by the organization that engages the outside agency. The written practice of the organization that engages the outside agency should specify requirements for ensuring that the outside agency meets the applicable requirements of this appendix.

K.3.4 Transfer Qualifications

The employer's written practice should address accepting qualification documentation that may be transferred from other employers. This could be invoked if a UQDA seeks employment with an employer other than the one that performed the training and qualification.

K.3.5 Confidentiality

Provisions to ensure the confidentiality of the qualification materials (for example, test questions, answer sheets, and test data) shall be included in the written practice. Access to such materials should be limited, and the examination material should be maintained in secure files.

K.3.6 Availability of Training Course Materials

Training course materials should be available for review or audit by user organizations and authorities. Training course materials should not be subject to any confidentiality requirements other than the normally applicable copyright laws.


K.4 Qualification Requirements

a. A UQDA shall be qualified as a Level II or III in ultrasonic examination in accordance with the employer's written practice. The UQDA qualification can consist of detection only or detection and sizing in one or a combination of the damage mechanisms specified in Table K-1. Training and examination requirements shall be successfully completed. Training and testing should include all ETSS(s) that will be used in the field associated with the applicable damage mechanism category.
Table K-1
Damage Mechanism Categories

Thinning and Wear
OD IGA/SCC
PWSCC

b. The training and examination requirements for flaw sizing/characterization (e.g., depth sizing, length sizing, etc.) are only applicable to personnel performing flaw sizing/characterization analysis.

K.4.1 Training

Training should include formal classroom and structured practical laboratory exercises as follows. The course material should be made available to the trainee.

a. Eight hours of overview training in accordance with the Overview section of Attachment K-1.
b. Eight hours of training in each applicable damage mechanism category, including detailed training in each associated ETSS that will be utilized. The training should address the damage category/ETSS-specific training specified in Attachment K-1.

K.4.2 Written Examination

The written examination should:

a. Contain a minimum of ten questions covering material outlined in the Overview section of Attachment K-1.
b. Contain a minimum of ten questions in each damage mechanism category addressing all applicable ETSS(s) and information provided in the Damage Mechanism Category/ETSS Training specified in Attachment K-1.
c. Be conducted in an open book format.
d. Have a passing grade of at least 80%.


K.4.3 Practical Examination

K.4.3.1 General

The practical examination should be specific to the applicable damage mechanism category(s)/ETSS(s) and use randomly selected ultrasonic data acquired with Appendix J qualified techniques. It is acceptable to use ETSS data, including laboratory sample data, provided it is representative of field degradation. The Appendix J requirement for physical measurement of flaw dimensions is not applicable, and expert opinion may be used to establish ultrasonic truth for grading purposes. The candidate should be provided pertinent information typically available in the field when taking the practical.

K.4.3.2 Practical Data Set

a. For a detection data set, a minimum of 11 flawed grading units and a minimum of twice as many unflawed grading units should be used for each applicable damage mechanism category (Table K-1).
b. For a sizing data set, a minimum of 16 flawed grading units should be used for each applicable damage mechanism category (Table K-1).

K.4.3.3 Analysis Qualification Criteria

Each damage mechanism category should be evaluated individually, and each data set should be graded by the following applicable criteria:

a. Detection Criteria

Personnel should be considered qualified for detection of a specific damage mechanism if all of the following requirements are met (an example of acceptance criteria is provided in Table K-2):

1. A POD of at least 80%, at a 90% CL, for flawed grading units ≥40% TW.
2. Detection of at least 80% of the flawed grading units <40% TW.
3. The number of reported false calls is no more than 10% of the total number of unflawed grading units.


Table K-2
Example Performance Demonstration Test Matrices for Flaw Detection

Flaw Detection Criteria for a Given Damage Mechanism Category

Number of Flawed         Minimum Acceptance Criteria        Minimum Number of         Maximum Number
Grading Units ≥40% TW    for Detection at 80% POD, 90% CL   Unflawed Grading Units    of False Calls
11-17                    11-17                              22-34                     2-3
18-24                    17-23                              36-48                     3-4
25-31                    23-29                              50-62                     5-6
32                       29                                 64                        6
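The three detection criteria in item (a) can be checked programmatically using the same binomial tail calculation as Appendix J. The sketch below is illustrative (the function name and interface are assumptions, not from the guideline; note also that the source tables round the 10% false-call limit somewhat differently between Tables K-2 and K-3):

```python
from math import comb

def meets_detection_criteria(hits_ge40, n_ge40, hits_lt40, n_lt40,
                             false_calls, n_unflawed):
    """Check the three detection criteria for one damage mechanism category."""
    # Criterion 1: POD of at least 80% at a 90% CL for >=40% TW grading
    # units, i.e. P(hits_ge40 or more detections | true POD = 0.80) <= 0.10.
    tail = sum(comb(n_ge40, k) * 0.80 ** k * 0.20 ** (n_ge40 - k)
               for k in range(hits_ge40, n_ge40 + 1))
    crit1 = tail <= 0.10
    # Criterion 2: at least 80% of the <40% TW flawed grading units detected.
    crit2 = hits_lt40 >= 0.80 * n_lt40
    # Criterion 3: false calls no more than 10% of unflawed grading units.
    crit3 = false_calls <= 0.10 * n_unflawed
    return crit1 and crit2 and crit3
```

For example, with 18 flawed grading units ≥40% TW, 17 detections satisfy criterion 1 while 16 do not, consistent with the Table J-2 derivation.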

b. Sizing Criteria

Personnel should be considered qualified for performing sizing measurements on a specific damage mechanism if an RMSE of <10% is demonstrated. The sample set RMSE is calculated using Equation K-1:

RMSE = sqrt[ (1/n) Σ(i=1..n) (Mi - Ti)^2 ]

Equation K-1

Where:
Mi is the ultrasonic measured flaw parameter assigned by the individual analyst for the ith indication
Ti is the ultrasonic measured flaw parameter for the ith indication determined by expert opinion
n is the number of measured grading units in the data set.

Only those flaws that are detected are considered in the RMSE calculation (a zero is not assigned as the measured flaw parameter for flaws not detected).
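Equation K-1, with undetected flaws excluded rather than scored as zero, can be sketched as follows (the use of None to mark undetected flaws is an illustrative convention, not from the guideline):

```python
from math import sqrt

def sizing_rmse(measured, truth):
    """RMSE per Equation K-1. `measured` holds the analyst's values, with
    None for flaws that were not detected; `truth` holds the expert-opinion
    values. Undetected flaws are excluded, not scored as zero."""
    pairs = [(m, t) for m, t in zip(measured, truth) if m is not None]
    n = len(pairs)
    return sqrt(sum((m - t) ** 2 for m, t in pairs) / n)

# Example: two detected flaws sized 30% and 40% TW against expert values of
# 32% and 44% TW; a third flaw was not detected and is excluded, giving
# RMSE = sqrt((4 + 16) / 2) = sqrt(10), about 3.16% TW.
```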


An example of acceptance criteria is provided in Table K-3.


Table K-3
Example Performance Demonstration Test Matrices for Flaw Detection and Sizing

Flaw Detection Acceptance Criteria for a Given Damage Mechanism Category

Number of Flawed Grading Units      Minimum Acceptance Criteria for    False Call Acceptance Criteria
                                    Detection at 80% POD, 90% CL
<40% TW   ≥40% TW   Total           <40% TW   ≥40% TW                  Minimum Number of        Maximum Number
                                                                       Unflawed Grading Units   of False Calls
5         11        16              4         11                       32                       3
5         12        17              4         12                       34                       3
6         12        18              5         12                       36                       4
8         17        25              7         17                       50                       5
8         18        26              7         17                       52                       5
12        24        36              10        23                       72                       7
12        25        37              10        23                       74                       7
15        31        46              12        29                       92                       9
15        32        47              12        29                       94                       9

c. If flaw orientation identification is applicable, personnel should identify the correct flaw orientation on at least 80% of the flawed grading units.
d. Upon successful completion of the practical, the analyst should review missed indications and overcalls.

K.5 Re-examination

A candidate failing to pass a portion of the examination (written or practical) has the option of undergoing a re-examination of the failed portion.


K.5.1 Written Examination

The examination should contain the same number of questions required in the original test and be assembled by a random selection process. It may be augmented with questions specific to the areas of deficiency. A second re-examination requires a waiting period of 5 days. Additional training on topics or subjects in which the individual did not demonstrate an acceptable level of understanding is required prior to the re-examination.

K.5.2 Practical Examination

A practical re-examination may be given in the damage mechanism category(s) that were not passed. Missed indications should be reviewed, and no waiting period is required prior to the initial re-test. Prior to a second re-test, the individual should complete additional training as deemed necessary by the proctor. A candidate failing the second re-test should wait 30 days and obtain additional training meeting the requirements of Section K.4.1 prior to retesting.

K.6 Interrupted Service

A UQDA who has not analyzed data within a consecutive 15-month period shall be required to successfully complete the practical examination requirements specified in Section K.4.3.

K.7 SSPD

UQDA personnel should receive training and pass examinations as specified below. The examination and data sets should be specific to the damage mechanism and ETSS(s) that will be used for the field application. These examinations should be administered immediately prior to data analysis.

K.7.1 SSPD Training

The training should be provided in accordance with a site-approved document and include:

a. Responsibilities and planned examination scope.
b. A comprehensive review of the applicable ETSS(s), other applicable analysis guidelines, and any unique examination challenges.
c. Expected flaw responses (graphics and/or raw data with evaluated results), a detailed review of the primary examination results, including the damage mechanisms identified or those that have been identified to be of concern, as well as a detailed description of the expected flaw location and characteristics.
d. Applicable industry operating experience/events.


K.7.2 Written Examinations

The written examination should:

a. Contain a minimum of 15 questions covering the applicable damage mechanisms/ETSS(s) and any unique examination challenges.
b. Be conducted in an open book format.
c. Have a passing grade of at least 80%.

K.7.3 Practical Examination

a. The requirements specified in Section K.4.3.1 are incorporated into this section.
b. A minimum of 5 grading units should be used in the practical examination. The same flawed grading units may be used for detection and sizing. A minimum of twice as many non-flawed grading units should be added for the detection practical.
c. The analyst should be allowed to review missed indications and overcalls on the practical examination.

K.7.3.1 Detection and Orientation

a. A minimum of 80% of the flawed grading units should be detected. False call performance is at the discretion of the licensee.
b. If flaw orientation identification is applicable, personnel should identify the correct flaw orientation in at least 80% of the flawed grading units.

K.7.3.2 Sizing

Where applicable, each analyst performing flaw sizing should demonstrate an RMSE <10% TW (e.g., AVB wear, thinning, etc.). Alternative sizing errors are acceptable provided they are considered in the applicable integrity assessments.

K.7.4 Retesting

Retesting is at the discretion of the licensee.


Attachment K-1 UQDA Training Program


OVERVIEW

INSPECTION REQUIREMENTS

SG DESIGN AND OPERATION / OPERATING EXPERIENCE

INDUSTRY EXPERIENCE AND DAMAGE MECHANISMS
- OD IGA/SCC
- PWSCC
- Thinning/Wear (Wall Loss)
- Loose Parts/Loose Part Damage
- Manufacturing Anomalies
- Denting
- Flaw Orientations

DATA ACQUISITION
- Search Unit and Instrument Principles/Selection
- Software Overview
- Circumferential/Axial/Volumetric Examination Applications
- Calibration Standards
- Calibration

DATA ANALYSIS
- Software Overview
- Flaw-Specific Detection/Analysis
- Flaw-Specific Sizing Principles
- Screen Display Generation and Evaluation
- Data Quality


WARNING: This Document contains information classified under U.S. Export Control regulations as restricted from export outside the United States. You are under an obligation to ensure that you have a legal right to obtain access to this information and to ensure that you obtain an export license prior to any re-export of this information. Special restrictions apply to access by anyone that is not a United States citizen or a permanent United States resident. For further information regarding your obligations, please see the information contained below in the section titled Export Control Restrictions.

Export Control Restrictions

Access to and use of EPRI Intellectual Property is granted with the specific understanding and requirement that responsibility for ensuring full compliance with all applicable U.S. and foreign export laws and regulations is being undertaken by you and your company. This includes an obligation to ensure that any individual receiving access hereunder who is not a U.S. citizen or permanent U.S. resident is permitted access under applicable U.S. and foreign export laws and regulations. In the event you are uncertain whether you or your company may lawfully obtain access to this EPRI Intellectual Property, you acknowledge that it is your obligation to consult with your company's legal counsel to determine whether this access is lawful. Although EPRI may make available on a case-by-case basis an informal assessment of the applicable U.S. export classification for specific EPRI Intellectual Property, you and your company acknowledge that this assessment is solely for informational purposes and not for reliance purposes. You and your company acknowledge that it is still the obligation of you and your company to make your own assessment of the applicable U.S. export classification and ensure compliance accordingly. You and your company understand and acknowledge your obligations to make a prompt report to EPRI and the appropriate authorities regarding any access to or use of EPRI Intellectual Property hereunder that may be in violation of applicable U.S. or foreign export laws or regulations.

The Electric Power Research Institute (EPRI), with major locations in Palo Alto, California; Charlotte, North Carolina; and Knoxville, Tennessee, was established in 1973 as an independent, nonprofit center for public interest energy and environmental research. EPRI brings together members, participants, the Institute's scientists and engineers, and other leading experts to work collaboratively on solutions to the challenges of electric power. These solutions span nearly every area of electricity generation, delivery, and use, including health, safety, and environment. EPRI's members represent over 90% of the electricity generated in the United States. International participation represents nearly 15% of EPRI's total research, development, and demonstration program.

Programs: Nuclear Power, Steam Generator Management Program

Together...Shaping the Future of Electricity

2007 Electric Power Research Institute (EPRI), Inc. All rights reserved. Electric Power Research Institute, EPRI, and TOGETHER...SHAPING THE FUTURE OF ELECTRICITY are registered service marks of the Electric Power Research Institute, Inc. Printed on recycled paper in the United States of America 1013706
