
If the material is relatively thick or the crack is relatively short, the crack base echo and
the crack tip diffraction echo can appear on the scope display simultaneously (as seen
in the figure). This is due to the divergence of the sound beam, which becomes wide
enough to cover the entire crack length. In such a case, although the angle of the beam
striking the base of the crack is slightly different from the angle of the beam striking
the tip of the crack, the previous equation still holds reasonably accurately and can be
used for estimating the crack length.
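The estimate described above can be sketched numerically. The snippet below is a minimal illustration, assuming the commonly used tip-diffraction relation h = (v·Δt/2)·cos θ, where Δt is the time difference between the base and tip echoes, v is the shear-wave velocity, and θ is the refracted beam angle; the function name and all numeric values are hypothetical and chosen only for demonstration.

```python
import math

def crack_length(delta_t_us, velocity_mm_per_us, refracted_angle_deg):
    """Estimate crack length from the time difference between the
    crack-base echo and the crack-tip diffraction echo (pulse-echo,
    angle beam: the factor 1/2 accounts for the round trip)."""
    theta = math.radians(refracted_angle_deg)
    return 0.5 * delta_t_us * velocity_mm_per_us * math.cos(theta)

# Hypothetical values: shear-wave velocity in steel ~3.24 mm/us,
# 45 degree refracted beam, 2.0 us between the two echoes.
h = crack_length(2.0, 3.24, 45.0)
print(round(h, 2))  # -> 2.29 (mm)
```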

Calibration Methods
Calibration refers to the act of evaluating and adjusting the precision and accuracy of
measurement equipment. In ultrasonic testing, several forms of calibration must be
performed. First, the electronics of the equipment must be calibrated to ensure that
they are performing as designed. This operation is usually carried out by the equipment
manufacturer and will not be discussed further in this material. It is also usually
necessary for the operator to perform a "user calibration" of the equipment. This user
calibration is necessary because most ultrasonic equipment can be reconfigured for
use in a large variety of applications. The user must "calibrate" the system, which
includes the equipment settings, the transducer, and the test setup, to validate that
the desired levels of precision and accuracy are achieved.
In ultrasonic testing, reference standards are used to establish a general level of
consistency in measurements and to help interpret and quantify the information
contained in the received signal. The figure shows some of the most commonly used
reference standards for the calibration of ultrasonic equipment. Reference standards
are used to validate that the equipment and the setup provide similar results from one
day to the next and that similar results are produced by different systems. Reference
standards also help the inspector to estimate the size of flaws. In a pulse-echo setup,
signal strength depends on both the size of the flaw and the distance between the flaw
and the transducer. The inspector can use a reference standard with an artificially
induced flaw of known size, at approximately the same distance from the transducer,
to produce a reference signal. By comparing the signal from the reference standard to
that received from the actual flaw, the inspector can estimate the flaw size.

Introduction to Non-Destructive Testing Techniques

Ultrasonic Testing Page 33 of 36
The material of the reference standard should be the same as the material being
inspected, and the artificially induced flaw should closely resemble the actual flaw.
This second requirement is a major limitation of most standard reference samples.
Most use drilled holes and notches that do not closely represent real flaws. In most
cases the artificially induced defects in reference standards are better reflectors of
sound energy (due to their flatter and smoother surfaces) and produce indications that
are larger than those a similar-sized flaw would produce. Producing more "realistic"
defects is cost prohibitive in most cases and, therefore, the inspector can only make
an estimate of the flaw size.
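The comparison between a flaw echo and a reference-reflector echo is usually expressed in decibels. The snippet below is a minimal sketch of that arithmetic; the function name and the screen-height values are hypothetical.

```python
import math

def amplitude_difference_db(flaw_amplitude, reference_amplitude):
    """Difference in echo amplitude, in dB, between a flaw signal and
    the signal from a known reference reflector at a similar sound
    path (negative means the flaw reflects less than the reference)."""
    return 20.0 * math.log10(flaw_amplitude / reference_amplitude)

# Hypothetical screen heights (% of full screen height, FSH):
# flaw echo peaks at 40% FSH, reference hole echo at 80% FSH.
diff = amplitude_difference_db(40.0, 80.0)
print(round(diff, 1))  # -> -6.0 (dB): roughly half the amplitude
```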
Reference standards are mainly used to calibrate instruments prior to performing the
inspection and, in general, they are also useful for:
- Checking the performance of both angle-beam and normal-beam transducers
  (sensitivity, resolution, beam spread, etc.)
- Determining the sound beam exit point of angle-beam transducers
- Determining the refracted angle produced
- Calibrating the sound path distance
- Evaluating instrument performance (time base, linearity, etc.)
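The refracted angle mentioned in the list above follows from Snell's law, which relates the incident angle in the transducer wedge to the refracted angle in the test material through their sound velocities. The sketch below assumes this standard relation; the wedge angle and velocity values are hypothetical, representative figures for an acrylic wedge on steel.

```python
import math

def refracted_angle(incident_deg, v_wedge, v_material):
    """Expected refracted angle from Snell's law:
    sin(theta_r) / v_material = sin(theta_i) / v_wedge."""
    s = math.sin(math.radians(incident_deg)) * v_material / v_wedge
    if s >= 1.0:
        raise ValueError("beyond the critical angle: no refracted wave")
    return math.degrees(math.asin(s))

# Hypothetical wedge: 36.6 deg incidence in acrylic (longitudinal wave
# ~2.73 mm/us) producing shear waves in steel (~3.24 mm/us).
print(round(refracted_angle(36.6, 2.73, 3.24), 1))  # roughly 45 deg
```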

Introduction to Some of the Common Standards


A wide variety of standard calibration blocks of different designs, sizes and systems of
units (mm or inch) are available. The type of standard calibration block used is
dependent on the NDT application and the form and shape of the object being
evaluated. The most commonly used standard calibration blocks are those of the
International Institute of Welding (IIW), the American Welding Society (AWS), and the
American Society for Testing and Materials (ASTM). Only two of the most commonly
used standard calibration blocks are introduced here.
IIW Type US-1 Calibration Block
This block is a general-purpose calibration block that can be used for calibrating angle-
beam transducers as well as normal-beam transducers. The material from which IIW
blocks are prepared is specified as killed, open hearth or electric furnace, low-carbon
steel in the normalized condition and with a grain size of McQuaid-Ehn No. 8. Official
IIW blocks are dimensioned in the metric system of units.



The block has several features that facilitate checking and calibrating many of the
parameters and functions of the transducer as well as the instrument, including: the
angle-beam exit (index) point, beam angle, beam spread, time base, linearity,
resolution, dead zone, sensitivity, and range setting.

The figure below shows some of the uses of the block.

ASTM - Miniature Angle-Beam Calibration Block (V2)


The miniature angle-beam block is used in a somewhat similar manner to the IIW
block, but it is smaller and lighter. The miniature angle-beam block is primarily used
in the field for checking the characteristics of angle-beam transducers.
With the miniature block, the beam angle and exit point of an angle-beam transducer
can be checked. Both the 25 and 50 mm radius surfaces provide ways of checking the
location of the exit point of the transducer and of calibrating the time base of the
instrument in terms of metal distance. The small hole provides a reflector for checking
the beam angle and for setting the instrument gain.

Distance Amplitude Correction (DAC)


Acoustic signals from the same reflecting surface will have different amplitudes at
different distances from the transducer. A distance amplitude correction (DAC) curve
provides a means of establishing a graphic “reference level sensitivity” as a function
of the distance to the reflector (i.e., time on the A-scan display). The use of DAC
allows signals reflected from similar discontinuities to be evaluated once signal
attenuation as a function of depth has been correlated. DAC allows the loss in
amplitude over material depth (time) to be represented graphically on the A-scan
display. Because near-field length and beam spread vary with transducer size and
frequency, and materials vary in attenuation and velocity, a DAC curve must be
established for each different situation. DAC may be employed in both longitudinal
and shear modes of operation, as well as in either contact or immersion inspection
techniques.
A DAC curve is constructed from the peak amplitude responses from reflectors of
equal area at different distances in the same material. Reference standards which
incorporate side-drilled holes (SDH), flat-bottom holes (FBH), or notches, in which the
reflectors are located at varying depths, are commonly used. A-scan echoes are
displayed at their non-electronically-compensated height, and the peak amplitude of
each signal is marked to construct the DAC curve as shown in the figure. It is
important to recognize that, regardless of the type of reflector used, the size and
shape of the reflector must be constant.
The same method is used for constructing DAC curves for angle-beam transducers;
however, in that case both the first and second leg reflections can be used for
constructing the DAC curve.
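The construction described above can be sketched as a simple interpolation: record the peak amplitude of each reference reflector against its depth, then evaluate the reference level at any intermediate depth. This is a minimal sketch, not tied to any particular instrument; the depths and amplitudes are hypothetical.

```python
def build_dac(points):
    """Build a DAC curve by linear interpolation between recorded peak
    amplitudes (depth in mm, amplitude in % full screen height) from
    identical reflectors at increasing depths. Returns a function that
    gives the reference level at any depth (clamped at the ends)."""
    pts = sorted(points)
    def reference_level(depth):
        if depth <= pts[0][0]:
            return pts[0][1]
        if depth >= pts[-1][0]:
            return pts[-1][1]
        for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
            if d0 <= depth <= d1:
                return a0 + (a1 - a0) * (depth - d0) / (d1 - d0)
    return reference_level

# Hypothetical SDH responses: 80% FSH at 10 mm, 55% at 25 mm, 35% at 40 mm.
dac = build_dac([(10.0, 80.0), (25.0, 55.0), (40.0, 35.0)])

# Evaluate a flaw echo (60% FSH at 30 mm depth) against the curve.
flaw_depth, flaw_amp = 30.0, 60.0
ratio = flaw_amp / dac(flaw_depth)  # amplitude relative to the DAC level
print(round(dac(flaw_depth), 1), round(100 * ratio, 1))
```

A signal exceeding 100% of the DAC level at its depth would be reported, since it reflects more strongly than the known reference reflector at that distance.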
