Reeves Technologies
Compact Calibration Guide
Issue 2, November 2002
COMPACT LOGGING
CONTENTS
1. Introduction
2.5 Combination
3. Signal Processing
1. INTRODUCTION
The Compact Calibration Guide is concerned with the acquisition and processing of
Reeves Compact Systems well log data.
Starting with the raw output from logging tool transducers, the Guide follows the
calibration process through to the generation of logs scaled in engineering units. It shows
how calibration information presented on a log tail is traceable to primary standards, and it
specifies the maximum expected inter-tool normalisation errors for each measurement.
The Guide also includes a consideration of basic digital signal processing principles as
applied to the presentation of log data, and indicates how the choice of digital filter
influences spatial resolution, precision and nuclear log signal-to-noise ratios.
2. CALIBRATION PHILOSOPHY
Log calibration encompasses a range of procedures whose objectives are to ensure that log
data represents a true record of the physical properties being measured, and in particular
that their values are traceable to those of standards whose properties are known to a high
level of accuracy.
Normalisation is the process that ensures all examples of the same tool type respond in the
same way to a common stimulus.
Combination refers to the manner in which individual measurements are brought together
to form a compensated measurement.
An explanation of the meaning and interpretation of calibration tails occupies the major
part of this document.
2.2 DESIGNATION
The question arises - at what level of modification does a tool become a new tool with a
new response function?
Tool geometry modifications (for example, spacing and collimation) invariably result in the
designation of a new response function, whilst a discrete electronic component change
typically does not. The effects of some modifications can be determined only by
experiment - when a response function change is indicated, a new tool series is designated
along with a new set of response characteristics.
2.3 NORMALISATION
Once a generic tool type has been designated, its formation and environmental response
characteristics must be defined; this need only be done once for one tool, since it is
assumed that all tools of the same type share the same set of response characteristics.
However, no two tools are ever identical, so provision must be made to equalise their
outputs to a common reference standard. This is the process of normalisation.
Differences between tools are random and systematic. Examples of random variation are
manufacturing tolerances, and variations in the thickness of pressure casings caused by
wear. An example of a systematic variation is the decay of a radioactive source (Cs-137,
for example, decays 2.3% per year).
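The quoted decay rate follows directly from the half-life; a minimal check (the Cs-137 half-life of 30.17 years is a published figure, not a value from this Guide):

```python
import math

# Published Cs-137 half-life (years); an assumption, not a value from this Guide.
CS137_HALF_LIFE = 30.17

def annual_decay_fraction(half_life_years):
    """Fraction of source activity lost per year of elapsed time."""
    decay_constant = math.log(2) / half_life_years
    return 1.0 - math.exp(-decay_constant)

print(round(100 * annual_decay_fraction(CS137_HALF_LIFE), 1))  # → 2.3
```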
The process of normalisation is intended to correct for differences of this type that are
small. The raw responses from all tools of the same type (counts, volts and so forth) are
assumed to be related to each other in a simple way, usually in a linear fashion. There are a
small number of exceptions to the linearity assumption, and in these cases more complex
normalisations are used. For example, some caliper mechanisms do not behave linearly,
and in this case the full measurement range is split into sub-ranges which are each taken as
linear.
It is stressed that normalisation takes place in the raw output domain (sometimes after
application of a so-called design normalisation to scale the output into the appropriate
engineering unit range). This permits the assumption of linearity to be made valid by the
appropriate level of control during the manufacturing and operating processes.
Linear transformation (from raw units into normalised units) uses a gain term and an
offset. In other words:

Normalised = m (Raw) + c
where m and c are gain and offset respectively. They are derived by subjecting the tools to
a standard input or environment. For example, some resistivity tools are normalised using
precision resistors, whilst nuclear tools are generally subjected to standard fluxes.
In order to define both m and c it is necessary to have two reference points. In some cases
it can be determined that c is zero (or less than the normalisation error) in which case only
one non-zero reference is used. This is sometimes called a “one-point” normalisation.
The normalisation procedure is therefore to record raw output whilst subjecting the tool to
the two references in turn. This gives the simultaneous equations:
Reference 1 = m (Raw 1) + c
Reference 2 = m (Raw 2) + c

whence

m = (Reference 2 - Reference 1) / (Raw 2 - Raw 1)

and

c = Reference 1 - m (Raw 1)
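The two-point solution can be expressed directly in code; a minimal sketch (function and variable names are illustrative, not taken from the acquisition software):

```python
def two_point_normalisation(raw1, ref1, raw2, ref2):
    """Solve Reference = m * Raw + c from two reference measurements."""
    m = (ref2 - ref1) / (raw2 - raw1)
    c = ref1 - m * raw1
    return m, c

def normalise(raw, m, c):
    """Transform raw tool output into normalised units."""
    return m * raw + c

# Illustrative raw outputs (120, 480) recorded against two references (100, 500):
m, c = two_point_normalisation(120.0, 100.0, 480.0, 500.0)
print(round(normalise(120.0, m, c), 6), round(normalise(480.0, m, c), 6))  # → 100.0 500.0
```

Applying the derived gain and offset back to the raw readings recovers the reference values, which is also a convenient self-check of a recorded calibration.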
In the case of nuclear logs, the measurements are made over a sufficiently long period of
time to allow the uncertainty due to counting statistics to be ignored.
When a normalisation procedure is simple and environmental effects are small, the best
place to perform it is at the wellsite immediately prior to logging. This is the case, for
example, with the gamma ray tool.
In general, however, normalisation procedures demand several measurements per tool, the
results may be influenced by the local environment, and close attention to detail is required.
Consequently, the best place to normalise is usually at a base location where conditions can
be controlled, and where operational pressures are at a minimum. This is the Base
Calibration.
The gains and offsets obtained in this way are generally more accurate than can be obtained
at the wellsite. Moreover, the stability of modern systems is such that this consideration
outweighs any desirability for wellsite calibration. Consequently, for the majority of tools,
the m and c values used during logging are those obtained at base, and a check procedure is
used at the wellsite to confirm (or otherwise) that those values remain appropriate.
The check may be a sub-set of the base calibration, or may use a special field portable jig or
internal standard. In each case, a Field Check At Base measurement is compared to a
before survey Field Check made at the wellsite using the same procedure. This in turn is
compared with an After Survey Check as a quality control on stability during logging.
Values obtained during each phase of the check procedure should agree to within quoted
tolerances. However, failure to agree is not proof of a tool fault.
If the tolerances are not met, the engineer will take other information into account (for
example, whether circumstances allowed the tool to be cleaned to the appropriate degree
prior to the check), and select one of the following courses of action:
2. accept the log and perform a full Base Calibration as soon as feasible after the job
The reference standards used to characterise the response of a tool are unique. They are the
primary calibration set. The standards used during normalisation are also commonly
referred to as calibrators; they are replicated at each operating base and their values
referenced to the primary calibration set. They are the secondary or base standards.
In some cases the base standards are physically large, and it may be inconvenient to
transport and use them in the field. Consequently the check measurements may be
performed using tertiary or field standards which are themselves calibrated against a base
standard.
In this way, a calibration trail or chain is established which enables the response of a tool to
be traced all the way back to the primary standard.
In the calibration trail, there is a level below that of field standard, and that is the internal
calibrator. This is used when it is not possible to perform a field check because of, for
example, interference from nearby structures (this can be particularly problematic on
offshore installations). Examples are the precision voltages generated internally within the
Array Induction tool, and the lock source counts from density tools.
The design of a calibrator is a function of its purpose. Primary calibrators are typically
large, enclosing the entire volume of investigation of the logging tool; they are
homogeneous and their properties are known to a high level of accuracy. Examples are
free space, test wells drilled (and cored) through real earth formations, artificial formations
made of real earth materials, cast or moulded blocks such as aluminium and nylon, and
large bodies of water such as lakes and reservoirs. The values assigned to primary
calibrators are determined using independent means.
Field standards, on the other hand, need not be large. In most cases, they are used as
checks, with portability and ease of use being important design considerations. Care may be
needed to avoid environmental perturbations i.e. the possibility that the tool readings may
be influenced by materials close to the tool during the check measurement.
For all calibrator types, their design needs to take account of the practicalities of making a
valid logging measurement.
In particular, it must be easy to control the position of the calibrator with respect to the
tool. So, for example, the density tool base normalisation and field checks are performed
with the tool horizontal, source and detector windows uppermost, and the calibrator
resting on top, then gravity does most of the positioning work.
Another design consideration for all calibrators is that they should provide a measurement
that is within the range of interest and in a manner similar to real earth formations. So, for
example, a gamma calibrator comprises multiple sources embedded in nylon, which wraps
around the gamma detector to give a uniform flux whose spectrum is typical of real
formations.
Base normalisations are performed at intervals which have been defined on the basis of
known tool properties, rates of usage, and past experience (in the form of normalisation
histories). The recommended intervals are specified for each tool in Section 4. The typical
interval is one month; occasionally three or six-month intervals are specified. Where base
and field procedures are the same (the gamma ray, for example) the normalisation may be
performed immediately prior to each job.
2.4 CHARACTERISATION
The first stage in this process is to define the transform between raw transducer output and
the formation property under standard conditions. Standard conditions are usually (but
not always) an 8 inch (203mm) diameter borehole filled with fresh water (density 1.0gm/cc
or 8.3lb/gal) at 70°F (21°C) and 1 atmosphere. Nuclear tools are standardly eccentred,
whilst mandrel resistivity tools are standardly centralised. Transforms are defined by
noting the tool output as the formation property of interest is changed; the environment
and (as far as possible) other formation properties are maintained constant.
The second stage is to define the departure characteristics. These are the variations in tool
output caused by changes in hole size, mud weight and so forth, and which occur even if
the formation property stays constant.
In some cases, the corrections may be independent of formation properties. This is the case
(to a good approximation) with the borehole size correction for induction measurements.
In other cases, the corrections can change in a complex way dependent on formation
properties. This is the case for neutron porosity standoff corrections, for example.
Characterisation measurements are made with real logging tools in simulated formations, or
in materials whose properties are known to a high level of accuracy. They are also needed to
benchmark the results of mathematical modelling. Among the many physical test facilities
hosted by Reeves Technologies is the industry standard Callisto neutron porosity facility,
and numerous density and Pe test blocks.
The nuclear measurements (Density, Pe and Gamma Ray) were characterised using the
Monte Carlo method in which "virtual" particles are tracked from source to detector via
interactions whose outcome is determined by the toss of a virtual coin. The electrical
measurements (Array Induction, Focussed Electric and Laterologs) were characterised using
analytical and finite element techniques.
2.5 COMBINATION
A compensated measurement is formed as a weighted combination of the individually
calibrated logs; for the compensated density, for example:

ρc = AρL + (1 − A)ρS
Here A is a constant, and the L and S subscripts refer to long and short spacings
respectively. In this case the individual logs have been separately normalised and
characterised; this approach is both consistent with theoretical considerations, and makes
the individual logs available for resolution enhancement in an appropriate form. This
aspect is addressed in Section 3.5.
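As a sketch, the combination is a straightforward weighted sum of the separately calibrated logs (the value of A and the densities below are illustrative only; A is tool-specific and not quoted here):

```python
def compensated_density(rho_long, rho_short, a):
    """Compensated density: rho_c = A * rho_L + (1 - A) * rho_S."""
    return a * rho_long + (1.0 - a) * rho_short

# Illustrative inputs only; A is tool-specific and not quoted in this Guide.
print(round(compensated_density(2.60, 2.50, a=1.5), 3))  # → 2.65
```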
3. SIGNAL PROCESSING
This section deals with sampling and filtering. These influence the perceived vertical
resolution and precision of logs.
Reeves logs are generated from sampled data. The rate of sampling is such that there are
always sufficient samples to allow a continuous log to be reconstructed with the
appropriate amount of vertical resolution - the finer the resolution of the measurement, the
greater the sample rate.
Table 1 shows the available sample rates and the corresponding increment between samples.

Table 1. Sample rates and increments.

Samples/metre   Increment, mm (in)
10              100 (3.9)
40              25 (1.0)
100             10 (0.4)
200             5 (0.2)
500             2 (0.1)
The rate for all lithology logs is 10 samples/metre, which allows reproduction of spatial
frequencies as high as 5 cycles/metre.
The higher sample rates have been reserved for future dipmeter and formation image
logging developments.
The optimum sample rate is the minimum rate that permits reproduction of the highest
meaningful frequency in a measurement.
It is not normally convenient to record mixed sample rates from one tool string, so the rate
that is used depends on the highest resolution measurement in the string (under sampling
results in the loss of vertical resolution).
Consequently, some curves may be over sampled. In the case of nuclear logs, this means
that the standard deviation per sample is unnecessarily high, and smoothing filters are
employed to bring the noise back to optimum levels.
For a filter with weights Wi applied to samples Xd, the variance of the filtered output Xf is:

σXf² = ... + W−1² σXd−1² + W0² σXd² + W1² σXd+1² + ...

Assuming the variances are the same over the averaged interval, the variance reduction is
therefore:

σXf² / σXd² = Σ Wi²    (1)
A number of filters are available to cover the range of noise reduction requirements, and
these are listed in Table 2. They are either moving averages (MA) in which the filter
weights are equal, or convolved moving averages (CMA) which are two MA filters
convolved together. For example a 3 MA filter is a 3 level filter whose weights are (1/3,
1/3, 1/3), and a 3/5 CMA is a 7 level filter with weights (1/15, 2/15, 1/5, 1/5, 1/5, 2/15,
1/15). The relative change in standard deviation per sample is also given in the table.
Table 2. Standard filters and the associated reduction in standard deviation per sample.
If a log has been sampled at the optimum rate, any smoothing will degrade resolution. If a
nuclear log has been over sampled, the standard deviation per sample increases by the square
root of the ratio of the rate used to the optimum rate.
If the standard deviation associated with the optimum rate is σxo we have:

σxd / σxo = (R / Ro)^1/2    (2)

where R is the rate used and Ro the optimum rate. Combining (1) and (2) we get:

Ro / R = Σ Wi²    (3)
Equation (3) defines the equivalence between filtering and changing the sample rate, and
allows the optimum filter to be chosen for any over sampled rate.
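The CMA construction and equation (3) can be checked numerically; a minimal sketch in Python (exact fractions are used so the weights can be compared directly with those quoted above):

```python
from fractions import Fraction

def moving_average(n):
    """Weights of an n-level moving average (MA) filter."""
    return [Fraction(1, n)] * n

def convolve(a, b):
    """Discrete convolution of two weight sequences."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# A 3/5 CMA is two MA filters convolved together.
cma = convolve(moving_average(3), moving_average(5))
print([str(w) for w in cma])  # → ['1/15', '2/15', '1/5', '1/5', '1/5', '2/15', '1/15']

# Equation (3): the equivalent sample rate ratio Ro/R is the sum of squared weights.
print(sum(w * w for w in cma))  # → 37/225
```

For the 3/5 CMA, Σ Wi² = 37/225, i.e. the filter is equivalent to reducing the sample rate by a factor of about six, with the standard deviation per sample falling to √(37/225) ≈ 0.41 of its unfiltered value.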
4. CALIBRATION CATALOGUE
This section describes primary calibration, base normalisation and field check procedures
for each tool, and specifies tolerances on uncalibrated and check values for use in quality
control procedures (see section 2.3.3). Example calibration records from Compact's 32-bit
acquisition software are reproduced.
Calibration records for each measurement are stored in the Reeves format Curve file for
that measurement, and are also encoded on certain styles of LIS format customer files (so-
called Pagoda-compatible files). Part of the calibration procedure is to compare the current
calibration set with the last (which is held in a separate calibrations database file).
The calibration tail on a paper log is a summary of the Curve file calibration data. In
general, the calibration record for each measurement comprises two parts:
2. Measurement Constants. These are engineer editable values that control the
processing of the calibrated data. Examples are matrix density (used to calculate density
porosity), borehole fluid salinity, and formation Sigma (for neutron porosity environmental
corrections).
The tool-specific calibration procedures are detailed below, illustrated with actual
Calibration Records. The accompanying tables summarise each step in the procedure, and
specify the expected tolerances on the measured (pre-calibrated) values.
ρ = m(kR+c)
where R is the resistance measured by the tool (sense electrode potential divided by
current), k the tool coefficient (or k-factor), and m and c the calibration gain and offset.
In the calibration record, the measured value is R and the calibrated value is ρ.
Two measurements, Reference 1 and Reference 2 are needed to determine m and c. The
design of the MFE is such that c=0, and Reference 1 is therefore set to zero.
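With c fixed at zero by design, only the gain m need be determined, from a single non-zero reference; a minimal sketch of the one-point case (names and numbers are illustrative):

```python
def resistivity(raw_resistance, k_factor, m, c=0.0):
    """MFE transform: rho = m * (k * R + c); c is zero by tool design."""
    return m * (k_factor * raw_resistance + c)

def one_point_gain(raw_resistance, k_factor, reference_rho):
    """Solve the gain m from a single non-zero reference (c taken as zero)."""
    return reference_rho / (k_factor * raw_resistance)

# One-point normalisation against a precision resistor (illustrative numbers):
m = one_point_gain(raw_resistance=2.0, k_factor=0.5, reference_rho=1.1)
print(resistivity(2.0, 0.5, m))  # recovers the reference value
```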
The Array Induction Sonde makes four measurements of conductivity, each output channel
being independently calibrated.
Base calibration is from a copper loop and precision resistors placed around the tool, co-
planar with the measure points of each channel. The loop simulates a conductive
formation; the apparent conductivity of the loop is different for each channel.
Base calibrations are performed at specific locations known to have negligible or very low
background conductivity (determined from measurements made with tools calibrated in a
true zero conductivity environment). Non-zero backgrounds offset the calibration; they are
backed out, and reported as Site Corrections in the Induction Constants part of the
calibration record.
Field checks use two internal references that present different apparent conductivities to
each channel. The field internals are compared to internal values obtained during base
normalisation. The internals are also used in the after survey check.
The four channels are typically combined into Deep, Medium and Shallow curves. Values
for these curves resulting from the Field Checks are presented for information.
Tool output is in millivolts. The primary (and only) calibrator is an external precision
voltage source. Two reference signals are selected, typically close to 100mV; the tool
output is transformed to the reference signal using the appropriate gain and offset. This
calibration is performed in the field prior to logging.
A gamma measurement is made in the Compact Gamma Comms (MCG) sub. It logs
naturally occurring gamma rays having energies in excess of approximately 100keV.
The raw output is the number of counts per second from a sodium iodide crystal and
photo-multiplier tube. This is scaled into API units using a gain factor determined from a
test jig.
The primary gamma calibrator is the API test pit in Houston: 1 API unit is defined as 1/200
of the difference between high and low activity zones in the pit. Secondary calibrators are
split nylon doughnuts that are wrapped around the tool, and contain low activity
radioactive materials that simulate the flux from typical earth formations. This calibration
is performed in the field prior to logging, and repeated after survey as a check.
The Compact PhotoDensity Sonde provides formation density and borehole caliper logs,
plus a lithology-sensitive Pe measurement.
The formation density log is a compensated measurement derived from near and far spaced
measurements which have been individually calibrated and characterised. Primary
calibration was established in specially commissioned calibration blocks, and the Callisto
test facility.
The Pe curve is derived from the ratio of soft (low energy) counts to hard (high energy)
counts from the near spaced detector. Primary calibration was established in the same
calibration facilities.
Raw count rates are transformed into normalised counts (Standard Density Units - SDUs)
using secondary base standards.
Density processing transforms normalised hard energy counts into electron densities, ρe,
and then into apparent log densities, ρa, using the relation ρa = 1.0704ρe - 0.1878. This
defines a progressive Z/A correction which is applied in the range 1.678 ≤ ρa ≤ 2.71 gm/cc;
the correction is set at zero for ρa > 2.71 gm/cc, and at -0.065 gm/cc for ρa < 1.678 gm/cc.
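A sketch of this piecewise transform, treating the out-of-range correction as a constant offset to the electron density (the clamping interpretation is ours; the coefficients are those quoted above):

```python
def apparent_density(rho_e):
    """Electron density to apparent log density with a clamped Z/A correction.

    Inside the progressive range the quoted linear relation applies; outside
    it the correction (rho_a - rho_e) is held constant. The clamping
    interpretation is ours, not stated explicitly in the Guide.
    """
    rho_a = 1.0704 * rho_e - 0.1878  # progressive Z/A relation
    if rho_a > 2.71:
        return rho_e            # correction clamped to zero
    if rho_a < 1.678:
        return rho_e - 0.065    # correction clamped to -0.065 gm/cc
    return rho_a

print(round(apparent_density(2.0), 3))  # mid-range value → 1.953
```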
A single Base Calibration procedure normalises the density and Pe measurements. It begins
with a background measurement; most background counts are due to the small source used
by the detector gain stabilisation system, and must be removed prior to calculating densities
and Pe ratios. Two reference jigs are then placed in turn over the tool. Reference 1 is a
nylon block, Reference 2 an aluminium block with steel insert. Counts due to each are
sorted into soft (WS) and hard (WH) energy bins. Density counts are transformed into
normalised density units, whilst the soft/hard ratio values are transformed into normalised
ratio values.
The recommended base to field check procedure is to repeat the Reference 2 measurement
at base and compare with Reference 2 counts made in the field. If a valid Reference 2 field
measurement cannot be obtained for environmental reasons, it is acceptable to compare
background counts. The after survey check is a repeat of the Reference 2 or background
measurement, as appropriate.
Field Check
Near: 895.7    Far: 1171.8

PE Calibration

Base Calibration   WS      WH      Measured Ratio   Calibrated Ratio
Background         168     780
Reference 1        16085   52904   0.305            0.319
Reference 2        6402    24926   0.258            0.275

Field Check
WS: 168.1    WH: 784.3
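The measured ratios in the record above can be reproduced from the raw bin counts once background is subtracted; a minimal sketch (the subtraction step is our reading of the procedure described in the text):

```python
def soft_hard_ratio(ws, wh, bg_ws, bg_wh):
    """Background-corrected soft/hard count ratio for the Pe measurement."""
    return (ws - bg_ws) / (wh - bg_wh)

# Counts from the example Base Calibration record (background: WS 168, WH 780):
print(round(soft_hard_ratio(16085, 52904, 168, 780), 3))  # Reference 1 → 0.305
print(round(soft_hard_ratio(6402, 24926, 168, 780), 3))   # Reference 2 → 0.258
```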
Photo Density Check  MPD 008
Before Survey Check on 7-JAN-1999 14:20
After Survey Check on 8-JAN-1999 00:46

Density Check
        Near                Far
        Before   After      Before   After
        895.7    890.0      1171.8   1170.0

PE Check
        Before   After
WS      168.1    170.0
WH      784.3    785.0
The Compact Dual Neutron porosity log is computed from a ratio of near and far thermal
neutron count rates that have been individually normalised.
Primary calibration was established at the Callisto test facility, at test pits in Houston, and
at other Reeves facilities. The secondary base standard is a large fresh water filled tank
replicated at each base, and used to normalise tools on a monthly basis. Checks are
performed using portable active jigs.
The acquisition software has the capacity to apply all relevant environmental corrections.
They are listed in the Neutron Constants part of the log tail, and are applied when the
correction parameter values depart from standard conditions (refer to the MDN Charts).
Field Check
Calibrated (cps): 2076  3048
Ratio: 0.681
The fundamental measurement parameter is time. This is derived from a very accurate
crystal oscillator whose proper functioning is implicitly guaranteed in an operating tool,
and does not require calibration.
Some variants of MSS also operate as Cement Bond Log tools. In this case, the E1
amplitude may be normalised to produce a standard reading in free pipe. The gain factor is
recorded in the Curve File, but does not produce a printed calibration record.
Caliper tools measure borehole diameter; caliper logs are used in hole volume calculations,
and may be used in certain environmental corrections.
The caliper from the Compact PhotoDensity (MPD) serves to position the density shoe
against the borehole wall. The caliper from the Compact Two Arm Caliper (MTC) is
normally run at right angles to the MPD caliper in elliptical wells to position the density
shoe across the short axis of the well; this configuration results in an X-Y caliper output.
Base calibration uses five sleeves whose internal diameters are known to a high degree of
precision, covering the range of expected hole sizes. Count rates from the caliper
transducers are recorded for each sleeve, and a transform characteristic is computed.
A field calibration is performed at the wellsite prior to logging. One of the sleeves is
measured with the calibration transform applied, and compared with the actual sleeve
diameter. The two values should agree to within the specified tolerance; residual
differences below the tolerance level are removed by making a modification to the base
calibration characteristic.
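The residual removal step can be sketched as a simple offset adjustment (treating the residual as a pure offset is our assumption, and the tolerance value is illustrative; neither is specified here):

```python
def calibration_offset(measured, actual, tolerance):
    """Offset to fold back into the base characteristic after a field check.

    If the residual exceeds the tolerance the check fails and no adjustment
    is made; a repeat or full base calibration would be indicated instead.
    """
    residual = actual - measured
    if abs(residual) > tolerance:
        raise ValueError("field check outside tolerance")
    return residual

# Values from the example Field Calibration record; the tolerance is illustrative.
offset = calibration_offset(measured=7.95, actual=7.96, tolerance=0.05)
print(round(7.95 + offset, 2))  # → 7.96
```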
The after survey check is a repetition of the sleeve measurements made before survey. The
before and after values should agree to within the specified tolerance.
Field Calibration
Measured Caliper (in) Actual Caliper (in)
7.95 7.96
The primary measurement of borehole temperature is from the MCG tool. Nominal
accuracy is ±1.7 degrees C. The High Resolution Temperature curve recorded by MAI
and MHT tools is used to compute differential temperatures, and as such the absolute
accuracy is not specified.
Temperature measurements are made with transducers whose output is linear with
temperature. During primary calibration the transducers are exposed to ambient
temperature and a second elevated temperature. Calibrated values are determined using an
independent precision thermometer.
4.6 PRESSURE
Formation and hydrostatic wellbore pressures are measured by the Compact Repeat
Formation Pressure Tester (MFT). The tool contains two gauges: a strain gauge, and a
Quartzdyne quartz gauge.
Initial calibration of all Quartzdyne gauges is by the manufacturer, and includes the
determination of the temperature response characteristics. The initial calibrations are
checked every 3 months using a reference gauge supplied by Quartzdyne and calibrated by
the manufacturer at 12 month intervals. Gauges that fail the check procedure are returned
to the manufacturer for primary calibration, but otherwise the date of the manufacturer's
original calibration appears in the MFT Logging Constants section of the log tail. The
primary calibration of individual gauges may therefore be older than 12 months.
The manufacturer also performs strain gauge primary calibration, and the contents of the
manufacturer's calibration certificate are reproduced in the Strain Gauge Constants part of
the MFT calibration tail. The calibrated response is recorded as a function of increasing
and decreasing temperature. The strain gauge calibration is checked with a dead weight
tester (or against the quartz reference gauge) every 3 months.