
SPWLA 54th Annual Logging Symposium, June 22-26, 2013

SPEED MATTERS: EFFECTS OF LOGGING SPEED ON LOG RESOLUTION AND LOG SAMPLING
John Priest, Elton Frost and Terrence Quinn, Baker Hughes

Copyright 2013, held jointly by the Society of Petrophysicists and Well Log Analysts (SPWLA) and the submitting authors. This paper was prepared for presentation at the SPWLA 54th Annual Logging Symposium held in New Orleans, Louisiana, June 22-26, 2013.

ABSTRACT

Technology advances in data acquisition systems have enabled significant improvements in acquiring vast quantities of data at higher drilling and logging speeds. However, according to little-published data, uncertainty still exists about how data acquisition responds to the presence of sensor motion (time-windowed acquisition), logging speed and angular rotation rate. Rotational sensors, measuring different formation properties that depend on azimuth, can have fundamentally different statistics that influence their response to sensor speeds.

The need for higher logging speeds is a direct result of the increasing cost of rig time. The question that must be posed is: What is the tradeoff (i.e., the reduction in measurement quality versus the savings in time)?

To completely understand how a measurement system interacts with the measurement target, we need to understand the system. In particular, we must understand the types of measurement involved, how the measurement signals are detected, what is being measured, etc. In other words, we need to know all relevant properties of the measurement system. In any measurement system, there are six major components: the signal, noise, resolution, sampling, and signal interactions with the formation and with the measuring instrument. Exploration of these six components can provide insight into the tradeoff between speed and the desired measurement.

Examples will be provided to show the effects for time-sampled devices (e.g., nuclear) and point measurement devices (e.g., electrical or acoustic) in the logging-while-drilling (LWD) and wireline (WL) environments. Some recommendations on the maximum acceptable speed for quality data acquisition are proposed.

INTRODUCTION

As is normally the case, system performance increases with improved technology. This is certainly true in drilling and logging speeds. The exploration and production companies, E&P, are constantly striving to reduce their exploration, development and production costs. One way E&P companies reduce costs is by constantly developing new techniques for increasing drilling speed or the rate of penetration (ROP), operating under the assumption that increased ROP reduces cost (e.g., saves money on rig time). In the following discussion, ROP and sensor speed may be used interchangeably because the important property is the sensor speed.

Regarding the comments above, several companies contacted us, expecting a reduction in log quality and asking if there were logging or drilling speed guidelines vs. data quality; see also Helgesen, et al. (2006), Dupriest, et al. (2005) and Theys (1999). Here, we will address only the effect of logging speed on the actual or apparent resolution of moving sensors, data sampling and other motion-related effects that can affect tool measurements.

We recognized an internal need to improve our understanding of data acquisition issues caused by increased ROP, in addition to the E&P company requests. The objective is to determine and justify operations that reduce drilling and rig costs while simultaneously maintaining or improving data quality, well placement and reservoir description.

THEORETICAL CONSIDERATIONS

The Basics

To completely understand how a measurement system interacts with its measurement targets, we need to understand the system. In particular, we need to know the types of measurement involved, how the measurement signals are detected, what is being measured, etc. In short, we need to know all relevant properties of the measurement system. There are six major components in any measurement system: the signal, noise, resolution, sampling and signal interaction with the test object and the measuring instrument.


In a practical sense, the goal of any measurement is the characterization of some property related to the test object. In the E&P world, this goal can be classified under various descriptive terms such as reservoir properties, lithology, etc., for large scale measurements and a plethora of properties on smaller scales. To properly analyze the measured signal we need a clear understanding of the general principles surrounding our system, not fine-point details about a particular measuring device and the corresponding input signal—The Basics.

Signals and Noise

At the simplest level, any measurement contains two components, the signal and noise. We try to measure the signal, but this measurement is corrupted by noise and perhaps by the measuring instrument itself. By definition, the signal is the output of the ideal system-transfer function, and given identical inputs it produces identical outputs. The signal can be less than, equal to or greater than the system noise. Noise can be random, non-random or coherent. Extracting the ‘real signal’ depends on our skill in reducing and/or eliminating noise (Shannon, 1948, 1949). Noise sources can occur virtually everywhere within the system. Broadly speaking, these noise sources can be separated into several groups: instrument noise, statistical noise, quantization noise, sampling, unwanted signal and coherent noise (Table 1).

Instrument: Electronics, sensors, shot noise, cross-coupled signals, power supply, A/D and D/A converters. Instrument noise may also include some of the noise types listed below.
Statistical: Generally associated with counting nuclear events over some time window.
Quantization: Measurement is limited to a certain accuracy due to digitization, e.g., 10 bits. This error is generally ±½ the least significant bit.
Sampling: Sampling continuous systems approximates the continuous system—the most common sampling errors are clock drift, clock quantization and aliasing.
Unwanted Signal: Any signal in the system that competes with the measurement, e.g., multiples and shear waves (after the first arrival) when measuring compressional waves.
Coherent Noise: Any signal component that is coherent—the most common source is due to the alternating current electrical power grid.

Table 1 Summary of noise sources and causes.

Fortunately, several noise sources can be partially controlled. Careful circuit design, excluding the sensor, and coherency filters can significantly reduce coherent noise. All sensors have an inherent lower noise, the intrinsic sensor limit. In many cases the analog-to-digital, A/D, and digital-to-analog, D/A, quantization noise can be set well below the intrinsic sensor noise, effectively eliminating quantization effects. An obvious corollary to this discussion is that someone’s noise can be someone else’s signal, e.g., shear waves in seismic data.

Finally, we hope that the residual system noise is confined to random noise, and possibly time-sampling noise and A/D or D/A quantization noise. This noise becomes the irreducible system noise that is always present. For analysis of the irreducible system noise we generally assume the random noise is described by Gaussian or Poisson distributions, and the A/D and D/A converter noise is described by a uniform distribution between 0 and 1, or between ±½, scaled to match the least-significant bit of the converter precision.

Signal-to-Noise Ratio

In any system, the measured signal, M, is never the actual signal; it is the sum of the signal, S, plus all noise, N, within the system

M = S + Σ Ni = S + N,  (1)

with the sum indicating cumulative effects of all noise sources, Ni. This formula is not strictly mathematically correct, but it illustrates the point. As the signal approaches the noise, we are limited in our ability to extract meaningful data. The obvious question is: At what point do we lose the ability to extract meaningful signal from the measured data?

The signal-to-noise ratio is frequently used as a quality measure to determine if meaningful data can be extracted from measured data. The signal-to-noise ratio (SNR) is generally defined in engineering by

SNR = Ps / Pn = (As / An)²,  (2)


where P is average power, A is average amplitude, and the subscripts s and n indicate the real signal and the noise, respectively. As long as the power and amplitudes are measured across the same loads, the right-hand term is valid. The electrical signals generally have a wide dynamic range, so alternate forms are also widely used:

SNR(dB) = 10 log(Ps / Pn) = 20 log(As / An).  (3)

Another frequently used form, particularly when counting quanta or nuclear events, is

SNR = µ / σ,  (4)

where µ is the signal, the population mean or expectation value of the signal, and σ is the standard deviation of the noise.

With these definitions we can infer a useful criterion for extracting a meaningful signal in the presence of noise. Here we add an additional implied constraint—the meaningful part of the signal is the variation around the local mean or average signal, i.e., the contrast. We can have a signal with a high average power, amplitude or mean with a low (relative to the average) noise, and a signal variation that is on the order of a small multiple of the noise, e.g., a simple case where N describes normally distributed random noise:

S(t) = 1000 + 2 exp(−(t − t0)² / (2σd²)),  σd = 1,  σN = 1,  (5)

where t and t0 are arbitrary positions on the horizontal (time or x-axis, etc.) axis, t0 is the position of the center of the normal distribution, and σd is the half-width at half maximum.

The question is: Under what conditions can we observe the Gaussian signal? In the following discussion, when we use the signal, this means the meaningful part of the signal, unless otherwise noted. As used here, the meaningful parts of a signal are those changes in a signal that are ‘real’, regardless of how small those changes are. Ultimately, the question to be answered is: How small a signal change can we observe in the presence of noise?

Recalling that the measured signal is the sum of the signal and the noise, we can write

S = M − N.  (6)

We also know that we can measure neither the signal nor the noise, only their sum. All we can do is estimate the signal and the noise. Using equation 6 we get the following

M / N = S / N + 1.  (7)

Obviously, |M| can never be smaller than |S| or |N|, where the symbol | | refers to the expected value of the magnitude and includes signals that can swing positively and negatively. However, for any particular measurement, M can be smaller than S.

With normally distributed noise and a signal change at twice the noise level, it is not clear that the signal change is in response to the input or due to the noise (Figure 1). We would normally expect that the variation in noise would frequently exceed that same level. In the case where the signal change is three times the noise level we can reasonably expect that the measured signal approximates the actual signal. This results in a true signal-to-noise level of 2, not three, equation (8),

M / N = 3 leads to 3 = S / N + 1, or S / N = 2.  (8)

From a statistical perspective, with normally distributed noise a measured signal of twice the noise would have a 68-percent likelihood of being a response to an input, while a response of three times the noise would have a 99-percent likelihood of being a response to an input. This corresponds to a signal-to-noise ratio of 2:1. Consequently, a signal-to-noise ratio of two is a reasonable criterion for signal detection. However, this approach is only a handy rule for a quick estimate. This estimate can be quite good for localized signal changes, e.g., instrument responses to non-periodic signals or geologic layering within a formation. Under some conditions, a signal may be detected that is far below this SNR level. These signals are generally long-duration periodic signals, e.g., the nth harmonic of the power grid frequency (60 Hz in the USA), and special processing can extract those signals.
count, and the noise is given by σ = µ .

Considerations Unique to Nuclear Logging

Nuclear logging involves counting some kind of nuclear event, usually gamma rays, e.g., natural gamma rays and gamma rays from neutron capture detected in a gamma ray detector over some time window. We will use the term “quanta” to include any signal that can be counted—the signal is said to be quantized. Counting quanta has a SNR that is best described by equation (4),

SNR = µ / σ,  (4)

where µ is the population mean count, not the observed count, and the noise is given by σ = √µ.

Because the population mean is generally unknown, especially in the logging environment, the approximation

SNR = N / √N = √N  (9)

is frequently used. The actual population mean will lie in the range N − √N < µ < N + √N (one standard deviation) only 68% of the time, so this approximation is not very accurate. Obviously, as the time window increases, the counts increase and the SNR improves. Not so obvious to some, as the time window increases the uncertainty in counts increases while the relative uncertainty in counts decreases.

Fig. 1 Example illustrating the signal-to-noise problem. The signal is represented by a stepped parabola (50 points/step) and a root mean square noise level of 4 added (A). The signal (yellow in B) cannot be discerned from the noise until approximately point 700 (SNR ≈ 2). The step at 600 (SNR ≈ 1) might be considered discernible, but not without a high degree of confidence in the data.

While the noise sets a limit to the accuracy of N, the noise also sets the limit to the smallest change that can be detected: ∆N. According to Rose (1948), ∆N is on the order of √N, with a constant of proportionality that depends on probability considerations, e.g., ∆N = 2√N as discussed above.

Adapting from Rose (1948), the signal, B, is proportional to the number of counts observed in the area A (i.e., the area or aperture of a detector):

B = N / A,  (10)

and the threshold contrast, the smallest change in signal that can be determined, C, is defined as the change in signal divided by the signal:

C = ∆B / B = ∆N / N = N^(−1/2) = 1 / √N.  (11)

Note the threshold contrast is inversely proportional to the noise. While the threshold contrast was developed for quanta counting, the results are generally correct for nuclear event counting. From these two equations, (10) and (11), we find

B = 1 / (C²A),  (12)

or

BC²A = constant,  (13)

BC²α² = constant,  (14)

where α is a geometrical term related to aperture, e.g., the angle subtended by h at a lens, α << 1 radian, and indicates the adaption from Rose, equation (14). The constant term contains the detector parameters, exposure time (counting time) and detector quantum efficiency. This result states that signal, contrast and aperture or area cannot be controlled independently; any pair defines the third parameter. At this point the Rose discussion diverges from our system because his interest was detection of light through a lens, hence his use of aperture. Ultimately, the question to be answered is: How small a signal change can we observe in the presence of noise? The answer to this question is the threshold contrast, C. Note that for a fixed area

BC² = constant,  (15)

or, given the signal, B, the contrast, C, is determined.
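As a quick numerical check on equations (9) and (11), the Python sketch below (our own, not from the paper; the 100 counts/sec rate is an arbitrary assumption) draws Poisson-distributed counts for several time windows and shows that the absolute spread in counts grows while the relative spread, and therefore the threshold contrast, shrinks roughly as 1/√N.

    # Poisson counting: SNR grows as sqrt(N), threshold contrast shrinks as 1/sqrt(N).
    import numpy as np

    rng = np.random.default_rng(1)
    rate = 100.0                                  # counts per second (assumed)

    for window in (1.0, 4.0, 16.0):               # acquisition windows, seconds
        counts = rng.poisson(rate * window, size=50_000)
        mean_n = counts.mean()
        abs_spread = counts.std()                 # grows as sqrt(N)
        rel_spread = abs_spread / mean_n          # shrinks as 1/sqrt(N)
        print(f"t = {window:5.1f} s: N ~ {mean_n:7.0f}, "
              f"sigma = {abs_spread:6.1f} (SNR ~ {mean_n / abs_spread:5.1f}), "
              f"contrast C ~ {rel_spread:.3f} (1/sqrt(N) = {1 / np.sqrt(mean_n):.3f})")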

Signals and Sampling

The second part of our problem is sampling. Our ability to sample the formation ‘signal’ depends on two different considerations, one based on the sampling, and the other on the ability of a system to resolve signal changes.

Sampling Theorem and Nyquist Frequency

Generally, sampling systems attempt to represent a continuous signal, s(γ), by measuring its values at a discrete set of points. The samples are useful for recovering the continuous signal only under certain conditions. Possibly the most important condition for recovering the continuous signal is the Nyquist condition (Nyquist, 1928):

If a function x(t) contains no frequencies greater than or equal to some frequency¹, f, it is completely determined by giving its values at a series of points spaced 1/(2f) apart.

¹ Here we are using a general frequency, not just the frequency associated with time series data (Hertz); this includes other domains, such as spatial frequency or wave numbers.

Simply put, this requires that there be no changes in the signal that vary more rapidly than half the sampling frequency. Stated more rigorously, this is equivalent to saying that the Fourier transform of the continuous signal equals zero for all frequencies |f| greater than 1/(2T), where T is the sampling interval. Then, using the Nyquist–Shannon interpolation formula, the continuous function, s(γ), can be exactly regenerated from the samples for any γ, if the Fourier transform S(f) satisfies the condition S(f) = 0 for all |f| ≥ 1/(2T). This condition has frequently been imposed onto data that the data, by itself, might not justify (with strange and wonderful results; as in “What happened?”).

Aliasing and Sampling

Given that the Nyquist criterion, S(f) = 0 for all |f| ≥ 1/(2T), might not be satisfied by the data, then what are our options?

First, we define the problem:

If the Nyquist criterion is not met, then the sampled data set is subject to a phenomenon called “aliasing”. Aliasing is easily explained in the Fourier domain—the sampling causes the high-frequency components of the signal and noise, above the Nyquist frequency, to be mirrored to a lower frequency below the Nyquist frequency, Figure 2.

Second, what can we do about this problem? There are several options that may or may not be useful for a particular data set. If the signal has an upper frequency where the Nyquist criterion is met, we might be able to increase the sample rate. If the signal of interest only occupies a subset of the complete spectrum, we can band-limit the signal by filtering before sampling so the Nyquist criterion is met. Finally, the measuring instrument might place a limit on the observable frequency range. Of these three options, only the last will be discussed in any detail, as the first two are generally well known. However, we need more basics.

Fig. 2 Example of aliasing, arbitrary units. Assuming the units are microseconds, µs, the sine wave, black, has a period of 100 µs (10 kHz) which was then ‘acquired’ at a sample interval of 83 µs, magenta, with a Nyquist frequency of about 6 kHz, creating a sine wave with a period of about 500 µs, or 2 kHz. Obviously, the sampled signal cannot represent the original signal.
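The numbers in the Fig. 2 caption can be checked with a few lines of Python (a sketch we added; the frequency-folding relation is standard signal-processing practice, not taken from the paper): a 10 kHz sine sampled every 83 µs folds back to roughly 2 kHz.

    # Reproducing the aliasing example of Fig. 2: a 10 kHz sine sampled at 83 us
    # (sampling rate ~12 kHz, Nyquist ~6 kHz) aliases to ~2 kHz.
    f_signal = 10_000.0          # Hz, true sine frequency (period 100 us)
    dt = 83e-6                   # s, sample interval from the caption
    fs = 1.0 / dt                # sampling frequency, ~12.05 kHz
    f_nyquist = fs / 2.0         # ~6.02 kHz

    # Frequency folding: the apparent frequency is the distance from the true
    # frequency to the nearest integer multiple of the sampling frequency.
    f_alias = abs(f_signal - fs * round(f_signal / fs))
    print(f"Nyquist frequency: {f_nyquist / 1e3:.2f} kHz")
    print(f"Aliased frequency: {f_alias / 1e3:.2f} kHz "
          f"(period ~{1e6 / f_alias:.0f} us)")   # ~2.05 kHz, ~488 us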

Signals and Resolution

Many signals can be considered as a sequence of pulses of varying amplitude, width and position, including signals as simple as a square wave and many lithological layered structures. Resolution is a term used to describe our ability to distinguish the different pulses within the sequence.

Rayleigh and Sparrow Criteria

Formally, given a pulse s(t), with a maximum at the origin and a width w, we define a function p(t) that consists of two pulses formed from s(t),

p(t) = s(t − τ/2) + s(t + τ/2).  (16)

What is needed is a measure of the width of s(t) such that whenever the pair of pulses is separated by τ the pair can be readily distinguished from the single pulse s(t). We concede that in this case of two well-defined pulses they can be distinguished (isolated) from each other at a level much lower than τ. However, in actual applications there might be multiple copies of s(t) that are attenuated and translated by unknown amounts observed in the presence of additive noise. While there are many different resolution criteria in use, ultimately, they all amount to different definitions of s(t). The two most-common criteria for resolution are the Rayleigh and Sparrow criteria, Figure 3.

[Figure 3 panels: Rayleigh Criterion, Half-power Point; Sparrow Criterion; Sparrow and Rayleigh Criteria.]

Fig. 3 Comparison of Rayleigh half-power and Sparrow resolution criteria.

The Rayleigh criteria come from optics and define the width of the pulse as the difference between the value of t at the peak of s(t) and the value of t at the nearest zero of s(t). This approach works well in optics (continuous waves), but does not work with a Gaussian pulse. A variation of the Rayleigh criteria that works well with Gaussian and other similar distributions is the (arbitrary) half-power criterion. For a pulse that has a maximum at t = 0, the half-power point is given by the smallest t where s(t)² = s(0)²/2. While this is a well-defined criterion, the half-power criterion is somewhat arbitrary because alternate criteria, e.g., one-quarter power, could be used.

The Sparrow resolution criterion specifies the resolution as the smallest value of τ for which p(t) has a second derivative equal to zero at the origin. The second derivative equal to zero implies that there is a flat spot at the origin—that is, there is no curvature at the origin. This criterion fails for triangular or rectangular pulses.

Depending on the actual data and the data distribution, any of these (and possibly yet another) resolution criteria may be better than the others.
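As an illustration of the Sparrow criterion (our own sketch, not from the paper; the Gaussian pulse and σ = 1 are assumptions), the Python snippet below numerically finds, for two identical Gaussian pulses, the separation τ at which the curvature of p(t) at the origin changes sign; analytically this occurs at τ = 2σ.

    # Sparrow separation for p(t) = s(t - tau/2) + s(t + tau/2) with Gaussian s(t):
    # the smallest tau for which the second derivative of p at t = 0 vanishes.
    import numpy as np
    from scipy.optimize import brentq

    sigma = 1.0                                        # pulse width parameter (assumed)

    def s(t):
        return np.exp(-t**2 / (2.0 * sigma**2))        # unit-amplitude Gaussian pulse

    def curvature_at_origin(tau, h=1e-4):
        """Second derivative of p(t) at t = 0 by central differences."""
        p = lambda t: s(t - tau / 2.0) + s(t + tau / 2.0)
        return (p(h) - 2.0 * p(0.0) + p(-h)) / h**2

    # Negative curvature: single unresolved peak; positive curvature: central dip,
    # the pulses are resolved. The sign change is the Sparrow criterion.
    tau_sparrow = brentq(curvature_at_origin, 0.5 * sigma, 4.0 * sigma)
    print(f"Sparrow separation ~ {tau_sparrow:.3f} (expected 2*sigma = {2 * sigma:.3f})")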


Measurement—Formation Interaction

So far, we have only discussed noise, that part of a measured signal that prevents unlimited accuracy in our measurements. Three other limitations to our measurements are due to the measurement instrument, the system being measured, and the interaction of the instrument with the system. Arguably, the most important instrument measurement property is the instrument ‘impulse response’. Similarly, the system being measured has properties that limit accuracy. In our case we are typically examining a layered earth, where we wish to determine various properties of each layer. Every property we wish to measure has some characteristic that limits these measurements: e.g., bed boundary depth—the bed surface is irregular, resistivity—the resistivity is not uniform, etc. The interaction between an instrument and the system being investigated generally falls under two possibly related broad categories: 1) the system transfer function, and 2) convolution.

System Transfer Function

The system transfer function, STF, describes how an input signal that includes noise is transferred to the system output:

O(u) = S(u) ∗ I(u),  (17)

where I is the system input at an arbitrary system axis, u, S is the system transfer function and O is the system output. While the STF can completely describe the input-output relationship, the STF may be insufficient to completely describe many measurement systems. For example, most measurement systems consist of various forms of a sensor, electronics, digitizers, software, etc. Each stage in this process can have its own transfer function, TF, and these TFs are generally cascaded or connected into a final STF. In these cases the sensor TF becomes the input to the next TF stage, so the system impulse response function, or system transfer function, STF, becomes a sequence of transfer functions operating on an earlier stage. Note that STF and TF can be, and frequently are, used interchangeably.

Instrument Impulse Response

The system transfer function may be insufficient to describe many measurement systems, especially if the actual measurement, the sensor to output, goes through multiple transfer functions. A better description of a STF is the system impulse response, or impulse response function, IRF, Figure 4.

Fig. 4 Example impulse response for a moving trapezoidal filter. The output reflects the shape of the filter. Note that the well-defined and positioned delta function has been broadened, indicating a decrease in resolution.

The instrument impulse response function, IRF, describes the output of an instrument to an impulse function, that is, a Dirac delta function for continuous systems or a Kronecker delta function in the discrete case:

IRF(u) ← S(u) ∗ δ(0),  (18)

where u is an arbitrary system axis, e.g., time, S is the system transfer function and δ is the delta function. The asterisk indicates the system operation on an impulse function and applies to continuous and discrete systems.

Knowing the IRF at time zero is sufficient to completely define the instrument response to all stimuli in a linear time-invariant system (the properties of a time-invariant system do not change with time). The output to the stimuli can be described completely by a linear combination of a properly scaled and shifted sequence of delta functions operated on by the STF. Frequently, the IRF is used synonymously with the STF. However, this tendency should be used with discretion, as it can hinder our ability to properly analyze, understand and mitigate the various error sources within the system. Figure 5 illustrates the interaction between the IRF and a sequence of delta functions. This interaction between the signal and the IRF is generally covered under a mathematical process called convolution, which will be discussed below. This example illustrates how the IRF and the signal interact to modify the output signal. The IRF is not normalized; otherwise, the IRF and output would be much lower. With this type of IRF, normalization would produce an output curve whose area (under the curve) is the same as the combined area of the delta functions.

Fig. 5 Effect of impulse (gray) spacing on system transfer function (dotted) to produce output. In this example the IRF is not normalized. A: Both delta functions and their gap are resolved. B: Both delta functions are resolved, the gap is not. C: Delta functions are not resolved, and no gap is visible. The peaking is caused by ‘seeing’ both delta functions at the same time. This output shape is unique to this particular combination of delta functions, their positions and this IRF.

Formation Property Profile

During logging we examine an earth system that can be viewed as a continuous system, a discrete system (of uniform layers) or a combination of a continuous system with a few discrete discontinuities (layer boundaries). In reality, we can never be certain which view is most appropriate. Part of the problem is that we can never measure the formation properties and their location exactly—that is, we never have a sensor that samples as a delta function, in the continuous case, the Dirac delta function:

f(x0) = ∫ f(x) δ(x − x0) dx,  (19)

which gives the exact value of f(x) at exactly x0, with obvious extensions to multiple dimensions, e.g., into 3-D space and time, δ(x − x0, y − y0, z − z0, t − t0), etc.

Note that the Dirac delta function ‘samples’ the continuous function f(x) at x0, and can be called the ideal sampler. Depending on multiple factors, such as noise, sampling and the system transfer function, the earth system can appear to be continuous, discrete or a combination of continuous systems with discrete steps in the data. Again, we are forced to examine the interaction of the formation with the system transfer function.

Convolution

When a measurement system interacts with a signal source, the signal is modified by the system transfer function as noted above (i.e., the system and signal are mixed together based on the properties of the measurement system and the signal). This mixing process is usually best described by a process called convolution. Almost by definition, a data log is: Log = Convolution of Instrument Response Function and Formation Property Profile. Mathematically, convolution is defined as a generalization of equation (19),

f(x) = f ∗ g = ∫ f(u) g(x − u) du,  (20)

where the asterisk, ∗, indicates convolution, g is a generalized STF, and f(x) is the convolution result, in our case, the log. The good news is that we do not have to perform the convolution; the system does that. The bad news is that the convolution process limits our ability to examine the system being measured.
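To make the statement "log = IRF convolved with the formation property profile" concrete, the short Python sketch below (our illustration; the layer values, depth step and filter length are invented) convolves a blocky two-layer profile with a normalized boxcar impulse response and shows how a sharp bed boundary is smeared over roughly one filter length.

    # Discrete form of equation (20): formation profile convolved with a boxcar IRF.
    import numpy as np

    depth_step = 0.05                              # m per sample (assumed)
    profile = np.r_[np.full(60, 50.0),             # upper bed, e.g. 50 API
                    np.full(60, 120.0)]            # lower bed, e.g. 120 API

    irf_length_m = 0.30                            # 30 cm sensor response (assumed)
    n_taps = int(round(irf_length_m / depth_step))
    irf = np.ones(n_taps) / n_taps                 # normalized boxcar IRF

    log = np.convolve(profile, irf, mode="same")   # the "measured" log

    # The single step in the profile now ramps over roughly n_taps samples.
    transition = np.flatnonzero((log > 55.0) & (log < 115.0))
    print(f"IRF taps: {n_taps}, boundary smeared over ~{transition.size} samples "
          f"(~{transition.size * depth_step:.2f} m)")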

Time-Windowed Sampling and Point Sampling

Normally, when considering data acquisition, only stationary measurements are considered. That is, the sensors are not moving, or they are moving slowly enough that the measurements can be considered to be stationary. Very little literature exists on the effects of moving sensors on various types of measurements.

Broadly speaking, moving sensors can be divided into two major types: point and time-windowed azimuthally insensitive measurements, characterized by their position only. Point and time-windowed data can be subdivided into an alternate type that is azimuthally sensitive. Azimuthally sensitive point measurements generally have a limited azimuthal sensitivity, and are usually rotated to create azimuthally varying data displayed as images. Rotation can include physical sensor rotation and/or an electronically induced rotation (Bushberg, 2006).

Definitions:

A time-windowed measurement collects data over some time window, t, and the collected data is treated as a single measurement, e.g., gamma ray, counts/sec, that may be converted algorithmically into engineering units, e.g., API. The measurement is generally stored at the depth of the mid-point of the sensor at the time mid-point, but this may be vendor or instrument specific.

A point measurement is defined as a single measurement, that is, a measurement taken at a point, e.g., a depth or time, and the result of the measurement is the result of the distributed response of the system to the formation, and either has no azimuthal sensitivity, or the azimuthal sensitivity is ignored, e.g., formation deep resistivity. Generally, this ‘point measurement’ measures the properties of a local surface or volume, and is a ‘fast’ measurement. A ‘fast’ measurement assumes that during the data sampling, the sensor moves a very small distance, ∆d, relative to the tool resolution, R, during the acquisition sample, ∆d << R.

A rotating time-windowed measurement with azimuthal sensitivity divides the acquisition between various azimuthal angle ranges, so that the acquisition includes a time and an azimuth range, or a depth and an azimuth range, for each measurement. The azimuth may be referenced to the edge or center of the azimuthal sampling angle (vendor or instrument specific) and is otherwise the same as the time-windowed measurement above.

A rotating point measurement is defined as a single measurement taken at a point, but having azimuthal sensitivity. This requires two coordinates to define the measurement location, e.g., time or depth and the azimuth for each measurement. The azimuthal data is usually stored at the acquisition angle and is otherwise the same as the point measurement above.

Each of these major sensor types is affected differently by tool motion, and must be treated separately.
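A minimal way to see what reference coordinates each of the four measurement types above carries is the hedged Python sketch below (our own illustration, not the authors' or any vendor's data model; all field names are invented): the azimuthally sensitive types simply add an azimuth, or azimuth bin, to the depth and time references of the basic point and time-windowed measurements.

    # Illustrative data model (assumed, not from the paper) for the four sensor types.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class PointMeasurement:
        depth_m: float                 # stored at the acquisition point (depth or time)
        value: float                   # e.g., deep resistivity, ohm-m

    @dataclass
    class TimeWindowedMeasurement:
        mid_depth_m: float             # sensor mid-point depth at the time mid-point
        window_s: float                # acquisition window, t
        value: float                   # e.g., gamma ray converted to API

    @dataclass
    class RotatingPointMeasurement(PointMeasurement):
        azimuth_deg: float             # stored at the acquisition angle

    @dataclass
    class RotatingTimeWindowedMeasurement(TimeWindowedMeasurement):
        azimuth_bin_deg: Tuple[float, float]   # azimuth range covered by this bin

    # Example records: a deep resistivity point sample and one azimuthal density bin.
    r_deep = PointMeasurement(depth_m=1234.5, value=20.3)
    rho_bin = RotatingTimeWindowedMeasurement(mid_depth_m=1234.5, window_s=10.0,
                                              value=2.45, azimuth_bin_deg=(0.0, 90.0))
    print(r_deep, rho_bin, sep="\n")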


SIGNAL-TO-NOISE RATIO VS. LOGGING SPEED

Quanta Counting

The logging speed for quanta counting has no direct effect on the SNR. The SNR depends entirely on the number of quanta counted.

Note, there is an indirect effect of instrument motion—the instrument is measuring the effects of a time-varying input. Which is to say: the underlying formation being scanned is spatially variable and, as the instrument moves, it interrogates a different section of the formation. This indirect effect can and should affect the observed count rate, thereby affecting the SNR. Other instrument sensitivities can also affect the count rate: for example, tool standoff from the formation. Any drilling or logging practice, including logging speed, that moves the instrument away from its optimal acquisition condition can degrade the tool performance, including reducing the SNR.

Resolution vs. Logging Speed

On the other hand, for quanta counting systems logging speed does affect resolution. The logging speed increases the effective length of the sensor, as will be shown. For point measurements, resolution is generally not significantly affected by logging speed—the time of acquisition is generally quite small, and consequently the distance traveled by the tool during acquisition is much smaller than the resolution.

Quanta Counting, Time-Windowed Data

The effect of moving sensors can be described in terms of a simple 2-D model, Figure 6, for simplicity, but the results are equivalent for a physical sensor. A series of step-stop-sample time-window measurements has the same properties as a simple stepped boxcar filter. Unfortunately, while logging, a step-stop sequence is not practical. Rather, the sampling occurs ‘on the fly’: the instrument never stops moving during the time sample. The effect of this sampling is to modify the boxcar filter of length L to a trapezoidal filter, Figure 6, gray area, distance-gain (left) axes. For the purposes of this discussion, the tool speed is assumed to be constant, resulting in straight-line tool movement. The center of the tool moves from point A to point B, mapping out a parallelogram in the distance-time (right) axes. The parallelogram has a base of length L, the effective length of the sensor. Varying tool speed results in a non-linear path from A to B, but does not significantly alter the analysis that follows. The area under the modified trapezoid does not change; it will just be harder to calculate.

Fig. 6 Diagram illustrating sensor motion (normalized spatial filter, ROP 72 m/hr). A sensor of length L moves a distance D in 5 seconds. The sensor reference at A moves along the arrow to B. In the Distance-Time axes (right) the sensor’s formation scan maps into a parallelogram. The gain over the entire interval maps into a trapezoid, where a gain of one indicates a full five seconds of acquisition over that section (the L-D range). The total acquisition spans D+L with varying gain (weights). Gain is defined as the fraction of total time each distance element, dx, is observed.

The properties of this effect are further expanded in Figure 7, where various sensor speeds are illustrated. The total acquired quanta count is proportional to the product of the count rate, c, sensor length, L, and time, t: N = c·L·t for a stationary sensor, where the sensor length is an effective sensor length, and not necessarily the actual sensor size. In this 2-D example, the term L·t is the area of a rectangle in time-length units. For a moving sensor, where the sensor maps out a time-distance parallelogram, the total count result is still the same N = c·L·t because the areas are the same. Similarly, the result using the trapezoid yields the same result. Both results can be verified by calculating the areas of equivalent rectangles, parallelograms and trapezoids. If we define the effective width of the moving transducer as the half-maximum point, the effective sensor length is

L_eff = L + D = L + v·t,  (21)

where D = v·t is the distance moved at logging speed v during the acquisition window t. This effective sensor length increase has the same effect as a decrease in resolution—the effective sensor length determines the resolution (Table 2). Note the resolution is effectively the same as the area or aperture discussed earlier. In the actual 3-D sensor case, the total count rate depends on the area of the sensor, not just its length: N = c·A·t, with A = w·L.

Effect of Logging Speed on Effective Sensor Length and Aperture
Logging Speed (m/hr) | Logging Speed (m/s) | Effective Sensor Length (cm) | Effective Aperture, Half-Maximum (cm) | Effective Aperture, 90% (cm)
0   | 0.00 | 15 | 15 | 15.0
36  | 0.01 | 20 | 15 | 14.5
72  | 0.02 | 25 | 15 | 18.0
108 | 0.03 | 30 | 15 | 21.9
144 | 0.04 | 35 | 20 | 24.7
324 | 0.09 | 60 | 45 | 43.2

Table 2 The effective sensor length depends on the distance moved during the sample window, and measures the actual length of formation sampled during the acquisition. The effective aperture measures the distance between the half-maximum points and gives the equivalent aperture for a stationary sensor. The effective aperture 90% is the effective sensor length over which 90% of the data sample is collected; see the red curve in Fig. 7.

When the sensor moves parallel to its length, the width, w, does not change, and the effective sensor length is the same, equation (21). Note that for ROP less than or equal to 108 m/hr, the effective aperture is constant and equal to the sensor aperture. In these cases, the bulk of the observed signal arises from the central region and indicates that while the effective sensor length is increasing, the effective aperture, half-maximum, is constant. However, for ROP greater than 108 m/hr, the effective aperture, half-maximum, increases linearly with effective sensor length.
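The effective sensor length and half-maximum aperture columns of Table 2 can be reproduced from the trapezoidal gain function. The Python sketch below is our own reconstruction under the assumptions used in Figs. 6 and 7 (a 15 cm sensor and a 5-second window) and is not the authors' code; it does not attempt the 90% aperture, which depends on the exact cumulative-curve definition.

    # Effective sensor length (L + D) and half-maximum aperture from the
    # trapezoidal gain of Fig. 6, for a 15 cm sensor and a 5 s window.
    import numpy as np

    L = 0.15            # sensor length, m (paper example)
    t_acq = 5.0         # acquisition window, s (Figs. 6 and 7)

    print(" ROP (m/hr)   L_eff (cm)   half-max aperture (cm)")
    for rop in (0, 36, 72, 108, 144, 324):
        v = rop / 3600.0                      # logging speed, m/s
        D = v * t_acq                         # distance moved during the window, m
        x = np.linspace(0.0, L + D, 20001)    # position along the logged interval
        if D > 0.0:
            # fraction of the window during which the sensor covers position x
            gain = (np.minimum(x, D) - np.maximum(x - L, 0.0)) / D
        else:
            gain = np.ones_like(x)            # stationary sensor: plain boxcar
        l_eff = L + D                         # equation (21)
        half_max_span = np.ptp(x[gain >= 0.5 * gain.max()])
        print(f"{rop:10d}   {l_eff * 100:8.0f}   {half_max_span * 100:20.0f}")

Running this reproduces the 15/20/25/30/35/60 cm effective lengths and the 15/15/15/15/20/45 cm half-maximum apertures listed in Table 2.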


[Figure 7 panels: Spatial Filter, Tapered Box Car, for ROP 36, 72, 108, 144 and 324 m/hr; gain versus depth.]

Fig. 7 Modeled effect of logging speed or rate of penetration, ROP, for several drilling speeds and a sensor length of about 15 cm, with a 5-sec acquisition window. The trapezoid plots are normalized such that a gain of 1.0 implies that section of the log was sampled for the entire 5 seconds. Note that after the special case at ROP = 108 m/hr, where the distance traveled is exactly one sensor length, the sensor no longer accumulates any portion of the formation at the maximum sampling time. The red curve is the normalized cumulative response function of the acquisition, and has properties similar to a probability distribution function, e.g., the distance between where the curve crosses the gain axis at 0.1 and 0.9 gives the effective sensor length over which 80% of the data is collected during the sample window.

Sampling vs. Logging Speed

The telemetry operating frequency may be constant, but in the LWD case it may be changed. For all downhole data, either time, measured depth, or both are irregularly sampled. Note: time and measured depth generally cannot be regularly sampled simultaneously; tool speed variations are sufficient to explain why.

Furthermore, data transmission rates (not bit rates) are not necessarily constant either, especially when data from an assortment of measurements must be packed into one continuous telemetry string, e.g., time-division multiplexing or data interleaving.

The real-time data telemetry rate is generally NOT constant; however, careful design of the data telemetry can result in near-constant telemetry rates for critical signals. Even with careful design, data transmission/receive errors can create missing data gaps. These errors can occur on LWD and wireline logs. Other minor data rate variations can be caused by clock synchronization, relative drift and/or timing errors within the acquisition system, or clock quantization. For example, it is quite common to see nominally 10-second sampled data with recorded samples of 9.9 seconds.

In LWD acquisition, we can have two different data sets for each signal that is transmitted to the surface: the real-time and memory data. Memory data for timed measurements will be stored in the tool at the tool’s programmed data rate, e.g., 10 sec, based on an internal clock with some, albeit small, data latency. Surface data, generally transmitted by mud pulse telemetry, has additional data latency caused by the mud pulse system firmware, the electronics and mechanics of the pulser, and the depth and speed of sound in the mud system. Also, because of the bit rate constraints on the telemetry system, the dynamic range of the data and/or its precision (number of significant bits) may be decreased.

Aliasing

Aliasing can occur for one-dimensional and two-dimensional data. Unfortunately, from a single one-dimensional data curve, aliasing usually cannot be proven.

Aliasing and Time-Windowed Systems

Generally, time-windowed data collection systems are relatively immune to aliasing caused by logging speed changes, although resolution decreases with increasing logging speed. However, these systems are still susceptible to normal aliasing whenever the features being logged are on the order of the ‘current effective’ tool resolution.

That is, thin beds near the current effective tool resolution may alias into thicker-appearing bedding layers with properties having some combination of those of the aliased layers. This type of aliasing would occur even in stop-sample-move-stop sequences whenever the tool resolution, the step and the feature size are near to the same values. If the feature size is much smaller than the resolution of the tool, the tool measures an average response over the resolution window.

Aliasing and Point Sampling Systems

Point-sampled systems are measurements that are very fast when compared to the actual sample time. Two examples of such measurements are resistivity and high-frequency pulse echo measurements, acoustic imagers. In the resistivity case, measurements are virtually instantaneous, depending primarily on the speed of light (actually at radio frequencies, RF) and the speed of the electronics. In any event, the tool moves a very short distance when compared to the resolution, even when applied to resistivity imagers with resolutions on the order of 0.5 cm (~0.2 in.). Similarly, acoustic pulse echo responses are also short; an acoustic tool logging at about 6 m/min (20 ft/min) moves about 10 cm/sec. For a very conservative pulse time of 30 µs, the tool moves about 0.003 mm, which is far less than the typical focused 250-kHz resolution of 10 mm (~0.3 in.).

The problems associated with higher logging speed generally come from the sampling rate, which if too low leads to aliasing. Aliasing will occur when the sample spacing approaches the thin-bed layering spacing. Note that a different thin-bed definition must be applied to every ‘point’ measurement. For example, a thin bed to a deep-resistivity measurement could be several meters, while for a resistivity or acoustic imager it would be only a few centimeters.

In some cases, aliasing can be observed in high-resolution images, as will be shown in the next sections.

CASE STUDY USING A BETA TEST WELL

During systems integration testing in late 2009 at our Beta Test Site, we acquired a higher logging speed re-log than the normal LWD logging speed of 50 to 100 ft/hr. In this measurement-after-drilling (MAD) pass we obtained log data at rates from about 100 ft/hr to more than 500 ft/hr, with much of the data acquired at 200 ft/hr or higher. Unfortunately, the most interesting upper section was logged in the MAD pass at almost the same speed as the original LWD data acquisition. In the lower section, the high-speed data contained sufficient data variation to illustrate the effect of higher logging speeds. Sample intervals were about 5 to 10 times further apart than the data samples on the LWD data run.

The formation logged in the MAD pass starts in the Senora formation, a dark-gray shale formation with some thin inter-bedded sandy shale, silty shale, limestone, coal and sandstone layers. This is followed by the Chelsea Sandstone, 1,040 ft, with numerous shale thin beds, transitioning to shaley sandstone and finally shale with sparse sandstone or siltstone lamina. The MAD pass just missed the Upper Red Fork Sandstone at 1,240 ft.

Time-sampled gamma ray, density, and neutron porosity and point-sampled resistivity were measured during these tests; see Figures 9 to 11 at the end of the paper. The gamma ray and density have effective resolutions of about 15 cm (6 in.) and 20 cm (8 in.), respectively, while the neutron porosity has a resolution of about 49 cm (19 in.). The resistivity measurements have a resolution of 1.6 m (5.4 ft). From these tool resolutions, we expect the gamma ray, followed by the density, to be the most sensitive to logging-speed resolution degradation, and then the neutron porosity. At the observed logging speeds we expect to see no resolution degradation or aliasing on the resistivity data.

ALIASING OBSERVED IN A BETA TEST WELL

A different test well was logged by a lower resolution oil-based mud-capable resistivity imager. Analysis of the data from this log revealed, by comparison with a higher resolution acoustic imager, aliasing in the image from the OBM imager, Figure 8. The diagonal features observed in the image are probably drilling-induced artifacts.

Fig. 8 Example of a resistivity image log (left) and acoustic image log (right) where the resistivity log is exhibiting aliasing and the acoustic image log is not. Both logs fall in the category of point sampled and are susceptible to aliasing, but not resolution degradation. Diagonal features at the top of the log are resolved by both logs. In the lower center pad of the resistivity log the diagonal feature slope is reversed and has a much steeper slope. Notes: 1: The diagonal features in the acoustic image, lower section, are not as visible as in the upper section due to a small uncorrected stick-slip in the acoustic image. 2: There is slight aliasing in the images due to rendering that is not in the original images.

DISCUSSION

Examples have been provided showing the effects of logging speed on time-sampled and point-sampled data. The data are presented as raw as possible, i.e., no signal processing or depth corrections have been applied, to minimize any possible data alteration, filtering, degradation or corruption that might limit our ability to analyze the true effect of logging speed on the data acquisition. Time-windowed data show decreases in resolution and a blockier appearance when displayed as stepped curves rather than smooth curves. Referring to Figure 9, we note that the LWD timed curves (gamma ray, density and porosity) appear to be noisier than the corresponding MAD curves, raising the question of whether the LWD data is really noisier or merely an artifact of the much lower sampling rate.


We note that the noise on these types of measurements is dependent only on the measured counts. However, we concede that there may be additional noise while drilling caused by the relative position of the sensors within the well bore, e.g., standoff variations. If the plot scales are compressed, say to 2400:1, the LWD and MAD noise levels appear to be more consistent, with the MAD data still appearing to have a lower noise level. With the LWD sample rates, determining the noise level is much easier, and one is less likely to interpret a random data sequence as a signal. The risk with the higher logging speeds is accepting small signal changes as real, rather than just noise.

The effective resolution for a constant logging speed can easily be calculated for a timed measurement, see equation (22), assuming the half-maximum point of the trapezoidal weighting function is caused by the sensor motion:

R_Effective = R_Sensor + ROP · t / C,  (22)

where R_Sensor is the static sensor resolution, ROP is the rate of penetration or the tool speed, t is the sampling time in seconds and C is a conversion factor from (ft·sec)/hr or (m·sec)/hr to in. or cm (= 300 or 36 for ft or m, respectively). The last term is just the distance moved by the sensor during the acquisition time window. Table 2 summarizes the logging speeds at which the effective resolution of several sensors has been reduced to half using a 10-second sampling window.
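A small Python helper (ours, illustrative only; equation (22) and the unit conversion factors are taken from the text above, while the 15 cm sensor and 10 s window are example values) makes the resolution penalty easy to tabulate for a given sensor and sampling window.

    # Equation (22): effective resolution of a time-windowed sensor moving at ROP.
    # C = 300 converts (ft*s)/hr to inches; C = 36 converts (m*s)/hr to centimeters.
    def effective_resolution(r_sensor, rop, t_sample, metric=True):
        """r_sensor in cm (metric) or inches; rop in m/hr or ft/hr; t_sample in s."""
        c = 36.0 if metric else 300.0
        return r_sensor + rop * t_sample / c

    # Example: a 15 cm static sensor with a 10 s window; the effective resolution
    # doubles (15 -> 30 cm) at an ROP of 54 m/hr.
    for rop in (0, 18, 36, 54, 108):
        print(f"ROP {rop:3d} m/hr -> effective resolution "
              f"{effective_resolution(15.0, rop, 10.0):5.1f} cm")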

We have also shown three different point-sampled data sets: the first was a high-resolution acoustic image; the second a slightly lower resolution (1/3 of the resolution of the acoustic image) OBM resistivity log, to illustrate aliasing; and the third a much lower resolution resistivity log, to illustrate the minimal effects of logging speed.

From the high-resolution images, diagonal drilling artifacts are compared, Figure 8. In the first case, the artifacts from the acoustic image and the OBM image show the same type of pattern: features that slope downward from left to right. In the second case, the features appear to be the same in the acoustic log, but are aliased into features that slope steeply upward from left to right. Other manifestations of aliasing in high-resolution images appear as small checkerboard patterns and are not shown herein.

In the smooth resistivity logs, Figure 9, there are no observable differences in resolution. The only difference is in data values, and that is due to time-lapse fluid invasion into the formation. In the stepped resistivity logs we can see the effect of the reduced data sampling; however, the data is consistent with the higher sampling rate, once we account for slight depth shifts caused by the use of raw uncorrected depth data, LWD (weight on bit) vs. MAD (no weight on bit).

SUMMARY and CONCLUSION

Technology advances in data acquisition systems have enabled significant improvements in the acquisition of data at higher drilling and logging speeds. We have explored how data acquisition behaves in the presence of noise, sensor motion (time-windowed acquisition), logging speed and angular rotation rates during acquisition. We have further shown that rotational sensors, measuring formation properties dependent on azimuth, can have fundamentally different statistics influencing their response in the presence of changing sensor speed. The overall result is that higher logging speeds can result in a distinct reduction in quality if not considered carefully.

Examples have also been provided that show the effects for time-sampled devices and point measurements in the logging-while-drilling (LWD) and wireline (WL) environments, along with some recommendations on the maximum acceptable speed for quality data acquisition.

REFERENCES:

Bushberg, J. T., 2006, The essential physics of medical imaging, Second Edition, Philadelphia: Lippincott Williams & Wilkins, p. 280ff.

Dupriest, F., and Koederitz, W., 2005, Maximizing drill rates with real-time surveillance of mechanical specific energy, SPE/IADC Drilling Conference, 23-25 February 2005, Amsterdam, Netherlands, SPE 92194.

Helgesen, T., Jonsbraten, F., and Pepper, C., 2006, Optimized nuclear logging for fast drilling, SPE Annual Technical Conference and Exhibition, 24-27 September 2006, San Antonio, Texas, USA, SPE 102538.

Nyquist, H., 1928, Certain topics in telegraph transmission theory, Trans. AIEE, 47, pp. 617-644, Apr. 1928; reprinted in Proc. IEEE, 90, No. 2, Feb. 2002.

Rose, A., 1948, Television pickup tubes and the problem of vision, Advances in Electronics, 1, pp. 131-166, Marton, L., Ed., Academic Press Inc., New York, N.Y.

Shannon, C. E., 1948, A mathematical theory of communication, Bell System Technical Journal, 27, 379-423, 623-656.

Shannon, C. E., 1949, Communication in the presence of noise, Proc. Institute of Radio Engineers, 37, No. 1, pp. 10-21, Jan. 1949; reprinted in Proc. IEEE, 86, No. 2, Feb. 1998.

Theys, P., 1999, Log data acquisition and quality control, 2nd edition, Editions TECHNIP.

Fig. 9 Log results from two logging runs (plotted with offset scales) over the same formation at different logging rates show consistent formation properties. This data consists of LWD data (blue) run at 50 to 100 ft/hr and a MAD pass (red) executed several days later. MAD pass data was logged at speeds varying from about 100 ft/hr to over 500 ft/hr. Obvious differences are the noise density on the LWD data and time-delay invasion effects on the resistivity logs. Major data shifts, log to log, are caused by the relative sensor positions within the tool string. Smaller data shifts are caused by using raw uncorrected data, with weight on bit (LWD) and no weight on bit (MAD). Note: the times of the tops and bottoms of the MAD curves are the same. The MAD logging speed is relative to the bit position.


Fig. 10 This is an expanded portion of the same data as in Fig. 9. The data are plotted as standard smooth curves. The LWD data (blue) is sampled at a higher data rate than the MAD data, which was logged at a much higher logging speed. The small peaks in the gamma ray and density data observed in the LWD data are still evident in the MAD pass; however, they are not nearly as well defined. The neutron logs are also similar, but with a smaller variation in the data. Slight depth shifts between the LWD and MAD data are as noted in Figure 9.


Fig. 11 This is an expanded portion of the same data as Fig. 10. The data is plotted to emphasize the data
sampling. Rather than smooth curves, the data is plotted as rectangular steps reflecting the sampling. The length
of the step is generally related to the data rate in depth. LWD data (blue) is sampled at a much smaller interval
than the MAD data, which was logged at a much higher logging speed. Small peaks in the gamma ray and
density data observed in the LWD data are still evident in the MAD pass; however, they are not nearly as well
defined. The neutron logs are also similar but with little variation in the data.
