The processing sequence we have developed so far gives us the ideal input for predictive (or gap) deconvolution; it is minimum phase, has the swell noise and strong-amplitude linear noise removed, and much of the spatially aliased, high-frequency dipping energy has been eliminated as well.
On marine datasets, the main goals of predictive deconvolution are to collapse any residual effects caused by the limitations of the tuned airgun array, and to help suppress short-period reverberations in the wavelet. These reverberations arise mainly from energy that reflects multiple times between the sea surface and the seafloor, but they can also come from inter-bed multiples where strong reflectors lie close together.
In this dataset I suspect there is also some mode-converted energy, specifically P-S-P mode conversions: especially at the high shot-point end of the line, the basement overthrust creates the right conditions for this to happen.
The basic tool we have for looking at the reverberations in a dataset is the autocorrelation function. It uses a fixed-length sliding window to mathematically compare the trace with itself, often over a specific data range. The autocorrelation function is always symmetrical about time zero, where there is a strong peak. Subsequent peaks indicate where a time-shifted version of the trace is similar to the original.
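
To make this concrete, here is a minimal numpy sketch of a design-window autocorrelation; the function name, arguments and normalisation are my own illustrative choices, not taken from any particular processing system.

```python
import numpy as np

def windowed_autocorr(trace, dt, t_start, t_len, ac_len):
    """One-sided autocorrelation of a single trace over a fixed design window.

    trace   : 1-D array of samples
    dt      : sample interval in seconds
    t_start : start of the design window in seconds
    t_len   : design window length in seconds
    ac_len  : autocorrelation length to keep, in seconds
    """
    i0 = int(round(t_start / dt))
    w = trace[i0:i0 + int(round(t_len / dt))]
    # full autocorrelation; keep lags 0 .. ac_len from the zero-lag peak
    ac = np.correlate(w, w, mode="full")[len(w) - 1:]
    n_lags = int(round(ac_len / dt)) + 1
    # normalise the zero-lag peak to 1 (assumes a live trace)
    return ac[:n_lags] / ac[0]
```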

Shots from the start and end of the line with an auto-correlation appended to the bottom. The design
window for the autocorrelation function is indicated between the blue and yellow lines

When working with deconvolution, this kind of display should be your standard approach. I've used a bit of trickery here in that I have reduced the record length to 5500ms (for display purposes) and then extended it by 100ms to create a gap between the shots and their autocorrelations.
For the design window, I have defined the start gate using a calculation based on offset (taking the speed of sound in water as 1500m/s and shifting this down by 200ms), and then made the gate length 2500ms.
You can define the gates manually, but on marine data I prefer to create a gate that is tied to offset and, if needed, shifted by the water bottom. In doing so, if you see an anomalous result, it is easier to back-track and adjust, and of course on large multi-line projects it's less work. A minimal sketch of this gate calculation is below.
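
Under the parameters above (1500m/s water velocity, a 200ms shift, a 2500ms gate), the per-trace gate times could be computed like this; the names and structure are illustrative only.

```python
import numpy as np

WATER_VEL = 1500.0  # m/s, speed of sound in water
SHIFT = 0.200       # s, shift applied below the direct arrival
GATE_LEN = 2.500    # s, design gate length

def design_gate(offsets_m):
    """Per-trace design gate (start, end) in seconds, tied to offset."""
    t_start = np.asarray(offsets_m, dtype=float) / WATER_VEL + SHIFT
    return t_start, t_start + GATE_LEN

# e.g. offsets of 150 m and 3000 m give gate starts of 0.30 s and 2.20 s
```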

The design gate needs to:

- be at least 5-7 times the length of the autocorrelation
- avoid any very strong reflections: usually just the seafloor, but there can be others
- contain reflections: if you can't see reflections in the gate window, you will get a bad result

In this case I've got an autocorrelation length of 300ms, which should be enough to show the reverberations caused by the water bottom (at about 80ms); note how reverberant the data is on SP900.
The reason to focus on the autocorrelation is that it is not just a quality control tool: it is also used to design the deconvolution operator we will apply.
You can use more complex designs, such as having multiple design windows (one above and one below a strong unconformity), but the problem then becomes that this limits the design window and hence the autocorrelation length that is viable. A long autocorrelation gives a more stable result!
The other key parameter, alongside the length of the operator (defined in turn by the autocorrelation), is the predictive gap. In this case, we are not aiming to do much in the way of wavelet shaping or whitening, so a longer multi-sample gap is preferable to a short one.
This is where things become very subjective. Some people have strong views on the gap being tied to particular values, or to the first or second zero crossing of the autocorrelation function, and so on. However, all deconvolution code is different, and my advice is to *always* test the gap.
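
Before looking at the testing strategy, here is a sketch of the underlying mechanics: the classic Wiener formulation designs a prediction filter from the design-window autocorrelation and subtracts the predictable (reverberatory) part of the trace. This is a minimal illustration, assuming a one-sided autocorrelation as input; the names and the prewhitening value are mine, not from any particular package.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def predictive_decon(trace, ac, gap, nop, prewhite=0.001):
    """Gapped predictive deconvolution of one trace.

    ac  : one-sided autocorrelation from the design window (lag 0 first);
          must span at least gap + nop samples
    gap : prediction gap in samples
    nop : prediction operator length in samples
    """
    r = ac.copy()
    r[0] *= 1.0 + prewhite  # prewhitening for numerical stability
    # Wiener prediction filter: predict x[t + gap] from the last nop samples
    a = solve_toeplitz(r[:nop], r[gap:gap + nop])
    # prediction-error filter: 1 at lag 0, minus the predictor at lags gap..gap+nop-1
    pef = np.zeros(gap + nop)
    pef[0] = 1.0
    pef[gap:] = -a
    return lfilter(pef, [1.0], trace)
```

With a one-sample gap this reduces to spiking deconvolution; a longer gap leaves the front of the wavelet untouched and targets only the periodic tail.
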
There are three basic approaches to deconvolution we need to test:

- we can work one trace at a time, in the X-T domain
- we can average autocorrelation functions over multiple traces, or even a shot
- we can apply deconvolution in the Tau-P domain

The first of these is the usual marine workhorse, but in situations where the data is noisy the trace-averaging approach can be effective. Tau-P domain deconvolution is a special case, as we'll discuss later.
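
The trace-averaging idea can be sketched as below: sum the normalised design-window autocorrelations over the gather, design a single operator from the average (for example with the hypothetical predictive_decon sketch above), and apply it to every trace. This assumes the gates sit inside the record; the names are again illustrative.

```python
import numpy as np

def ensemble_autocorr(gather, dt, gates, ac_len):
    """Average the design-window autocorrelations over a gather.

    gather : 2-D array, traces x samples
    gates  : per-trace (start, end) window times in seconds
    ac_len : autocorrelation length in seconds
    """
    n_lags = int(round(ac_len / dt)) + 1
    acc = np.zeros(n_lags)
    for trace, (t0, t1) in zip(gather, gates):
        w = trace[int(round(t0 / dt)):int(round(t1 / dt))]
        ac = np.correlate(w, w, mode="full")[len(w) - 1:]
        acc += ac[:n_lags] / ac[0]  # normalise each trace's contribution
    return acc / len(gather)
```
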
For the X-T domain approaches, I generally start with operator tests using a 24ms gap; I run these from about 1.5x the time of the first peak on the autocorrelation function up to the largest value that makes sense given the design criteria. In this case I might look at 150ms, 250ms and 300ms.
Once I have an operator, I then test gaps: usually 8ms, 16ms, 24ms, 32ms and 48ms, perhaps with a spiking (one sample) gap as well.
The results tend to be pretty subjective, and depend on the interpreter's needs, but 24ms is a fairly standard choice.
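
A test sequence like this is easy to script; the sketch below reuses the hypothetical predictive_decon from earlier, with a 4ms sample interval assumed so that a one-sample gap stands in for the spiking case.

```python
DT = 0.004  # s, assumed sample interval

def ms_to_samples(ms, dt=DT):
    return int(round(ms / 1000.0 / dt))

def decon_test_panels(shot, ac):
    """Build test panels: operator scan at a fixed 24ms gap, then a gap scan.
    ac must span the largest gap + operator combination tested."""
    panels = {}
    for nop_ms in (150, 250, 300):  # operator length tests
        panels["op_%dms" % nop_ms] = [
            predictive_decon(tr, ac, ms_to_samples(24), ms_to_samples(nop_ms))
            for tr in shot]
    for gap_ms in (4, 8, 16, 24, 32, 48):  # 4ms = one sample, i.e. spiking
        panels["gap_%dms" % gap_ms] = [
            predictive_decon(tr, ac, ms_to_samples(gap_ms), ms_to_samples(300))
            for tr in shot]
    return panels
```
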
I'm not going to fill this post with images of different deconvolution test panels on shots and stacks; you can see those in Yilmaz (you should probably have access to a copy; I've never worked anywhere that didn't have one available).

Shots from the start and end of the line; a 24ms gap, 300ms operator X-T domain deconvolution applied. Start/end design gates displayed (blue, yellow lines)

Tau-P domain deconvolution is a little different. It is based on the idea that the multiples are more periodic in the Tau-P domain than in X-T, but it has the additional advantage that you don't have the same restriction on design gate lengths at far offsets, and hence can have a longer, more stable operator.
The design process is the same as in the X-T domain, but in general a longer gap (32ms or 48ms) works better. In general, Tau-P domain deconvolution is a lot more effective than X-T domain deconvolution.
In this case I've tested operators from 400ms to 500ms, and gaps of 24ms, 32ms and 48ms. These tests are a lot slower to run, of course.
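
Conceptually the flow is: slant-stack the gather into constant-slowness traces, deconvolve each one, and stack back. The sketch below is illustrative only: it uses a crude time-domain slant stack and its adjoint as the inverse (a production transform works with a least-squares inverse and a rho filter), and it reuses the hypothetical predictive_decon sketch from earlier.

```python
import numpy as np

def slant_stack(gather, offsets, dt, p_vals):
    """Forward tau-p: for each slowness p, shift every trace by p*offset and sum."""
    n_t = gather.shape[1]
    t = np.arange(n_t) * dt
    taup = np.zeros((len(p_vals), n_t))
    for ip, p in enumerate(p_vals):
        for x, tr in zip(offsets, gather):
            taup[ip] += np.interp(t + p * x, t, tr, left=0.0, right=0.0)
    return taup

def taup_decon(gather, offsets, dt, p_vals, gap, nop):
    """Deconvolve each constant-p trace, then stack back to x-t."""
    taup = slant_stack(gather, offsets, dt, p_vals)
    n_t = gather.shape[1]
    t = np.arange(n_t) * dt
    out = np.zeros_like(gather, dtype=float)
    for ip, p in enumerate(p_vals):
        tr = taup[ip]
        # design from the full tau-p trace; a real flow would window this
        ac = np.correlate(tr, tr, mode="full")[n_t - 1:n_t - 1 + gap + nop]
        tr = predictive_decon(tr, ac, gap, nop)
        # crude adjoint as the inverse: shift back by p*offset and sum over p
        for k, x in enumerate(offsets):
            out[k] += np.interp(t - p * x, t, tr, left=0.0, right=0.0)
    return out / len(p_vals)
```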

In practice the 500ms operator and 32ms gap gave the best result.

Shot record from start and end of the line with no deconvolution

Shot record from start and end of the line with 24ms gap, 300ms operator X-T deconvolution

Shot record from start and end of the line with 32ms gap, 500ms operator Tau-P deconvolution

In practice, the differences between the X-T and Tau-P domain deconvolution results are relatively minor. This is partly because we have already applied Tau-P domain linear noise suppression, which can have a big impact on how effective the deconvolution is.
Ultimately the choice of what to use depends on the time and resources you have available: Tau-P domain deconvolution is computationally expensive, but if you are using Tau-P domain linear noise suppression, the two methods can be combined at that stage.
Running a second deconvolution on common receiver gathers can also help improve the effectiveness of the result, particularly if you have used shot-ensemble or Tau-P domain deconvolution in the first pass.

It's also important to review stacked sections (either the entire line or just key areas) with these tests, to ensure that the results on the stacks match what you require.

Stacked section with: constant velocity, stretch mute, amplitude recovery and swell noise removal (no deconvolution)

Stacked section from above with Tau-P domain muting and deconvolution applied
