Principles of Seismic Data Processing

Mahmoud Mostafa Badawy


Assistant Lecturer of Geophysics, Geology Department, Faculty of Science,
Alexandria University, Egypt


Contents:
Chapter 1: Seismic Generation
 Introduction
 Elasticity Term
 Wave Definition
 Wave Types
 Geometry of wave ray paths (Theories)
 Acoustic Impedance and Reflection Coefficient
 Velocity
 Resolution
 Problems

Chapter 2: Seismic Data Processing


 Processing Concept
 Processing Main Steps:
 Reformatting and Demultiplexing
 Geometry Definition
 Field Static Correction
 Amplitude Recovery
 Noise Attenuation (De-Noise)
 Deconvolution
 CMP Gather
 NMO Correction
 Demultiple
 Migration
 CMP Stack
 Problems

Chapter 3: Using Software


 Seismic Processing Using Vista Software

Chapter 1:
What Makes A Wiggle?
Seismic reflection profiling is an echo-sounding technique. A controlled sound pulse is issued
into the Earth and the recording system listens for a fixed time for energy reflected back from
interfaces within the Earth. The interface is often a geological boundary, for example the
change from sandstone to limestone.

Once the travel-times to the reflectors and the velocity of propagation are known, the geometry
of the reflecting interfaces can be reconstructed and interpreted in terms of geological structure
in depth. The principal purpose of seismic surveying is to help understand geological structure
and stratigraphy at depth; in the oil industry it is ultimately used to reduce the risk of drilling
dry wells.

A wave is a disturbance that travels through a medium, transporting energy without any net
transport of the medium itself.

What Is A Reflection?
The following figure shows a simple earth model and resulting seismic section used to illustrate
the basic concepts of the method.
The terms source, receiver and reflecting interface are introduced. Sound energy travels
through different media (rocks) at different velocities and is reflected at interfaces where the
media velocity and/or density changes.
The amplitude and polarity of the reflection are proportional to the acoustic impedance (product
of velocity and density) change across an interface. The arrival of energy at the receiver is
termed a seismic event.
A seismic trace records the events and is conventionally plotted below the receiver with the
time (or depth) axis pointing vertically downwards.

Wave Propagation

For small deformations rocks are elastic; that is, they return to their original shape
once the small stress applied to deform them is removed. Seismic waves are elastic
waves and are the "disturbances" which propagate through the rocks.

The most commonly used form of seismic wave is the P (primary) wave, which travels as a
series of compressions and rarefactions through the earth, the particle motion being in the
direction of wave travel. The propagation of P-waves can be represented as a series of wave
fronts (lines of equal phase) which describe circles for a point source in a homogeneous medium
(similar to when a stone is dropped vertically onto a calm water surface). As the wave front
expands the energy is spread over a wider area and the amplitude decays with distance from the
source.

This decay is called spherical or geometric divergence and is usually compensated for in
seismic processing. Rays are normal to the wave fronts and diagrammatically indicate the
direction of wave propagation. Usually the shortest ray-path is the direction of interest and is
chosen for clarity. Secondary or S waves travel at up to 70% of the velocity of P-waves and do
not travel through fluids.

The particle motion for an S-wave is perpendicular to its direction of propagation (shear
stresses are introduced) and the motion is usually resolved into a horizontal component (SH
waves) and a vertical component (SV waves).

Snell's Law
Snell's law is the mathematical description of refraction: the physical change in the direction of
a wave front as it travels from one medium to another with a change in velocity, accompanied
by partial conversion and reflection of a P-wave to an S-wave at the interface of the two media.

Snell's law, one of two laws describing refraction, was formulated in the context of light waves,
but is applicable to seismic waves. It is named for Willebrord Snell (1580-1626), a Dutch
mathematician.
Snell's law can be written as:

sin(θ1) / V1 = sin(θ2) / V2

where θ1 is the angle of incidence in the upper medium of velocity V1 and θ2 is the angle of
transmission in the lower medium of velocity V2.

Reflection: The energy or wave from a seismic source which has been reflected from an
acoustic impedance contrast (reflector) or a series of contrasts within the earth.

Refraction: The change in direction of a seismic ray upon passing into a medium with a
different velocity. The mathematics of this is defined by Snell’s law.
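As an illustration of Snell's law, the sketch below computes the transmission angle across an interface. The function name and the velocity and angle values are our own illustrative assumptions, not from any standard library.

    import numpy as np

    # Snell's law: sin(theta1)/V1 = sin(theta2)/V2. Beyond the critical
    # angle there is no transmitted ray.
    def refraction_angle(theta1_deg, v1, v2):
        s = np.sin(np.radians(theta1_deg)) * v2 / v1
        if abs(s) > 1.0:
            return None  # past the critical angle: total reflection
        return np.degrees(np.arcsin(s))

    print(refraction_angle(30.0, 2000.0, 3000.0))  # ~48.6 degrees
    print(refraction_angle(45.0, 2000.0, 3000.0))  # None (critical angle ~41.8 deg)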

Reflection Coefficient:
The ratio of the amplitude of the reflected wave to that of the incident wave, or how much
energy is reflected. If the wave has normal incidence, then its reflection coefficient can be
expressed as:

R = (Z2 - Z1) / (Z2 + Z1)

where Z1 = ρ1·V1 and Z2 = ρ2·V2 are the acoustic impedances of the upper and lower layers.

If the A.I of the lower formation is higher than the upper one, the reflection polarity will be
+ve and vice versa.

If the difference in A.I between the two formations is high, the reflection magnitude
(Amplitude) will be high.
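As a worked example, the sketch below evaluates the normal-incidence reflection coefficient for assumed (illustrative) densities and velocities; the positive result matches the polarity rule above.

    # Acoustic impedance Z = density * velocity; values below are assumed.
    def normal_incidence_rc(rho1, v1, rho2, v2):
        z1, z2 = rho1 * v1, rho2 * v2
        return (z2 - z1) / (z2 + z1)

    # The lower layer has the higher impedance, so the polarity is positive.
    rc = normal_incidence_rc(rho1=2300.0, v1=3000.0, rho2=2600.0, v2=4500.0)
    print(f"R = {rc:+.3f}")  # R = +0.258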

Velocity Analysis:
-The determination of seismic velocity is the key to the seismic method.

-Velocities are estimated in order to process the data properly: successful stacking, time
migration and depth migration all require proper velocity inputs.

-Velocity estimation is also needed to convert a time section into a depth section.

Kinds of Velocity:
• Average velocity: the velocity that relates travel time to the depth of a bed (from the surface
to the layer). Average velocity is commonly calculated by assuming a vertical path, parallel
layers and straight ray paths, conditions that are quite idealized compared to those actually
found in the Earth.

• Pseudo Average Velocity: when we have time from seismic & depth from well

• True Average Velocity: when we measure velocity by VSP, Sonic, or Coring

• Interval Velocity: The velocity, typically P-wave velocity, of a specific layer or layers of
rock.

• Pseudo Interval Velocity: when we have time from seismic & depth from well

• True Interval Velocity: when we measure velocity by VSP or check shot

• Stacking Velocity: The distance-time relationship determined from analysis of normal move
out (NMO) measurements from common depth point gathers of seismic data. The stacking
velocity is used to correct the arrival times of events in the traces for their varying offsets prior
to summing, or stacking, the traces to improve the signal-to noise ratio of the data.

• RMS Velocity: the root-mean-square velocity; approximately equivalent to the stacking
velocity (the two typically differ by up to about 10%)

• Instantaneous Velocity: the most accurate velocity (it comes from sonic tools) and can be
measured at every foot

• Migration Velocity: used to migrate energy from one point to another (usually higher or
lower than the stacking velocity by 5-15%)
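The RMS and interval velocities listed above are related by the Dix equation; a minimal sketch, with times and velocities that are assumed values for illustration only:

    import numpy as np

    # Dix equation: interval velocity between two reflectors from the RMS
    # velocities v1, v2 down to them and their zero-offset two-way times t1, t2.
    def dix_interval_velocity(v1, t1, v2, t2):
        return np.sqrt((v2**2 * t2 - v1**2 * t1) / (t2 - t1))

    print(dix_interval_velocity(v1=2000.0, t1=1.0, v2=2300.0, t2=1.5))
    # ~2805 m/s for the interval between 1.0 s and 1.5 s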

Tape Formats:
Several tape formats defined by the SEG are currently in use. These standards are often treated
quite liberally, especially where 3D data is concerned. Most contractors also process data using
their own internal formats which are generally more efficient than the SEG standards.

The two commonest formats are SEG-D (for field data) and SEG-Y for final or intermediate
products.
The previous figure shows the typical way in which a seismic trace is stored on tape for SEG-Y
format.

The use of headers is particularly important since these headers are used in seismic processing
to manipulate the seismic data. Older multiplexed formats (data acquired in channel order) such
as SEG-B would typically be demultiplexed (in shot order) and transcribed to SEG-Y before
processing.

In SEG-Y format a 3200 byte EBCDIC (Extended Binary Coded Decimal Interchange Code)
"text" header arranged as forty 80-character card images is followed by a 400 byte binary header
which contains general information about the data, such as the number of samples per trace. This
is followed by the 240 byte trace header (which contains important information related to the
trace, such as shot point number and trace number) and the trace data itself, stored as IBM
floating point numbers in 32-bit (4 byte) format.

The trace, or a series of traces such as a shot gather, will be terminated by an EOF (End of File)
marker. The tape is terminated by an EOM (End of Media) marker. Several lines may be
concatenated on tape separated by two EOF markers (double end of file). Separate lines should
have their own EBCDIC headers, although these may be stripped out (particularly for 3D
archives) for efficiency. Each trace must have its own 240 byte trace header. Note there are
considerable variations in the details of the SEG-Y format.
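A minimal sketch of reading the headers just described, assuming a big-endian SEG-Y rev.1 file; "line1.sgy" is a placeholder path, and the byte positions used are the standard ones (sample interval at bytes 3217-3218, samples per trace at 3221-3222, data format code at 3225-3226):

    import struct

    with open("line1.sgy", "rb") as f:
        text_header = f.read(3200)                # EBCDIC, forty 80-character card images
        print(text_header.decode("cp500")[:80])   # cp500 is Python's EBCDIC codec

        binary_header = f.read(400)
        sample_interval_us, = struct.unpack(">H", binary_header[16:18])
        samples_per_trace, = struct.unpack(">H", binary_header[20:22])
        data_format, = struct.unpack(">H", binary_header[24:26])  # 1 = IBM float

        trace_header = f.read(240)                # first 240 byte trace header follows
        print(sample_interval_us, samples_per_trace, data_format)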

Convolution:
Convolution is a mathematical way of combining two signals to produce a third, modified
signal. The signal we record responds well to being treated as a series of signals superimposed
upon each other; that is, seismic signals behave convolutionally. The process of
DECONVOLUTION is the reversal of the convolution process.

Convolution in the time domain is represented in the frequency domain by multiplying
the amplitude spectra and adding the phase spectra.
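This equivalence is easy to verify numerically; the sketch below convolves two random signals in the time domain and reproduces the result by multiplying their (zero-padded) spectra:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(64)
    b = rng.standard_normal(64)

    time_domain = np.convolve(a, b)               # length 127
    n = len(time_domain)
    # multiplying complex spectra multiplies amplitudes and adds phases
    freq_domain = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real

    print(np.allclose(time_domain, freq_domain))  # True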


F-K Transform:
A two-dimensional Fourier transform over time and space is called an F-K (or K-F) transform
where F is the frequency (Fourier transform over time) and K refers to wave-number (Fourier
transform over space).

The space dimension is controlled by the trace spacing and (just like when sampling a time
series) must be sampled according to the Nyquist criterion to avoid spatial aliasing. Temporal
aliasing was previously discussed. In the F-K domain there is a two-dimensional amplitude
and phase spectrum, but usually only the former is displayed for clarity, with colour intensity
used to show the amplitudes of the data at different frequency and wave-number components.
Several noise types such as ground roll or seismic interference may be more readily separated
in the F-K amplitude domain than in the time-space domain and will therefore be easier to
mute before the inverse transform is applied.


Introduction:
The purpose of seismic processing is to manipulate the acquired data into an image that can be
used to infer the sub-surface structure. Only minimal processing would be required if we had a
perfect acquisition system.

Processing consists of the application of a series of computer routines to the acquired data
guided by the hand of the processing geophysicist. There is no single "correct" processing
sequence for a given volume of data.

At several stages judgments or interpretations have to be made which are often subjective and
rely on the processor's experience or bias. The interpreter should be involved at all stages to
check that processing decisions do not radically alter the interpretability of the results in a
detrimental manner.

Processing routines generally fall into one of the following categories:


 enhancing signal at the expense of noise
 providing velocity information
 collapsing diffractions and placing dipping events in their true subsurface locations
(migration)
 increasing resolution (wavelet processing)


Contractors:
Today most processing is carried out by contractors who are able to perform most jobs quickly
and cheaply with specialized staff, software and computer hardware. There are currently five
main contractors who are likely to have an office or an affiliation almost anywhere in the world
where oil exploration is taking place. In addition there are many smaller localized contractors
principally in London and Houston, and also some specialized contractors who concentrate on
particular processing areas.

These are summarized in the following table:


A Processing Flow:
Processing flow is a collection of processing routines applied to a data volume. The processor
will typically construct several jobs which string certain processing routines together in a
sequential manner.

Most processing routines accept input data, apply a process to it and produce output data which
is saved to disk or tape before passing to the next processing stage. Several of the stages will
be strongly interdependent, and each of the processing routines will require several parameters,
some of which may be defaulted.

Some of the parameters will be defined, for example, by the acquisition geometry; others must
be determined for the particular data being processed, by testing.

Factors which Affect Amplitudes (figure)

New Data:
 Tape containing recorded seismic data (trace sequential or multiplexed)
 Observer logs/reports
 Field Geophysicist logs/reports and listings
 Navigation/survey data
 Field Q.C. displays
 Contractual requirements

Simple Processing Sequence Flow:


 Reformat
 Geometry Definition
 Field Static Corrections (Land - Shallow Water - Transition Zone)
 Amplitude Recovery
 Noise Attenuation (De-Noise)
 Deconvolution
 CMP Gather
 NMO Correction
 De-multiple (Marine)
 Migration
 CMP Stack


Spherical Divergence:

 Because the energy propagates on expanding wave fronts, the energy per unit area
decreases as the wave fronts grow, so the amplitude decays through time and this decay
must be compensated.

 The surface area of a sphere is proportional to the square of its radius, so the energy
density lost to spherical divergence is proportional to 1/r² (and the amplitude decays as
1/r), as the sketch below compensates.
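A minimal compensation sketch: since amplitude decays roughly as 1/r and r grows with travel time, scaling each sample by its time approximately undoes the decay. The simple g(t) = t gain used here is an assumption; production processing typically uses t·v²-type gains built from the velocity field.

    import numpy as np

    def spherical_divergence_gain(trace, dt):
        # g(t) = t: boosts later (deeper) samples to offset the 1/r amplitude decay
        t = np.arange(len(trace)) * dt
        return trace * t

    trace = np.exp(-np.arange(1000) * 0.004)      # synthetic decaying amplitudes
    corrected = spherical_divergence_gain(trace, dt=0.004)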


The Effect of Spherical Divergence (figure)

Automatic Gain Control (AGC):

 The dynamic range of the recorded signal can vary from micro volts to volts. A fixed
gain will cause clipping of the large values or not enough amplification for very low
values. AGC provides higher gain for small values and lower gain for large data
values.

 The controller sets the amplification for each sample and passes the gain information
to the amplifier and the formatter. Take care: AGC should usually be used for display
only, because it destroys the relative amplitude information.

 A shallow window and a deep window are taken, and the process amplifies the small
amplitudes in the deeper parts of the data.

 AGC - Automatic gain control: An amplitude gain procedure applied to the trace that
equalizes the trace energy over a contiguous sequence of specified time windows. After
application of AGC, attenuation and geometrical spreading effects can be roughly
corrected for and reflection amplitudes are normalized to be about the same value.
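A minimal sketch of the sliding-window AGC described above: each sample is divided by the mean absolute amplitude of a window centred on it, boosting weak deep arrivals and reducing strong shallow ones. The window length is an assumed test parameter.

    import numpy as np

    def agc(trace, window=251):
        half = window // 2
        padded = np.pad(np.abs(trace), half, mode="edge")
        # running mean of absolute amplitude via a cumulative sum
        csum = np.cumsum(np.insert(padded, 0, 0.0))
        mean_amp = (csum[window:] - csum[:-window]) / window
        return trace / (mean_amp + 1e-12)   # small constant avoids division by zero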


Before Applying AGC and After Applying AGC (figures)

Swell Noise (Marine Waves):

 A type of marine noise that results from the interaction of the cable with rough water
during the survey.
 Its characteristics: low frequency and high amplitude.
 Its shape: vertical stripes along the data, or along parts of it.
 How can we attenuate it? Since it has low frequency and high amplitude, we can use
a filter based on frequency, amplitude or both: a band-pass filter or an
amplitude/frequency filter.


Band Pass Filter:

It deals with frequency. It may be used to cut low frequencies only (then it is called a low-cut,
high-pass filter), to cut high frequencies only (a high-cut, low-pass filter), or to pass a chosen
range of frequencies (a four-corner filter).

Here we will use a low-cut (high-pass) filter. Why? Because swell noise has low frequency.
A band-pass filter also works with a slope: we can specify a slope and the filter will cut
according to it; a gentle slope means a mild effect on amplitude and a steep slope means a
strong effect on amplitude.
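A hedged sketch of the low-cut filter using scipy; the corner frequency and filter order are assumptions to be tested on the data, and filtfilt applies the filter forwards and backwards so the result is zero phase:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def low_cut(trace, dt, corner_hz=4.0, order=4):
        nyquist = 0.5 / dt
        b, a = butter(order, corner_hz / nyquist, btype="highpass")
        return filtfilt(b, a, trace)

    # example: 4 ms sampling (Nyquist 125 Hz), cutting below 4 Hz
    filtered = low_cut(np.random.randn(1000), dt=0.004)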


Amplitude/ Frequency Filter:

In this filter we divide the data into frequency bands (e.g. 0-5, 5-10, 10-20 Hz, ...), then we
define windows in the data to work in and choose the number of traces in each window, and we
set a cut-off. The filter calculates the average amplitude in each window and compares this
average with the amplitude of each trace.

If the trace amplitude is at or below the cut-off value, it is left alone; if it is higher, the
amplitude is reduced to the cut-off value. The smaller the cut-off value the harsher the filter,
and the higher the cut-off value the milder the filter. Cut-off values are tested until the best
value is found; it may be 3, 2 or even 1.5. A sketch of the idea follows.
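A sketch of the amplitude/frequency idea for a 2D gather (traces x samples). The band edges, window size and cut-off are illustrative assumptions; the reference level here is a median rather than a plain average, and the bands should tile the full bandwidth of the data for the summed output to reconstruct it.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def amp_freq_filter(gather, dt, bands=((0.0, 5.0), (5.0, 10.0), (10.0, 20.0)),
                        win_traces=11, cutoff=2.0):
        out = np.zeros_like(gather)
        nyq = 0.5 / dt
        for lo, hi in bands:
            b, a = butter(4, [max(lo, 0.1) / nyq, hi / nyq], btype="bandpass")
            band = filtfilt(b, a, gather, axis=1)
            amp = np.sqrt(np.mean(band**2, axis=1))    # RMS amplitude per trace
            half = win_traces // 2
            for i in range(gather.shape[0]):
                local = amp[max(0, i - half): i + half + 1]
                threshold = cutoff * np.median(local)  # local reference level
                if amp[i] > threshold:                 # anomalously strong trace
                    band[i] *= threshold / amp[i]      # scale down to the cut-off
            out += band
        return out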


 The filter does not work properly at the near offsets because of the high amplitudes
caused by the source. How can we overcome this problem?
 We can do so by reversing the second shot gather and placing it beside the first, so the
traces run from far offsets to near offsets as if they were a single split-spread shot, and
doing the same with all shot gathers.
 In land data we do not find swell noise, but we do find ground roll.
***************************************

Direct Waves:
They are source-generated, travelling directly from the source to the receiver, and they are
dominant at near offsets. They can be attenuated by normal moveout correction, muting and
stacking.

Refraction:
These arrivals are generated by critically refracted waves from the near-surface layers. They
are dominant at the far offsets. They can be attenuated by NMO, muting and stacking.

Ground Roll:
 It is source noise coming from the propagation of waves in the particles of the
near-surface layers without net movement. It is dominant in the upper part of the data
and interferes with the direct waves and refracted waves.
 Its characteristics: low velocity, low frequency and high amplitude.
 It can be attenuated by an F-K filter or a Tau-p filter.

F-K filters:
 It is applied in the frequency domain, not the time domain. We use a forward Fourier
transform to move from the time domain to the frequency domain.

 It is a relation between frequency (f) and wave number (k). It shows two types of
events, linear and parabolic; the linear events are those that have low velocities and
frequencies. We pick these linear events for the filter, which calculates the velocity of
the picked line and then removes all events with the same or lower velocities. This
filter is called a cut-off velocity filter.

 Do not forget that when you transform the data back to the time domain the padding
added for the Fourier transform increases the trace length (and the apparent frequency
content), so you have to blank this extra time.

 Note that F-K is run on gathers, while FX and FY filters are run on stacked data; the
latter remove the dip of both noise and data, so they are rarely used. A sketch of an
F-K velocity filter follows.
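A minimal F-K velocity (fan) filter sketch: transform the gather, zero every (f, k) component whose apparent velocity |f/k| is below a cut-off (the slow, linear noise such as ground roll), and transform back. The cut-off velocity is an assumption to test on the data.

    import numpy as np

    def fk_velocity_filter(gather, dt, dx, v_cut=1000.0):
        # gather: traces x samples; dx trace spacing (m), dt sampling (s)
        spec = np.fft.fft2(gather)
        k = np.fft.fftfreq(gather.shape[0], d=dx)[:, None]   # cycles/m
        f = np.fft.fftfreq(gather.shape[1], d=dt)[None, :]   # Hz
        v_apparent = np.abs(f) / np.maximum(np.abs(k), 1e-12)
        spec[v_apparent < v_cut] = 0.0                       # reject the slow fan
        return np.fft.ifft2(spec).real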


Seismic Data in the F-K Domain and in the X-T Domain; Seismic Data Before and After
Applying the F-K Filter (figures)

 This is done if the data are not aliased. What if they are aliased?

 We use a Tau-P filter, or we make infill.

Tau-P filter:
 It is a velocity-dependent filter.
 Tau = the intercept with the time axis.
 P = 1/V (the slowness).

The transform maps the data from the time-offset domain to the tau-p domain by summing
along lines t = τ + p·x (a slant stack, or linear Radon transform).

This filter divides each event into more than one segment, then constructs a tangent to each
segment; each tangent intersects the time axis at a different tau. It calculates the slope of
every tangent and then relates a single tau to the different slopes of the tangents that intersect
the time axis at that tau (it makes a fan), with the maximum and minimum slopes known.

The higher P is, the lower the velocity; the lower P is, the higher the velocity. The result is
a graph of tau against P, with regions of regular events and regions of irregular ones, and we
pick the interval we are concerned with. This is the Tau-P transform in linear mode.
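A direct (and deliberately slow) slant-stack sketch of the forward tau-p transform described above: for each slowness p the gather is summed along the line t = τ + p·x. Positive offsets are assumed and the slowness range is illustrative.

    import numpy as np

    def taup_transform(gather, offsets, dt, p_values):
        nx, nt = gather.shape
        taup = np.zeros((len(p_values), nt))
        for ip, p in enumerate(p_values):
            for ix, x in enumerate(offsets):
                shift = int(round(p * x / dt))      # moveout in samples
                if 0 <= shift < nt:
                    taup[ip, : nt - shift] += gather[ix, shift:]
        return taup

    # slownesses from 0 (infinite velocity) down to 1/500 s/m (500 m/s)
    p_values = np.linspace(0.0, 1.0 / 500.0, 101)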


 But if there is residual ground roll, we should make the infill.

Infill:
 Infill is a technique used to increase the number of traces to avoid aliasing.

 We insert a trace between every two traces (or more, according to the Nyquist
criterion) by summing the two neighbouring traces and dividing by two, and we
reorder the traces in a manner that lets us return to the original data; we can also flag
the new or old traces so they are known. After that we apply the F-K filter normally.
There may still be residual ground roll scattered through the data (not coherent), and
it can be attenuated by the amplitude/frequency filter.

Zero phasing:
 It is a process that can be applied among the first steps or at the last, but it is preferably
applied first.
 Zero phase: the maximum amplitude is at zero time.
 Zero phase is a mathematical idealization, but we can come close to it using vibroseis.
 Minimum phase: the maximum amplitude is at minimum time; we can obtain it
with dynamite.
 Maximum phase: the maximum amplitude is at maximum time.
 Mixed phase: a phase in between minimum phase and maximum phase; we can get
it with an air gun.

 Zero phasing is a process by which we can modify the position of peaks and
troughs to be at the reflector position instead of being above or below its real
position for facilitating the interpretation process.

To make zero phasing we should perform:

1- Source modelling   2- Cross-correlation

Source modelling can be done for dynamite using the charge size, hole depth and the recorder
model. We also determine the polarity of the traces, either normal or reversed. For vibroseis
we do not do that step.
 For air gun we get the source signature from the contractor.

 Then using software we determine the distance between the maximum amplitude
and zero time then we make shift toward zero time by a distance equal it from zero
time to max amplitude.

 And we can attenuate the bubble effect by designing the wavelet before shifting.
And by this step we designed a filter that we multiply it with the source signature
to ensure that the result is a zero phase signature. And then we apply this filter on
seismic data using cross correlation.
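A sketch of zero phasing with a known source signature (for example the air-gun signature from the contractor): keep the wavelet's amplitude spectrum, discard its phase, and apply the resulting shaping filter to each trace. The water-level stabilisation is a simplification we assume here.

    import numpy as np

    def zero_phasing_filter(signature, nt):
        W = np.fft.fft(signature, nt)
        zero_phase_W = np.abs(W)                 # same amplitudes, zero phase
        eps = 1e-3 * np.max(np.abs(W))           # water level for stability
        return zero_phase_W / (W + eps)          # shaping-filter spectrum

    def apply_zero_phasing(trace, signature):
        F = zero_phasing_filter(signature, len(trace))
        return np.fft.ifft(np.fft.fft(trace) * F).real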

Reformat:
This will usually follow an industry-standard convention, e.g. SEG-D or SEG-Y for magnetic
media. ('Data format' defines how the data are arranged and stored.)

Geometry Definition:
- Important values for data processing are source – receiver OFFSETS!

- Where are the shots and receivers located?

- The area of mathematics relating to the study of space and the relationships between points,
lines, curves and surfaces.

Geometry in seismic means defining where everything is located using the following:

 Coordinates of shot and receivers


 Relationship between ‘file’ numbers and shot locations
 Relationship between shots and receivers
 Missing shots and/or receivers
 Attributes for shots/receivers e.g. elevations, depths etc

(We need to supply the X, Y and Z co-ordinates of every shot and geophone station for the
line. Luckily, in many cases, we can rely on a regular shooting pattern to simplify the input.)

Geometry may be simple (for example, regular 2D marine data), or extremely complex (a
land 3D survey shot over sand dunes).

Both land and marine data are acquired using multiple sources and geophone arrays, to
facilitate the acquiring of the large volumes of data necessary. The geometry for land data can
be extremely complex, essentially shooting multiple crooked lines at once!

If we know the positions of the source and receiver then we can calculate the position of a
Common Mid Point.
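The arithmetic is simple; a sketch with made-up coordinates:

    import numpy as np

    def midpoint_and_offset(sx, sy, rx, ry):
        mid = (np.array([sx, sy]) + np.array([rx, ry])) / 2.0
        offset = np.hypot(rx - sx, ry - sy)
        return mid, offset

    mid, offset = midpoint_and_offset(1000.0, 0.0, 1200.0, 0.0)
    print(mid, offset)   # [1100. 0.] 200.0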

Field Static Corrections:

What if the surface elevation changes? We must remove the differences in travel time caused
by shots and receivers being at different elevations.

Static corrections are time-shifts applied to seismic data to compensate for the following (a
sketch of the simplest elevation correction follows the list):
 Variations in elevations on land
 Variations in source and receiver depths

(Marine gun/cable, land source)

 Tidal effects (marine)


 Variations in velocity/thickness of near-surface layers
 Change in data reference times
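A sketch of the simplest such correction, an elevation (datum) static: each shot and receiver is time-shifted as if it sat on a flat datum, using an assumed replacement velocity for the near surface. Sign conventions vary between packages; here negative means the arrival is shifted earlier.

    def elevation_static_ms(elevation_m, datum_m, v_replacement=2000.0):
        # time shift (ms) to move a source or receiver down to the datum
        return -1000.0 * (elevation_m - datum_m) / v_replacement

    # total static for a trace = shot static + receiver static
    total = elevation_static_ms(350.0, 300.0) + elevation_static_ms(340.0, 300.0)
    print(total)   # -45.0 ms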

Static Assumptions:
1- The ray-paths through the near-surface layering are vertical (not quite true).

2- The weathering medium is isotropic.

(SA + AG + GD + DR = SB + BG + GC + CR)

- The ray-paths through the near-surface layering are vertical (not quite true):

- This means that the deeper the reflector, the better the assumption. It also means that shallow
data are likely to suffer if the weathering layer is thick.

-The assumption of vertical ray paths is not strictly true, and a complete solution of the problem
requires consideration of other factors such as the interplay of dynamic and static corrections
with lateral as well as vertical velocity variations.

-As far as velocity computations are concerned, we assume that the medium of the weathering
zone is "isotropic" and that, therefore, the horizontal velocities we calculate are also applicable
vertically.

-In reality, neither assumption is physically true, but we are forced to make them in order to
compute surface-consistent statics.

Main Types of Static Corrections:


FIELD (Initial) STATICS:

• The main static correction based on field measurements/derived from data acquired in
the field e.g. up-hole survey, refraction data.

RESIDUAL STATICS:

• Derived during processing by using reflection data to ‘fine-tune’ the field statics.

There are two main types of static calculation:

By 'Field' we mean the initial statics applied, historically calculated by the field crew.
Sometimes they are calculated in the office - more on that when we look at refraction statics.
Refraction statics are also classified as 'field statics'.

Residual static computations are made after the field statics have been computed and applied
to the data.


Amplitude Recovery:
Where’s all the source energy gone?

- The amplitude of a wave may be defined as:

'The maximum departure of the wave from the average value'

- Basically, the size and magnitude of a waveform is called its amplitude.


Noise Attenuation (De-Noise):

De-Noise: set of processes that are carried out on the raw seismic data to increase the
signal to noise ratio.

 Ambient Noise (Random): noise which does not exhibit correlation from trace to
trace; it is generally not source-generated.
 Coherent Noise: noise which is predictable from trace to trace across a group of
traces, i.e. it has a phase relationship between adjacent traces; it is commonly
source-generated.

Types of Noise:
Random Noise (Ambient Noise) (Natural):

 Noise generated by air waves
 Wind motion
 Environmental noise
 Loose coupling of geophones to the ground

Coherent Noise (Artificial):

 Direct arrivals.
 Ground roll.
 Air waves.
 Shallow refractions.
 Reflected refractions.
 Ghosts.
 Multiples.
 Diffractions.

Ambient Noise (Random) (Natural):

For ambient noise we can use editing and muting to attenuate noise from sources such as
high-tension power lines, pumping, vehicles and so on.

We can edit by killing and removing traces. We can also use muting to remove or cut
unwanted signal, such as surface waves and the distortions caused by the dynamic (NMO)
correction.

Muting:


Trace Editing:


Deconvolution:
How do we improve the vertical resolution?

Q: What is the relationship between frequency and attenuation?

A: High frequencies are attenuated faster.

Q: What is decon?

A: An inverse filtering process.

Deconvolution is a processing tool which has been used for:

 Wavelet shaping
 Multiple removal

Convolution:
 Convolution is the change of a wave shape as a result of passing it through a linear filter.

 When a signal passes through any filter (such as the earth), it is replicated many times,
with different amplitudes and time delays, by the filter.

 Assuming that the signal itself does not alter with the passing of time (i.e. it is time-shift
invariant), the filter produces a linear superposition of these copies of the signal.
The mathematical description of this process is known as convolution.

 Mathematically, correlation is similar to convolution except for the 'direction' of the
operator. In convolution the direction of the operator does not matter: any two
waveforms, convolved with each other in either order, give the same output waveform.
In correlation, however, correlating waveform A with waveform B gives one result and
correlating waveform B with waveform A gives a different one; the result depends on
which waveform (A or B) is used as the operator during cross-correlation. The example
below illustrates this.
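A quick numerical check of both statements, using numpy:

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([0.0, 1.0, 0.5])

    # convolution is commutative
    print(np.allclose(np.convolve(a, b), np.convolve(b, a)))   # True

    # correlation is not: swapping the operands reverses the output
    print(np.correlate(a, b, "full"))   # [0.5 2.  3.5 3.  0. ]
    print(np.correlate(b, a, "full"))   # [0.  3.  3.5 2.  0.5]

    # correlating with b equals convolving with a time-reversed b
    print(np.allclose(np.correlate(a, b, "full"), np.convolve(a, b[::-1])))  # True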

Deconvolution:
The objective of deconvolution:

In theory:

• Reveal the subsurface reflectors by removing the effects of the system wavelet,
including ghosts and short-period multiples.

In practice:

• Achieve a better estimate of the geological layers.

• Produce an output trace that represents the reflectivity function in terms of amplitude,
polarity and depth/time.

Deconvolution methods generally fall into one of two categories:

Deterministic Deconvolution: part of the seismic system is known. For example, where the
source wavelet is accurately known we can do source-signature deconvolution.

Statistical Deconvolution: no information is available about any of the components of the
convolutional model. A statistical approach is needed to derive information about the wavelet
(either the 'source', the 'system' or the combined wavelet).
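A sketch of the statistical route (Wiener-Levinson spiking deconvolution): estimate the wavelet autocorrelation from the trace itself, solve the normal equations for an inverse filter, and convolve it back with the trace. The filter length and pre-whitening percentage are the usual test parameters, assumed here.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def spiking_decon(trace, filter_len=80, prewhite=0.01):
        ac = np.correlate(trace, trace, "full")[len(trace) - 1:]
        r = ac[:filter_len].copy()
        r[0] *= 1.0 + prewhite             # pre-whitening stabilises the inversion
        rhs = np.zeros(filter_len)
        rhs[0] = ac[0]                     # desired output: a spike at lag 0
        inverse = solve_toeplitz(r, rhs)   # Levinson-type Toeplitz solver
        return np.convolve(trace, inverse)[:len(trace)]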


CMP Gather:
How do we order the data?

Q: What is a CMP?

A: A collection of traces from the same subsurface point, with different source-receiver
offset values (preferably).

Q: Why the CMP domain?

A: For NMO and stack; the gather also removes some structural influences.

NMO Correction:
How do we correct for time differences due to offset within the CMP?

NMO corrects for arrival-time differences due to source-receiver offset variations; it attempts
to correct each trace to the zero-offset case.

(Normal Moveout - NMO)

The NMO equation for a horizontal reflector is

t(x)² = t0² + x² / Vnmo²

where x is the offset, t0 the zero-offset two-way time and Vnmo the NMO (stacking) velocity.
The equation is valid provided offsets are not too large (spread < 6 km?) and assuming the
velocity does not vary laterally; otherwise higher-order coefficients have to be included in the
equation.
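A sketch of applying this correction to one trace: each output sample at zero-offset time t0 is read from the input at t(x) = sqrt(t0² + x²/Vnmo²) with linear interpolation. The stretch mute that normally accompanies NMO is omitted here, and the velocity may be a scalar or a t0-dependent array.

    import numpy as np

    def nmo_correct(trace, offset, dt, velocity):
        nt = len(trace)
        t0 = np.arange(nt) * dt
        tx = np.sqrt(t0**2 + (offset / velocity) ** 2)   # hyperbolic travel time
        return np.interp(tx, t0, trace, right=0.0)       # resample to zero-offset time

    corrected = nmo_correct(np.random.randn(1000), offset=1500.0,
                            dt=0.004, velocity=2200.0)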


Demultiple:
How to remove false reflections?

General Properties of Multiples:

 Low velocity (high moveout): because velocity increases with depth, a multiple that
bounces in the shallow layers is slower than a primary with the same arrival time.

 High amplitude: multiples suffer less geometric spreading.

 Periodic: they form repeated cycles in horizontal layers.

 Predictable: they can be predicted from the primaries.

Primary and Multiple Velocity:

• The primary and multiple energy have both travelled through the same layers; the
multiple has just spent longer in the shallow layer. So what is their velocity relationship?


Migration:
Do the reflections all come from vertically below?

Migration:
 A process which attempts to correct the distortions of the geological structure inherent
in the seismic section.

 Migration re-distributes energy in the seismic section to better image the true geological
structures.

Why Migration??
 Rearrange seismic data so that reflection events may be displayed at their true position
in both space and time.

 laterally in up-dip direction

 upward in time

 Collapse diffractions back to their point of origin.

 Improve lateral resolution - collapse Fresnel zone.

 To obtain more accurate velocity information (when performed pre-stack).

 For more accurate ‘depth’ section.



How geologic features appear after Migration?

 Dipping events:

- Dipping events appear steeper.
- Migration moves events up dip.
- Migration steepens events.
- Migration shortens events.

 Anticline:

- The anticline is broader and less steep on the 'stack' section.
- On the migrated section it appears less broad, with steeper sides.

 Syncline:
- Synclines appear on the stacked section as bow-ties.
- Migration corrects this shape.


Migration comparison:

Pre-stack
  Pluses: migrated data are used to pick velocity analysis.
  Minuses: higher cost than post-stack; low S/N.

Post-stack
  Pluses: high S/N ratio; lower cost than pre-stack.
  Minuses: assumptions in the stack process break down where there are dip and velocity
  variations.

Time
  Pluses: good results if velocity and dip variations are not too complex - at an affordable
  price.
  Minuses: algorithms do not take account of ray bending - poor where there are large dip
  and velocity variations.

Depth
  Pluses: algorithms take account of ray bending.
  Minuses: requires a very accurate velocity-depth model; time and cost increase.

2D
  Pluses: two-pass migration of 3D data allows use of different algorithms and extra QC.
  Minuses: only uses energy from the plane of the section.

3D
  Pluses: uses energy from in and out of the plane of the section.
  Minuses: resource/cost issues.

CMP Stack:
How to reduce the number of traces?

Produces a ‘zero-offset’ trace (It results in S/N improvement)

What is Stacking?
We take all the traces that have the same common midpoint (in 2D) or the same bin (in 3D)
and sum them together.

 The CMP locations for the 5 source/receiver pairs all fall in the same bin.

 These 5 traces would be collected together ('gathered') and then summed together to
make 1 trace ('stacked').

 Shot-gather data need to be sorted into CMP gathers.
 NMO correction is applied to the CMP gathers.
 The NMO-corrected CMP gathers are stacked, as sketched below.
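The stack itself, sketched for a gather stored as traces x samples, dividing by the live fold so that muted samples (assumed here to be exact zeros) do not bias the result:

    import numpy as np

    def stack(gather):
        live = gather != 0.0                      # muted samples are exact zeros
        fold = np.maximum(live.sum(axis=0), 1)    # live fold per time sample
        return gather.sum(axis=0) / fold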

 For all sorts of reasons, the ideal seismic section would consist of a series of traces shot
with the shot and receiver in the same position. This would produce a true zero-offset or
normal-incidence section where, for a horizontal reflector, the incident rays would be at
right angles.

 In practical terms, placing the recording instruments on top of the shot is not a viable
proposition! So, in the real world, our shot and receiver are always some distance (or
offset) apart, and our reflections will include some distortion due to the increased travel-
time of the raypath to the longer offsets.

 The most important correction that is applied is that of normal moveout, usually referred
to as NMO.

Stacking Velocity (Vnmo):

The velocity associated with the best-fit hyperbola which corrects moveout on CMP gathers
and aligns signal from the same reflector.

For small offsets and horizontal layering

Vnmo ~ Vrms

The velocity we deal with most!

The stacking or NMO velocity is the velocity of a constant, homogeneous, isotropic layer above
a reflector which would give approximately the same offset-dependence (normal moveout) as
actually observed. It is the value determined by velocity analysis and is the value used for
optimum common-midpoint stacking.

 The velocities measured during velocity analysis.

 Often (erroneously) referred to as RMS velocities (Vrms).

 Increase in value in the presence of dipping events.

 Stacking velocity (Vnmo) approaches the RMS velocity (Vrms) only for small offsets.

For a single-layer model, homogeneous and isotropic:

stacking = RMS = interval = average


The Power of Stack:

 Relies on signal being in phase and noise being out of phase i.e. primary signal is ‘flat’
on the cmp gather after NMO corrections

 A spatial or K- filtering process

 Data reduction - usually to [almost] ‘zero-offset’ trace

 Attenuates coherent noise in the input record (to varying degrees)

 Attenuates random noise relative to signal by up to √N, where N = the number of traces
stacked (i.e. the fold of stack)

 K filter - filtering of spatial frequencies by summing/mixing


 K-filter - Apply an ‘all-ones’ filter and output the central sample.
 To apply a spatial K-filter to a record we must first collect the series of samples having
the same time values from each data trace - ie. form a common-time trace.
 This is the input data which must be convolved with our chosen filter to produce the
filtered output. The process is applied to each common-time trace in turn (0 msec, 4
msec, 8 msec, etc.).
 The summing filter is a high-cut spatial filter. It passes energy close to K=0, ie.
effectively dips close to 0ms per trace. Therefore, if signal has been aligned to zero dip
(as in NMO corrected data), signal will be passed.
 Organized noise contained in steeper dips will be suppressed - except at low temporal
frequencies or if the noise aliases and wraps-around through K=0.
 If we increase the number of filter points - ie. increase the fold - then the filter becomes
more effective at passing only energy close to K=0, or dips closer to zero.

Migration:
Migration of seismic data moves dipping events to their correct positions, collapses
diffractions, increases spatial resolution and is probably the most important of all processing
stages.

Migration theory has been long established but restricted computer power has driven the
industry to a bewildering array of ingenious methods to perform and enhance the accuracy
of migration.

It could be argued that much of the past research has been directed towards doing migration
less wrong rather than doing it right. Certainly there has been more research into migration
algorithms than the critical factor of determining the correct velocity model to use.

With today's availability of cheap computer power modern practice tends towards doing
migration as correctly as possible rather than as cheaply as possible. Most migration
algorithms have good points and bad points and work better in some data areas than in
others.

As in much of processing, the choice of which migration algorithm to apply is rather
subjective. In this section we introduce the basic theory of migration and discuss the various
methods and terminology which have built up over the last 30 years. Yilmaz (1987) and
Bancroft (1998) contain many further details and examples of migration.

Basic Theory:
Zero-Offset Migration:
The theory of zero-offset migration is important since the stacking process simulates a zero-
offset section as well as attenuating noise and multiples. The migration process is referred to as
poststack migration or zero-offset migration.

If the stack does not produce a good approximation to the zero-offset section then prestack
migration must be performed prior to stacking. Due to the data volumes involved, prestack
migration takes at least the fold of the data longer to compute than poststack migration.

The adjacent figure (a) shows a zero-offset seismic experiment conducted over a constant
velocity medium. Sources and receivers are marked by red dots. The image of a dipping
reflector of dip β results in seismic section (b), where each reflection point is plotted in green
below the receiver at a time equal to its reflection time (t1 to t4).

On the seismic section, the dip α and position of the reflector are incorrect, and an
interpretation of this section would be in error. The equation shown in (b) relates the dip
before and after migration. The maximum dip on the seismic section of 45° corresponds to a
reflector dip of 90°.

By taking a semicircular arc equal to the travel time from each of the recorded positions and
constructing a line tangent to the arcs, the true migrated position of the reflector is discovered
(c). The process of migration makes the resulting image look like the true geological structure.
Migration is sometimes also called imaging.

The migration process has moved the reflection up-dip and the migrated segment (blue) is
steeper and shorter than the reflection segment (green).

Frequencies will be lower on the migrated segment. In the diagram the velocity is assumed to
equal 1, so the vertical axes of time and depth are interchangeable.

For the migration to be correct (figure (a)) the vertical axis of (c) would be in depth and would
require the velocity to be known (in order to convert from the recorded time section to the
migrated depth section).


Kirchhoff Migration:

The earliest methods of migration by hand used the semicircular construction shown in the
adjacent figure (a) for the migration of a single point shown in green. The migrated result
shown in blue is a semicircle in a constant velocity medium.

This result is also called the impulse response of a process and is especially useful since a
seismic section can be considered to consist of a series of spikes - the migrated reflectors will
occur where the semicircles constructively interfere.

This is called Hagedoorn migration where the amplitude of the spike on the input time section
is distributed along a semicircle on the output migrated time section. Destructive interference
will cancel out noise, but sometimes residual semicircular smiles are seen in the resulting
section as a result of noise.

In (b) of the adjacent figure the constant-velocity semicircle construction is used to migrate a
hyperbolic diffraction curve (green) to its migrated position (blue point) where the semicircles
interfere. An alternative method is to sum the amplitudes along the hyperbola and place the
summed amplitude at the apex. This latter form of migration formed the basis of the first
computer algorithms and is called diffraction summation, diffraction stack or, more generally,
Kirchhoff migration. In figure (c) a Kirchhoff summation is illustrated for the migration of a
dipping event.

The zero-offset section is considered to be a superposition of diffractors at each time sample
(Huygens' principle). The diffractors interfere to form coherent events, and individual
diffractions may be visible at discontinuities such as faults. At each output migrated position
(shown by the blue dots and line) the amplitudes of the input zero-offset time data (green dots
and line) are summed along a series of hyperbolas controlled by the velocity field (some of
which are illustrated). Maximum amplitudes will occur at the migrated event; otherwise the
amplitudes will be minimal.
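A constant-velocity, zero-offset diffraction-stack sketch of this summation; the obliquity and spreading weights discussed later are omitted, so this shows only the geometry of the algorithm.

    import numpy as np

    def kirchhoff_migrate(section, dx, dt, v):
        # section: traces x samples (zero-offset). For each output point
        # (x0, t0), sum along t(x) = sqrt(t0^2 + (2*(x - x0)/v)^2).
        nx, nt = section.shape
        migrated = np.zeros_like(section)
        x = np.arange(nx) * dx
        for ix0 in range(nx):
            dist2 = ((x - x[ix0]) * 2.0 / v) ** 2        # two-way horizontal term
            for it0 in range(nt):
                t = np.sqrt((it0 * dt) ** 2 + dist2)     # hyperbola times per trace
                it = np.rint(t / dt).astype(int)
                ok = it < nt
                migrated[ix0, it0] = section[np.where(ok)[0], it[ok]].sum()
        return migrated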

Migration:
A major difference between migration algorithms arises from the way the velocity field is
utilised. In the early 1970s, when migration algorithms were being developed, computer power
was so limited that several approximations were introduced in order to get programs to run in
anything like a reasonable time.

These assumptions led to time-migration - a process which collapses diffractions and moves
dipping events toward the true position but leaves the migrated image with a time axis which
must be depth converted at a later stage. Time migration assumes that the diffraction shape is
hyperbolic and ignores ray bending at velocity boundaries.

Depth Migration assumes that the arbitrary velocity structure of the earth is known and will
compute the correct diffraction shape for the velocity model. The data are then migrated
according to the diffraction shape and the output is defined with a depth axis (although results
are often stretched back to time to enable comparison with time migrations).

If the velocity model for the depth migration is incorrect then the migration will be incorrect
and the error may be difficult to detect if the migration is performed post-stack.


The exploding reflector model and the finite difference methods automatically take care of the
amplitudes when using the downward continuation method.

Similarly the FK method of migration applies a defined amplitude scaling when moving the
data in the FK space.

Estimation of the diffraction stack amplitudes proved more of a challenge until the Kirchhoff
integral solution to the wave equation provided a theoretical foundation.

Assumptions used in the design of geological models are reviewed in preparation for evaluating
the design of migration programs that are derived from the wave-equation. A review of
Kirchhoff migration is then presented that begins as a diffraction stack process, and then
proceeds to matched filtering concepts and the integral solution to the wave-equation.

One-dimensional (1D) convolution modelling and deconvolution are then used to introduce
inversion concepts that lead to "transpose" processes and matched filtering. These concepts are
then expanded for two-dimensional (2D) data, to illustrate that Kirchhoff migration is a
"transpose" process, or matched filter, that approximates seismic inversion.

Evolution of amplitude in Kirchhoff migrations:


Diffraction stacking or Kirchhoff migration produces one migrated sample at a time by, first,
computing a diffraction shape for a scatterpoint at that location, second, summing and
weighting the input energy along a diffraction path, and third, placing the summed energy at the
scatterpoint location on the migrated section.

The process is repeated for all migrated samples. During summation, the amplitudes of the
input data are weighted, and it is this weighting of the input data that we are investigating, and
which is the dominant objective of many inversions.

Seismic traces contain wavelets that represent different properties, depending on the assumed
model. For example, with flat data, the peak amplitude of the wavelet may be assumed to
represent the amplitude of a reflecting boundary, or the same wavelet may be considered part of
a wave field. The amplitude will be handled differently when combining all the traces to form
an image of the subsurface.

Amplitudes may be computed by a number of processes such as:

• stacking
• diffraction stacking and matched filtering
• solutions to the wave-equation
• inversion principles

all of which are based on a specific type of model.

Consider the preparation of traces in a common midpoint (CMP) gather where gain recovery
has been applied to each trace. We now assume that the amplitudes of the wavelets represent
the reflection coefficients from the subsurface geology. Normal moveout (NMO) correction has
been applied to match the travel-times of offset traces with those at zero offset.

A mute is then applied to ensure that all the contributing wavelets look similar. These wavelets
are summed, and then divided by the number of contributing traces, to produce an average of
the wavelets. This averaging process maintains the amplitude of the wavelet while attenuating
the amplitude of noise. The result is a zero-offset trace with an improved signal to noise ratio
(SNR).

Seismic imaging is considered key to reduce risk and cost in exploratory as well as
development drilling. Although we have recently seen important advances, the authors claim
that a step change is required to significantly improve the industry’s ability to obtain accurate
seismic images of oil and gas reservoirs within geologically complex settings.


Kirchhoff integral solution to the wave-equation:

The parameters used in the diffraction stack method were estimated from physical modelling
experiments. The migration process became rigorous when it was recognized (Schneider,
1978) that the Kirchhoff integral solution to the wave equation, which was used in optics, gave
a theoretical solution for seismic migration.

This theoretical solution provided both the amplitude and phase filters that had previously been
predicted by experimentation. A 2D integral solution to the wave-equation is given by Gazdag
(1984), in which r is the radial distance from the source-receiver location to the scatter point,
c = V/2, and β is the geological dip at the appropriate position on the diffraction. The cosine
term may be replaced by T0/T, giving a more familiar form.

The Kirchhoff, FK, and downward continuation methods of seismic migration are based on
wave-equation solutions. These migration algorithms produce an image of the sub-surface by
propagating the energy recorded on the surface back to the area of the reflector.

In contrast to these wave-equation methods, seismic inversion attempts to estimate the
reflectivity of a geological model from the recorded energy.

Quite often, these inversions produce an algorithm that is almost identical to that of the
Kirchhoff method, with only slight changes to the amplitude scaling.

Glossary:
 AGC - Automatic gain control. An amplitude gain procedure applied to the trace that
equalizes the trace energy over a contiguous sequence of specified time windows. After
application of AGC, attenuation and geometrical spreading effects can be roughly
corrected for and reflection amplitudes are normalized to be about the same value.

 CMG - Common midpoint gather. A collection of traces all having the same midpoint
location between the source and geophone.

 COG - Common offset gather. A collection of traces all having the same offset
displacement between the source and geophone.

 CRG - Common receiver gather. A collection of traces all recorded with the same
geophone but generated by different shots.

 CSG - Common shot gather. Vibrations from a shot (e.g., an explosion, air gun, or
vibroseis truck) are recorded by a number of geophones, and the collection of these
traces is known as a CSG.

 Fold - The number of traces that are summed together to enhance coherent signal. For
example, a common midpoint gather of N traces is time shifted to align the common
reflection events with one another and the traces are stacked to give a single trace with
fold N.

 IVSP data - Inverse vertical seismic profile data, where the sources are in the well and
the receivers are on the surface. This is the opposite of the VSP geometry, where the
sources are on the surface and the receivers are in the well. An IVSP trace will
sometimes be referred to as a VSP trace or reverse vertical seismic profile (RVSP)
seismogram.

 OBS survey - Ocean bottom seismic survey. Recording devices are placed along an areal
grid on the ocean floor and record the seismic response of the earth for marine sources,
such as air guns towed behind a boat. The OBS trace will be classified as a VSP-like
trace.
Page

M.M.Badawy
Principles of Seismic Data Processing

 Reflection coefficient. A flat acoustic layer interface that separates two homogeneous
isotropic media with densities ρ1 and ρ2 and compressional velocities v1 and v2 has the
pressure reflection coefficient (ρ2v2 - ρ1v1)/(ρ2v2 + ρ1v1). This assumes that the source
plane wave is normally incident on the interface from the medium indexed by the
number 1.

 RTM - Reverse Time Migration. A migration method where the reflection traces are
reversed in time as the source-time history at each geophone. These geophones now act
as sources of seismic energy and the fields are backpropagated into the medium (Yilmaz,
2001).

 Stacking - Stacking traces together is equivalent to summation of traces. This is usually
done with traces in a common midpoint gather after aligning events from a common
reflection point.

 S/N - Signal-to-noise ratio. There are many practical ways to compute the S/N ratio.
Gerstoft et al. (2006) estimate the S/N of seismic traces by taking the strongest
amplitude of a coherent event and dividing it by the standard deviation of a long noise
segment in the trace.

 SSF - Split step Fourier migration. A migration method performed in the frequency,
depth, and spatial wavenumber domains along the lateral coordinates (Yilmaz, 2001).

 SSP data - Surface seismic profile data. Data collected by locating both shots and
receivers on or near the free surface.

 SWD data - Seismic-while-drilling (SWD) data. Passive traces recorded by receivers on
the free surface with the source being a moving drill bit. Drillers desire knowledge about
the rock environment ahead of the bit, so they sometimes record the vibrations that are
excited by the drill bit. These records can be used to estimate the subsurface properties,
such as reflectivity (Poletto and Miranda, 2004).

 SWP data - Single well profile data. Data are collected by placing both shots and
receivers along a well.
64
Page

M.M.Badawy
Principles of Seismic Data Processing

 VSP data - Vertical seismic profile data. Data collected by firing shots at or near the free
surface and recorded by receivers in a nearby well. The well can be vertical, deviated,
or horizontal.

 Xwell data - Crosswell data. Data collected by firing shots along one well and recording
the resulting seismic vibrations by receivers along an adjacent well.

 ZO data - Zero-offset data where the geophone is at the same location as the source.

Source-receiver configurations for four different experiments: SSP = surface seismic profile,
VSP = vertical seismic profile, SWP = single well profile, and Xwell = crosswell. Each
experiment can have many sources or receivers at the indicated boundaries (the horizontal
solid line is the free surface, the vertical thick line is a well). The derrick indicates a surface
well location, y denotes the reflection point, and the stars indicate sources. (figure)

References:
 Bancroft, J.C., 1998. A Practical Understanding of Pre- and Poststack Migration,
Volumes 1 & 2. SEG.
 Hatton, L., Worthington, M.H., & Makin, J., 1986. Seismic Data Processing - Theory
and Practice. Blackwell.
 McQuillin, R., Bacon, M., & Barclay, W., 1984. An Introduction to Seismic
Interpretation. Graham & Trotman.
 Sheriff, R.E., 1991. Encyclopedic Dictionary of Exploration Geophysics. SEG.
 Sheriff, R.E., & Geldart, L.P., 1982. Exploration Seismology, Volumes 1 & 2.
Cambridge University Press.
 Yilmaz, O., 1987. Seismic Data Processing. SEG.
