
LAND PROCESSING

DAY 1
DEGRADATION OF SEISMIC WAVES
Introduction
• Specific Pre-Planning tools are developed for determining the
acquisition parameters such as offset, fold, azimuth distribution,
shooting pattern, effects of surface obstacles, make up shots, etc …
• The main aim of the pre-planning is the evaluation of both
Geophysical and non-Geophysical parameters in order to ensure
that the 3D data quality meets the structural, stratigraphic and
lithological requirements. The broader parameters to consider are:
– Geological Objectives and Geological targets.
– Geophysical Parameters,
– Geophysical Constraints,
– Cost.
– Environmental consideration, health and safety requirements.
NOTATIONS or TERMINOLOGY
RECEIVERS
Rx: Receiver Spacing
Ry: Receiver Line Spacing
Rl: Length of Receiver Line
Nr: Number of Receivers per Line
Nrl: Number of Receiver Lines per Swath
Tr: Total Number of Receivers per Swath

OFFSET
X: Offset
Xmax: Maximum Offset
Xmin: Minimum Offset

SHOTS
Sx: Shot Line Spacing
Sy: Shot Spacing
Ts: Total Number of Shots per Swath
Ns: Number of Shots per Line
Sl: Shot Line Length

TAPERS
Tpx: Inline Taper
Tpy: Cross-line Taper
Fx: Inline Fold Build-up
Fy: Cross-line Fold Build-up
NOTATIONS or TERMINOLOGY
SAMPLING
Spatial Sampling for Receiver & Shot
Spatial Sampling in the Mid-point Domain
Spatial Sampling in the Common-offset Domain

FOLD (F)
Inline Fold
Cross-line Fold
Survey Area: Sa

BINS
b2: Bin Size (square bin)
Tb: Total Number of Bins
NOTATIONS or TERMINOLOGY

Receivers: Rx, Ry, Rl, Nr, Nrl, Tr

Shots: Sx, Sy, Ts, Ns , Sl , Sd

Offset: X, Xmax, Xmin

Tapers: Tpx , Tpy , Fx , Fy

Sampling: receiver & shot domain, mid-point domain, common-offset domain

Fold (F): inline, cross-line

Bins: b2, Tb
NOTATIONS or TERMINOLOGY
3-D Terminology:
• Different people may use different terminology, but a convenient
terminology is given below:
BOX
• In an orthogonal design the BOX refers
to the area enclosed by two consecutive
receiver lines (spaced Ry)
and two consecutive source lines
(spaced Sx); the box area is then
Sb = Ry * Sx
NOTATIONS or TERMINOLOGY
DIRECTIONS:
• In-line direction: This is parallel to the
receiver lines. The sampling in this
direction is generally satisfactory.
• Cross-line direction: This is orthogonal to the
receiver lines. Sampling in this direction is
generally weaker (coarser).

FOLD OF COVERAGE:
• It is the number of mid-points that fall
into the same bin.
• The nominal fold or full fold is defined for
the maximum offset, so the fold is not
nominal at the edges of the survey area.
HALO ZONES:
• These are defined by the "fold tapers", a zone in
which the fold increases gradually.
NOTATIONS or TERMINOLOGY
• These halo zones increase the size of the
3-D area.
• Run-in : It is the necessary distance to
bring the fold from minimum to its
nominal value in the shooting direction.
• Run-out : It is the necessary distance to
bring the fold to its nominal value at the
end point of the line.
NOTATIONS or TERMINOLOGY
MID-POINTS:
• Mid-point: a point located exactly midway between
the source and the receiver.
• It is not necessarily located along the receiver line.
• Common-Midpoint: In a horizontally layered medium
with constant velocity, the CMP is located in the middle
of the different source-receiver pairs whose reflections
correspond to the same subsurface point.
• CMP-BIN: a square or rectangular area which
contains all midpoints that correspond to the same
CMP. Traces that fall in the same bin are stacked,
and their number corresponds to the fold of the
bin.
• BIN-SIZE: It corresponds to the length and breadth
of the bin. The smallest bin dimension is half of the
source and receiver interval (Sy/2 * Sx/2).
NOTATIONS or TERMINOLOGY
MOVE-UPS:
• In-line-move-up: when the template moves up
along the survey from its initial position after
completion of a salvo of shots.
• Cross-line-move-up: It occurs when the
template reaches the edge of the survey area
and moves up across the survey to start new
inline move up.
• PATCH (TEMPLATE): Area of all live receivers
recording from the same source.
• SWATH: Length over which sources are
recorded without cross line roll.
Patch: Also, a patch is an acquisition technique in which the source line is not parallel to
the receiver line. When source and receiver lines are orthogonal, the spread is
called "orthogonal" (or cross spread); if they are not orthogonal, it is called a
slant spread.
NOTATIONS or TERMINOLOGY

Source Line: where source points are located at regular intervals.
Roll Along
Receiver lines are plotted horizontal, source lines are plotted vertical.

Receiver Lines:
• Receiver Interval (Rx): It is the distance between two consecutive
receivers along the receiver line.
• Receiver Density (Rd): No of receivers per surface unit (generally sq.km)
NOTATIONS or TERMINOLOGY
Roll-Along:
• In-Line Roll-Along: It's the inline move-up of the template, i.e. the distance
between two consecutive positions of the template.
• Cross-Line Roll-Along: It's the cross-line move-up of the template.
Source Line:
• Source-Line: it's a line along which source points are located at regular intervals.
It can be parallel, orthogonal, or at any other direction to the receiver lines. In
a marine survey, the source line follows the air-gun arrays.
• Source-line interval (Sx): It is the distance between two consecutive source
lines.
• Source interval (Sy): It is the distance between two consecutive shots on
the same line.
• Shot Density (Sd): It is the total number of shots per unit area (per sq. km) of
the survey.
NOTATIONS or TERMINOLOGY

Salvo:
• It's the number of shots fired before the template is moved up along the
survey.
Swath:
• During the shooting, the template moves in one direction until it reaches the
edge of the survey area; this generates a swath.
• Swath shooting mode: when the shot line is parallel to the receiver line, it's
called swath-shooting mode.
NOTATIONS or TERMINOLOGY
Template:
• All active receivers corresponding to a shot point are called the template.

3D DATA VOLUME:
• After pre-processing and migration the data is sorted into CMP bins, then
stacked, followed by some post-processing. The final processed data is
kept in volume form with (x, y, z) coordinates. From this volume, data can be
extracted in any direction (in-line, cross-line, time slice, diagonal line, any
zig-zag/random line).
• OX: inline direction,
• OY: cross-line direction,
• OZ: depth direction.
IMAGING PARAMETERS
IMAGING PARAMETERS:
• Fold: The number of traces that are
located in a bin and that are going
to be summed is called the fold.
• After stacking, each bin contains a
single trace whose S/N ratio is
multiplied by the square root of the fold.
IMAGING PARAMETERS
IMAGING PARAMETERS:
• In-Line Fold: Fx = (Nr * Rx) / (2 * Sx)

• Cross-Line Fold: Fy = Nrl / 2

• Total Fold: F = In-Line Fold * Cross-Line Fold = Fx * Fy
IMAGING PARAMETERS
IMAGING PARAMETERS:

• Nominal Fold: It’s equivalent to total fold (in-line fold * x-line fold)

• Total number of mid-points = total number of shots * total number of receivers.

Rule: The Total 3D fold should be greater than half of the 2D nominal fold.
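The fold arithmetic above can be packaged into a few lines of Python. This is an illustrative sketch only (the function names and example numbers are my own, not taken from the survey designs discussed in these notes):

```python
def inline_fold(nr, rx, sx):
    """In-line fold: Fx = (Nr * Rx) / (2 * Sx)."""
    return nr * rx / (2.0 * sx)

def crossline_fold(nrl):
    """Cross-line fold: Fy = Nrl / 2 (receiver lines in the patch divided by 2)."""
    return nrl / 2.0

def total_fold(nr, rx, sx, nrl):
    """Total (nominal) fold = in-line fold * cross-line fold."""
    return inline_fold(nr, rx, sx) * crossline_fold(nrl)

if __name__ == "__main__":
    # Illustrative values only: 96 channels/line at 50 m spacing,
    # 200 m shot-line spacing, 10 live receiver lines in the patch.
    fx = inline_fold(nr=96, rx=50.0, sx=200.0)   # 12.0
    fy = crossline_fold(nrl=10)                  # 5.0
    print(f"Inline fold = {fx}, cross-line fold = {fy}, total fold = {fx * fy}")
```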
LAND PROCESSING
DAY - 2
IMAGING PARAMETERS (BASIC QUESTIONS)
• What is Template, Patch, Salvo, Swath Shooting, Orthogonal or Cross
Spread shooting ?
• Which is better , Swath Shooting or Orthogonal Shooting ?
• Stacking ? Advantage of stacking ?
• Formula for
• In-line fold
• X-line fold
• Total fold
FIELD EXAMPLE - 1
Inline Fold = 12 (the alternative formula gives the same value)

Cross-Line Fold = 5

Total Fold = In-Line Fold * Cross-Line Fold
= 12 * 5 = 60 fold = 6000%
FIELD EXAMPLE - 2
Inline Fold = 6 (the alternative formula gives the same value)

Cross-Line Fold = 6

Total Fold = In-Line Fold * Cross-Line Fold
= 6 * 6 = 36 fold = 3600%
DETERMINATION BASIC ACQUISITION PARAMETERS
1. SAMPLING INTERVAL (TEMPORAL & SPATIAL)
1. TEMPORAL ALIASING
2. SPATIAL ALIASING
2. MAXIMUM FREQUENCY
3. RESOLUTION & BIN SIZE
1. SPATIAL SAMPLING
4. LONG OFFSET
5. NEAR OFFSET
6. MIGRATION APERTURE
DETERMINATION BASIC ACQUISITION PARAMETERS

• Temporal Sampling: the folding (Nyquist) frequency is fN = 1 / (2 * SI), where SI is the
sampling interval; frequencies above fN are aliased.

• Example:
• SI = 8 msec, Signal frequency = 65Hz, Folding frequency = ?
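A quick check of this example (a sketch; the function name is my own): with SI = 8 ms the folding frequency is 62.5 Hz, so a 65 Hz signal would be aliased.

```python
def folding_frequency(si_ms):
    """Nyquist (folding) frequency in Hz for a sampling interval given in milliseconds."""
    return 1000.0 / (2.0 * si_ms)

si = 8.0          # sampling interval, ms (from the example above)
f_signal = 65.0   # signal frequency, Hz
f_nyq = folding_frequency(si)      # 62.5 Hz
aliased = f_signal > f_nyq         # True: a 65 Hz signal is aliased at 8 ms sampling
print(f"Folding frequency = {f_nyq} Hz, 65 Hz signal aliased: {aliased}")
```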
DETERMINATION BASIC ACQUISITION PARAMETERS

• Spatial Sampling:

Δx ≤ λmin / 2, or Δx ≤ Vmin / (2 * fmax), or equivalently Δt ≤ T / 2

• where Δt is the two-way time separation
between the arrival times of the plane
wave at two receiver locations.
• Spatial aliasing occurs when Δt reaches half of
the dominant period, i.e. Δt = T/2 (the
aliasing condition).
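The same condition can be checked numerically. The sketch below (my own helper names and example values) returns the coarsest trace spacing that keeps fmax unaliased, and the highest unaliased frequency for a given spacing:

```python
def max_trace_spacing(v_min, f_max):
    """Largest spatial sampling that keeps f_max unaliased: dx = v_min / (2 * f_max)."""
    return v_min / (2.0 * f_max)

def max_unaliased_frequency(dx, v_app):
    """Highest frequency recorded without spatial aliasing for trace spacing dx
    and apparent (horizontal) velocity v_app: f = v_app / (2 * dx)."""
    return v_app / (2.0 * dx)

# Illustrative values only: 800 m/s ground roll, 60 Hz maximum frequency.
print(max_trace_spacing(v_min=800.0, f_max=60.0))      # ~6.7 m spacing needed
print(max_unaliased_frequency(dx=25.0, v_app=800.0))   # 16 Hz: 25 m spacing aliases the ground roll
```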
LAND PROCESSING
DAY - 3
FRESNEL ZONE (RESOLUTION)

• Temporal or vertical resolution (the ability to resolve two
reflectors with close vertical spacing, an important requirement in
stratigraphic analysis) has historically dominated the attention of the
geophysicist.
• Resolution is completely understood only in a three-dimensional
sense.
• Temporal and spatial resolution are not independent; improving one
automatically improves the other.
• Under ray theory, the reflection comes from a point and is described
by Snell's law. However, from wave theory, the reflection does not
come from a point; instead it is generated by integration over an
area. But for mathematical convenience, ray theory considers
the center of this area as the point of reflection.
FRESNEL ZONE

• The area of constructive reflection around
this point is called the "Fresnel Zone". The
size of this Fresnel zone is the limit of
spatial resolution.
ESTIMATION OF FRESNEL ZONE SIZE:
• The concept is simple and is explained in
the figure. A point of observation "O" on the
surface receives the reflection energy from a
flat planar reflector surrounding the
reflection point. The receiver at the surface
"O" receives constructively interfering
energy from the subsurface as long as the path
difference is at most one-quarter wavelength (as per Sheriff, 1980), so the
remote reflection elements will be at most one-half
wavelength out of timing on the two-way path (compared to the
path through the reflection point).
FRESNEL ZONE

• As per A.J. Berkhout (1984), the limit of constructive interference should be
defined where the outer-zone path length is one-eighth wavelength longer,
since this would cause the two-way arrival time to be one-quarter wavelength
later than that of the centroid (vertical) path.
• Which criterion is used is not of paramount importance; the Fresnel zone is a
numerical approximation to the spatial size of a weighting function
determined by the distance from the reflection point. It is like a filter, having
no abrupt spatial cutoff point.
• The mathematical expression for the Fresnel zone diameter (Sheriff criterion) is:
Fd = √(2 * λ * Z + λ² / 4), with a correspondingly smaller value for the Berkhout criterion.
• Neither expression is easy to use, as they involve units of length (depth &
wavelength).
FRESNEL ZONE

• Let's put
Z = V*T/2, where V = velocity (rms averaged), T = two-way travel time to the reflection interface,
τ = half period of the dominant frequency, so that λ = 2*V*τ.

Therefore,
Fd = √(2*λ*Z + λ²/4) = √(2*V²*T*τ + V²*τ²) ≈ V*√(2*T*τ)

• Here we have assumed that τ << T, so the V²*τ² term is
negligible.
• τ is estimated from the seismic data as the average half period observed in
the vicinity of the two-way time where the Fresnel diameter is desired. This
corresponds approximately to the geometric mean frequency of the visible
pass band.
• Obviously, the Fresnel zone is not of the same size for all the frequencies
FRESNEL ZONE

• The largest Fresnel diameter corresponds to the lowest frequency in
the passband.
FRESNEL ZONE IN THE REAL WORLD:
• The above formulation was done for the two-dimensional situation. In the real
3-D case, the Fresnel zone is affected by two factors:
– the effect of source-receiver offset on the 3-D Fresnel zone for stacked data;
– the effect of a non-planar reflector in 3-D on the Fresnel diameter.

1. The offset between source and receiver increases the Fresnel diameter,
both in the inline and the lateral direction. For the limiting case where the offset is
equal to the depth, the geometric mean of the inline and lateral diameters is
about 12% larger than the normal-incidence Fresnel diameter. The inline
diameter is about 11% larger than the lateral diameter. Thus the change in the
Fresnel diameter due to offset is rather unimportant.
FRESNEL ZONE

2. When the reflecting surface is not plane, the Fresnel zone alters its shape
accordingly. Consider the anticline and syncline as spherical in shape. Let's
define a parameter "K", which is the ratio of depth to radius of curvature of the
anticline or syncline.
• Then in the case of an anticline the Fresnel zone is reduced in area, compared to
that for a plane, by the factor R(A:P).
• In the case of a syncline the Fresnel zone is increased in area, compared to that for
a plane, by the factor R(S:P); when the depth and radius of curvature are equal (K = 1),
there is total focusing.
• Fresnel zone areas are enlarged for synclines more significantly than they are
shrunk for anticlines.
FRESNEL ZONE

IMPROVING SPATIAL RESOLUTION:

• The process of migration significantly improves the spatial resolution; it is
often called an "inverse Fresnel filter" or spatial deconvolution.
• The theoretical limit for spatial resolution is one-quarter wavelength, so:
Bin Size (Fresnel Zone Consideration)

Pre-Migration: Fd = Vavg * √(T / F)

Post-Migration: Fd = λ / 4 = Vavg / (4 * F)

where:

Fd = Fresnel Diameter
Vavg = Average Velocity
T = Time
F = Frequency of Pulse
λ = Wavelength
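These two formulas are easy to evaluate numerically. A small illustrative sketch (the velocity, time and frequency values are made up for the example):

```python
import math

def fresnel_diameter_premigration(v_avg, t, f):
    """Pre-migration Fresnel diameter: Fd = Vavg * sqrt(T / F)."""
    return v_avg * math.sqrt(t / f)

def fresnel_diameter_postmigration(v_avg, f):
    """Post-migration (quarter-wavelength) limit: Fd = lambda / 4 = Vavg / (4 * F)."""
    return v_avg / (4.0 * f)

# Illustrative values only: Vavg = 3000 m/s, T = 2.0 s, F = 30 Hz.
print(fresnel_diameter_premigration(3000.0, 2.0, 30.0))   # ~775 m before migration
print(fresnel_diameter_postmigration(3000.0, 30.0))       # 25 m after migration
```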
MIGRATION DISPLACEMENT

Consequences:
1. The dip angle of the reflector in geologic section is greater than in the time
section. Thus migration steepens the reflectors.
2. The length of the reflector as seen in the geologic section is shorter than in time
section. Thus migration shortens the reflector.
3. Migration moves the reflector in up dip direction.
MIGRATION DISPLACEMENT
MIGRATION APERTURE
• It is defined as the fringe or the extra length/area that must be added
around the subsurface target in order to correctly migrate the dipping
events and correctly focus the diffracted energy located at the edge of the
target area.
• The migration of dipping events has three effects:
1. Increasing the reflector dip
2. Shortening the reflector
3. Moving the reflector in the up dip direction.
• The formulas for the horizontal & vertical displacement, along with the angle
of the reflector after migration, are given below:
Dh = (V² * t * tanθ) / 4 (θ is the dip angle measured on the time section, V is the
medium velocity, t is the unmigrated TWT of the reflector)
Dv = t * [1 - √(1 - (V² * tan²θ) / 4)]
tanθ' = tanθ / √(1 - (V² * tan²θ) / 4)

(where θ' is the angle of the reflector after
migration)
MIGRATION DISPLACEMENT
MIGRATION APERTURE
• The effect of dip & depth of reflector and the velocity of medium on the
various displacements is given below in the table.
Observation:
• Dips after migration are higher.
• Steep dips are more displaced.
• Deeper reflectors are more displaced.
• Higher velocities generate larger
displacements.
Conclusion
• Events located on the stack section can
belong anywhere within a radius Dh in the
lateral direction and up to Dv deeper in the
vertical direction.
• So the survey area must be extended by
at least Dh horizontally, and the record length
must allow for the additional Dv.
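A numerical sketch of the Dh and Dv relations above, written with the time-dip form tanθ = dt/dx (the function name and example values are my own):

```python
import math

def migration_displacements(v, t, time_dip_s_per_m):
    """Horizontal/vertical displacement and migrated dip from the unmigrated time dip.
    v: medium velocity (m/s), t: unmigrated two-way time (s),
    time_dip_s_per_m: apparent dip on the time section, tan(theta) = dt/dx (s/m)."""
    p = time_dip_s_per_m
    root = math.sqrt(1.0 - (v * v * p * p) / 4.0)   # must be real: v*p/2 < 1
    dh = v * v * t * p / 4.0                        # Dh = V^2 * t * tan(theta) / 4
    dv = t * (1.0 - root)                           # Dv = t * [1 - sqrt(1 - V^2 tan^2(theta) / 4)]
    p_migrated = p / root                           # dip becomes steeper after migration
    return dh, dv, p_migrated

# Illustrative values only: V = 3000 m/s, t = 2 s, time dip of 0.4 ms per metre.
print(migration_displacements(v=3000.0, t=2.0, time_dip_s_per_m=0.0004))
# -> Dh = 1800 m, Dv = 0.4 s, migrated time dip = 0.0005 s/m
```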
MIGRATION DISPLACEMENT
Migration Aperture & Migration Diffraction
• Diffractions are generated by subsurface features whose dimensions
are smaller than the seismic wavelength. In the (x, z) plane, each discontinuity will
generate a circular diffracted wavefront. In the (x, t) plane this diffraction is
represented by a hyperbola with its apex at the diffractor point, and its equation
is: t² = t0² + 4x² / V²
• In theory, the hyperbola extends to infinity in time and distance; in practice
the hyperbola is truncated spatially so as to preserve 95% of the
migrated energy. This corresponds to a 30° takeoff angle from the apex of the
hyperbola.
• Referring to the figure in the next slide, the migration aperture is given by:

Ma = z * tanθ = 0.577 * z (if θ = 30 deg) ≈ 0.6 * (V * t0 / 2), where V is the
average velocity and t0 is the zero-offset time.
• In the case of a dipping event, Ma = z * tanα = (V * t0 / 2) * tanα, where α is the maximum
geological dip.
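The two aperture expressions can be evaluated as below. This is an illustrative sketch (names and example values are mine), using the 30-degree criterion and the dipping-event form:

```python
import math

def migration_aperture_flat(v_avg, t0, takeoff_deg=30.0):
    """Aperture from the 30-degree takeoff-angle criterion: Ma = z * tan(angle),
    with z = v_avg * t0 / 2."""
    z = v_avg * t0 / 2.0
    return z * math.tan(math.radians(takeoff_deg))

def migration_aperture_dipping(v_avg, t0, dip_deg):
    """Aperture for a dipping event: Ma = z * tan(alpha), alpha = maximum geological dip."""
    z = v_avg * t0 / 2.0
    return z * math.tan(math.radians(dip_deg))

# Illustrative values only: Vavg = 3000 m/s, t0 = 2 s, 45-degree maximum dip.
print(migration_aperture_flat(3000.0, 2.0))          # ~1732 m (0.577 * z)
print(migration_aperture_dipping(3000.0, 2.0, 45.0)) # 3000 m
```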
LAND PROCESSING
DAY - 4
BIN SIZE & FRESNEL ZONE
BIN SIZE:
• BIN-SIZE: It corresponds to the length and breadth
of the bin. The smallest bin dimension is half of the
source and receiver interval (Sy/2 * Sx/2).
• Bin size will affect the lateral resolution of the
survey and its frequency content.
• CMP-BIN: a square or rectangular area which
contains all midpoints that correspond to the same
CMP. Traces that fall in the same bin are stacked,
and their number corresponds to the fold of the
bin.
• Resolution is defined as the ability of a seismic
method to distinguish two events of the sub-
surface that are close to each other.
• Lateral resolution is related to Fresnel zone.
BIN SIZE & FRESNEL ZONE
• As per Sheriff, the diameter of the Fresnel zone is:

Fd = V * √(2 * t0 * τ)

where t0 is the two-way time and τ is the half period of the dominant frequency.
Since τ = 1 / (2 * F), this implies:
Fd = V * √(t0 / F) for pre-migrated data
Fd = λ / 4 = V / (4 * F) for post-migrated data

• The above formulas suggest that the resolution improves with increasing
frequency and deteriorates with increasing depth (t0) and velocity.
• As per the Fresnel zone criterion, the
Bin Size = V / (4 * F) ............(1)
BIN SIZE & SPATIAL SAMPLING
• Proper sampling is given by the Nyquist condition, which states that at least
two samples per period are required to reconstruct the signal. The sampling
interval is then:
Δt ≤ 1 / (2 * fmax), or Δx ≤ λmin / 2
• According to Gijs Vermeer, there is a maximum wavenumber corresponding
to the frequency fmax, such that the energy is nil for frequencies higher than fmax, and
there is a minimum velocity Vmin.

• Thus the spatial sampling for shot and receiver is:

Δxs = Δxr = Vmin / (2 * fmax) ..................(2)
• Spatial sampling in the mid-point domain is:
Δxm = Vmin / (4 * fmax) ..................(3)
BIN SIZE & SPATIAL SAMPLING
• For a dipping formation the spatial sampling for shot and receiver is:
Δxs = Δxr = Vmin / (2 * fmax * sinθ) ..................(4)
• For a dipping formation the spatial sampling in the mid-point domain is:
Δxm = Vmin / (4 * fmax * sinθ) ..................(5)
• The above formulas give the maximum frequency and wavenumber that can be recorded without
aliasing.
BIN SIZE & DIFFRACTION
DIFFRACTION
• Diffractions are indistinguishable from reflections on the basis of character alone.
• The amplitude of a diffraction is maximum at the point where the reflection is tangent
to it.
• The amplitude decreases rapidly as we go away from this point.
• Its moveout is almost double the moveout of the reflection.

• Consider the given figure. For a flat reflector at depth z below the source (t0 = 2z/V)
and a receiver at offset x:
t_r² = t0² + x² / V², where t_r is the two-way reflection time.
• For the point diffractor at the same depth, both legs travel the full offset:
t_d = t0/2 + √(t0²/4 + x²/V²), where t_d is the two-way diffraction time.
• Expanding both for small offsets shows that the diffraction moveout (≈ x²/(V²*t0)) is
about twice the reflection moveout (≈ x²/(2*V²*t0)).
Bin Size (Fresnel Zone Consideration)

• Resolution & Bin Size: Ideally the bin


size should be equal to the lateral
resolution after migration.

• Lateral resolution depends on the


radius/diameter of First Fresnel zone.

• Different formulas have been suggested
by different people; the most common is
from Rayleigh and is called the "Quarter
Wavelength" formula.

• According to this, Bin Size = λ / 4 = Vavg / (4 * F).
(Note: this formula is applicable only after
migration, in the CMP domain.)
Bin Size (Dip Consideration)

• (Bin Size) Spatial sampling for the source and receiver domain is Δx = Vmin / (2 * Fmax).
• So the bin size is b = Δx / 2 = Vmin / (4 * Fmax).
• For a dipping event (with dip θ) the above formula changes to: Δx = Vmin / (2 * Fmax * sinθ)

• (b = Vmin / (4 * Fmax * sinθ) in the mid-point domain)

• Note: If Vmin is very small or Fmax is very large, then Δx becomes very small
and difficult to implement. In such a scenario the data gets aliased, especially
the ground roll (low velocity) and the high-frequency noise.
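A short sketch of the sampling and bin-size relations above (the helper names and the Vmin/Fmax values are illustrative assumptions):

```python
import math

def shot_receiver_sampling(v_min, f_max, dip_deg=None):
    """Spatial sampling for the shot/receiver domain: Vmin / (2 * Fmax),
    or Vmin / (2 * Fmax * sin(dip)) for a dipping event."""
    if dip_deg is None:
        return v_min / (2.0 * f_max)
    return v_min / (2.0 * f_max * math.sin(math.radians(dip_deg)))

def bin_size(v_min, f_max, dip_deg=None):
    """Bin size (mid-point-domain sampling) is half of the shot/receiver sampling."""
    return shot_receiver_sampling(v_min, f_max, dip_deg) / 2.0

# Illustrative values only: Vmin = 1800 m/s (slowest signal of interest), Fmax = 60 Hz.
print(bin_size(1800.0, 60.0))          # 7.5 m using the no-dip form of the formula
print(bin_size(1800.0, 60.0, 30.0))    # 15.0 m when only dips up to 30 degrees must be imaged
```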
NOTATIONS or TERMINOLOGY

OFFSET:
• Offset: It is the distance between the source and the geophone (or the centre of
the geophone group). The concept of offset makes sense in the pre-stack world.
• In-Line Offset: It is the component of offset in the inline direction.
• Cross-Line Offset: It is the component of offset in the cross-line direction.
• Minimum Offset (Xmin): It is the largest minimum offset in the survey (the distance of
the nearest geophone of a receiver line from the shot).
NOTATIONS or TERMINOLOGY
OFFSET (Contd..):
• As a rule of thumb, Xmin should be less than 1 to 1.2 times the shallowest
depth of interest.
• Maximum Offset (Xmax): It is the distance between the actual source and the
farthest receiver of the template, or simply the largest recorded offset in
a survey. In an orthogonal survey it is the length of the diagonal of the
patch/template.

Xmax = √(Xin² + Xcr²)

where Xcr is the distance between the actual shot and the farthest receiver line in the cross-line
direction and Xin is the distance between the actual shot and the farthest receiver in the
inline direction.
NOTATIONS or TERMINOLOGY
OFFSET (Contd..):
• Rule of Thumb: Xmax is affected by the target depth and should be comparable to it.
In the case of very large offsets the primary is interfered with by direct waves.
• Head-wave interference starts at a distance Xh; choose Xmax < Xh,

where Vrms is the rms velocity to the target, Vh is the velocity of the head wave, tm is the mute time (~0.2 s),
and t0 is the TWTT to the target.
Long Offset
Long Offset:
• Through algebraic manipulation, the Dix hyperbolic NMO correction can be
reduced (for small spreads) to:
ΔT_NMO ≈ X² / (2 * Vnmo² * t0)

• Example: Vnmo = 3000 m/s, t0 = 3.0 s, ΔT_NMO = 200 ms, X = ?
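Using the small-spread form above and the ΔT = 200 ms value quoted in sample question 4, the long offset can be computed as follows (a sketch; the function names are my own):

```python
import math

def nmo_timeshift(x, v_nmo, t0):
    """Small-spread approximation of the hyperbolic NMO correction: dT = x^2 / (2 * Vnmo^2 * t0)."""
    return x * x / (2.0 * v_nmo ** 2 * t0)

def offset_for_timeshift(dt, v_nmo, t0):
    """Invert the approximation for the offset: x = sqrt(2 * Vnmo^2 * t0 * dT)."""
    return math.sqrt(2.0 * v_nmo ** 2 * t0 * dt)

x = offset_for_timeshift(dt=0.200, v_nmo=3000.0, t0=3.0)
print(round(x))   # ~3286 m
```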


STATIC CORRECTION FOR LAND SEISMIC
Introduction

Fig. A hypothetical non-zero offset seismic recording showing a single raypath.


What is Static Correction & Why ?
What is Static Correction:
• It is a constant time shift (independent of the event time on the trace)
applied to the seismic trace in order to bring both receiver and source onto a fixed
datum below the weathering layer.
• If derived and applied correctly, static corrections have a dramatic effect on the final quality of
the seismic section (Fig. 1).
• In a marine survey, both source and receiver are placed at the same datum level,
whereas on land they are placed at elevations that follow the topography of the area.
• Generally we require our time measurements to show the structure of the deeper
layers; however, measuring reflection times from the surface gives us an incorrect
picture. This is because:
• The Earth's surface is not flat.
• There is an irregular weathering layer (low velocity, varying both laterally and
vertically, unconsolidated).
• The effects of various factors of the LVL (low velocity layer) on the seismic section are
shown in Figs. 2 to 5:
What is Static Correction & Why ?

Fig: 1 Seismic section before and after static Correction


The Effect of Elevation & Weathering Thickness
Surface Elevation varies

Fig: 2

Weathering Thickness varies

Fig: 3
The Effect of Static Correction on Gathers
Effect on the Gather

Fig: 4
The Effect of Low Velocity Layer

Fig: 5
The Objective & Methods of Static Correction
The Objective:
The main objectives of Static Corrections are:
1. To place the source and receivers at the constant datum plane.
2. To ensure that the reflection events at the line crossing are matching or at the same
time.
3. To improve the quality of other processing steps.
4. To ensure the repeatability of seismic recording.

Methods of Static Correction


• There are several major approaches to the static computation:
– Field Static
– Refraction Static
– Elevation Static
– Tomo Static
– Residual static
Components of Reflection Time
• Considering ray theory, the observed reflection time is influenced by the
topography and near-surface effects as well as by the offset distance "X".
• The travel time "T" can be broken into 4 parts:
T = T0 + Ts + Tr + Tx, where
T = total travel time,
T0 = the seismic structural time,
Ts = shot "static" from datum to surface,
Tr = receiver "static" from datum to surface,
Tx = dynamic time shift ("dynamic" means
dependent on the record time) to correct for offset.
• From the figure it is clear that we need to derive the velocity-versus-depth model, from
which we calculate the different components of the statics and apply them to the data to bring it to
the datum level.
• Next, a dynamic correction (NMO correction) is applied to place the reflection
below the midpoint "M".
Methods of Static Corrections
There are mainly four static correction methods
• Field Static
• Elevation Static
• Refraction Static
• Tomo Static
• It is necessary to confirm the replacement velocity & final datum from the client.
Elevation Static
• Elevation statics are used for topography correction and generate vertical time shifts. The elevation
static correction is used only in areas where there is no weathered layer and no lateral
change in the low-velocity layers.
The datum static:
• The total static correction required to bring the data to the datum level is called the datum
static. It includes the correction for the weathering layer and the elevation correction for
moving the data from the base of these near-surface layers to a reference datum.
Field Static Correction
Field Static
• In principle, from a knowledge of the topography of our seismic line, the source and receiver
parameters, and the velocities and thicknesses of the near-surface layers, a complete statics
solution can be derived. The static computed from field parameters is referred to as the "field
static". This static information is stored in SPS files. After this correction the source &
receiver are supposed to be placed on a reference datum.
• Let's assume that we have derived the near-surface model and consider a simple two-layer
near-surface, consisting of a weathered layer of low-velocity unconsolidated material and a
sub-weathered layer of more competent lithology; also assume that our datum is in the sub-
weathering zone/layer.
• If we know the thickness of the weathering layer, the elevation of the shot and the geophone,
and the depth of the shot, we can compute the field static correction as illustrated in Fig. 3.
• There will be two components of the total static: a shot component and a receiver component.
• The biggest challenge in this method is finding the values of the weathering velocity and
thickness at each location.
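A minimal sketch of the two-layer field-static idea described above (not the exact field formula from the notes; the sign convention, variable names and numbers are assumptions, and the shot is assumed to sit within the weathered layer):

```python
def field_static(elev, weathering_thickness, v_weathering, v_subweathering,
                 datum_elev, shot_depth=0.0):
    """Time shift (s) that moves a source/receiver at elevation `elev` down to `datum_elev`.
    Assumed two-layer model: a weathered layer of the given thickness and velocity over
    sub-weathering material; a buried shot skips the weathered column above `shot_depth`.
    A negative value means time is removed from the trace."""
    # Travel time through the weathered layer below the shot (full layer for a receiver).
    t_weathered = max(weathering_thickness - shot_depth, 0.0) / v_weathering
    # Travel time through the sub-weathering column from the base of weathering to datum.
    base_of_weathering = elev - weathering_thickness
    t_sub = (base_of_weathering - datum_elev) / v_subweathering
    return -(t_weathered + t_sub)

# Illustrative values only (metres, m/s): receiver at 430 m elevation, 12 m of 700 m/s
# weathering, 2400 m/s sub-weathering, datum at 380 m.
print(field_static(elev=430.0, weathering_thickness=12.0,
                   v_weathering=700.0, v_subweathering=2400.0, datum_elev=380.0))
```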
Field Static Correction

Fig.3 Fig.4
Field Static Correction
Methods for Finding Thickness & Velocity
• The main technique used to find the velocity and thickness of the weathered layer is “well
velocity survey” or uphole shooting shown in Fig.4. Unfortunately, this is not done frequently
along the line.
• Another problem is that there are not enough shallow shots to determine the very near
surface velocities and thicknesses.
• Another, simpler method is to use the uphole time. The basic assumption is that the shot hole
is drilled just below the weathered layer, and the uphole time will therefore give us the
velocity of the weathered layer.
• Using the uphole time, the delay-method equations from Figure 3 can be further simplified &
rewritten so that the receiver static is simply the sum of the shot static at the receiver location
plus the uphole time.
The Effect of the Near Surface:
• Let us consider the effects of topography. Ideally, we would like to record the data on a
perfectly flat surface.
• Rugged topography, or irregular variations in elevation, can severely distort our data.
Effect of Topography
The Effect of the Near Surface:
• In the case of land data, the effect on the subsurface structure is "anticorrelated" with the
elevation profile of the surface. That is, highs on the surface are seen as lows on the
reflector, and vice versa. This is illustrated in Figure 5.
The Velocity & Thickness Variation:
• This is the least well known part of the problem
and has the greatest effect on the statics
solutions. In the processing of land data, several
simplifying assumptions are made.
• The first is that the top layer of the earth is made
of unconsolidated weathered material of variable
thickness and low velocity, called the weathered layer.
Fig. 5: Effect of surface topography on statics.
Refraction Static Correction
The Velocity & Thickness Variation :
• The thickness of this layer is often taken from uphole time measurements. This assumes that
the shot has been drilled slightly below the weathered layer. Significant variations in this
near-surface layer thickness can be caused by geological effects such as meandering rivers,
channels, variations in glacial till thickness, and variations in the water table.
Refraction Statics:
• The estimation of static corrections can be severely hampered by irregular topography and by
rapidly varying velocity and thickness changes of the weathering and sub-weathering layers.
• The refraction method, which analyses the first breaks to estimate the thicknesses and velocities of the
near-surface layers, is one of the best methods.
• The first breaks can be picked manually or automatically. First-break picking is an essential step
for computing statics through refraction. It is advisable to pre-condition the data without
affecting the character of the first breaks. Pre-conditioning includes applying a linear moveout,
limiting the trace length, limiting the offset, applying random noise attenuation, and some form of
band-pass filtering and amplitude balancing.
Refraction Static Correction
• A theoretical plot of first breaks from a surface shot is shown in Figure 6. If we consider the
geometry of the ray from shot S to receiver R in Figure 6, the total traveltime can be shown
to be:
T = (2 * h1 * cos ic) / V1 + X / V2, where ic = critical angle and sin ic = V1 / V2.
• The first term is the intercept time, which gives the depth
of the 1st layer, and the 2nd term contains the slope (1/V2), which gives
the velocity of the 2nd layer.
• Here we assume that the shot is fired at the surface,
but normally it is fired at a certain depth, so we add the
uphole time.
• Another assumption is that the velocity of the 2nd layer
is higher than that of the 1st layer.

Fig. 6: The recording of the refracted wave along a seismic spread.
REFRACTION INTERPRETATION PROCEDURE
• REFRACTION INTERPRETATION PROCEDURE: There are different methods of using the
refraction data for determining the weathering thickness and velocity. The approaches can
be classified as:
- Slope & Intercept Methods
- Delay Time Methods
- Reciprocal Methods (Time-depth Method)
- The Generalized Linear Inverse (GLI) Method
- The Time-Term Method.
• Slope & Intercept Method: This is the simplest method of interpreting first breaks. The first
step is to fit slopes to a set of picked arrival times, and thus find the seismic velocities. Each
slope is then extrapolated back to the shot location to find the intercept time, and hence the
depth to a particular layer. Figure 7 shows a hand interpretation of a set of picked first
arrivals, and the resulting geological model.
• Notice that the method has found only the very smoothly varying component of the near-
surface, and that smaller variations, such as those indicated at locations 141 and 161, have
not been accounted for correctly.
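A small sketch of the slope/intercept workflow on synthetic picks (NumPy; all names, the crossover choice and the model values are my own illustrations):

```python
import numpy as np

def slope_intercept(offsets, times, crossover):
    """Fit the direct arrival (x < crossover) and the refracted arrival (x >= crossover)
    with straight lines, then return V1, V2, the intercept time and the layer-1 depth."""
    x, t = np.asarray(offsets, float), np.asarray(times, float)
    near, far = x < crossover, x >= crossover
    s1, _ = np.polyfit(x[near], t[near], 1)        # direct wave: slope = 1/V1
    s2, ti = np.polyfit(x[far], t[far], 1)         # refraction: slope = 1/V2, intercept = ti
    v1, v2 = 1.0 / s1, 1.0 / s2
    depth = ti * v1 * v2 / (2.0 * np.sqrt(v2**2 - v1**2))   # h = ti*V1*V2 / (2*sqrt(V2^2 - V1^2))
    return v1, v2, ti, depth

# Synthetic picks for a 10 m thick, 700 m/s layer over 2400 m/s (illustrative only).
x = np.arange(10.0, 410.0, 10.0)
v1, v2, h = 700.0, 2400.0, 10.0
ti_true = 2.0 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)
t = np.minimum(x / v1, ti_true + x / v2)           # first breaks: earliest of direct/refracted
print(slope_intercept(x, t, crossover=30.0))       # recovers ~(700, 2400, 0.027, 10.0)
```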
Refraction Static Correction

Fig.7
Refraction Static Correction (THE DELAY TIME METHOD)
THE DELAY TIME METHOD:
• The delay time (Barry, 1967) is defined as the time between the shot or receiver and the
refractor minus the time for the normal projection of the raypath on the refractor. Thus, the
total travel path has two delay times associated with it, one for the shot and the other for the
receiver, which are given by the general form:
δ = (h * cos ic) / V1 ..........(3) where δ = delay time below the shot or receiver
h = depth below the shot or receiver
• These definitions can be seen in Fig. 8; once we know
the delay time, the depth can be calculated using
formula (3).
• The delay time is found by averaging the intercept times
from the forward and reverse profiles and partitioning the
intercept time into shot and receiver components.

Fig.8. The basic principle behind the delay time method. Dip is assumed to be negligible in the
vicinity of shot & receiver.
Refraction Static Correction (THE RECIPROCAL METHOD)
THE RECIPROCAL METHOD (Hawkins method):
• The theory is based on the "time-depth term" and is very similar to the "delay time" method, except
that the surface and refractor are no longer assumed to be horizontal, so the depth terms at the
shot and receiver are not vertical but perpendicular to the refractor.
• Differences in travel time over similar raypaths are used to estimate the time-depth term and
hence the intercept time. This is best understood in Fig. 9. Again the time-depth term can be
defined as:
tD = (h * cos ic) / V1 ..........(4)
To find the time-depth for a particular geophone G, we simply add the travel times from the two
sources on either side, then subtract the total time from shot-point to shot-point (this is defined
as the reciprocal time), and halve the result. In symbols, we can define the time-depth as:
tD(G) = [ t(S1,G) + t(S2,G) - t(S1,S2) ] / 2 ..........(5)
• We can then transform the time-depth into the depth
to the refractor by using equation 4.

Fig.9
Refraction Static Correction (THE RECIPROCAL METHOD)
THE GENERALIZED RECIPROCAL METHOD OF PALMER:
• It is probably the most commonly used derivative method. The only difference from the
earlier method is that the forward and reverse rays emerge from nearly the same point on the
refractor, so that only a very small portion of the refractor needs to be planar.

Fig.10
The Generalized Linear Inversion Method ( GLI )
THE GLI METHOD:
• All the methods discussed so far have made some assumptions about the near-surface model.
They assume that the near surface consists of some layers whose thickness and velocity
may vary laterally and vertically. The first arrival times depend on these thicknesses and velocities.
• Hampson & Russell (1984) devised an automatic method which
develops the near-surface model iteratively based on the information
provided by the first breaks.
• This is called Generalized Linear Inversion, or GLI, which is illustrated
in Fig. 10.
• The user inputs an initial guess of the near-surface model in
terms of the number of layers expected and approximate velocities &
thicknesses.
• The program then calculates the first arrivals using ray-tracing,
compares them with the actual first arrivals, and improves the model to minimize
the difference in an iterative manner.

Fig.10
Refraction Static Correction (THE GLI METHOD)
• The application of this method is shown in
Fig. 11, which is a brute stack with elevation and
uphole corrections; Fig. 12 shows the same
stack with static corrections from GLI.
• After this static correction, both long- and short-wavelength
statics are taken care of.

Fig.11

Fig.12
Refraction Static Correction (The Time Term Method or Least
Square Method)
• In all previous techniques, the statics are derived by assuming some model of the
sub-surface. The "time-term" method doesn't require any model to be generated.
It derives the statics from statistical analysis of the 1st breaks. Such methods are
called the Time-Term or Least-Squares method.
• Let's consider the basic travel time equation:
Tij = Si + Rj + Xij / V ..........(6) where
Tij = total travel time from shot i to receiver j
Si = shot static at shot i
Rj = receiver static at receiver j
Xij = offset from receiver to shot
V = velocity of the 2nd layer, which acts as the refractor.
The above equation can be further simplified by applying the LMO correction (removing
Xij/V from the 1st break), after which only two terms remain.
• The problem can be set up as a series of linear equations and solved by "Gauss-
Seidel" iteration.

SUMMARY
• Refraction analysis of first breaks for determining the near-surface velocity model, from which
static corrections can be derived, has progressed a long way from the original slope/intercept
method to automated statistical analysis.
• Most methods produce almost identical results, but the speed of analysis has gone up
dramatically.
STATIC CORRECTION FOR LAND SEISMIC
LECTURE - 6
Automatic Residual Statics

• Despite our best efforts, the field statics and refraction statics fail to give a complete
solution to the static problem.
• There are a number of possible reasons for this, a few of them are:
1. The thickness and velocities of the near surface are mostly different from our
assumptions, due to the complexity of the earth.
2. The velocity can vary both laterally and vertically due to changing lithology.
3. The thickness of the weathered layer may vary rapidly due to river deposition
or glaciation.
4. The water table may have an effect on the velocity distribution.
5. Deviation of the raypath from vertical.
• The concept of residual statics was developed in the late sixties or early seventies.
This technique is based on "reflection correlation" after NMO.
• The steps followed in this technique are:
1. All the unstacked traces are corrected for field statics, followed by NMO
correction.
2. A pilot trace is generated for each CDP by stacking a few traces or all traces of
the CDP.
Automatic Residual Statics

3. All traces of each CDP are cross-correlated with its pilot to derive a residual
static for each trace.
4. The derived residual static of each trace is applied as the residual static.
Example:
• The three traces in Fig. 13 look identical, except for positive and negative time shifts of
10 ms.
• If we stack the three traces without any correction, then the result gets smeared
in time.
• Now trace A is cross-correlated with B & C respectively. The cross-correlation looks like an
auto-correlation except that the zero-lag value has been shifted by -10 ms in the 1st
case and +10 ms in the 2nd case.
• The traces are corrected for these shifts and then stacked, which gives the best result.
• So, the residual static is used for our final static.

Please note that residual statics will not resolve major lateral static problems.
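The cross-correlation step can be sketched as below (illustrative only; a Gaussian wavelet stands in for a real pilot trace, and the helper name is my own):

```python
import numpy as np

def residual_static_shift(trace, pilot, max_lag, dt):
    """Lag (s) by which `trace` arrives later than `pilot`, found from the peak of
    their cross-correlation within +/- max_lag samples. The residual static to apply
    to the trace is the negative of this value."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.dot(np.roll(trace, -lag), pilot) for lag in lags]   # correlation at each trial lag
    return lags[int(np.argmax(cc))] * dt

# Synthetic demo: a pilot wavelet and the same wavelet delayed by 10 ms.
dt = 0.002                                   # 2 ms sampling
t = np.arange(0, 0.5, dt)
pilot = np.exp(-((t - 0.25) / 0.02) ** 2)    # simple Gaussian "reflection"
trace = np.roll(pilot, 5)                    # +10 ms shift (5 samples)
print(residual_static_shift(trace, pilot, max_lag=15, dt=dt))
# 0.01 s: the trace is 10 ms late, so the residual static to apply is -10 ms
```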
Automatic Residual Statics

Fig. 13: A three-trace stack where the traces are out of alignment.
Fig. 14: The cross-correlation of traces A & B, and A & C, and the resulting stack after the
indicated alignment shifts have been applied.
Automatic Residual Statics

Fig.15: A set of 600% CDP gathers before and after the application of correlation
static.
Linear Surface Consistent Residual Static Method.

• It is often observed that the static pattern moves with the same geophone group spacing
as the shot moves. This is called a surface-consistent pattern.
• As this static pattern moves over different CDP gathers, it can't be solved by
cross-correlation of traces within the CDP gathers themselves.
• So, we must derive the shot and receiver statics independently.
• Consider Figure 16. The total static correction can be divided into 4 components:
1. Shot static
2. Receiver static
3. RNMO component
4. Structural component.
• Mathematically it can be written as:
Tij = Si + Rj + Gk + Mk * X²ij ..........(7) where
Gk = structural component at the Kth CDP, Mk = time-averaged RNMO at the Kth CDP,
Si = source static at the Ith location, Rj = receiver static at the Jth location.
• Si & Rj are called the surface components.
• Gk & Mk are called the sub-surface components.
Linear Surface Consistent Residual Static Method.

Fig.16: A simplified cross-section of the earth showing surface and sub-surface


consistency.
Linear Surface Consistent Residual Static Method.

• Equation (7) is referred to as a linear equation.

• These equations constitute a number of observations from which the parameters
S, R, G, M must be solved. Mathematically, however, in geophysical problems we
usually have situations in which the number of equations does not match the number
of unknowns.
• So despite having a large number of observations, there is no unique solution. But by
using an averaging technique, we can converge to a reasonable solution. The steps
followed are given below (a sketch of this decomposition follows the list):
1. We sum over the common shot positions to get the estimate of the shot static.
2. Sum over common receiver positions to get the estimate of the receiver static.
3. Next sum over CDP traces to get the structural component.
4. To get the RNMO values, sum over the CDP positions after weighting them with a
factor equal to the offset squared.
5. We then iterate through this and measure the error to arrive at the optimum solution.
• This gives the overall best result, but the individual statics may not be the best.
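A compact sketch of steps 1 to 5, assuming the picks and the shot/receiver/CDP indices are already available as NumPy arrays (the function and variable names are my own):

```python
import numpy as np

def surface_consistent_decomposition(t, shot, rcvr, cdp, offset, n_iter=10):
    """Iteratively decompose residual time picks t (one per trace) into shot, receiver,
    structure and RNMO terms, t = S[shot] + R[rcvr] + G[cdp] + M[cdp]*offset**2,
    by the averaging steps 1-5 above (a Gauss-Seidel style sweep)."""
    S = np.zeros(shot.max() + 1)
    R = np.zeros(rcvr.max() + 1)
    G = np.zeros(cdp.max() + 1)
    M = np.zeros(cdp.max() + 1)
    for _ in range(n_iter):
        resid = t - R[rcvr] - G[cdp] - M[cdp] * offset**2
        S = np.bincount(shot, resid) / np.bincount(shot)      # step 1: mean over common shots
        resid = t - S[shot] - G[cdp] - M[cdp] * offset**2
        R = np.bincount(rcvr, resid) / np.bincount(rcvr)      # step 2: mean over common receivers
        resid = t - S[shot] - R[rcvr] - M[cdp] * offset**2
        G = np.bincount(cdp, resid) / np.bincount(cdp)        # step 3: mean over CDP traces
        resid = t - S[shot] - R[rcvr] - G[cdp]
        num = np.bincount(cdp, resid * offset**2)             # step 4: offset^2-weighted RNMO fit
        den = np.bincount(cdp, offset**4)
        M = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return S, R, G, M

# Tiny synthetic example: 2 shots, 2 receivers, one CDP, zero-offset picks.
t = np.array([0.012, 0.020, 0.004, 0.012])
shot = np.array([0, 0, 1, 1])
rcvr = np.array([0, 1, 0, 1])
cdp = np.array([0, 0, 0, 0])
offset = np.zeros(4)
print(surface_consistent_decomposition(t, shot, rcvr, cdp, offset, n_iter=20))
```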
Sample Questions

1. Formula for Fresnel zone diameter before migration and after migration.
2. Formula for displacement in horizontal, vertical displacement after migration?
3. With suitable diagram, prove that the moveout for the diffraction at the edge of
the fault is double ( 2 Times) of the Normal Moveout.
4. Using the Dix hyperbolic NMO correction, calculate the value of the far (long)
offset X: if Vnmo = 3000 m/s, T0 = 3.0 s & ΔT_NMO = 200 ms, X = ?
5. What is static correction? What is the objective of static correction for land data
6. Name the methods for determining static correction and describe any two
methods briefly (Short notes).
7. Describe with diagram, what is “Delay Time” & “Time Depth” term & Reciprocal
time in static correction and how to find the thickness and velocities of the
weathering layer below the shot and receiver.
8. What is residual static? How to determine residual static using cross-
correlation with diagram.
9. What is surface consistent static, write the formula for total static value as per
surface consistent convention & explain the meaning of each term
MIGRATION

• As per Sheriff, migration is an inversion process in which reflections and diffractions
are placed at their true positions. It increases the spatial resolution, collapses
diffractions and focuses the energy.
MIGRATION PRINCIPLE & DISPLACEMENT

FIG: 1
MIGRATION DISPLACEMENT

Consequences: FIG: 2
1. The dip angle of the reflector in geologic section is greater than in the time
section. Thus migration steepens the reflectors.
2. The length of the reflector as seen in the geologic section is shorter than in time
section. Thus migration shortens the reflector.
3. Migration moves the reflector in up dip direction.
4. Consequently, it reduces the size of the anticline, increases the size of the syncline &
unties the bow-ties.
MIGRATION

• Migration can be classified (on the basis of data type) into:

• Post-Stack Migration
• Pre-Stack Migration
• Based on the algorithm, migration is classified as:
• Time migration (uses RMS velocity, which doesn’t account for ray bending at
the interface)
• Depth migration (Uses interval velocity which account for the ray bending at
the interface.)
MIGRATION DISPLACEMENT

• The horizontal and vertical displacements and the dip (dx, dt & tanθm) as seen on the migrated
time section can be expressed in terms of the medium velocity v, travel time t and the
apparent dip tanθ as measured on the un-migrated time section:
• dx = (v² * t * tanθ) / 4
• dt = t * [1 - √(1 - (v² * tan²θ) / 4)]

• tanθm = tanθ / √(1 - (v² * tan²θ) / 4)
• Post stack migration depends on stack quality and accuracy of the velocity
field, however the fidelity of migration depends on migration aperture and
spatial sampling.
MIGRATION APERTURE
MIGRATION APERTURE
• It is defined as the fringe or the extra length/area that must be added
around the subsurface target in order to correctly migrate the dipping
events and correctly focus the diffracted energy located at the edge of the
target area.
• The migration of dipping events has three effects:
1. Increasing the reflector dip
2. Shortening the reflector
3. Moving the reflector in the up dip direction.
• The formulas for the horizontal & vertical displacement, along with the angle
of the reflector after migration, are given below:
Dh = (V² * t * tanθ) / 4 (θ is the dip angle measured on the time section, V is the
medium velocity, t is the unmigrated TWT of the reflector)
Dv = t * [1 - √(1 - (V² * tan²θ) / 4)]
tanθ' = tanθ / √(1 - (V² * tan²θ) / 4)

(where θ' is the angle of the reflector after
migration)
EXPLODING REFLECTOR
EXPLODING REFLECTOR:
• Stack section can be considered as the data recorded with coincident source and
receiver (zero-offset).
• In exploding reflector model we assume that exploding sources are located along
the reflecting interface, and each CMP location on the surface has one receiver.
• Sources explode in unison and send out waves that propagate upward and get
recorded.
• The section obtained from the exploding
reflector and that from zero-offset recording are
mostly equivalent, except that the
zero-offset section records two-way time while the
exploding reflector records one-way
time.
• For that reason we assume that the
velocity in the exploding reflector case is V/2.
FIG: 3
EXPLODING REFLECTOR
EXPLODING REFLECTOR:
• Another difference is that the exploding reflector model has only up-going wavefields,
whereas the zero-offset section has both up-going and down-going waves.
PHYSICAL PRINCIPLE OF MIGRATION (HARBOR EXAMPLE)

FIG: 4

• A storm barrier at a distance z3 from the beach has a small gap in it.
• Physicists call this gap a "point aperture", which is similar to a point source
(both generate circular wave fronts).
• When a plane wave front parallel to the storm barrier hits the gap, the gap acts as
a secondary source and generates a semicircular wave front.
• Lay out receivers along the beach to record the approaching wave.
• Observation: a Huygens secondary source responds to a plane wave front by
generating a circular wave front.
Physical Principle of Migration (Harbor Example)

FIG: 5
• The reflectors in the subsurface can be visualized as being made up of many
points that act as Huygens secondary sources.
Physical Principle of Migration (Harbor Example)

• The differences between a "point source" and a "point aperture source" are as
follows:
o The point source gives an equal amplitude response for all angles; it is angle
independent.
o The amplitude response from a "point aperture source" is angle dependent,
which is described by the "obliquity factor". This effect is compensated by scaling
the amplitude by the cosine of the angle between AD & BC, before it is placed at
the output location.
• Wave energy decays as (1/r²), where r is the distance from the source to the
wavefront, so the amplitude must be adjusted by a factor of (1/r) before
summation.
• A Huygens secondary source responds as a wavelet with a distinct phase and frequency
character, so the received signal must be corrected for amplitude and phase.
CORRECTION BEFORE DIFFRACTION SUMMATION
• The following three corrections must be considered before diffraction summation:
o The obliquity factor: the amplitude depends on the cosine of the
angle between the direction of propagation and the vertical axis z.
o The spherical spreading factor: proportional to 1/√(v·r) for 2-D wave
propagation and to 1/(v·r) for 3-D wave propagation.
o The wavelet shaping factor:
- For 2-D, it is designed with a 45-degree constant phase spectrum and an
amplitude spectrum proportional to the square root of the frequency.
- For 3-D, it is designed with a 90-degree constant phase spectrum and an
amplitude spectrum proportional to the frequency.
MIGRATION ALGORITHM
• The main requirements of the algorithms are to handle:
• steep dips,
• lateral and vertical velocity variation,
• and to be easy to implement.
• Migration algorithms can be classified under three main categories:
• based on the integral solution to the scalar wave equation,
• based on the finite-difference solution,
• frequency-wavenumber implementations.
• Chronological development:
• The semi-circle superposition method was used before the advent of the computer
age.
• The diffraction summation technique was second on the development ladder. The
curvature of the diffraction hyperbola depends on the medium velocity.
• Kirchhoff summation is the same as diffraction summation but it makes
corrections for the amplitude and the phase change before summation.
MIGRATION ALGORITHM
• The migration methods which operate in:
a. the Space-Time domain are "Kirchhoff Migration" and "RTM";
b. the Space & Frequency domain are "Explicit Finite Difference" and some "Implicit
Finite Difference" schemes;
c. the Wavenumber & Frequency domain (with constant velocity, along with extensions
that can handle lateral velocity variation) are Stolt Migration &
Phase Shift Migration, which operate on the stack section.
MIGRATION

• A zero-offset section in which a single trace contains a single blip of energy
will be migrated to a semicircle.
• A zero-offset section consisting of a single diffraction hyperbola,
t² = τ² + 4x²/v², will be migrated to a point.


FIG: 6
DIFFRACTION SUMMATION
• The Huygens source signature is a semicircle in the x-z plane and a hyperbola in the x-t
plane. This gives two practical methods for migration:
• migration based on the superposition of semicircles,
• summation of amplitudes along the hyperbolic paths.
• In the above slide the zero-offset section contains a single blip of energy on a
single trace, so its migration can be called the "migration impulse response".
• So, in the first method, an amplitude in the x-t plane of the un-migrated zero-offset
section is mapped onto a semicircle in the output x-z plane. The migrated section
is formed by superimposing many semicircles.
• The 2nd method of migration results from the observation that a zero-offset section
consisting of a single diffraction hyperbola migrates to a single point.
• So in the 2nd migration scheme, the amplitude is summed along the hyperbola of the
un-migrated section in the x-t plane and then placed at the apex in the x-τ plane
(migrated).
• The 1st method was used before the age of digital computers.
• The 2nd method, known as the "Diffraction Summation Method", was the 1st computer
implementation of migration.
DIFFRACTION SUMMATION
• The curvature of the hyperbolic trajectory for amplitude summation is governed by the
velocity function.
• The velocity function used to compute the traveltime trajectory is the rms velocity at
the apex of the hyperbola at time τ. The travel time "t" is given by t² = τ² + 4x²/v²rms.
• Having computed the input time "t", the amplitude at the input location "B" is placed
at the apex location "A".
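A toy sketch of this diffraction-summation scheme for a constant-velocity zero-offset section, without the obliquity, spreading and wavelet corrections discussed later (array layout and names are my own):

```python
import numpy as np

def diffraction_summation(section, dx, dt, v):
    """Toy constant-velocity diffraction-summation migration of a zero-offset section.
    section[ix, it]: amplitude at trace ix, time sample it. For every output point
    (apex position x0, apex time tau) the input amplitudes are summed along
    t = sqrt(tau^2 + 4*(x - x0)^2 / v^2) and the sum is placed at the apex."""
    n_x, n_t = section.shape
    migrated = np.zeros_like(section)
    x = np.arange(n_x) * dx
    for ix0 in range(n_x):                          # output trace (apex position)
        for it0 in range(n_t):                      # output time sample (apex time tau)
            tau = it0 * dt
            t = np.sqrt(tau**2 + 4.0 * (x - x[ix0])**2 / v**2)
            it = np.rint(t / dt).astype(int)        # nearest input time sample per trace
            ok = it < n_t                           # drop samples beyond the record length
            migrated[ix0, it0] = section[np.arange(n_x)[ok], it[ok]].sum()
    return migrated

# Minimal demo: a single diffraction hyperbola collapses back towards its apex.
v, dx, dt = 2000.0, 12.5, 0.004
sec = np.zeros((81, 201))
x0, tau0 = 40, 100                                  # apex at trace 40, 0.4 s
t_hyp = np.sqrt((tau0 * dt)**2 + 4.0 * ((np.arange(81) - x0) * dx)**2 / v**2)
sec[np.arange(81), np.minimum(np.rint(t_hyp / dt).astype(int), 200)] = 1.0
mig = diffraction_summation(sec, dx, dt, v)
print(np.unravel_index(np.argmax(mig), mig.shape))  # expect close to (40, 100)
```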
CONSTANT VELOCITY MIGRATION (SEMI CIRCLE SUPERIMPOSITION)

FIG: 7
CONSTANT VELOCITY MIGRATION (SEMI CIRCLE SUPERIMPOSITION)

FIG: 8
Constant Velocity Migration

FIG: 9
KIRCHHOFF’S SUMMATION/MIGRATION
• The diffraction summation that incorporates the obliquity correction, spherical
spreading correction and wavelet shaping is called "Kirchhoff summation". The
migration based on this summation is called Kirchhoff migration.
• Whatever was described from a physical point of view (obliquity factor, spreading, phase
and amplitude) can be described by the integral solution to the scalar wave
equation:
∂²P/∂x² + ∂²P/∂z² - (1/v²) ∂²P/∂t² = 0
where x is the horizontal spatial axis, z is the vertical axis (positive downward)
and t is the time. P(x, z, t) is the compressional wave field in a medium with
constant density and velocity v(x, z).
• The integral solution of the scalar wave equation yields three terms:
• a far-field term which is proportional to (1/r),
• two other terms which are proportional to (1/r²).
• Hence it is the far-field term which makes most of the contribution to the
summation.
KIRCHHOFF’S SUMMATION/MIGRATION
• The output image at a subsurface location is computed using only the far-field term
of the 2-D zero-offset wavefield recorded at the surface, by a summation of the form:
P_out(x0, z) ∝ Σ over the aperture of [cosθ / √(v_rms · r)] · [ρ(t) * P_in(x, z = 0, t)] ..........(1)
where v_rms is the rms velocity at the output point and r is the distance between the
input point (x, z = 0) and the output point (x0, z). The asterisk denotes the convolution of the "rho" filter with the input
wavefield P_in.
• The rho filter corresponds to the time derivative of the measured wavefield, which
yields the 90-degree phase shift and the adjustment of the amplitude spectrum by a
ramp function of frequency. This rho filter is independent of the spatial variables.
• For 2-D migration, the half derivative of the wavefield is used, which is equivalent
to a 45-degree phase shift and an amplitude adjustment by the square root of frequency.
• The far-field term is proportional to the cosine of the angle of propagation and
inversely proportional to √(v_rms · r) (the spherical spreading term) in 2-D and to v_rms · r in 3-D.
KIRCHHOFF’S SUMMATION/MIGRATION
• From equation (1), the output image at (x0, z) is computed using the input wavefield at (x, z = 0, t).
• The range of summation is called the "migration aperture".
KIRCHHOFF MIGRATION IN PRACTICE:
• The important parameters used in this migration are:
• the migration aperture (in general the velocity increases with time, so the aperture
also increases with time, even for the same dip);
• the maximum dip of the formations to migrate.
NOTE:
• The curvature of the diffraction hyperbola is governed by the velocity function; see
the example in the next slide (Fig. 10). The low-velocity hyperbola has a narrower
aperture compared to the high-velocity one.
KIRCHHOFF’S SUMMATION/MIGRATION

FIG: 10
Shape of the Hyperbola with change in velocity (a) Low velocity (2000m/s), (b) High Velocity (4000m/s)
(C) Vertically varying velocity, Migration aperture is small for low velocity and large for high velocities.
KIRCHHOFF’S SUMMATION/MIGRATION

Test of aperture width: (a) zero-offset data, (b) desired output, (c) aperture = 35 traces,
(d) aperture = 70 traces, (e) 150 traces, (f) 256-trace half-aperture width.

FIG: 11
KIRCHHOFF’S SUMMATION/MIGRATION
• The migration aperture is related to the horizontal displacement dx and is given by:
• Migration aperture (in number of traces) = 2*Nx + 1, where Nx = dx / d_CMP and d_CMP is the CMP
interval.
EFFECTS OF APERTURE ON MIGRATION:
• Examine Fig. 12 (next slide). A small aperture eliminates the steeply dipping
events (a sort of dip filtering); increasing the aperture allows proper migration of
dipping events. The optimal value of the half-aperture width is 150 traces; increasing
the aperture to 256 does not improve the section any further.
• A good way to calculate the migration aperture is to generate the diffraction hyperbola
using the regionally averaged, vertically varying velocity and calculate the aperture
using the 30-degree criterion.

Problem:
Dip = 45 deg; V = 3500 m/s; CMP interval = 25 m; so dx = ? and aperture = ?
KIRCHHOFF’S SUMMATION/MIGRATION

Test of Aperture width in Kirchhoff's migration (a) zero offset


section containing dipping events with 3500 m/s velocity, (b)
desired migration using phase shift, (c) migration using 35-
trace (d) 70 trace, (e) 150-trace (f) 256 trace half aperture width.

FIG: 12
KIRCHHOFF’S SUMMATION/MIGRATION
EFFECTS OF APERTURE ON MIGRATION:
• A test of aperture on a stack section is shown in the next slide. A small aperture
causes smearing in the deeper part, which destroys the dipping events and produces
spurious, horizontally dominant events.
• Why spurious horizontal events, especially in the deeper part? The reason is that with a
small aperture (or near the end of the data) only a small portion of the diffraction
hyperbola around its apex is used in the summation, which is mostly flat, producing
spurious horizontal events.
CONCLUSION
1. An excessively small aperture width causes destruction of steeply dipping events and
of rapidly varying amplitude changes.
2. An excessively small aperture organises random noise, especially in the deeper part
of the section, into horizontally dominant spurious events.
3. An excessively large aperture means more computer time and degrades migration
quality where the S/N ratio is poor. It will also cause random noise to creep into good
shallow data.
KIRCHHOFF’S SUMMATION/MIGRATION
MAXIMUM DIP TO MIGRATE:
• It is recommended to specify the maximum dip to migrate, both to eliminate
steeply dipping noise and to save cost. The aperture width is
related to the dips present in the section to be migrated.
• The diffraction hyperbolas along which the summation is done are truncated beyond the
specified maximum dip limit.
VELOCITY ERRORS:
• Using velocities increasingly lower than the optimum causes less and less collapse of the
diffraction hyperbola, giving rise to the shape of a frown: the section is undermigrated (Fig. 13).
• With increasingly higher velocities, the diffraction hyperbola is inverted more and
more, taking the shape of a smile: the section is overmigrated (Fig. 14).
KIRCHHOFF’S SUMMATION/MIGRATION

(a) Zero-offset section with


dipping events with 3500 m/s
velocity.(b) Desired migration,
(c) migration with medium
velocity 3500 m/s (d) migration
with 5% lower velocity, (e) 10%
lower, (f) 20% lower.

(a) Zero-offset section with


dipping events with 3500 m/s
velocity.(b) Desired migration,
(c) migration with medium
velocity 3500 m/s (d) migration
with 5% higher velocity, (e) 10%
higher, (f) 20% higher.

FIG: 13 FIG: 14
KIRCHHOFF’S SUMMATION/MIGRATION

(a) Zero-offset section with


dipping events with 3500 m/s
velocity.(b) Desired migration,
(c) migration with medium
velocity 3500 m/s (d) migration
with 5% lower velocity, (e) 10%
lower, (f) 20% lower.

(a) Zero-offset section with


dipping events with 3500 m/s
velocity.(b) Desired migration,
(c) migration with medium
velocity 3500 m/s (d) migration
with 5% higher velocity, (e) 10%
higher, (f) 20% higher.

FIG: 15 FIG: 16
Finite-Difference Migration
• Again consider the famous harbor experiment. Instead of taking the section
recorded along the beach (1250 m) as a diffraction hyperbola and collapsing it to get the
migrated section, consider an alternative method. The recording cable is moved
250 m from the beach towards the barrier, and recording starts at the instant the
plane wave hits the barrier, as shown in Fig. 17(b).
• Move the cable 500 m from the beach and record the section as shown in (c).
• Note that each recording yields a hyperbola whose apex moves closer to
zero time.
• Here, the hyperbola recorded at one distance from the beach is used to construct
the hyperbola that would be recorded at another distance closer to the source.
• This process stops when we reach the source and the hyperbola collapses to
its apex. In our case this happens when the recording is done at the barrier itself.
This is also called the imaging principle.
Finite-Difference Migration

Fig. 17.Moving the receiver cable from beach into the water at discrete intervals parallel to
beach line.
FIG: 17

Fig. 18: Computer simulation of the above experiment; here we continue the receivers downward at
discrete depth intervals.

FIG: 18
Finite-Difference Migration
DOWNWARD CONTINUATION
• This harbor experiment can be simulated in the computer. Start with the wavefield
recorded at the surface and derive the response at any other level, as if the
receivers had been moved down to that depth level, at finite intervals.
• This process is called downward continuation, where the upcoming wavefield
recorded at the earth's surface is used to compute the wavefield that would have
been recorded at any lower level.
• The computer-simulated wavefields at these different depths are shown in Fig. 18.
• There is one important difference between the physical experiment in Fig. 17 and the
computer-simulated downward continuation experiment in Fig. 18.
• In the harbor experiment the receiver cable is the same length at each step (Fig. 17),
whereas the effective cable length gets shorter and shorter (Fig. 18) towards the
source.
• This is because in the 2nd case the recording is confined between the two raypaths
depicted on the section (Fig. 18a).
Finite-Difference Migration
DOWNWARD CONTINUATION
• Downward continuation to a depth shallower than the actual depth gives undermigration; using a velocity lower than the actual velocity likewise gives undermigration.
• Conversely, downward continuation to a depth greater than the actual depth gives overmigration; using a velocity higher than the actual velocity likewise gives overmigration.
• Another important consideration is the depth step size.
DIFFERENCING SCHEME
• Finite-difference migration algorithms are based on differential solutions to the scalar wave equation, which are used to downward continue the input wavefield recorded at the surface.
Bow Tie Effect
FIG: 12
RADON TRANSFORM
Noise Attenuation through Radon Transform
• Attenuation of unwanted events such as noise and multiples poses a key problem in seismic data processing. Attenuating noise increases the resolution of the data and helps in picking more subtle anomalies.
• Noise is anything on the seismic data which does not fit our conceptual model of the data. Fig. 1 shows different types of noise. They can be broadly classified as:
• Incoherent or random noise: noise which has no discernible pattern from trace to trace.
• Coherent noise: noise which shows regularity from trace to trace, e.g. reverberatory trapped-mode energy (multiples and ground roll).
NOISE ATTENUATION STRATEGY:
• The noise attenuation strategy can be broadly subdivided into three approaches:
• Stacking: recognized as the most effective way to deal with random noise and long-period multiples at far offsets, and less effective for short-period multiples and ground roll. Most stacking is done by taking the mean of the sample values over the NMO-corrected gather; however, stacking using the median sample value can be effective in eliminating large-amplitude spikes (a minimal sketch of mean versus median stacking follows this list).
• Muting: here the offending part of the data is zeroed out before stacking. It is effective for shallow reverberation and very long offset multiple interference, but it cannot be used where primary and noise overlap (remember the saying about "throwing out the baby with the bath water").
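The following is a minimal sketch (not from the source) of the mean versus median stacking idea described above; the gather size, sample values and spike location are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_traces, n_samples = 24, 500
    gather = rng.normal(0.0, 0.2, (n_traces, n_samples))     # random noise background
    gather[:, 200] += 1.0                                    # a flattened (NMO-corrected) reflection
    gather[5, 300] = 25.0                                    # a large-amplitude spike on one trace

    mean_stack = gather.mean(axis=0)                         # conventional mean stack: suppresses random noise
    median_stack = np.median(gather, axis=0)                 # median stack: rejects the isolated spike

    print(mean_stack[300], median_stack[300])                # the spike leaks into the mean stack, not the median

The mean stack attenuates the random background, while the median stack also removes the isolated large-amplitude spike, as the bullet above notes.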
Noise Example
Fig. 1: (a) Example of a noisy shot profile; (b) the shot profile from (a) with the noise trains annotated.
Noise Attenuation through Radon Transform
Fig. 2: Strategies for noise attenuation (flowchart): Stacking (mean, median); Muting; Filtering, subdivided into single channel (band-pass filter, predictive decon) and multi channel (weighted trace mix, coherency filter, F-K filter, Tau-P, generalized Radon).
Noise Attenuation through Radon Transform
• Filtering: it can further be subdivided as:
• Single-channel filtering: e.g. the band-pass filter, which assumes that the noise separates cleanly from the signal in the frequency domain (it effectively removes low-frequency noise such as ground roll). Predictive deconvolution is another single-channel operation, where predictable multiple patterns are identified and removed from the seismic data. A basic problem with this technique is that it requires detailed knowledge of the multiple-generating mechanism.
• Multichannel filtering: the most effective filtering technique. Here a group of traces is used in the estimation and removal of noise, e.g. the weighted mix, where the output trace is a weighted mix of a number of input traces; the output appears less noisy because the noise has been smeared over a larger area. The f-k filter is a common 2-D filter. It is based on the fact that in the 2-D frequency domain certain noise patterns separate very effectively from signal. It is also called a velocity or pie-slice filter because it can effectively remove noise patterns within a "slice" of linear dips. A disadvantage of the f-k filter is that if it is applied too severely, the data look smeared. More recent 2-D filtering techniques are the tau-p transform and the generalized "Radon transform".
Conceptual Review of Radon Transform:
• We know that "Seismic data = Model + Noise", so if we could find a domain where signal and noise are separated, then removing the noise becomes much easier. This is the aim of the "Generalized Radon Transform". In this technique, data are modeled as a series of curved trajectories. The term generalized means that any type of curved trajectory can be used to model the data.
Various Names for the Same Thing

• Tau-p transform
• Slant stack
• Radon transform
• Plane wave decomposition
Definitions of p
• p is a measure of slope
• p = ΔT/ΔX
• p is the inverse of the apparent horizontal velocity
• The value of p at a point on a curve is the slope of the tangent to the curve at that point
Wavefront geometry: a wavefront emerging at angle φ from the vertical travels V·ΔT while crossing a receiver separation ΔX, so that
sin φ = V·ΔT/ΔX
p = ΔT/ΔX = sin φ / V
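A small worked example of the relation above; the angle and velocity are assumed values, not from the source.

    import numpy as np

    V = 2000.0              # medium velocity in m/s (assumed)
    phi = np.radians(30.0)  # emergence angle from the vertical (assumed)
    p = np.sin(phi) / V     # ray parameter / slowness in s/m
    print(p, 1.0 / p)       # p = 2.5e-4 s/m, apparent horizontal velocity = 4000 m/s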
Skewing & Projected Sum
Conceptual Review of Radon Transform:
• To understand the basics of the technique, let us understand two concepts:
• Skewing: the process of projecting curved trajectories as straight lines on the 2-D plane.
• There is only one way to introduce a linear skew, whereas there are many ways to introduce a curved (non-linear) skew.
• Projected Sum: the three dipping events can be thought of as dipping at either a constant number of milliseconds per trace or at a constant angle if the velocity is held constant.
• The three dipping/horizontal lines are projected on the plane and summed. The horizontal line projects to a point, whereas the other two are "smeared" over a line whose length is the same as their dip over time.
• It may be noted that the amplitudes of the zero-dip line reinforce each other, whereas the amplitudes of the other two lines interfere destructively.
• So the data can be skewed by different angles and then projected on the 2-D plane followed by summation. In this way we collect a range of skewed sums.
• This is the principle of the tau-p transform or "slant stack".
• The forward tau-p transform is nothing more than a series of linearly skewed sums (shown in Fig. 8).
• The tau-p transform or slant stack is mathematically defined as t = τ + pX, where X = offset, τ = time at zero offset, and p = the moveout (slope) term. (A minimal slant-stack sketch follows.)
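A minimal sketch of the forward tau-p transform (slant stack) described above: for each trial slope p, a linear moveout shift t -> tau + p·X is applied and the traces are summed over offset. The array shapes and the interpolation scheme are assumptions, not the source's implementation.

    import numpy as np

    def slant_stack(data, offsets, p_values, dt):
        """data: (n_offsets, n_samples) gather; returns an (n_p, n_samples) tau-p panel."""
        n_off, n_t = data.shape
        t = np.arange(n_t) * dt
        taup = np.zeros((len(p_values), n_t))
        for ip, p in enumerate(p_values):
            for ix, x in enumerate(offsets):
                # sample the trace at time tau + p*x by linear interpolation, then sum over offset
                taup[ip] += np.interp(t + p * x, t, data[ix], left=0.0, right=0.0)
        return taup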
Skewing & Projected Sum
Fig. 3: Skewing. Fig. 4: Projected Sum technique.
Fig. 5: Skewed horizontal projection upward. Fig. 6: Skewed horizontal projection downward.
Skewing & Projected Sum
Fig. 7: Collection of projected sums. Fig. 8: Tau-p transform or slant stack (linearly skewed sums).
Generalized Radon Transform

Conceptual Review of Radon Transform:
• This equation tells us that a linear event with dip p (milliseconds per trace) and zero-offset intercept time τ will transform to a point at (τ, p) after the tau-p transform.
• In the inverse tau-p transform, the point will transform back into a straight line with the indicated moveout.
• Although there is only one way to introduce a linear skew, there are several ways to introduce a curved (non-linear) skew.
• The two most common non-linear curves in seismic are the parabola and the hyperbola. The respective equations are:
t = τ + pX²
t = (τ² + p²X²)^(1/2)
• So we have three equations:
t = τ + pX (Linear) -----------------------------------------------(1)
t = τ + pX² (Parabola) --------------------------------------------(2)
t = (τ² + p²X²)^(1/2) (Hyperbola) ------------------------------------------(3)
Generalized Radon Transform
• Please note that in equations (1) & (3), the linear and the hyperbolic, "p" is slowness or inverse velocity in s/m or s/ft, but for equation (2), i.e. the parabola, "p" is slowness divided by distance, in s/m² or s/ft².
• Each of the three curves has its own advantages and disadvantages. For example, the straight line can be used for modeling any event with linear moveout, but it falls down when we try to model hyperbolic NMO curves.
• The hyperbola would be the ideal curve for matching NMO; however, the square-root operation makes it highly non-linear and therefore difficult to formulate on a computer.
• So the parabola is the best compromise, as it displays the curvature but does not involve a square root. It can be shown that the parabola can effectively model seismic data after NMO correction.
• In the diagrams we consider the curved events as either parabolas or hyperbolas. After NMO correction (Fig. 9), P indicates the corrected primary and M1 and M2 the under-corrected multiples; the projected sum from the seismic profile is shown on the right of each figure. In this case the primary maps to a single point.
• Now consider the left side of Fig. 10, where the curved skew flattens the multiple M1, so it maps to a point; in Fig. 11, M2 is flattened and subsequently mapped to a point.
Generalized Radon Transform
Fig. 9: Curved skew sum of the NMO-corrected gather. Figs. 10 and 11: Curved skew sums in which a flattened curve is mapped to a point.
Fig. 12: Generalized discrete Radon transform. Fig. 13: Selective inverse Radon transform and modeling; a point is mapped into a line. Note that only Po is retained in the inverse transform.
Generalized Radon Transform
• So any curved event (primary or multiple) can be flattened using a suitable value of "p" and then projected as a point.
• This gives the conceptual overview of the "Generalized Discrete Radon Transform".
• It can be seen from Fig. 13 that the three events of the CDP profile are mapped to three isolated points in the Radon-transform domain.
• The usefulness of the transform lies in the fact that we can selectively filter the data in the Radon domain by eliminating a single point, and then perform an inverse transform to get back to the time domain.
• Theoretically it is possible to remove an entire line of noise (multiple).
• If the primaries are zeroed out in the transform domain, then the inverse transform will give the multiples.
• This means we can decompose the input data into three separate models: a primary model, a multiple model, and everything that is left over, namely the error and noise model. (A minimal least-squares parabolic Radon sketch follows.)
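The decomposition described above can be sketched with a damped least-squares parabolic Radon transform. This is a generic frequency-domain formulation, not the source's algorithm: per temporal frequency the gather d(h, w) is modeled as L·m(q, w) with L[h, q] = exp(-i·w·q·h²), and m is obtained from the damped normal equations.

    import numpy as np

    def parabolic_radon(data, offsets, q_values, dt, eps=1e-2):
        """data: (n_offsets, n_samples) NMO-corrected gather; returns an (n_q, n_samples) Radon panel."""
        n_h, n_t = data.shape
        D = np.fft.rfft(data, axis=1)                          # frequency-domain traces
        w = 2.0 * np.pi * np.fft.rfftfreq(n_t, dt)             # angular frequencies
        h2 = np.asarray(offsets, dtype=float) ** 2
        M = np.zeros((len(q_values), D.shape[1]), dtype=complex)
        for k, wk in enumerate(w):
            L = np.exp(-1j * wk * np.outer(h2, q_values))      # (n_h, n_q) modeling operator
            A = L.conj().T @ L + eps * np.eye(len(q_values))   # damped normal equations
            M[:, k] = np.linalg.solve(A, L.conj().T @ D[:, k])
        return np.fft.irfft(M, n=n_t, axis=1)                  # back to the (q, tau) time domain

Muting selected q-traces of the panel before modeling back to the offset domain yields the primary-only or multiple-only models discussed above; the damping eps may need scaling to the data amplitudes.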
Generalized Radon Transform
Fig. 14: Inverse Radon transform; a point is mapped into a curve. Fig. 15: Decomposition of the input data.
Graphics of Tau-p Transform
99_Radon Transform-Part 2.pptx
Generalized Radon Transform
• A parabola is represented by its "zero-offset time" and its moveout, the difference between the far-offset time and the zero-offset time: Moveout = far-offset time - zero-offset time.
• A hyperbola after NMO correction becomes very close to a parabola.
• The difference between the model and the actual data can be reduced by increasing the moveout values "p".
• For the purpose of multiple attenuation, it is necessary to model the "multiples" by removing the primary using velocity/ray-parameter discrimination.
• Subtracting the multiple model from the input data yields only the primaries along with some remnant noise.
Advantages of Radon Transform
• Achieves multiple and noise attenuation at all offsets equally.
• Requires no knowledge of the multiple-generating mechanism.
• Requires no detailed knowledge of primary and multiple velocities.
• Accommodates non-uniform acquisition geometries.
Limitations of Radon Transform
• Multiples must have sufficient moveout discrimination to be attenuated.
The Velocity Stack (Radon) Transform

• Unlike the slant-stack (tau-p) transform, the velocity-stack transformation involves the application of hyperbolic moveout correction and summation over the offset axis. Here the offset axis is replaced by a velocity axis.
• The relationship between the input coordinates (h, t) and the transform coordinates (v, τ) is given by:
t² = τ² + 4h²/v² ----(6-9b)
where h is half of the offset and v is the stacking velocity. (A minimal velocity-stack sketch follows the figure caption.)
Fig. 6.4-1: Slant-stack (left) and velocity-stack (right) mapping of a CMP gather (center).
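A minimal sketch of the forward velocity-stack mapping of equation (6-9b): apply the hyperbolic moveout t = (τ² + 4h²/v²)^(1/2) for each trial velocity and sum over the offset axis. The shapes and interpolation are assumptions.

    import numpy as np

    def velocity_stack(gather, half_offsets, velocities, dt):
        """gather: (n_offsets, n_samples) CMP gather; returns an (n_velocities, n_samples) panel."""
        n_h, n_t = gather.shape
        tau = np.arange(n_t) * dt
        panel = np.zeros((len(velocities), n_t))
        for iv, v in enumerate(velocities):
            for ih, h in enumerate(half_offsets):
                t = np.sqrt(tau ** 2 + 4.0 * h ** 2 / v ** 2)   # hyperbolic trajectory of eq. (6-9b)
                panel[iv] += np.interp(t, tau, gather[ih], left=0.0, right=0.0)
        return panel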
The Radon Transform

• In the τ-p domain, the refracted arrival or linear noise maps onto a point, and primaries as well as multiples are mapped onto ellipses; since we have a truncated hyperbola, we get a truncated ellipse. A fast-velocity hyperbola maps onto a tighter ellipse than a slow-velocity hyperbola.
• Multiples are not periodic in the offset domain, even for a horizontally layered earth, but they are periodic in the ray-parameter domain.
• As the mapping function is hyperbolic in the velocity-stack transformation, primaries and multiples map to points in the velocity domain.
• Hence we are able to distinguish between multiples and primaries in the velocity domain based on velocity discrimination, and we use this criterion to attenuate multiples.
• The finite cable length, the discrete sampling along the offset axis and the closeness of the hyperbolic paths at near offsets cause smearing of the stacked amplitudes along the velocity axis.
• Unless the smearing is removed, the inverse mapping from the velocity domain to the offset domain does not reproduce the amplitudes in the original CMP gather.
Velocity-Stack Transformation
Fig. 6.4-2: (a) A synthetic CMP gather with three primary reflections; (b) a synthetic CMP gather with one primary reflection (0.2 s) and its multiples; (c) composite CMP gather containing the primaries and multiples in (a) and (b); (d) the conventional velocity-stack gather derived from the composite CMP gather.
Velocity-Stack Transformation

• The mapping from the offset domain to the velocity domain is achieved by applying hyperbolic moveout correction and summing over offset:
u(v, τ) = Σ_h d(h, t), with t = (τ² + 4h²/v²)^(1/2) ---------------------------------(6-10a)
• Traces in the composite CMP gather (6.4-2c) are stacked with a range of constant velocities and displayed side by side, forming the conventional velocity-stack gather (6.4-2d).
• The maximum stacked amplitudes correspond to the primary and multiple velocities; the horizontal streaks on either side are due to contributions from small offsets.
• The inverse mapping from velocity space back to the offset domain is achieved by applying inverse hyperbolic moveout correction and summing over velocity (here d′(h,t) represents the modeled CMP gather):
d′(h, t) = Σ_v u(v, τ), with τ = (t² - 4h²/v²)^(1/2) ------------------------------------------(6-10b)
• The gather obtained is called "modeled" because it does not recreate the original data d(h,t). It has been observed that the discrete transforms given by equations (6-10a) & (6-10b) are not exact inverses of each other.
• Fig. 6.4-3b shows a definite decrease in amplitude at far offsets, especially along the events with large moveout; this degrades the velocity resolution.
• This is because the traveltime for a horizontally layered earth is given by the Taylor series:
t² = τ² + C1h² + C2h⁴ + C3h⁶ + ........., where C1, C2, ... are scalar coefficients.
• Beylkin (1987) gave an integral equation for transforming (h,t)-domain data to the (v, τ) domain, which is called the linear Radon transform:
------------------------------------(6-11a)
Velocity-Stack Transformation

Fig. 6.4-3: (a) Velocity-stack gather as in (6.4-2d); (b) CMP gather reconstructed using equation 6-10b; (c) velocity-stack gather derived from (b) using equation 6-10a; (d) reconstructed CMP gather. Note the degradation of velocity resolution on the velocity-stack gather (c) due to the reduction in far-offset amplitude.
Velocity-Stack Transformation

• Here the integration is along a curve expressed as a linear function of the traveltimes t and τ.
• Accordingly, d(h,t) and its Radon transform u(v, τ) are defined as continuous functions in the offset and velocity domains respectively.
• However, we do not obtain the offset-domain gather by using the inverse of formula (6-11a); instead, the Radon inverse formula given by Beylkin (1987) involves the convolution of u(v, τ) with a rho filter before integration over velocity:
d(h, t) = ------------------------(6-11b), where ρ is the rho filter.
• In practice the discrete form of ρ can be convolved with the discrete form of u(v, τ) prior to summing over a finite range of velocities to reconstruct the original data d(h,t).
Radon Transform Miscellaneous

• Attenuation of unwanted events such as noise and multiples poses a key problem in seismic data processing.
• Effective solutions exploit the moveout or curvature differences between the offending events and the events of interest.
• One such solution is the Radon transform; in their discrete forms these transforms are known under different variations (linear, parabolic, hyperbolic, generalized) and names (slant stack, beam forming, fan filtration, τ-p transform).
• The method of choice depends on the inherent properties of the target signal, computational cost, etc. For example, the parabolic and hyperbolic transforms are the preferred Radon methods if the data after moveout correction are best characterized as a superposition of parabolas or hyperbolas respectively.
Mathematical Analysis of Slant Stack Transformation
• A plane wave can be synthesized using a two-step process. First, LMO is applied through a coordinate transformation defined by:
τ = t - px …………………………………………………(1)
where p is the ray parameter, x is the offset, t is the two-way traveltime, and τ is the intercept time at x = 0.
• Second, the data are summed over the offset axis:
u(p, τ) = Σ_x d(x, τ + px) …………………………………………………(2)
• By repeating the LMO correction for a range of p values and performing the summation as in equation (2), the full slant-stack gather is constructed. A slant-stack gather is commonly known as a τ-p gather.
• The transformation from the t-x domain to the τ-p domain is reversible.
• First apply inverse LMO correction to the data in the τ-p domain:
t = τ + px ………………………………………(3)
• Then sum the data over the ray-parameter p axis to obtain:
d′(x, t) = Σ_p u(p, t - px) ………………………………………(4)
Slant Stack Transformation
• To restore the amplitudes properly, rho filtering is applied before the inverse mapping. This is done by multiplying the amplitude spectrum of each slant-stack trace by the absolute value of frequency.
• The rho filter is equivalent to differentiating the wavefield before the summation.
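A minimal sketch of the rho filter described above, assuming the slant-stack traces are stored as rows of a 2-D array: each trace's spectrum is weighted by the absolute value of frequency before the inverse mapping.

    import numpy as np

    def rho_filter(taup_traces, dt):
        """taup_traces: (n_p, n_samples) slant-stack panel; returns the rho-filtered panel."""
        n_t = taup_traces.shape[1]
        freqs = np.fft.rfftfreq(n_t, dt)              # non-negative frequencies in Hz
        spectra = np.fft.rfft(taup_traces, axis=1)
        spectra *= np.abs(freqs)                      # |f| weighting: the rho filter
        return np.fft.irfft(spectra, n=n_t, axis=1)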
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 1: A 90-degree reflector model. (a) Earth section; (b) record section.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 2: A dipping reflector model. (a) Earth section; (b) record section. Fig. 3: The earth model. (a) Some representative earth dips (open fan); (b) corresponding record section (folded fan).
sin α_a = AC′/OA = AC/OA = tan α_b
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 4: A bounded dipping reflector. (a) Record section; (b) construction of the migration. Fig. 5: The correspondence between the record section and the depth section. (a) Wedge of events before migration; (b) half disc of events after migration.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 6: A review of the Fourier transform: time domain, frequency-domain amplitude and phase. (a) Spike at time 0; (b) spike shifted in time; (c) boxcar.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 7: A 3-D sketch of three bounded horizontal reflectors. (a) Line-spike model in the depth domain; (b) Fourier transform in the x-direction; (c) typical amplitude contours in the 2-D frequency domain.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 8: A 3-D sketch of three bounded vertical reflectors. (a) Line-spike model in the depth domain; (b) Fourier transform in the depth direction; (c) typical amplitude contours in the 2-D frequency domain.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 9: Bounded dipping reflection segment. (a) Reflection in the depth domain; (b) frequency-domain representation. Fig. 10: Migration mapping. (a) Depth-domain mapping; (b) construction of the frequency-domain mapping; (c) preservation of spatial frequency.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 11: Migration mapping in the frequency domain. (a) A line of constant Kz frequency and its migrated mapping; (b) a grid of curves of constant Kz. Fig. 12: Wedge of dips prior to migration. (a) A wedge of events in the depth domain; (b) the frequency-domain representation of the wedge.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 13: Wedge of dips after migration. (a) A seismic disc in the depth domain; (b) the frequency-domain equivalent. Fig. 14: The full-fan earth model. (a) The full fan; (b) the bounded fan of the record section; (c) the migrated correspondence of (b).
CONCEPT OF FREQUENCY DOMAIN MIGRATION
Fig. 15: The frequency-domain representation of Figure 14. (a) Fourier transform of the full fan; (b) Fourier transform of the bounded fan with the representative pulse spreading; (c) migration of (b). Note that the dotted lines in (b) and (c) correspond under the migration mapping, as do points A, A′, B and B′. Fig. 16: The full-fan earth model. (a) The full fan; (b) the bounded fan of the record section; (c) the migrated correspondence of (b).
CONCEPT OF FREQUENCY DOMAIN MIGRATION
• The main requirements of a migration algorithm are to handle:
• steep dips,
• lateral and vertical velocity variation,
• and it should be easy to implement.
• Migration algorithms can be classified under three main categories:
• based on "integral solutions" to the scalar wave equation,
• based on finite-difference solutions,
• frequency-wavenumber implementations.
• Chronological development:
• The semi-circle superposition method was used before the advent of the computer age.
• The diffraction summation technique was second on the development ladder. The curvature of the diffraction hyperbola depends on the medium velocity.
• Kirchhoff's summation is the same as diffraction summation, but it applies corrections for amplitude and phase change before summation.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
FREQUENCY WAVENUMBER MIGRATION:
• We know that the migration methods which operate in the:
a. space-time domain are "Kirchhoff migration" and "RTM";
b. space and frequency domain are "explicit finite difference" and some "implicit finite difference" schemes;
c. wavenumber and frequency domain are the constant-velocity methods, along with extensions which can handle lateral velocity variation.
• Stolt (1978) and Gazdag (1978) gave two post-stack migration methods which are extremely fast but operate only in a constant-velocity medium, or at most with vertical velocity variation. They compensate for this limitation with speed, so they have become workhorses.
• Here the input wavefield P(x, y, t) (space and time domain) is transformed into monochromatic plane-wave components (Kx, Ky, w). This is a useful transformation because, in the Fourier domain, the constant-velocity wave equation becomes a simple algebraic identity which relates the frequency w and the wavenumber components Kx, Ky, Kz.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
FREQUENCY WAVENUMBER MIGRATION:
• Stolt migration uses this relationship to move the amplitude and phase of each (Kx, Ky, w) component to its corresponding (Kx, Ky, Kz) location, downward continuing and imaging in a single step.
• After that the data are interpolated onto a regular grid, and an inverse Fourier transform brings the data back into (x, y, z), producing the space-domain image.
• Gazdag phase-shift migration is more complicated. It performs the downward continuation of each (w, Kx, Ky) component separately from one depth to another. This migration honors Snell's law, with the plane wavefronts changing dip as they propagate through v(z), making it more powerful for imaging steep-dip intrusions in sedimentary basins such as the Gulf of Mexico.
• It has been used to image dips greater than vertical (overhung salt faces) with laterally invariant velocity.
CONCEPT OF FREQUENCY DOMAIN MIGRATION
FREQUENCY WAVENUMBER MIGRATION:
• Frequency-wavenumber migration is not easily explained from a physical point of view.
• We know that dipping events in the t-x domain map onto radial lines in the f-k domain. The steeper the dip, the closer the radial line lies to the wavenumber axis.
Fig: 17
CONCEPT OF FREQUENCY DOMAIN MIGRATION
FREQUENCY WAVENUMBER MIGRATION:
• The figure above shows dipping events before and after migration in the t-x and f-k domains.
• The Nyquist wavenumber is 20 cycles/km, and the bandwidth is given by the corner frequencies 6, 12 - 36, 48 Hz for the pass-band region of the spectrum.
• Note that migration rotates the radial lines outwards, away from the frequency axis, keeping the wavenumber unchanged.
• The energy associated with the left flanks of the diffractions maps onto the left quadrant of the f-k plane, and the energy associated with the right flanks of the diffractions maps onto the right quadrant of the f-k plane.
• Migration of a dipping event in the f-k domain is shown in Figure 18.
• Here the vertical axis represents the temporal frequency for the event in its unmigrated position B.
Fig: 18
CONCEPT OF FREQUENCY DOMAIN MIGRATION
FREQUENCY WAVENUMBER MIGRATION:
• Migration in the frequency-wavenumber domain involves mapping the line of constant frequency AB in the w - Kx plane to the circle AB′ in the Kz - Kx domain.
• Therefore migration maps point B vertically onto point B′.
• Note that in this process the horizontal wavenumber Kx does not change.
• When the mapping is completed, the dipping event OB has been migrated to OB′.
• We now examine the diffraction hyperbola and its collapse to the apex after migration in the f-k domain.
• A diffraction hyperbola is represented by an inverted triangular area in the frequency domain, as shown in Fig. 19; the Nyquist wavenumber is 40 cycles/km and the bandwidth is 6, 12 - 36, 48 Hz (the corner frequencies) for the passband.
• The base of the inverted triangle represents the high-frequency end of the pass band.
Fig: 18
CONCEPT OF FREQUENCY DOMAIN MIGRATION
• The tip of the triangle corresponds to the low-frequency end of the pass band.
• Migration turns the triangular area into a circular shape, as shown in Fig. 20.
• Here the diffraction hyperbola is assumed to be made up of a series of dipping segments such as A, B, C, D and E. The zero-dip segment A is mapped along the frequency axis.
Fig: 19 Fig: 20
CONCEPT OF FREQUENCY DOMAIN MIGRATION
• The asymptotic tail E maps along the radial line that represents the boundary between the propagation region and the evanescent region.
• The evanescent region corresponds to energy that is located at or greater than 90 degrees from the vertical.
• The opposite side of the hyperbola maps into the second quadrant (negative Kx).
PHASE-SHIFT MIGRATION:
a) Consider a brief review of f-k migration. The two-way scalar wave equation can be written as:
∂²P/∂x² + ∂²P/∂z² - (1/V²) ∂²P/∂t² = 0 -----------------(1)
where x and z are the space variables, t is the time variable, V is the velocity of wave propagation and P(x,z,t) is the pressure wavefield.
b) Assume constant velocity and perform a 3-D Fourier transform of the pressure wavefield to obtain the dispersion relation between the transform variables:
Kz² = w²/V² - Kx² -------------------------(2)
CONCEPT OF PHASE SHIFT MIGRATION
where Kz and Kx are the wavenumbers in the z and x directions, and w is the angular temporal frequency.
c) Then adapt the dispersion relation to the exploding-reflector model by halving the velocity for the upcoming waves. This gives:
Kz = (2w/V) [1 - (V·Kx/(2w))²]^(1/2) ---------------------------- (3)
Here the horizontal wavenumber has been normalized with respect to 2w/V.
d) Operate on the pressure wavefield P and inverse transform in z to obtain the differential equation:
dP(Kx, z, w)/dz = -i·Kz·P(Kx, z, w) ---------------(4)
e) Obtain the solution, which is given by:
P(z + Δz) = P(z) exp(-i·Kz·Δz) ----------(5)
Here, for convenience, the variables Kx and w have been omitted from P.
f) Equation (5) is the basis for phase-shift migration.
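A minimal constant-velocity phase-shift (Gazdag-type) post-stack migration sketch built directly on equation (5). The grid sizes, the imaging loop and the sign of the exponent (which depends on the Fourier-transform convention) are assumptions, not the source's implementation.

    import numpy as np

    def phase_shift_migration(section, dt, dx, velocity, dz, n_z):
        """section: (n_t, n_x) zero-offset (exploding-reflector) data; returns an (n_z, n_x) depth image."""
        n_t, n_x = section.shape
        P = np.fft.fft2(section)                                  # transform to (w, Kx)
        w = 2.0 * np.pi * np.fft.fftfreq(n_t, dt)[:, None]        # angular frequency axis
        kx = 2.0 * np.pi * np.fft.fftfreq(n_x, dx)[None, :]       # horizontal wavenumber axis
        kz2 = (2.0 * w / velocity) ** 2 - kx ** 2                 # dispersion relation with halved velocity
        kz = np.sqrt(np.maximum(kz2, 0.0))                        # evanescent energy is dropped
        image = np.zeros((n_z, n_x))
        for iz in range(n_z):
            P = P * np.exp(1j * kz * dz) * (kz2 > 0)              # downward continue one depth step
            image[iz] = np.real(np.fft.ifft(P.sum(axis=0)))       # imaging condition: sum over w (t = 0)
        return image

Because the velocity is constant here, the same phase-shift operator is reused at every depth step; Gazdag's method allows a different v(z) at each step.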
FK Migration
[Schematic of the f-k plane, wavenumber axis from -Kn through K = 0 to +Kn, frequency axis F] Dip vectors are rotated downwards to steeper dips; lines of constant frequency migrate to semi-circles; points migrate vertically downwards.
Deconvolution
Deconvolution
Introduction: Deconvolution compresses the basic wavelet in the recorded seismogram, attenuates reverberations and short-period multiples, and thus increases the temporal resolution. Sometimes, besides compressing the wavelet, it can remove a significant part of the multiple energy.

Assumptions to make the forward model of a seismic trace
• The recorded seismogram can be modeled as a convolution of the "earth's impulse response" with the seismic (source) wavelet. This seismic wavelet has many components, including the source signature, the recording filter and the receiver-array response. The earth's impulse response comprises the primary reflections (reflectivity series) and all possible multiples.
1. The earth is made up of horizontal layers of constant velocity (this gets violated in structurally complex areas).
2. The source generates compressional waves which impinge on the layer boundaries at normal incidence, so there is no mode conversion.
3. The source wavelet is stationary, i.e. it does not change as it travels in the subsurface.
Deconvolution

Assumptions to make the forward model of a seismic trace
• Mathematically a seismic trace is given by x(t) = w(t) * e(t) + n(t) --------(1)
• Where x(t) = recorded seismogram (known), w(t) = basic source wavelet,
e(t)= earth impulse response, n(t)= random ambient noise.
• Deconvolution tries to recover reflectivity series {strictly speaking, the impulse
response, e(t) } from the recorded seismogram.
4. The noise component n(t) is zero, so that x(t) = w(t) * e(t).
5. The source waveform is known.
• If the source wavelet w(t) is known, then the solution to the deconvolution problem is deterministic.
• If the source wavelet w(t) is unknown, then the solution to the deconvolution problem is statistical.
So the convolutional model for the noise-free seismogram will be x(t) = w(t) * e(t) ---(2)
In the frequency domain the amplitude spectra multiply:
Ax(ω) = Aw(ω)·Ae(ω)
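A minimal synthetic illustration of the convolutional model and of the spectral relation above; the reflectivity, wavelet and noise values are assumed for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    e = np.zeros(256)
    e[[40, 90, 150]] = [0.8, -0.5, 0.3]               # sparse reflectivity series (assumed)
    w = np.array([1.0, -0.5, 0.25, -0.125])           # front-loaded (minimum-phase-like) wavelet (assumed)
    n = rng.normal(0.0, 0.01, e.size)                 # ambient random noise

    x = np.convolve(e, w)[: e.size] + n               # recorded seismogram x(t) = w(t) * e(t) + n(t)

    Ax = np.abs(np.fft.rfft(x - n))                   # amplitude spectrum of the noise-free seismogram
    AwAe = np.abs(np.fft.rfft(w, n=e.size)) * np.abs(np.fft.rfft(e))
    print(np.allclose(Ax, AwAe))                      # True: amplitude spectra multiply, Ax = Aw * Ae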
Deconvolution

(a) Sonic log, (b) Reflection coefficient derived from log, (c) Reflection coefficient
in time domain, (d) Reflection coefficient with multiples, (e) after convolution with
source wavelet.
Deconvolution

• If the source wavelet w(t) is known, then the solution to the deconvolution
problem is deterministic.
• If the source wavelet w(t) is unknown, then the solution to the deconvolution
problem is statistical
So the convolutional model for noise free seismogram will be x(t) = w(t) * e(t) ---(2)
In frequency domain it will be
Ax(ω) = Aw(ω)Ae(ω),
Bottom: Convolution of the earth's impulse response with the seismic wavelet.
Middle: Convolution of the autocorrelogram of the earth's impulse response and the autocorrelogram of the seismic wavelet.
Top: Amplitude spectra of the earth's impulse response and the seismic wavelet.
Deconvolution

• The amplitude spectra of the earth's impulse response and of the seismic wavelet are very similar.
• The smoothed version of the amplitude spectrum of the seismogram is nearly indistinguishable from the amplitude spectrum of the wavelet.
• The above observations suggest that the amplitude spectrum of the earth's impulse response must be flat, or in other words, the earth's reflectivity should be random.
6. Assumption 6: Reflectivity is a random process, which implies that the seismogram has the characteristics of the seismic wavelet. This assumption is key to predictive deconvolution; that is why we assume that the autocorrelogram of the "seismic trace" is the same as the autocorrelogram of the "seismic wavelet".
Deconvolution

• Inverse Filtering
• Wiener Filtering
INVERSE FILTERING
If a filter operator f(t) were defined such that convolution of f(t) with the known seismogram x(t) yields an estimate of the earth's impulse response e(t), then
e(t) = f(t) ∗ x(t). Since x(t) = e(t) ∗ w(t), we have e(t) = f(t) ∗ e(t) ∗ w(t); eliminating e(t) from both sides gives
δ(t) = w(t) ∗ f(t), where δ(t) represents the Kronecker delta function, δ(t) = (1, 0, 0, 0, ...).

By solving the above equation for the filter operator f(t), we obtain
f(t) = δ(t) ∗ [1/w(t)] -------------------------------(1)
• Therefore the inverse filter operator f(t) needed to compute the earth's impulse response from the recorded seismogram is the mathematical inverse of the seismic wavelet w(t).
Deconvolution
• From equation (1) it is clear that:
• the inverse filter converts the basic wavelet to a spike at time t = 0;
• likewise, the inverse filter converts the seismogram to a series of spikes that defines the earth's impulse response.
• Therefore inverse filtering is a method of deconvolution, provided the source wavelet is known.
Flow Chart for Inverse Filtering
Computation of Inverse of the source wavelet
• The inverse of the source wavelet is computed using the z-transform.
• Case I: Let the source wavelet be the two-point time series w(t) = (1, -1/2); the z-transform of this wavelet is W(z) = 1 - z/2 -----------------------(1)
• The inverse filter F(z) = 1/W(z) = 1/(1 - z/2) = 1 + z/2 + z²/4 + ... ----------- (2)
• The coefficients (1, 1/2, 1/4, ...) represent the time series associated with the filter operator f(t); this is an infinite series whose values decay rapidly, so in practice the operator is truncated.
• Consider only the two-term filter (1, 1/2); the input wavelet is (1, -1/2), so the actual output will be (1, 1/2) * (1, -1/2) = (1, 0, -1/4), whereas the desired output is the zero-delay spike (1, 0, 0).
• Though the actual output is not ideal, it is spikier than the input (1, -1/2); this result can be improved if one more term is included in the filter.
• Note: the z-transform is related to the Fourier transform by the relation z = exp(-iωΔt).
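A small sketch reproducing Case I numerically: the geometric-series inverse of the wavelet (1, -1/2) is truncated after a few terms and convolved with the wavelet to show how the output approaches the desired spike.

    import numpy as np

    wavelet = np.array([1.0, -0.5])
    for n_terms in (2, 3, 5, 8):
        f = 0.5 ** np.arange(n_terms)          # truncated inverse filter (1, 1/2, 1/4, ...)
        actual = np.convolve(f, wavelet)       # desired output is the zero-delay spike (1, 0, 0, ...)
        print(n_terms, np.round(actual, 4))    # the residual tail shrinks as more terms are added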
Inverse of the source wavelet

• The actual output from the three-term filter is (1, 0, 0, -1/8); this is more accurate than the earlier one.
• Case II:
• Let the input wavelet be (-1/2, 1); then the inverse filter F(z) = 1/(z - 1/2) = -2 - 4z - 8z² - ... -----(1) (refer to the binomial theorem in the "Appendix").
• Then f(t) = (-2, -4, -8, ...).
• Consider only the two-term filter (-2, -4); the input wavelet is (-1/2, 1), so the actual output will be (-2, -4) * (-1/2, 1) = (1, 0, -4), whereas the desired output is the zero-delay spike (1, 0, 0). So this is far from the desired output and less spiky than the input (-1/2, 1).
Least Square Inverse Filtering

• Let us find the two-term filter (a, b) such that the energy of the error between the actual output and the desired output (1, 0, 0) is minimum in the least-squares sense.
• Convolve the filter (a, b) with the input wavelet (1, -1/2); then the cumulative energy of the error L is defined as the sum of the squares of the differences between the coefficients of the actual and desired outputs:
L = (a - 1)² + (b - a/2)² + (-b/2)² -----------------------------------------------(3)
Find the coefficients a and b such that L takes its minimum value (by taking the partial derivatives of L with respect to a and b and setting them to zero).
• The design and application of this least-squares inverse filter is described in the next slide. The filter coefficients (a, b) come out to be (0.95, 0.38). A minimal worked example follows.
• By comparing the results of inverse filtering with least-squares inverse filtering, it is found that least-squares inverse filtering provides the better result for both the minimum-phase and maximum-phase source wavelets.
• It has also been shown that the error energy is reduced if the energy distribution of the desired output is similar to the energy distribution of the source wavelet.
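A minimal worked example of the least-squares design above, using the autocorrelation and cross-correlation values derived from the wavelet (1, -1/2) and the desired output (1, 0, 0); the numpy formulation is an illustration, not the source's derivation.

    import numpy as np

    wavelet = np.array([1.0, -0.5])
    desired = np.array([1.0, 0.0, 0.0])

    r = np.correlate(wavelet, wavelet, mode="full")[wavelet.size - 1:]        # autocorrelation (5/4, -1/2)
    g = np.correlate(desired, wavelet, mode="full")[wavelet.size - 1:][:2]    # cross-correlation (1, 0)

    R = np.array([[r[0], r[1]],
                  [r[1], r[0]]])                        # 2 x 2 Toeplitz autocorrelation matrix
    a, b = np.linalg.solve(R, g)
    print(np.round([a, b], 3))                          # approximately (0.952, 0.381)
    print(np.round(np.convolve([a, b], wavelet), 3))    # actual output, to compare with (1, 0, 0)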
Types of wavelet
• When the energy is front loaded, it is called minimum phase (top).
• When the energy is concentrated in the middle, it is called mixed phase (middle).
• When the energy is loaded at the end, it is called maximum phase (bottom).
• All three wavelets have the same amplitude spectrum but different phase spectra.
• A wavelet which is zero for t < 0 is called "causal".
Consider four three-point wavelets; the energy of each wavelet is 17:
• Wavelet A = (4, 0, -1)
• Wavelet B = (2, 3, -2)
• Wavelet C = (-2, 3, 2)
• Wavelet D = (-1, 0, 4)
Types of wavelet

1. Time delay is equivalent to phase lag.
2. The zero-lag autocorrelation of each wavelet is its total energy.
3. As per Parseval's theorem, the area under the power spectrum is equal to the zero-lag value of the autocorrelation.
• The performance of the inverse and least-squares inverse filters depends on the length of the filter as well as the type of wavelet.
• A filter is stable only when the seismic wavelet is minimum phase.
• Spiking Deconvolution: the process by which the "seismic wavelet" is compressed to a zero-lag spike.
Deconvolution (Summary of Assumption)

Assumption 7: The seismic wavelet is minimum phase; therefore it has a minimum-phase inverse.
a) Assumptions 1, 2 & 3 allow the formulation of the convolutional model of the 1-D seismogram.
b) Assumption 4 eliminates the noise term.
c) Assumption 5 (the source wavelet is known) forms the basis of deterministic deconvolution.
d) Assumption 6 (it allows the estimation of the autocorrelogram and amplitude spectrum of the unknown seismic wavelet from the recorded 1-D seismogram) is the basis of statistical deconvolution.
Optimum Wiener Filter

The filter coefficients (a, b) for obtaining the desired output (1, 0, 0) from the input (1, -1/2) using the least-squares technique are obtained by solving:
| 5/4   -1/2 |  | a |   | 1 |
| -1/2   5/4 |  | b | = | 0 |
• The autocorrelation of the input wavelet (1, -1/2) is (5/4, -1/2), which forms the first column of the 2 x 2 matrix above, and the cross-correlation of the desired output (1, 0, 0) with the input wavelet (1, -1/2) is (1, 0), which is the column vector on the right.
• This pattern was first observed by Wiener, who gave a generalized solution for deriving the filter coefficients for obtaining any desired output from a given input wavelet.
Optimum Wiener Filter
• The general matrix form of the n-point Wiener filter is the Toeplitz system R f = g:
Optimum Wiener Filter
| r0       r1       ...  r(n-1) |   | f0     |   | g0     |
| r1       r0       ...  r(n-2) |   | f1     |   | g1     |
| ...                            |   | ...    | = | ...    |
| r(n-1)   r(n-2)   ...  r0     |   | f(n-1) |   | g(n-1) |   ---------------------(1)
Here ri are the autocorrelation lags of the input wavelet, fi are the filter coefficients, and gi are the lags of the cross-correlation of the desired output with the input wavelet.
• Please note that when the desired output is the zero-lag spike (1, 0, 0, ...), the Wiener filter is identical to the least-squares inverse filter.
• The optimum Wiener filter is optimum in the sense that the least-squares error between the actual and desired output is minimum.
• The Wiener filter applies to a large class of problems in which any desired output can be considered. Five choices of desired output are:
• Type 1: Zero-lag spike
• Type 2: Spike at arbitrary lag
• Type 3: Time advanced form of input series.
• Type 4:Zero phase wavelet
• Type 5: Any desired arbitrary shape
Optimum Wiener Filter

• The symmetric autocorrelation matrix is called a "Toeplitz" matrix and can be efficiently solved by the Levinson recursion method. Algorithms based on this are known as Wiener-Levinson algorithms. (A minimal design sketch follows.)
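A minimal sketch of spiking-deconvolution operator design using the Toeplitz autocorrelation matrix and a Levinson-type solver; scipy.linalg.solve_toeplitz is used here as a stand-in for the Wiener-Levinson algorithm, and the prewhitening handling is an assumption.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def spiking_decon_operator(trace, n_coeff, prewhitening=0.001):
        """Design a zero-lag spiking operator from the trace autocorrelation (statistical deconvolution)."""
        r = np.correlate(trace, trace, mode="full")[trace.size - 1:][:n_coeff].copy()
        r[0] *= (1.0 + prewhitening)            # percent prewhitening added to the zero lag
        g = np.zeros(n_coeff)
        g[0] = 1.0                              # desired output: zero-lag spike (overall scaling is arbitrary)
        return solve_toeplitz(r, g)             # Levinson-type solution of the Toeplitz system

    # usage: decon = np.convolve(spiking_decon_operator(trace, 80), trace)[: trace.size]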

A flow chart for Wiener filter design and application


Spiking Deconvolution

• When the desired output in the "optimum Wiener filter" is a zero-lag spike, the process is called spiking deconvolution.
• The cross-correlation of the desired spike (1, 0, 0, ..., 0) with the input wavelet (x0, x1, ..., x(n-1)) is (x0, 0, 0, ..., 0).
• Spiking deconvolution is mathematically identical to least-squares inverse filtering. The only difference is that in spiking deconvolution the autocorrelation matrix on the left side of the equation is computed from the input seismogram (statistical deconvolution), whereas in least-squares inverse filtering (deterministic deconvolution) the autocorrelation is computed from the source wavelet.
• Fig. 2.3-2 is a summary of spiking deconvolution based on the Wiener-Levinson algorithm.
• It may be noted that the final output (frame k) is not a zero-lag spike (frame n). What went wrong? The input is a mixed-phase wavelet and not the minimum-phase wavelet required by Assumption 7.
Spiking Deconvolution
Fig. 2.3-2: (a) Input mixed-phase wavelet. (b) Shows that most of the energy is confined to 10 to 50 Hz. (d) The autocorrelation function. (e) Spiking deconvolution operator. (f) The amplitude spectrum of the operator (e), which is approximately the inverse of the amplitude spectrum of the input (b); this approximation improves as the operator length is increased. (g) Phase spectrum of (e). (h) Minimum-phase equivalent of (a). (i) Autocorrelation of (h). (k) The output when operator (e) is convolved with the original wavelet (a). (n) The desired output, which needs a minimum-phase input like (h).
Spiking Deconvolution

• One way to extract the seismic wavelet (provided it is minimum phase) is to compute the spiking deconvolution operator and find its inverse.
• Conclusion
• If the input wavelet is not minimum phase, then spiking deconvolution cannot convert it to a perfect zero-lag spike, as seen in frame (k).
• Finally, note that the spiking deconvolution operator is the inverse of the minimum-phase equivalent of the input wavelet. The input wavelet itself may or may not be minimum phase.
Pre-Whitening

• We know that the amplitude spectrum of the spiking deconvolution operator is the inverse of the amplitude spectrum of the input wavelet. What if the amplitude spectrum of the input wavelet has zeroes in it, or if any of its frequency components has a very small or zero amplitude?
• Zero amplitude rarely occurs, as noise, which is additive in both the time and frequency domains, is always present; but to be on the safe side, an artificial level of white noise is added to the amplitude spectrum of the input seismogram before deconvolution. This is called pre-whitening.
• The white noise added is between 0 and 1 percent.
Pre-Whitening

Fig. 3.2-3: Pre-whitening amounts to adding a bias to the amplitude spectrum of the seismogram to be deconvolved. This prevents division by zero, since the amplitude spectrum of the inverse filter (b) is the inverse of that of the seismogram (a). Convolution of the seismogram with the filter is equivalent to multiplication of their respective amplitude spectra (c), which is almost white.
Wavelet Processing by Shaping Filter

• As we know, spiking deconvolution has trouble compressing a mixed- or maximum-phase input wavelet (say (-1/2, 1)) to a zero-delay spike (1, 0, 0).
• In general, for any given input wavelet a series of delayed spikes can be defined as desired outputs.
• The least-squares error between the actual and the desired output can then be plotted as a function of delay.
• The delay (lag) that corresponds to the least error is chosen to define the desired delayed-spike output.
• The actual output from the Wiener filter using this delayed spike should be the most compact possible result.
• The Wiener filter design in which the desired output is of "arbitrary shape" is called "wavelet shaping" and the filter is called a "Wiener shaping filter".
Wavelet Processing by Shaping Filter (Miscellaneous)

• The term wavelet shaping is used with flexibility. Its most common use is estimation of the basic wavelet embedded in the seismogram, then designing a shaping filter to convert the estimated wavelet to a desired form, usually a broad-band zero-phase wavelet (Fig. 2.3-8), and finally applying the shaping filter to the seismogram.
• Dephasing: when an input wavelet of mixed phase is converted/shaped to a zero-phase wavelet (of a different bandwidth), the process is called de-phasing.
• Signature Processing: when a recorded air-gun signature is shaped to its minimum-phase equivalent and then into a spike, it is called signature processing.
• Wavelet Processing:
1. The term wavelet processing is used with flexibility. The most common meaning refers to estimating the basic wavelet embedded in the seismogram, then designing a shaping filter to convert it to a desired form, normally a broad-band zero-phase wavelet, and finally applying the shaping filter to the seismogram.
Wavelet Processing by Shaping Filter (Miscellaneous)

• Wavelet Processing:
• Another type of wavelet processing involves wavelet shaping in which the desired output is zero phase with the same amplitude spectrum as that of the input wavelet.
• Given the input x(t), we want to predict its value at some future time (t + α), where α is the prediction lag. The design and application of such a filter is called predictive deconvolution.
• A Wiener filter is designed where the desired output is the time-advanced form of the input. When the filter coefficients so obtained are convolved with the input, the error series is equivalent to the primaries (we assume that the filter so designed has predicted all the multiples).
• Mostly it is used to predict the multiples; sometimes, when the formations are cyclic, it can also be used to predict the primaries, but this is rare.
Predictive Deconvolution

• When the desired output is the time-advanced form of the input, the process is called predictive deconvolution.
• Spiking deconvolution is a special case of predictive deconvolution with unit prediction lag.
• In general the following statement can be made: given a wavelet of length (n + α), the prediction-error filter contracts it to an α-long wavelet, where α is the prediction lag. When α = 1, the procedure is called spiking deconvolution. (A minimal design sketch follows.)
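A minimal sketch of a prediction-error (gapped deconvolution) operator following the statement above: a prediction filter of length n and lag alpha (in samples) is designed from the trace autocorrelation and then turned into the prediction-error filter (1, 0, ..., 0, -f0, ..., -f(n-1)). The scipy Toeplitz solver stands in for the Levinson recursion.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def prediction_error_filter(trace, n, alpha, prewhitening=0.001):
        """Requires alpha + n <= len(trace); alpha = 1 reduces to spiking deconvolution."""
        r = np.correlate(trace, trace, mode="full")[trace.size - 1:]
        col = r[:n].copy()
        col[0] *= (1.0 + prewhitening)          # percent prewhitening on the zero lag
        rhs = r[alpha:alpha + n]                 # cross-correlation lags alpha .. alpha + n - 1
        f = solve_toeplitz(col, rhs)             # prediction filter
        pef = np.zeros(alpha + n)
        pef[0] = 1.0
        pef[alpha:] = -f                         # prediction-error filter of length alpha + n
        return pef

    # usage: deconvolved = np.convolve(prediction_error_filter(trace, 40, 12), trace)[: trace.size]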
Predictive Deconvolution

Fig. 2.3-12: A flow chart of the interrelations between the various deconvolution filters.
Evaluation of Assumption in Predictive Deconvolution

Let us evaluate the implications of the assumptions (1 to 7) made for deconvolution, especially predictive deconvolution.
a) Assumptions 1, 2 & 3 are the basis for the convolutional model of the recorded seismogram. In practice, deconvolution often gives good results even if these three assumptions are not met.
b) Assumption 3 can be relaxed by considering time-variant deconvolution.
c) Assumption 4 can be taken care of by minimizing the noise during recording, applying noise-removal processes before deconvolution, and choosing a noise-free zone to design the filter.
d) Assumption 5: if the source wavelet is known and minimum phase (Assumption 7), then deconvolution gives the best result in a noise-free (Assumption 4) area.
e) If Assumption 6 (earth reflectivity is random) is violated and the source wavelet is not known, then the deconvolution output is inferior.
f) The output is further degraded if the source wavelet is not known.
g) Finally, in addition to violating Assumptions 5 & 7, if there is noise in the data (Assumption 4 is violated), then the output will be unacceptable.
Evaluation of Assumption in Predictive Deconvolution

• Deconvolution has been applied to more than a billion traces and in most cases it has given satisfactory results.
• A critical evaluation of the effect of the assumptions on predictive deconvolution is given in Figs 2.4-1 through 2.4-5.
• When predictive deconvolution does not work on some data, the most probable reason is that one or more of the seven assumptions has been violated.
Evaluation of Assumption in Predictive Deconvolution

Fig. 2.4-1: (a) Impulse response, (b) seismogram, (c) spiking deconvolution using the known minimum-phase wavelet, (d) deconvolution using an unknown minimum-phase source wavelet. The impulse response (a) is a sparse-spike series. For an unknown source wavelet (violation of assumption 5) spiking deconvolution yields a less than perfect result (compare c and d).
Evaluation of Assumption in Predictive Deconvolution

Fig. 2.4-2: (a) Impulse response, (b) seismogram, (c) spiking deconvolution using the known minimum-phase source wavelet, (d) deconvolution using an unknown minimum-phase source wavelet. The impulse response (a) is based on a sonic log. For an unknown source wavelet (violation of assumption 5) spiking deconvolution yields a less than perfect result (compare c and d).
Evaluation of Assumption in Predictive Deconvolution

Fig. 2.4-3: (a) Impulse response, (b) seismogram, (c) spiking deconvolution using the known mixed-phase source wavelet, (d) deconvolution using an unknown mixed-phase source wavelet. The impulse response (a) is a sparse spike-like series. For a mixed-phase source wavelet (violation of assumption 7) spiking deconvolution yields a degraded output (d) even when the wavelet is known (c).
Evaluation of Assumption in Predictive Deconvolution

Fig. 2.4-4: (a) Impulse response, (b) seismogram, (c) spiking deconvolution using the known mixed-phase source wavelet, (d) deconvolution using an unknown mixed-phase source wavelet. The impulse response (a) is a sparse spike-like series. For a mixed-phase source wavelet (violation of assumption 7) spiking deconvolution yields a degraded output (d) even when the wavelet is known (c).
Evaluation of Assumption in Predictive Deconvolution

Fig. 2.4-5: (a) Impulse response, (b) seismogram with noise, (c) deconvolution using an unknown mixed-phase source wavelet, (d) deconvolution using an unknown mixed-phase source wavelet. The impulse response (a) is based on the sonic log. In the presence of random noise (in violation of assumption 4) spiking deconvolution can produce a result that bears little relation to the earth's reflectivity (compare a and c).
Operator Length in Predictive Deconvolution

• Consider a single isolated minimum-phase wavelet as in Fig. 2.4-6(b); assumptions 1 through 5 are satisfied, so the ideal result of spiking deconvolution is a zero-lag spike as in (a).
• In the figure, n, α and the percent whitening denote the operator length of the prediction filter, the prediction lag and the amount of pre-whitening; the length of the prediction-error filter is (n + α), and for spiking deconvolution the prediction lag is unity, equal to the 2-ms sampling rate.
• In Fig. 2.4-6 the operator length varies from 22 ms to 224 ms.
• In this and the following numerical analyses, we refer to the autocorrelation and the amplitude spectrum (plotted on a linear scale).
• Short operators produce a spike of smaller amplitude with relatively high-frequency tails; the 128-ms operator gives an almost perfect spike output, bringing the spectrum closer to the spectrum of the impulse response.
• Recall that spiking deconvolution is inverse filtering in which the operator is the least-squares inverse of the seismic wavelet. Therefore an increasingly better result should be obtained when more and more coefficients are included in the inverse filter.
Operator Length in Predictive Deconvolution

Fig:2.4-6 Test of operator length for a single, isolated input wavelet where n= operator length α = prediction
lag, percent whitening, (a) Impulse response, (b) seismogram with minimum phase source wavelet
Operator Length in Predictive Deconvolution

• Now consider the real situation of an unknown source wavelet. Based on assumption 6, the autocorrelation of the input seismogram is used to design the deconvolution operator.
• The result of using the trace autocorrelation rather than the source-wavelet autocorrelation for designing the deconvolution operator is shown in Fig. 2.4-8.
• It can be seen that the deconvolution recovers the gross aspects of the spike series, but the deconvolved traces have spurious small-amplitude spikes trailing each real spike.
• Increasing the operator length does not indefinitely improve the quality: the 94-ms operator gives the optimum result for both 2.4-7 and 2.4-8. This is because the autocorrelation of the source wavelet dies out within about 100 ms (Fig. 2.4-6b), so anything beyond 94 ms does not represent the source wavelet.
• Consider Fig. 2.4-9, where the source wavelet is minimum phase but unknown; here also the deconvolution has restored the spikes that correspond to the major reflections.
• Fig. 2.4-10 shows the degrading effect of a mixed-phase source wavelet. Both the minimum-phase and mixed-phase wavelets have the same amplitude spectrum, but because the minimum-phase assumption is violated, the deconvolution does not convert the mixed-phase wavelet to a perfect spike; instead the deconvolved output is a complicated high-frequency wavelet. Also, the dominant peak in the output is negative, while the impulse response is positive.
• This difference in sign can happen when a mixed-phase wavelet is deconvolved.
Operator Length in Predictive Deconvolution

Fig:2.4-7 Test of operator length where n= operator length α = prediction lag, percent whitening, (a) Impulse
response, (b) seismogram with known minimum phase source wavelet
Operator Length in Predictive Deconvolution

Fig:2.4-8 Test of operator length where n= operator length α = prediction lag, percent whitening, (a) Impulse
response, (b) seismogram with known minimum phase source wavelet
Operator Length in Predictive Deconvolution

Fig:2.4-9 Test of operator length where n= operator length α = prediction lag, percent whitening, (a)
Reflectivity, (b) Impulse response (c) seismogram with unknown minimum phase source wavelet
Operator Length in Predictive Deconvolution

Fig:2.4-10 Test of operator length for a single isolated input wavelet, where n= operator length α = prediction
lag, percent whitening, (a) Impulse response (c) seismogram with mixed phase source wavelet
Prediction lag in Predictive Deconvolution

• Now, what operator length should be used for spiking deconvolution? Assumption 6 says that the autocorrelation of the input trace has characteristics similar to those of the source wavelet, so the first transient zone of the autocorrelation of the input trace should be used as the operator length.
PREDICTION LAG
• Predictive deconvolution has two uses: (a) spiking deconvolution, the case of unit prediction lag; and (b) predicting the input seismogram at a future time defined by the prediction lag. Case (b) is used to predict and attenuate multiples.
• Let us examine the interpretive effect of the prediction lag. Consider the single isolated minimum-phase wavelet (Fig. 2.4-14); here the operator length and % prewhitening are kept fixed while the prediction lag is varied.
• Let the prediction lag be α, the prediction-filter length n, and the input length (α + n); this prediction filter will convert the input wavelet to another wavelet α samples long. The first α lags of the autocorrelation are preserved while the next n lags are zeroed out.
• When the prediction lag is unity (one sampling interval), the result is equivalent to spiking deconvolution.
• A prediction lag greater than unity yields a wavelet of finite duration. The amplitude spectrum of the output increasingly resembles that of the input wavelet as the prediction lag is increased.
• When α = 94 ms, predictive deconvolution does nothing to the input wavelet because almost all the lags of its autocorrelation are left untouched.
Prediction Lag in Predictive Deconvolution

Fig:2.4-14 Test of prediction lag for a single isolated minimum phase input wavelet where n= operator length α
= prediction lag, percent whitening, (a) Impulse response, (b) seismogram with minimum phase source
wavelet
Prediction Lag in Predictive Deconvolution

• This suggests that, under the ideal condition of noise-free data, the resolution of the output can be controlled by the prediction lag. Resolution is highest for unit prediction lag (spiking deconvolution), and as α increases the resolution decreases. This is true for a mixed-phase wavelet also; however, these assessments are dictated by the S/N ratio.
• Though spiking deconvolution gives the highest resolution, the result may be degraded if the high-frequency energy is mostly noise.
• In the same figure, prediction lags of 8 and 22 ms correspond to the first and second zero crossings of the autocorrelation of the input wavelet. The first zero crossing produces a spike with some width, while the second zero crossing produces a wavelet with positive and negative lobes.
• When the prediction lag is increased, the output becomes band-limited (a decrease in resolution).
• A band limit can also be achieved by applying a band-pass filter to the output of spiking deconvolution, but this is not the same as the band-limited output of predictive deconvolution: first, the amplitude spectrum will be boxcar-shaped in the case of spiking deconvolution followed by a band-pass filter, whereas it will be closer to the amplitude spectrum of the input in the case of predictive deconvolution; second, spiking deconvolution followed by a band-pass filter boosts the high-frequency noise, which is not the case in predictive deconvolution.
• The application of spiking deconvolution to field data is often undesirable as it boosts the high-frequency noise in the data.
• If the prediction lag is larger, it affects the low-frequency end of the data, making the output more band-limited.
Percent Prewhitening

• Consider the single isolated input wavelet with minimum phase source wavelet (Fig:2.4-24), keeping the
prediction lag and operator length fixed, vary the % prewhitening, it may be noted that increasing the %
prewhitening has similar effect as that of increasing the prediction lag.
• It means prewitening also narrows the spectrum without changing much of the flatness character, while
larger prediction lag narrows the spectrum as well as alters its shape, making it look more like the
spectrum of the input seismic wavelet.
• Prewhitening preserves the spiky character of the output, although it adds low amplitude high frequency
tail, on the other hand, increasing prediction lag produces a wavelet with duration equal to prediction lag
(Fig:2.4-14).
• Finally the combined effect of prediction lag greater than unity and prewhitening on a single isolated
wavelet is shown in Fig:2.4-30. All the study suggest that, prewhitening narrows the output spectrum
making it band-limited.
• It is also observed that spiking deconvolution with a little prewhitening is almost equivalent to spiking
deconvolution without prewhitening followed by a band-pass filter.
• Though prewhitening has an effect similar to that of the prediction lag, the output from varying the
prewhitening is unpredictable, whereas the output from varying the prediction lag is predictable.
• Prewhitening is therefore used only to ensure that numerical instability in solving for the deconvolution
operator is avoided.
• In practice a value of 0.1% to 1% is standard.
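As a small numerical illustration of the stability point (wavelet values are assumed, not taken from the text): biasing the zero lag of the autocorrelation shifts every eigenvalue of the Toeplitz matrix upward, so the condition number of the normal equations can only improve.

import numpy as np
from scipy.linalg import toeplitz

w = np.array([1.0, -0.9, 0.81, -0.73])                   # hypothetical wavelet samples
r = np.correlate(w, w, mode="full")[len(w) - 1 :]        # autocorrelation lags 0..3
print(np.linalg.cond(toeplitz(r)))                       # condition number without prewhitening

r_pw = r.copy()
r_pw[0] *= 1.001                                         # 0.1% prewhitening on the zero lag
print(np.linalg.cond(toeplitz(r_pw)))                    # strictly smaller condition number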
Percent Prewhitening Test

Fig:2.4-24 Test of % prewhitening on a single isolated input wavelet, where n = operator length, α = prediction lag, % whitening. (a) Impulse response, (b) seismogram with minimum-phase source wavelet.
Percent Prewhitening Test

Fig:2.4-30 Test of % prewhitening on a single isolated input wavelet, where n = operator length, α = prediction lag, % whitening. (a) Impulse response, (b) seismogram with minimum-phase source wavelet.
Effect of Random Noise on Deconvolution

• As per assumption 4, the noise on the seismogram should be zero. The autocorrelation of ideal random noise
is zero at all lags except the zero lag, so the effect of random noise on the deconvolution operator is
similar to that of prewhitening.
• Both effects modify the diagonal of the autocorrelation matrix.
• Compare Fig:2.4-24 and Fig:2.4-31; random noise has been added in 2.4-31. It may be observed that the output
of spiking deconvolution with a 128 ms operator length is similar to that of spiking deconvolution without
random noise but with 20% prewhitening.
• Random noise is always harmful to deconvolution. Sometimes it introduces spikes in the output which may be
confused with primary signal.
Effect of Random Noise on Deconvolution

Fig:2.4-31 Test of the effect of random noise on the performance of deconvolution. The input seismogram (b),
associated with the reflectivity (a), contains a single isolated wavelet (at around 0.2 s) buried in random
noise. Here n = operator length, α = prediction lag, % whitening.
Multiple Attenuation

• Predictive deconvolution predicts the periodic events, such as the multiples in the seismogram, while the
prediction-error filter yields the unpredictable component, the reflectivity series. Consider the case of
the water-bottom multiple. Let Cw be the reflection coefficient of the water bottom and tw the two-way time
equivalent to the water depth. The time series is then
(1, 0, 0, …, 0, -Cw, 0, …, 0, Cw², 0, …, 0, -Cw³, 0, …), as represented by trace (b) in Fig:2.4-34. The
separation between successive spikes is tw (trace b), and the periodicity of the multiple can be seen in
the amplitude spectrum as peaks or notches.
• The noise-free convolutional model for a seismogram that contains water-bottom multiples can be written as:
x(t) = w(t) * m(t) * e(t) ………………………………………………………………………....(2-40),
where x(t) is the recorded seismogram, m(t) is the periodic component (the multiples) as demonstrated by
the trace, e(t) is the earth impulse response excluding the multiples associated with the water bottom, and
w(t) is the seismic/source wavelet.
• Multiple attenuation is done with predictive deconvolution. The prediction lag is selected from the
autocorrelogram of the input trace (Fig 2.4-34c) by avoiding the first part of the autocorrelogram, which
represents the seismic wavelet. The operator length is chosen to include the first burst of the periodic
event.
• Once the multiples are eliminated, we are left with the water-bottom primary. If desired, this wavelet can
be compressed to a spike by spiking deconvolution. The sequence can also be interchanged, that is, spiking
deconvolution first, followed by predictive deconvolution (trace f).
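A brief sketch of how these parameter choices translate into numbers, using the idealised water-bottom series from the text; Cw, tw and the sample interval are assumed values, and predictive_decon refers to the helper sketched earlier in this section.

import numpy as np

dt = 0.004                     # sample interval in seconds (assumed)
Cw, tw = 0.4, 0.160            # water-bottom reflection coefficient and two-way time (assumed)
nper = int(round(tw / dt))     # multiple period in samples

m = np.zeros(400)              # idealised periodic component m(t): 1, -Cw, Cw^2, -Cw^3, ... every tw
for k in range(400 // nper):
    m[k * nper] = (-Cw) ** k

lag = nper                     # prediction lag chosen past the wavelet part of the autocorrelogram
n_op = nper                    # operator length long enough to span the first isolated burst
# step 1: out = predictive_decon(trace, n_op, lag)        -> attenuates the periodic multiples
# step 2: out = predictive_decon(out, n_op=128, lag=1)    -> spikes the remaining primary wavelet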
Multiple Attenuation

Fig:2.4-34, (a) Reflectivity, (b) Impulse response, (c) Seismogram. Two-step deconvolution aimed at
attenuating the multiples, then spiking the remaining primary wavelets, (d) to (e). The process can be
performed in reverse order, (f) to (g). Here n = operator length, α = prediction lag.
Multiple Attenuation

• A sufficiently long spiking deconvolution operator can achieve both compressing the wavelet and removing
the multiples, but this can be dangerous if primary reflections are unintentionally suppressed. This is
shown in Fig 2.4-35. Here the water-bottom reflection is followed by another reflection at 0.28 s (a). The
impulse response contains the water-bottom multiples and the peg-leg multiples generated from the deeper
event (b).
• The amplitude spectrum contains pairs of peaks, which indicates the presence of two periodic events.
Careful selection of the predictive deconvolution parameters removes the periodic multiples, as seen in
trace (d); spiking deconvolution then produces trace (e).
Multiple Attenuation

Fig:2.4-35, Multiple attenuation by predictive deconvolution. (a) Reflectivity, (b) Impulse response, (c)
Seismogram. Two-step deconvolution: predictive decon (d) followed by spiking (e). Traces f, g, and h result
when single-step deconvolution is applied to the input trace.
How to Ensure No Harm to the Primaries?

• Refer to Fig. 2.4-35: the first 50 ms of the autocorrelogram represents the seismic wavelet. This is
followed by the burst between 50 and 170 ms that represents the correlation of the water bottom and the
primary.
• The isolated burst between 170 and 340 ms represents the actual multiple series (both peg-leg and
water-bottom multiples).
• The prediction lag must be chosen to bypass the first part of the autocorrelogram, which represents the
seismic wavelet and possible correlation between the primaries.
• The operator length must be chosen to include the first isolated burst, in this case between 170 and 340 ms.
• It may be noted that it is only at vertical incidence, or zero offset, that the periodicity of the multiples
is preserved; therefore predictive deconvolution may not be entirely effective on non-zero-offset data,
such as common-shot or CMP data.
• Sometimes predictive deconvolution is applied to stacked data, but the result may not be satisfactory, as
the amplitude relationships between the multiples are altered by the stacking process, primarily due to the
velocity difference between primaries and multiples; geometric-spreading compensation using the primary
velocity also changes the amplitude relationships of the multiples.
• There is one domain where the periodicity and amplitude relationships of multiples are preserved: the slant-stack domain.
Autocorrelation Window Selection
• Consider Fig:2.5-1. The CMP gather shown consists of five prominent reflections at around 1.1, 1.35,
1.85, 2.15, and 3.05 s. The gather also contains strong reverberations associated with these
reflections.
TIME GATE ANALYSIS
• Four scenarios have been analyzed: (a) the entire length of the trace; (b) the start of the gate follows
the first-arrival path and the deeper portion, dominated by ambient noise, is excluded; (c) both the early
portion, which contains energy corresponding to guided waves, and the deeper portion are excluded;
(d) a narrow window which excludes the shallow part as well as the lower-middle part.
• The third choice works best, as it represents the reverberatory character of the data over most of the
offsets (panel c).
• In general, the autocorrelation window should include the part of the record that contains useful
reflection signal and exclude coherent or incoherent noise.
• An autocorrelation function contaminated by noise is undesirable, since the deconvolution process
is most effective on noise-free data (assumption 4).
AUTOCORRELATION WINDOW LENGTH:
• A narrow window should be avoided, as it may lack the characteristics of the reverberations and even
those of the basic seismic wavelet.
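A sketch of how such a window test can be computed, assuming gather is a 2-D numpy array (traces × samples) and dt is the sample interval; fixed gate limits are used here for simplicity, whereas in practice the gate may follow the first arrivals with offset. Names and values are illustrative.

import numpy as np

def gate_autocorr(gather, dt, t_start, t_end, max_lag_s):
    # average autocorrelation of the chosen time gate, one value per lag
    i0, i1 = int(t_start / dt), int(t_end / dt)
    nlag = int(max_lag_s / dt)
    acc = np.zeros(nlag)
    for trace in gather:
        win = trace[i0:i1]
        r = np.correlate(win, win, mode="full")[len(win) - 1 : len(win) - 1 + nlag]
        acc += r / (abs(r[0]) + 1e-12)               # normalise each trace by its zero lag
    return acc / len(gather)

Comparing the result for the full-trace gate (scenario a) with a gate that excludes the guided-wave and noisy deep portions (scenario c) shows which window best preserves the reverberatory character.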
Autocorrelation Window Selection
Fig:2.5-1 An autocorrelation window test used to design decon operators. The solid lines indicate the window
boundaries. The entire 6.0 s record was included in (a). The autocorrelation is displayed beneath the record.
Autocorrelation Window Selection
• In general, any autocorrelation function is biased; that is, if the first lag value is computed from, say,
n nonzero samples, the second lag value is computed from n−1 nonzero samples, and so on. If n is not large
enough, there can be an undesirable biasing effect.
• The rule of thumb is that if the largest autocorrelation lag used in designing the deconvolution operator
is m, then the number of data samples should be no less than 8m. For example, with a 4 ms sample interval
and a maximum lag of 160 ms (m = 40 samples), the window should contain at least 320 samples, i.e. about
1.3 s of data.
• After choosing the autocorrelation window, the operator length is determined. Consider Fig.2.5-2: (a) is
the input gather, which is subjected to spiking decon with a prediction lag of 4 ms (1 sample) and 0.1%
prewhitening.
• A short 40 ms operator length leaves some residual energy that corresponds to the basic wavelet and the
reverberating wave-train in the record.
• With a 160 ms operator length, no remnant of the energy associated with the basic wavelet and
reverberations is left.
• Any operator length greater than 160 ms does not change the result significantly.
• As the prediction lag is increased from unity (spiking), the deconvolution process becomes increasingly
less effective in broadening the spectrum. In practice the prediction lag is unity, or the first or second
zero crossing of the autocorrelation function.
% PREWHITENING
• Typically a value between 0.1 and 1 percent is sufficient to ensure stability in designing the
deconvolution operator.
Autocorrelation Window Selection
Fig:2.5-2; Test of operator length. The corresponding autocorrelogram is beneath each record. The window
used in autocorrelation estimation is shown in Fig.2.5-1c. (a) Input gather; deconvolution using a
prediction lag of 4 ms (spiking decon), 0.1% prewhitening, and prediction filter operator lengths of
(b) 40 ms, (c) 80 ms, (d) 160 ms, (e) 240 ms.

Autocorrelation Window Selection
• The flattening of the average amplitude spectrum is greatest for spiking deconvolution. With a prediction
lag slightly greater than unity we observe insufficient flattening at the higher end of the frequency band.
For larger prediction lags there is insufficient flattening at both the higher and lower ends. A very large
prediction lag leaves the amplitude spectrum of the deconvolved data similar to that of the input.
• The data should be pre-conditioned before deconvolution, as noisy data cause problems. If there is
significant coherent noise in the data, dip filtering may be applied.
Signature Deconvolution

• In marine seismic acquisition, the far-field signature of the source array can be
recorded.
• Deterministic deconvolution is applied to remove the source signature, followed by
predictive deconvolution.
• x(t) = s(t) * w(t) * e(t), where s(t) is the source signature recorded in the far
field before it travels down into the earth, e(t) is the impulse response of the earth,
and w(t) is the unknown wavelet, which includes the propagation effects in the earth
and the response of the recording filter. (The old equation was x(t) = w(t) * e(t), so
that w(t) has been split into two parts: the known source signature s(t) and the
unknown w(t).)
• As s(t) is known, a deterministic inverse filter can be designed to remove it. The
unknown wavelet w(t) is then removed using the statistical method of spiking
deconvolution.
• There are two ways to handle s(t):
1. Convert the source signature to its minimum-phase equivalent, followed by
predictive/spiking deconvolution (Fig:2.5-7).
Signature Deconvolution

2. Convert s(t) to a spike, followed by predictive deconvolution.


The process involves the following steps:
a) Estimate the minimum-phase equivalent of the recorded source signature by
computing the spiking deconvolution operator (Eqn 2-39) and taking its inverse.
b) Design a shaping filter to convert the source signature to its minimum-phase
equivalent or to a zero-delay spike (Eqn 2-30).
c) Apply the shaping filter to each trace in each recorded shot record.
d) Apply predictive deconvolution to the output data from step (c).
The results shown in Fig.2.5-7 & 2.5-8 (panel "c") should be compared with single-step
statistical deconvolution (Fig.2.5-9). Since the source was not minimum phase,
Fig 2.5-9b should be better than 2.5-9d.
• The result of signature processing depends on the accuracy of the recorded
signature.
• Avoid signature processing of old recorded data unless the source signature was
recorded recently.
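A sketch of the shaping-filter step (b), assuming the recorded far-field signature s is available as a numpy array; the desired output d can be a zero-delay spike (as in Fig.2.5-8) or the minimum-phase equivalent of s (as in Fig.2.5-7). Function names and the operator length are illustrative.

import numpy as np
from scipy.linalg import solve_toeplitz

def shaping_filter(s, d, n_op, prewhite=0.001):
    # normal equations: autocorrelation of s on the left, crosscorrelation of d with s on the right
    r = np.correlate(s, s, mode="full")[len(s) - 1 : len(s) - 1 + n_op].copy()
    r[0] *= (1.0 + prewhite)
    g = np.correlate(d, s, mode="full")[len(s) - 1 : len(s) - 1 + n_op]
    return solve_toeplitz(r, g)

# e.g. converting the signature to a zero-delay spike:
# spike = np.r_[1.0, np.zeros(len(s) - 1)]
# f = shaping_filter(s, spike, n_op=len(s))
# each trace of the shot record is then convolved with f before predictive deconvolution.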
Signature Deconvolution

Fig.2.5-7. Signature processing. A shaping filter is designed to convert the recorded signature s(t) to its
minimum-phase equivalent and applied to the input record (a). The output (b) has the same bandwidth as the
input. This (b) is then processed by predictive decon using an operator length of 160 ms and prediction lags
of (c) 4 ms (spiking), (d) 12 ms, (e) 32 ms.
Signature Deconvolution

Fig.2.5-8. Signature processing. A shaping filter is designed to convert the recorded signature s(t) to a
spike and applied to the input record (a). The output (b) is then processed by predictive decon using an
operator length of 160 ms and prediction lags of (c) 4 ms (spiking), (d) 12 ms, (e) 32 ms.
It may be noted that the output from signature processing (b) still contains a wavelet component, the w(t)
component, that still needs to be removed.
Signature Deconvolution

Fig.2.5-9. Signature processing compared with statistical deconvolution. (a) Input shot record. (b) A
shaping filter is designed to convert the signature s(t) to its minimum-phase equivalent and applied to the
input record, followed by spiking deconvolution (this panel is the same as Fig.2.5-7c). (c) A shaping filter
is designed to convert the recorded signature to a spike, followed by spiking deconvolution (this panel is
the same as Fig.2.5-8c). (d) Spiking deconvolution of the input record (a). The autocorrelograms suggest
that wavelet compression is achieved in all three cases, (b), (c), and (d).
Post Stack Deconvolution

• Post-stack deconvolution is required for several reasons:
1. A residual wavelet is invariably present on the stack section, because the seven
assumptions underlying deconvolution are never fully met on field data; therefore
deconvolution fails to compress the basic wavelet into a spike.
2. Since the CMP stack is an approximation to a zero-offset section, where the multiples
would be periodic, predictive deconvolution can be effective in removing the
remaining multiples.
• Fig.2.5-18 is an example of post-stack deconvolution. After deconvolution the
spectrum is flattened further, the wavelet is compressed, and the marker horizons are
better characterized.
• As the prediction lag is increased, the flatness of the spectrum as well as the
vertical resolution is compromised (Fig.2.5-19).
Post Stack Deconvolution

Fig.2.5-18; (a) A portion of the stack section, and after spiking deconvolution using operator lengths of
(b) 120 ms, (c) 160 ms, (d) 220 ms, (e) 320 ms. The autocorrelograms at the bottom indicate that much of
the reverberating energy is attenuated with an operator length of 320 ms. However, spiking deconvolution
has failed to flatten the spectra completely due to the non-stationarity of the signal.
Post Stack Deconvolution

Fig.2.5-19; A portion of the stack section after predictive deconvolution using an operator length of
320 ms and a prediction lag of (a) 8 ms, (b) 12 ms, (c) 24 ms, (d) 32 ms, (e) 48 ms.
Vibroseis Deconvolution

• The vibroseis source is a long-duration sweep signal in the form of a frequency-
modulated sinusoid that is tapered at both ends. The vibroseis seismogram can be
represented as
• x(t) = s(t) * w(t) * e(t)-----------------------------------------------------(2-42), where s(t)
is the sweep signal and w(t) is the seismic wavelet. In the frequency domain the
convolution becomes multiplication:
• X(w) = S(w).W(w).E(w)---------------------------------------------------(2-43). In
terms of amplitude and phase spectra, (2-43) becomes
• Ax(w) = As(w) Aw(w) Ae(w) ---------------------------------------------(2-44a) and
• Φx(w) = Φs(w) + Φw(w) + Φe(w) --------------------------------------(2-44b)
• Cross-correlation of the recorded seismogram x(t) with the sweep signal s(t) is
equivalent to multiplying equation (2-44a) by As(w) and subtracting the sweep phase
Φs(w) from equation (2-44b). The correlated vibroseis seismogram x'(t) therefore has
the following amplitude and phase spectra:
• A'x(w) = As²(w) Aw(w) Ae(w) -------------------------------------------(2-45a)
• Φ'x(w) = Φw(w) + Φe(w) --------------------------------------------------(2-45b)
Vibroseis Deconvolution

• The inverse Fourier transform of As²(w) yields the autocorrelation of the sweep signal,
which is called the Klauder wavelet k(t).
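A sketch of a tapered linear sweep and its autocorrelation, the Klauder wavelet; the sweep parameters are assumed for illustration.

import numpy as np

dt, T = 0.004, 8.0                      # sample interval and sweep length, s (assumed)
f1, f2 = 8.0, 64.0                      # start and end frequencies, Hz (assumed)
t = np.arange(0.0, T, dt)
sweep = np.sin(2 * np.pi * (f1 * t + (f2 - f1) * t ** 2 / (2 * T)))   # linear up-sweep
ntap = int(0.25 / dt)                   # 250 ms cosine taper at each end
taper = np.hanning(2 * ntap)
sweep[:ntap] *= taper[:ntap]
sweep[-ntap:] *= taper[ntap:]

klauder = np.correlate(sweep, sweep, mode="full")       # zero-phase Klauder wavelet k(t)
# correlating a recorded trace with the sweep, np.correlate(x, sweep, mode="full"),
# squares the sweep amplitude spectrum and removes its phase, as in equations (2-45a) and (2-45b).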
Signature Deconvolution
Fig.2.3-8, Signature processing. (a) Recorded signature, (b) desired output, (c) shaping operator, (d)
shaped signature. The desired output is a zero-delay spike (top) and the minimum-phase equivalent of the
recorded signature (bottom).
BSR (Bottom Simulating Reflector) & Gas Hydrate Stability Zone.

• The term BSR stems from the principal characteristic of these reflectors: they mimic the seafloor
topography in marine seismic reflection data, thereby crosscutting sedimentary strata. BSRs are known to
occur in continental margin sediments in regions of gas hydrate and free gas (Shipley et al., 1979, …).
• A seismic reflection occurring in the upper few hundred meters of marine sediments, mimicking the
seafloor, crosscutting sediment layers, and showing a phase reversal, is known as a "bottom-simulating
reflector."
• A BSR is mostly related to the presence of gas hydrate, with a large impedance contrast between the
gas-hydrated sediments above and the free-gas layer below.
• A BSR is a seismic reflection indicating the lower limit of hydrate stability in sediments, due to the
different densities of hydrate-saturated sediments, normal sediments, and those containing free gas.[2]
• A diagenesis-related BSR occurs at the opal-A/opal-CT transition zone, often lies deeper than and outside
the base of the gas hydrate stability zone, shows no phase reversal, and does not always mimic the
seafloor.
BSR (Bottom Simulating Reflector) & Gas Hydrate Stability Zone.

• The occurrence of gas hydrate in shallow marine sediments causes an increase of seismic velocity compared
to that of the water-saturated host sediments. This increase of velocity depends on the spatial
distribution of hydrates in the pore spaces of the sediments.
• Velocities derived from seismic data are low-frequency estimates.
• Characterisation of a gas hydrate reservoir from seismic data generally assumes a homogeneous distribution
of gas hydrates, which leads to an overestimation of gas hydrate saturation.
• A study in the KG basin has demonstrated that the amount of gas hydrate estimated from seismic data is
indeed higher (∼12.12 per cent of pore volume) than that obtained from simulated 2-D heterogeneous velocity
and density models (∼9.60 per cent of pore volume) along the seismic line.
BSR (Bottom Simulating Reflector) & Gas Hydrate Stability Zone.

• Gas hydrates are crystalline, water-based solids physically resembling ice, in which small amounts of
light hydrocarbon gases are trapped inside "cages" of hydrogen-bonded frozen water molecules. In other
words, gas hydrates are clathrate compounds in which the host molecule is water and the guest molecule is
typically a gas or liquid. Without the support of the trapped molecules, the lattice structure of hydrate
clathrates would collapse into a conventional ice crystal structure or liquid water. Most
low-molecular-weight gases, including O2, H2, N2, CO2, CH4, H2S, Ar, Kr, and Xe, as well as some higher
hydrocarbons and freons, will form hydrates at suitable temperatures and pressures.
• Gas hydrates are a crystalline form of methane and water, and exist in the shallow sediments of outer
continental margins.
• Gas hydrates are a crystalline solid formed of water and gas. They look and act much like ice, but contain
huge amounts of methane.
BSR (Bottom Simulating Reflector) & Gas Hydrate Stability Zone.

• Gas hydrates are envisaged as a viable major energy resource for the future. Thus, delineation of gas
hydrates by geophysical methods is very important for evaluating the resource potential along the Indian
continental margin, with a view to meeting the overwhelming demand for energy in India.
• Large quantities of gas hydrates are expected in the EEZ (Exclusive Economic Zone) of India.
• A large quantity of existing multichannel seismic data has been evaluated and, based on that, two potential
sites of 100 x 100 km each, in the KG Basin and the Mahanadi Basin, have been identified and surveyed for
gas hydrates.
• It is estimated that trillions of cubic meters of methane gas are available in the gas hydrates of the
Indian EEZ.
• Gas hydrates can be a future source of energy for India. Development of technology to harvest gas hydrates
can ensure the energy security of the nation. Gas hydrate exploration and the development of tools for the
recovery of gas from these hydrates are the need of the hour.
BSR (Bottom Simulating Reflector) & Gas Hydrate Stability Zone.

• Basic infrastructure has been developed in three national laboratories to take up studies on gas hydrates
in India, and expertise was gained during the 10th and 11th Plan activities.
• Exploitation of gas hydrates from the continental margins of ocean basins is a technological challenge.
• The Krishna-Godavari basin has evidence of the occurrence of a huge thickness of gas hydrates,
demonstrated by a shale-fracturing mechanism, and the world's deepest-seated occurrence of gas hydrates
has been sampled in the Andaman basin.
BSR (Bottom Simulating Reflector)

Methane clathrate (CH4·5.75H2O) or (4CH4·23H2O), also called methane hydrate, hydromethane, methane ice,
fire ice, natural gas hydrate, or gas hydrate, is a solid clathrate compound (more specifically, a clathrate
hydrate) in which a large amount of methane is trapped within a crystal structure of water, forming a solid
similar to ice.[1] Originally thought to occur only in the outer regions of the Solar System, where
temperatures are low and water ice is common, significant deposits of methane clathrate have been found
under sediments on the ocean floors of the Earth.
Deconvolution

• Surface-consistent convolutional model: x'ij(t) = sj(t) ∗ hl(t) ∗ ek(t) ∗ gi(t) + n(t),
where x'ij is the model of the recorded seismogram, sj(t) is the waveform component
associated with source location j, gi(t) is the waveform component associated with
receiver location i, hl(t) is the component associated with offset l, where l = |i − j|,
and ek(t) is the earth impulse response at the midpoint of source and receiver, so that
k = (i + j)/2.
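A minimal sketch of building one modelled trace from these surface-consistent components; s, g, h and e are dictionaries of placeholder component responses keyed by source station j, receiver station i, offset class l and midpoint k, and the index relations follow the text.

import numpy as np

def model_trace(i, j, s, g, h, e, noise_std=0.0):
    l = abs(i - j)                 # offset index, l = |i - j|
    k = (i + j) // 2               # midpoint index, k = (i + j)/2 (integer stations assumed)
    x = np.convolve(np.convolve(s[j], h[l]), np.convolve(e[k], g[i]))
    return x + noise_std * np.random.randn(len(x))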
Prewhitening

• We know that the amplitude spectrum of the spiking deconvolution operator is (approximately) the inverse
of the amplitude spectrum of the input wavelet. This is sketched in the figure below.
• Prewhitening amounts to adding a bias to the amplitude spectrum of the seismogram to be deconvolved, to
prevent division by zero, since the amplitude spectrum of the inverse filter (middle) is the inverse of the
amplitude spectrum of that seismogram (left). Convolution of the filter with the seismogram is equivalent
to multiplying their respective amplitude spectra; this yields a nearly white spectrum (right).
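A frequency-domain sketch of the same idea (amplitude only, zero phase, for illustration): the prewhitening bias keeps the inverse amplitude spectrum finite where the input spectrum approaches zero. Names and the default value are assumptions.

import numpy as np

def whiten(trace, prewhite=0.001):
    X = np.fft.rfft(trace)
    amp = np.abs(X)
    bias = prewhite * amp.max()                      # the bias added to the amplitude spectrum
    inv_amp = 1.0 / (amp + bias)                     # inverse-filter amplitude spectrum
    return np.fft.irfft(X * inv_amp, n=len(trace))   # nearly white output spectrum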
Miscellaneous

• The matrix equation used to derive the least-squares inverse filter is made up of the autocorrelation lags
of the input wavelet (which form the matrix on the left side of the equation) and the crosscorrelation of
the desired output with the input, which forms the column vector on the right side.
• These observations were generalized by Wiener to derive filters that can convert the input to any desired
output.
• The optimum Wiener filter is optimum in that the least-squares error between the actual and desired outputs
is minimum.
• When the desired output is a zero-delay spike, the optimum Wiener filter is the least-squares inverse
filter.
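A small numerical check of the last point (wavelet values are illustrative): designing the Wiener filter with a zero-delay spike as the desired output and convolving it with the wavelet compresses the wavelet towards a spike, i.e. it acts as the least-squares inverse filter.

import numpy as np
from scipy.linalg import solve_toeplitz

w = np.array([1.0, 0.5])                      # illustrative input wavelet
n_op = 8
r = np.correlate(w, w, mode="full")[len(w) - 1 :]
r = np.r_[r, np.zeros(n_op - len(r))]         # autocorrelation lags 0 .. n_op-1 (zero-padded)
g = np.zeros(n_op); g[0] = w[0]               # crosscorrelation of the spike with the wavelet
f = solve_toeplitz(r, g)                      # optimum Wiener (spiking) filter
print(np.convolve(w, f)[:6])                  # approximately (1, 0, 0, ...)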
