
Master Dissertation

Multiples Attenuation in Western Offshore Basin


A comparative study of available techniques of multiple attenuation by processing a long offset deep marine seismic data.

Pankaj K. Mishra
M.Sc. Geophysics, IIT Kharagpur

UNDER THE GUIDANCE OF

Prof. S.K. Nath
Former Head of Department
Department of Geology and Geophysics
IIT Kharagpur, India

Mr. S. Basu
Senior Geophysicist, SPIC
Oil and Natural Gas Corporation Ltd, India

INDIAN INSTITUTE OF TECHNOLOGY

ACKNOWLEDGEMENT

It is a great pleasure to express my gratitude to my academic supervisor Prof. S.K. Nath (former Head, Department of Geology and Geophysics, IIT Kharagpur) for providing me with a thorough theoretical background, for the opportunity to carry out my master's dissertation in industry, and for his guidance throughout my master's degree. I am grateful to Mr. S. Basu (SPIC, ONGC, Mumbai) for his guidance throughout this project; his full and continual support helped me greatly. I am thankful to Mr. D. Chatterjee (GGM, SPIC, ONGC Mumbai) for providing all the facilities necessary for this project. I am also thankful to Prof. Biswajit Mishra (Head of the Department of Geology and Geophysics, IIT Kharagpur) for providing me with all possible facilities throughout the program. I would also like to express my heartfelt thanks to Mr. T.K. Bharti (SPIC, ONGC), who contributed his time in helpful discussions and created the friendly atmosphere which led to the successful completion of the work and the preparation of this thesis. Finally, I acknowledge the Indian Institute of Technology, Kharagpur for providing me with such a great platform.

-Pankaj K Mishra

Chapter 1: Introduction
1.1 The Problem of Multiple Reflections
Subsurface images provided by the seismic reflection method are the single most important tool used in oil and gas exploration. Almost exclusively, our conceptual model of the seismic reflection method, and consequently our seismic data processing algorithms, treat primary reflections, those waves that are scattered back towards the surface only once, as the signal. The travel times of the primary reflections are used to map the structure of lithology contrasts, while their amplitudes provide information about the magnitude of those contrasts as well as other information, such as the presence or absence of fluids in the pore spaces of the rock. In seismic exploration the problem of multiple reflections contaminating seismograms, and thus disguising important information about subsurface reflectors, is well known.

Today, the majority of all oil and gas resources are discovered in offshore continental-shelf areas, in both shallow and deep water. Before oil-producing wells can be drilled, geophysicists have to provide an image of the physical properties in the subsurface that shows where reservoirs can be expected. In marine exploration we encounter the problem that the water layer often behaves as a wave trap (Backus, 1959), in which seismic waves are multiply reflected between the sea surface and the sea bottom. Waves that are transmitted through the sea bottom can also reverberate between deeper reflectors. The energy of these interbed multiples and water-layer reverberations can become so strong that the primary arrivals of deeper target reflectors become completely invisible. As a result, marine seismograms often show a ringy character, with strong multiples superposed on most of the primary arrivals from deeper reflectors. For correctly locating a target reflector that might indicate an oil reservoir, these interfering multiple reflections have to be eliminated or, since this is only rarely possible, at least attenuated.
The efficient elimination of multiples from marine seismic data is one of the outstanding problems in geophysics, and it requires large amounts of computer time. The marine seismic industry is a multi-million-dollar market, and improvements in the accuracy and efficiency of multiple removal will lead to cost reductions and shorter turnaround times in this industry.

1.2 Classification of Multiples:


Multiples can be either short-period or long-period. In recorded marine seismograms most multiple reflections arise from interfaces with a strong impedance contrast, such as the free surface and the water bottom. Figure 1.1 shows ray-path diagrams for:
(a) water-bottom multiples of first and second order
(b) free-surface multiples of first and second order
(c) peg-leg multiples of first and second order
(d) intrabed multiples of first and second order
(e) interbed multiples of first and second order

These are a few of the numerous ray-path configurations associated with multiple reflections encountered in marine data. Regardless of type, all multiples share two properties that can be exploited to attenuate them with varying degrees of success: periodicity, and moveout that differs from that of the primaries. Shot records over deep water contain long-period water-bottom multiples and peg-leg multiples associated with reflectors just below the water bottom, whereas shot records over shallow water contain short-period multiples and reverberations. The guided waves in shallow-water records also contain multiples whose ray paths lie within the water layer. The same kinds of multiples can be seen in the stack section, as demonstrated in the figure below.

1.3 Attenuation of Multiples: The standard approach in seismic data processing is to attenuate the multiples before imaging, that is, in data space. Most algorithms for the attenuation of multiples in data space are based on three main characteristics of the multiples:
(1) their periodicity in arrival time (predictive deconvolution)
(2) their difference in moveout with respect to the primaries in CMP gathers (f-k and Radon filtering)
(3) their predictability as the auto-convolution of the primaries (Surface-Related Multiple Elimination, SRME)
Predictability has always been important in multiple removal. In the early days of seismic processing (the 1960s), single-trace statistical prediction was very successful (Robinson, 1957). In the early 1980s prediction-error filtering was given a wave-theoretical basis, providing a unified theory for surface-related and internal multiples (Berkhout, 1982). This increased the effectiveness of multi-channel prediction-error filtering significantly (Verschuur, 1991). Nowadays, multiple-removal algorithms are to a large extent wave-theory-based, multi-channel prediction-error filters. Each of these approaches has distinctive advantages and disadvantages.

In this thesis, I refer to data space as the un-migrated space, i.e. data as a function of time. I consider two main sets of data: source gathers and CMP gathers. The first are functions of the source coordinates, offsets and time, while the second are functions of the CMP coordinates, half-offsets and time.

Chapter 2: Geometry Merging and Raw Data Analysis


For processing we are given raw seismic data and its geometry. In land acquisition this geometry is delivered in the Shell Processing Support (SPS) format and contains information about shot points, receivers, static corrections, etc. In marine acquisition it is delivered in the UKOOA format, which similarly contains source locations, receiver locations, etc. In addition, an observer report gives supplementary information about the geometry. When we start processing the seismic data, our first job is to merge the corresponding geometry with the data. This does not change the data itself; it only fills in header values, after which we can access the data in the desired order, e.g. FFID- or CDP-sorted. Once the geometry is merged, the data are ready to be processed.

Our next job is to analyze the given data, and I will start here with the real data provided. The raw data display is shown in Figure 2.1. From the display we can make the following observations. The data have already gone through some very basic processing: since this is a deep marine dataset it should be full of swell and cable noise, but we see nothing of the kind, so a low-cut filter must already have been applied to eliminate these noises. The straight lines at the upper left corner are the direct arrivals, which are not wanted in the output seismogram. Eliminating them is an easy task: we simply apply a top mute above the water bottom. Since this dissertation is mainly concerned with multiple elimination, I will not describe this procedure and will apply the mute at an appropriate point in the processing sequence. The water bottom starts at approximately 2 s. Because this is a long-offset dataset, the water-bottom primary and its associated multiples continue as nearly linear events at long offsets, where refraction occurs. The refraction ending between 8 and 9 s (the upper dark event) is to be removed; we will attenuate it by f-k filtering.

We can see first- and second-order multiples of the primary events at about 2 s, near 4 s and at 6 s, also continuing to long offsets. We will try to eliminate these in two parts: the near-offset part by 2D SRME and the far-offset part by muting in the parabolic Radon transform domain. In addition, we observe a series of multiples between the primary and the first-order multiple. These are short-period multiples and can be treated by predictive deconvolution. Predictive deconvolution alone is not effective enough to eliminate them all, but it works well after 2D SRME. We do not see significant linear noise, as expected for deep marine data.

Figure 2.1: The raw seismic record (after low-cut filtering).

Chapter 3: F-K Filtering


3.1: Principle - Data in the t-x domain are transformed to the frequency-wavenumber (f-k) domain using a 2D FFT. Noise and multiples are separated from primaries because of their different dips as well as frequencies. Once the multiples are separated and we know the region of the f-k plane in which they lie, we can subtract that part of the data in the f-k domain. The remaining data are inverse-transformed back to t-x space.
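The principle above can be sketched in a few lines of Python. This is a schematic illustration only: a real f-k filter tapers the edges of the reject zone to avoid ringing, and the function and parameter names here are my own, not those of any processing package.

```python
import numpy as np

def fk_filter(data, dt, dx, reject):
    """Zero out a region of the f-k plane and transform back to t-x.

    data   : 2D t-x gather, shape (nt, nx)
    dt, dx : time and trace sampling intervals
    reject : function (F, K) -> boolean mask of the f-k region to remove
    """
    D = np.fft.fft2(data)                       # t-x -> f-k
    f = np.fft.fftfreq(data.shape[0], d=dt)     # temporal frequencies (Hz)
    k = np.fft.fftfreq(data.shape[1], d=dx)     # spatial wavenumbers (1/m)
    F, K = np.meshgrid(f, k, indexing="ij")
    D[reject(F, K)] = 0.0                       # subtract the rejected region
    return np.real(np.fft.ifft2(D))             # f-k -> t-x
```

Under NumPy's FFT sign conventions, a mask such as `reject = lambda F, K: F * K < 0` removes plane waves dipping in one direction while leaving the opposite dip untouched, which is the essence of separating linear noise from reflections by dip.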

3.2: f-k Analysis - Before applying an f-k filter we need to perform an interactive f-k analysis to find out where exactly the multiples lie in the f-k domain; only then can we do the elimination. After executing the corresponding module, an interactive panel is displayed showing the data in four domains, t-x, t-k, f-x and f-k, as shown in Figure 3.1.

For convenience we can change the display to show only the t-x and f-k domains, as shown in Figure 3.2.

A useful property of the software is that as we draw a line in the t-x domain, a corresponding line appears in the f-k domain showing its position there. So by tracing multiples and linear refractions we can locate them in the f-k domain and select those regions by drawing a polygon. This polygon is saved in the database and is used in f-k filtering. The f-k analysis is quite interactive, and the output can be previewed as many times as we please while choosing the location of the correct polygon.

3.3 Effect of f-k filtering on the data

The effect of the f-k filter is shown in Figure 3.3(b), the f-k filtered data, as compared to Figure 3.3(a), the raw data.

Figure 3.4(a): Shot gather and frequency spectrum without f-k filtering

Figure 3.4(b): Shot gather and frequency spectrum after f-k filtering

3.4: Conclusion - F-k filtering is a traditional, conventional tool in seismic processing for removing linear noise and multiples, and it is very easy to apply. The shot gathers before and after filtering show how effective it can be, but a study of the frequency spectra makes it clear that the reflection amplitudes have been affected in an undesirable way, i.e. they have been reduced. This is the drawback of f-k filtering: f-k filtered data are not well suited to special processing such as AVO analysis. We should therefore not rely on this technique to remove all the noise at once, but use other methods as well. Moreover, the performance of an f-k filter in suppressing multiples depends strongly on primary and multiple reflections being mapped to separate regions of the f-k plane. This is generally the case on far-offset traces, for which the difference in moveout can be large, but not on short-offset traces, for which the difference in moveout is small. The performance of f-k filtering is therefore poor at small offsets even if the subsurface geology is not very complex, which usually makes it an undesirable option for multiple elimination. However, the f-k filter is an effective tool for removing linear noise where present; comparing the outputs, we can see that the f-k filter has treated the direct arrivals as linear noise and removed them.

Chapter 4: Predictive Deconvolution

4.1: Principle - The attenuation of short-period multiples (most notably reverberations from a relatively flat, shallow water bottom) can be achieved with predictive deconvolution. The periodicity of the multiples is exploited to design an operator that identifies and removes the predictable part of the wavelet (multiples), leaving only its non-predictable part (signal). The key assumption is that genuine reflections come from an earth reflectivity series that can be considered random and therefore not predictable (Yilmaz, 1987). In general, for other than short-period multiples, only moderate success can be achieved with this simple, one-dimensional procedure. The main goal of predictive deconvolution here is the suppression of multiples. The desired output is a time-advanced (by the lag parameter) version of the input signal; to suppress multiples we choose a lag corresponding to the two-way travel time of the multiple. If the input signal is mixed-phase, a spiking deconvolution or wavelet shaping may improve the result of the subsequent predictive deconvolution.

Deconvolution rests on certain assumptions:
(1) The earth is made up of horizontal layers of constant velocity.
(2) The source generates a compressional plane wave that impinges on layer boundaries at normal incidence. (Both assumptions are violated in structurally complex areas with gross lateral facies changes.)
(3) The source waveform does not change as it travels in the subsurface, i.e. it is stationary. (In reality it changes because of divergence and absorption.)
(4) The noise component is zero. (In reality there are several types of noise, e.g. wind and commercial activity.)
(5) The source waveform is known.
(6) Reflectivity is a random series. (This implies that the seismogram has the characteristics of the seismic wavelet, in that their autocorrelations and amplitude spectra are similar.)

The convolutional model is the mathematical depiction of the recorded seismogram:

s(t) = w(t) * r(t) + n(t)

where s(t) is the recorded signal, w(t) the input wavelet, r(t) the earth reflectivity, n(t) the noise, and * denotes convolution. Deconvolution, the converse of convolution, is an attempt to obtain the earth reflectivity from the measured signal: in the noise-free case it amounts to designing an inverse filter f(t) such that f(t) * w(t) approximates a spike, so that f(t) * s(t) approximates r(t).
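As a hedged numerical illustration of this convolutional model, a synthetic trace can be built as follows. The wavelet, reflectivity and noise level below are invented for the example and are not taken from the thesis data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# w(t): a short, made-up source wavelet
wavelet = np.array([1.0, -0.6, 0.2, -0.05])

# r(t): a sparse, random earth reflectivity series
reflectivity = np.zeros(200)
spike_times = rng.choice(200, size=12, replace=False)
reflectivity[spike_times] = rng.uniform(-1.0, 1.0, size=12)

# n(t): weak additive noise
noise = 0.01 * rng.standard_normal(len(reflectivity) + len(wavelet) - 1)

# s(t) = w(t) * r(t) + n(t)
trace = np.convolve(wavelet, reflectivity) + noise
```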

Predictive deconvolution improves the temporal resolution of seismic data by compressing the basic seismic wavelet, and it can sometimes remove a significant part of the multiple energy from the seismic section, which is our purpose here. Deconvolution compresses the basic wavelet in the recorded seismogram, attenuates reverberations and short-period multiples, and thus increases temporal resolution and yields a representation of subsurface reflectivity. Predictive deconvolution attempts to predict and remove only the tail of the input wavelet. The tail consists of reverberations that are introduced into the down-going seismic wavelet by multiple reflections. Since predictive deconvolution predicts the multiples in order to attenuate them, it is also called prediction-error filtering.

4.2: Pre-conditioning for Deconvolution:
(1) Wide band-pass filter for removing random noise
(2) True amplitude recovery
(3) Spherical divergence correction
(4) Mute

4.3: Parameters of Deconvolution:
Prediction distance (gap) - the part of the wavelet to preserve (the primary reflection).
Operator length - the length of the filter; it defines how many orders of the multiple the operator will address.
Design window - the data window over which the autocorrelation is determined, placed where reverberations are most prominent.
White noise - a small amount of white noise is added to the autocorrelogram during operator design to prevent operator instability (division by zero while calculating the wavelet inverse). The amount added is generally in the range 0.1% to 1%. Too little white noise may cause the deconvolution operator to become unstable and decrease the S/N ratio of the data; too much may decrease the effectiveness of the deconvolution and narrow the bandwidth of the data.

4.4: Determination of Operator Length: To determine the operator length we keep the prediction distance constant (say PD = 8) and vary the operator length, e.g. 140, 180, 240, 280, 320, 360, and after analyzing the outputs and frequency spectra we decide the optimum operator length. For example, I compare four combinations of operator length with a constant prediction distance of 8.
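The operator design described above can be sketched as a single-trace Wiener prediction filter with white-noise stabilisation. This is a minimal sketch, assuming the design window is the whole trace; the names are my own and not those of the processing software used in the thesis.

```python
import numpy as np

def predictive_decon(trace, gap, nop, white_noise=0.001):
    """Design and apply a predictive deconvolution operator to one trace.

    gap         : prediction distance in samples (PD)
    nop         : operator length in samples (OL)
    white_noise : fraction of the zero-lag autocorrelation added to the
                  diagonal for operator stability
    """
    n = len(trace)
    r = np.correlate(trace, trace, mode="full")[n - 1:]   # lags 0 .. n-1
    # normal equations: Toeplitz(r[0..nop-1]) f = r[gap..gap+nop-1]
    R = np.array([[r[abs(i - j)] for j in range(nop)] for i in range(nop)])
    R += white_noise * r[0] * np.eye(nop)
    f = np.linalg.solve(R, r[gap:gap + nop])
    # predictable part of the trace: sum_k f[k] * trace[t - gap - k]
    predicted = np.convolve(trace, np.concatenate([np.zeros(gap), f]))[:n]
    return trace - predicted        # prediction error = deconvolved trace
```

Applied to a trace carrying a reverberation with a period of, say, 20 samples, a short gap combined with an operator long enough to span the reverberation largely removes the periodic part while leaving the first arrival intact.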

Figure 4.1(a): Deconvolution Gather and frequency response OL=140, PD=8

Figure 4.1(b): Deconvolution Gather and frequency response OL=240, PD=8

Figure 4.1(c): Deconvolution Gather and frequency response OL=320, PD=8

Figure 4.1(d): Deconvolution Gather and frequency response OL=360, PD=8

We see that an operator length of 320 is the most effective, so we choose this operator length and now analyze the following combinations of different prediction distances with it.

Figure 4.2(a): Deconvolution gather and frequency response, OL=320, PD=2

Figure 4.2(b): Deconvolution gather and frequency response, OL=320, PD=4

Figure 4.2(c): Deconvolution gather and frequency response, OL=320, PD=8

Figure 4.2(d): Deconvolution gather and frequency response, OL=320, PD=12

Figure 4.2(e): Deconvolution gather and frequency response, OL=320, PD=16

Figure 4.2(f): Deconvolution gather and frequency response, OL=320, PD=2

Figure 4.2(g): Deconvolution gather and frequency response, OL=320, PD=28

Finally, we see that a prediction distance of 16 gives comparatively the best result, so we settle on OL=320 and PD=16. As for white noise, it is generally taken to be between 0.1% and 1%.

Chapter 5: Parabolic Radon Transform


Radon transforms work on the basis of the moveout difference between primaries and multiples. After NMO correction the data are transformed into the tau-p domain, where multiples and primaries are separated: even after normal-moveout correction the multiples retain some residual moveout, and that separates them from the primaries. The parabolic Radon transform attenuates long-period multiples; short-period multiples are in general supposed to be attenuated by predictive deconvolution, so we take the deconvolved data as the input to the Radon transform. The Radon transform requires the velocity field of the data, so before the transform we pick velocities in the CMP domain. With these velocities we enter an interactive Radon display and select a Radon mute, which is essentially the line separating the primaries from most of the multiples. We examine the effect of the Radon transform in Figure 5.1(b), CMP gathers after the Radon transform, compared to Figure 5.1(a), CMP gathers before it.
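To make the moveout-based separation concrete, here is a bare-bones adjoint (stacking) parabolic Radon transform, which sums the gather along curves t = tau + q*x^2. A production implementation inverts this operator by least squares, usually per frequency, rather than plain stacking; every name here is my own illustration.

```python
import numpy as np

def parabolic_radon_stack(data, dt, x, q):
    """Adjoint parabolic Radon transform of a t-x gather.

    data : 2D array (nt, nx), assumed NMO-corrected
    dt   : time sampling interval (s)
    x    : offsets (m); q : curvatures (s/m^2), q >= 0
    Returns m(tau, q), the stack of the data along t = tau + q * x**2.
    """
    nt, nx = data.shape
    m = np.zeros((nt, len(q)))
    for iq, qv in enumerate(q):
        for ix, xv in enumerate(x):
            s = int(round(qv * xv ** 2 / dt))   # parabolic moveout in samples
            if s < nt:
                m[:nt - s, iq] += data[s:, ix]  # shift trace up, then stack
    return m
```

Events with residual parabolic moveout focus at their curvature q in this domain, so a mute that keeps only the primary region (small residual moveout), followed by the inverse transform, attenuates the multiples.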

Chapter 6: 2D Surface-Related Multiple Removal


Radon demultiple is currently the mainstay of marine demultiple processing flows. It provides a high degree of attenuation, especially of the long-period multiples found in deep water. It works well in most areas, but experiences difficulties when the moveout differential decreases, as with peg-leg multiples or multiple energy on the near traces. Diffracted and 3D multiples also pose problems because of their distorted moveout behavior. Surface-Related Multiple Elimination (SRME) techniques are based on work done at Delft University (Verschuur, 1992). They aim to attenuate all multiple energy related to the surface through an entirely data-driven process. The principle rests on the concept that generating all surface-related multiples affiliated with one reflector is simply a matter of propagating the recorded data down to that particular reflector. In practice, the recorded data themselves are used as a first estimate of the primary wavefield, and the process becomes a series of cross-convolutions of common-midpoint (CMP) gathers. The SRME algorithm generates a pre-stack multiple model that can then be subtracted from the data using an adaptive subtraction or pattern-recognition algorithm. Newer innovations involve true 3D algorithms (although these are difficult and expensive) and non-iterative versions, the so-called partial SRME (Hugonnet, 2002). 2D SRME does not require any a priori information other than the primaries. In convolution-based 2D SRME, the primaries are convolved with themselves to model the multiples; the modeled multiples are then subtracted from the data, which is a combination of primaries and multiples, by means of adaptive subtraction, and the output is, theoretically, primaries only. SRME predicts and constructs the multiples via a convolution model: surface multiples are predicted by convolution, in space and time, of the seismic data P(z0) with themselves.
M(z0) = P0(z0) * P0(z0)

Adaptive subtraction of the predicted multiples from the input data:

P0(z0) = P(z0) - A(z0) M(z0)

The first equation describes a spatial convolution along the surface of the seismic data with the multiple-prediction operators, i.e. the seismic data without multiples. For each source-receiver pair, traces from the input data related to the source location (in this case a shot record) are combined at the surface with traces of the multiple-prediction operator data belonging to the desired receiver position. Cross-convolving these two sets of data and adding the results produces the multiple-prediction trace for that source-receiver pair. In this (Kirchhoff-type) summation only the multiple events add constructively, so the full multiple model is predicted. This model is then subtracted from the original data to obtain multiple-free data.
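The prediction step can be sketched for the idealised case of a fixed-spread survey whose sources and receivers are co-located on a regular surface grid. This sketch ignores the source-wavelet correction, obliquity factors and the adaptive subtraction, and the names are my own.

```python
import numpy as np

def srme_predict(data):
    """Predict surface-related multiples M = P * P by convolving the data
    with itself: for each output trace (r, s), sum over surface positions k
    of the temporal convolution of trace (r, k) with trace (k, s).

    data : 3D array (n_receivers, n_sources, nt) with co-located,
           regularly sampled sources and receivers.
    """
    nr, ns, nt = data.shape
    D = np.fft.rfft(data, axis=2)            # temporal convolution -> product
    M = np.einsum("rkf,ksf->rsf", D, D)      # spatial convolution over k
    return np.fft.irfft(M, n=nt, axis=2)
```

For a single flat reflector at two-way time t0, every predicted trace shows an event at 2*t0, the first-order surface multiple (scaled by the number of surface positions, a factor that a real implementation absorbs into the surface operator and the adaptive subtraction).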

Figure 6.1: Shot gathers and frequency spectrum of the raw data without 2D SRME

Figure 6.2: Shot gathers and frequency spectrum after 2D SRME

Chapter 7: Conclusions