
DELPHI Stepwise Approach to AVO Processing


Dirk Jacob Verschuur, Aart-Jan van Wijngaarden, and Riaz Ali
Delft University of Technology, Delft, The Netherlands

Abstract
The data set provided for amplitude versus offset (AVO) inversion has been subjected to the three steps which are involved in the DELPHI consortium program: preprocessing, structural imaging, and lithologic characterization. In the preprocessing step, a major problem is the presence of strong surface-related multiples. With an integrated surface-related and Radon multiple elimination procedure, it was possible to remove the multiples in a satisfactory way without distorting the primary AVO characteristics. Once the multiples were removed, structural imaging could be done in a fairly straightforward way, in which prestack migration techniques were used to get a good macro velocity depth model for the poststack depth migration. In the lithologic inversion stage, anomalies in the compressional to shear wave velocity cp / cs ratio, which are related to hydrocarbons, were detected by inversion of prestack data. The inversion result shows that the shallower reservoirs have larger anomalies than the deeper (Jurassic) reservoirs. This is in agreement with the provided well data. Finally, using wave equation-based depth extrapolation, a shot record at the well was transformed into a pseudo vertical seismic profile (VSP). The
pseudo VSP facilitates an accurate comparison between real VSP data and surface data. Integration of real and pseudo VSP data may provide a new way to predict lateral reservoir variations.

Introduction
The DELPHI consortium at the Delft University of Technology is carrying out a research program on the stepwise inversion of seismic data. The three principal steps are:
1) Preprocessing
2) Structural imaging
3) Lithologic characterization
As such, the Mobil AVO data set is a good candidate to test the application of the DELPHI processing approach. For this data set, preprocessing is an important step, as the data suffer from distortions due to surface multiple energy. The DELPHI surface-related multiple elimination method (including the latest developments on integration with the parabolic Radon method) will be applied to supply the best possible primaries-only data set for the imaging and characterization steps. The amplitudes of the primary events
should be preserved after this preprocessing stage. Because of moderate dips, the structural imaging step is accurately performed with conventional stacking and poststack depth migration. However, the initial depth model derived from stacking velocities has been updated using the prestack areal shot record migration method described by Berkhout and Rietveld (1994) and Rietveld and Berkhout (1994) to obtain a consistent depth model. In the lithologic characterization step, a gradient section was derived to get an overview of the general AVO behavior of the data. After that, the preprocessed data were used as input for our linear constrained inversion process to estimate the seismic contrast parameters. Well information was used to constrain the inversion. The contrast parameter section was then transformed to lithology/hydrocarbon information. The final section of this paper describes how the seismic shot records around one of the wells were transformed to pseudo VSP data for comparison with the real VSP data.

Preprocessing

To process the Mobil line, shots 354 to 1053 were selected, with 1024 samples per trace. The preprocessing consisted of the following steps:
1) Direct wave mute. Careful direct wave muting was performed so that the reflection data were undisturbed.
2) 3-D to 2-D spherical spreading correction. A simple time gain was applied to simulate line-source instead of point-source responses.
3) Replacement of bad traces. On careful inspection of the shots, it appeared that several channels in each shot contained data that were not consistent with their neighboring traces (phase and amplitude distortions). They were killed and reinterpolated from the good traces using a rough normal moveout (NMO) correction and spline interpolation.
4) Wavelet deconvolution. Predictive deconvolution was applied with a gap of 20 ms and a filter length of 240 ms (a sketch of this step is given below).
5) Receiver sensitivity correction. Even after interpolation to replace bad traces, the receivers appeared to show a consistent sensitivity behavior throughout the shot records. Least-squares inversion techniques were used to correct these amplitude fluctuations. The sources also showed a fluctuating amplitude behavior. These fluctuations are not as strong as the receiver amplitude fluctuations and, therefore, they were left in the data.
6) Interpolation of missing shots and near offsets. Application of the surface-related multiple-elimination method requires full coverage of the shots and receivers up to zero offset. Among the selected shots from the line, six shot records are missing (shots 549-551 and 859-861). They were interpolated in the common offset plane. As the nearest offset was 263 m, approximately ten near-offset traces are missing in each shot. They were created using a parabolic Radon transform to extrapolate the data to zero offset in the CMP domain (see Kabir and Verschuur, 1993).

In Figure 1, five shots (503, 603, 703, 803, and 903) are displayed without any processing (except for the direct-wave mute). For display purposes, NMO correction is applied to the shots. This gives some idea about the structure of the primary reflections. The downward curving events can be considered to be multiples. In Figure 2, the same five shots are displayed after the five basic preprocessing steps. Deconvolution has done a good job with respect to sharpening the events. Note that near traces have been created up to zero offset. Due to the many multiples, the interpolation results are a little noisy at the larger times (larger than 2 s). Later we will see that we fail to do a good job of multiple removal on the interpolated near offsets. However, they are only meant to yield a better multiple estimation and will be omitted in the later processing (i.e., stacking and migration).
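Step 4 above (wavelet deconvolution with a 20 ms gap and a 240 ms operator) can be prototyped with a standard Wiener prediction-error filter. The sketch below is only a single-trace illustration of that step, not the production code used on this line; the 4 ms sample interval, the prewhitening level, and the function name are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, dt=0.004, gap=0.020, flen=0.240, prewhite=0.01):
    """Gap predictive deconvolution of one trace with a Wiener prediction filter.

    dt       : sample interval [s] (4 ms assumed here)
    gap      : prediction gap [s] (20 ms, as in step 4 above)
    flen     : prediction-filter length [s] (240 ms, as in step 4 above)
    prewhite : white-noise fraction added to the zero-lag autocorrelation
    """
    x = np.asarray(trace, dtype=float)
    n = len(x)
    a = int(round(gap / dt))        # prediction distance in samples
    m = int(round(flen / dt))       # filter length in samples

    # autocorrelation lags 0 .. a+m-1
    r = np.correlate(x, x, mode="full")[n - 1:n - 1 + a + m]
    col = r[:m].copy()
    col[0] *= (1.0 + prewhite)      # stabilize the normal equations

    # Toeplitz normal equations: R f = r[a .. a+m-1]
    f = solve_toeplitz((col, col), r[a:a + m])

    # predict x(t) from samples at and before t - a; the prediction error
    # keeps the unpredictable (primary) part of the trace
    pred = np.zeros(n)
    pred[a:] = np.convolve(x, f)[:n - a]
    return x - pred
```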

Multiple Elimination Method


After the basic preprocessing sequence, the data are ready for the most important preprocessing step: multiple elimination. As can be seen in Figure 2, a lot of multiples are present in the data, especially in the deeper part of the sections. Therefore, effective multiple elimination that preserves the primary amplitudes is necessary to obtain reliable AVO inversion results. We will compare and integrate two methods, the surface-related multiple-elimination method and the parabolic Radon transform multiple-elimination method.

Fig. 1. Shot records 503, 603, 703, 803, and 903 before preprocessing (NMO correction applied).

Fig. 2. Shot records 503, 603, 703, 803, and 903 after basic preprocessing and interpolation of missing shots and traces (NMO correction applied).


Surface-related and Radon multiple elimination


The surface-related multiple elimination method, described by Verschuur et al. (1992), is based on wave theory. It can be proven that by taking temporal and lateral auto-convolutions of the seismic data, an accurate prediction of the surface-related multiples is obtained. Subsurface information is not needed, but information on the free surface reflectivity (assumed to be 1 for marine data) and the source wavefield is required. As the latter is not known in practice, the method is applied adaptively. By eliminating the multiples, the source signature is estimated as well.

The generalized Radon transform multiple suppression method was originally described by Hampson (1986). By adding the events in a seismic record (e.g., a CMP gather) along curved paths, each event maps to a restricted area in the transform domain. In the case that primaries and multiples have different moveouts, they map to different parts of the Radon domain. In this domain, the multiples are selected and an inverse transform is applied. Next, the predicted multiples are subtracted from the input data, yielding the primary gather. From an implementation point of view, the parabolic Radon transform is more efficient than a hyperbolic transform, since the procedure can be applied in the Fourier domain for each individual frequency component (see also Kabir and Verschuur, 1993). By applying a partial NMO correction to the input data, the residual moveouts can often be assumed to be approximately parabolic. Using least-squares matrix inversion to transform the data to the Radon domain, optimum resolution can be achieved. If the offset geometry of the subsequent data gathers is (approximately) constant, a very efficient implementation of the Radon transform can be achieved (Kelamis and Chiburis, 1992).
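The frequency-domain, least-squares parabolic Radon transform described above can be sketched as follows. The gather is assumed to have had partial NMO applied so that residual moveouts are close to t = tau + q*x^2; the curvature range, the damping, and the rule that large positive curvatures are treated as multiples are illustrative assumptions, not the parameters used on this data set.

```python
import numpy as np

def parabolic_radon_demultiple(gather, dt, offsets, q_values, q_mult_min, damp=1e-3):
    """Frequency-domain least-squares parabolic Radon demultiple (sketch).

    gather     : (nt, nx) CMP gather after partial NMO (residual moveout ~ q * x^2)
    offsets    : (nx,) source-receiver offsets [m]
    q_values   : (nq,) trial residual curvatures [s/m^2]
    q_mult_min : curvature threshold; events with q >= threshold are called multiples
    """
    nt, nx = gather.shape
    nq = len(q_values)
    D = np.fft.rfft(gather, axis=0)              # one complex gather slice per frequency
    freqs = np.fft.rfftfreq(nt, dt)
    x2 = offsets ** 2
    is_mult = q_values >= q_mult_min

    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        L = np.exp(-1j * w * np.outer(x2, q_values))   # (nx, nq) parabolic basis
        rhs = L.conj().T @ D[k]
        A = L.conj().T @ L + damp * np.eye(nq)         # damped normal equations
        m = np.linalg.solve(A, rhs)                    # Radon panel at this frequency
        D[k] -= L[:, is_mult] @ m[is_mult]             # subtract the modelled multiples
    return np.fft.irfft(D, n=nt, axis=0)
```

Severe muting in the Radon (q) domain, here a simple threshold q_mult_min, is where primaries can be distorted; on this data set the Radon output is therefore used only as an initial guess for the surface-related method, as discussed below.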

Integration of surface-related and parabolic multiple elimination

Following the theory of surface-related multiple elimination (Verschuur and Berkhout, 1994), it can be shown that the adaptive surface-related multiple elimination procedure can be written as an iterative process. For this iterative procedure, an initial estimate of the multiple-free data is used as the multiple prediction operator. The iterative formulation has two major advantages:
1) The iterative procedure begins with a better guess for the multiple-free data, e.g., the output of another multiple elimination method. This makes the method more efficient.
2) Each iteration is carried out as a linear least-squares optimization step, yielding faster and better results (i.e., no local minima). In addition, the restrictions on the estimated wavelet deconvolution filter can be relaxed. Therefore, varying signatures for different sources or even source directivity are included in a more convenient way (Verschuur and Berkhout, 1993).
In conclusion, we propose to make use of an efficient multiple removal procedure to get an initial guess of the multiple-free data and use that as input for an iterative formulation. Then only one or two iterations will be needed for the final surface-related multiple elimination result. The examples will show that the Radon multiple elimination method provides a very good initial guess for the multiple-free data.
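One such iteration can be sketched per frequency as a multiplication of data matrices followed by an adaptive subtraction. The sketch below replaces the full least-squares estimation of the surface operator and source signature by a single complex scale factor per frequency, so it only illustrates the structure of the iteration; the array layout and names are assumptions.

```python
import numpy as np

def srme_iteration(data, primaries):
    """One surface-related multiple prediction step with a crude adaptive subtraction.

    data      : (nt, ns, nr) regularized prestack data (co-located sources/receivers,
                near offsets filled in), containing primaries plus surface multiples
    primaries : (nt, ns, nr) current estimate of the multiple-free data, e.g. the
                parabolic Radon result used as the initial guess
    Returns (predicted_multiples, updated_primaries), both (nt, ns, nr).
    """
    nt = data.shape[0]
    nf = 2 * nt                                   # pad to avoid time wrap-around
    D = np.fft.rfft(data, n=nf, axis=0)           # (nfreq, ns, nr)
    P0 = np.fft.rfft(primaries, n=nf, axis=0)

    # temporal and lateral convolution of the wavefields: M(w) = P0(w) . D(w)
    M = np.einsum('fsk,fkr->fsr', P0, D)

    # adaptive subtraction: one complex factor per frequency stands in for the
    # surface operator / inverse source signature estimated in the real method
    num = np.sum(np.conj(M) * D, axis=(1, 2))
    den = np.sum(np.abs(M) ** 2, axis=(1, 2)) + 1e-12
    A = (num / den)[:, None, None]

    mult = np.fft.irfft(A * M, n=nf, axis=0)[:nt]
    prim = np.fft.irfft(D - A * M, n=nf, axis=0)[:nt]
    return mult, prim
```

Feeding the Radon result in as `primaries` and repeating the call once or twice mirrors the one- or two-iteration scheme proposed above.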

Multiple Elimination Results


The initial multiple elimination results obtained with the parabolic Radon transform method are displayed in Figure 3. (Note that the Radon transform method is applied on CMP gathers, although shots are shown here). Comparing the results with the input data of Figure 2 shows that the Radon transform method does a very good job in the shallow data (above 2 s) but below 2 s some multiples still remain. Using more severe muting in the Radon domain may distort primary reflection energy. This result was used as an initial multiple prediction operator (as the upper part is multiple free) for the surface-related multiple-elimination method. Only one iteration had to be applied. The surface-related multiple-elimination method also provides an estimate of the residual source signature deconvolution operator. This filter was applied to the output to simulate zero-phase results. The remaining distortions of the wavelet are due to propagation and absorption effects only; (i.e., subsurface-related effects). Additionally, a dip filter was used to remove those dips from the data that do not contribute at the target. For the shots under consideration, Figure 4 shows the final result. It appears that, partly due to deconvolution, the result has higher frequencies and is noisier than the Radon result. But in the deeper data, it was possible to predict and remove the multiples that were left by the Radon method. For the upper part, the Radon result might even be better, although



Fig. 3. Shot records 503, 603, 703, 803, and 903 after parabolic Radon multiple elimination (NMO correction applied).

Fig. 4. Shot records 503, 603, 703, 803, and 903 after surface-related multiple elimination and wavelet deconvolution.



at stack level this difference was not visible. Both the Radon and the surface-related method have some difficulties at the very near offsets. Therefore, a near-offset mute was applied before stacking. Figure 5 shows the stacked section after basic preprocessing, but before multiple elimination. Figure 6 shows the stack after multiple elimination and application of the estimated residual wavelet deconvolution filter. Figure 6


Fig. 5. Stacked section after basic processing.



shows an improvement in the multiple removal for both shallow (e.g., first-order multiples at 1.1 s) and deeper multiples (e.g., around CMP 1300 at 2.8 s). The effect of multiple removal after migration is shown in Figure 7. Deeper structures are better imaged.


Fig. 6. Stacked section after multiple elimination and residual wavelet deconvolution.


Fig. 7. Poststack depth migration of stack after multiple elimination and residual wavelet deconvolution.


Constrained Linear Inversion


After preprocessing and multiple elimination, we used linearized inversion on CMP gathers to estimate the relative elastic contrast parameters. We assumed a locally flat and low-contrast medium. In our algorithm, the nonlinear Zoeppritz equation for the elastic angle-dependent plane-wave P-P reflection coefficient R(θ) is linearized. Following Aki and Richards (1980), R(θ) is written as a weighted sum of the relative contrasts in the elastic parameters P-wave velocity cp, S-wave velocity cs, and density ρ:

R(\theta) = \frac{1}{2}\sec^2\theta\,\frac{\Delta c_p}{c_p} - 4\,\frac{c_s^2}{c_p^2}\sin^2\theta\,\frac{\Delta c_s}{c_s} + \left(\frac{1}{2} - 2\,\frac{c_s^2}{c_p^2}\sin^2\theta\right)\frac{\Delta\rho}{\rho},   (1)

where θ equals the angle of the incident P-wave, Δcp equals the contrast in the P-wave velocity over an interface, and cp equals the average P-wave velocity over the interface. Similar definitions apply for Δcs, cs, Δρ, and ρ. This can also be written in terms of tan²θ and sin²θ:

R(\theta) = \frac{1}{2}\,\frac{\Delta Z}{Z} + \frac{1}{2}\tan^2\theta\,\frac{\Delta c_p}{c_p} - 2\gamma^2\sin^2\theta\,\frac{\Delta\mu}{\mu},   (2)

with

Z = \rho c_p, \qquad \mu = \rho c_s^2, \qquad \gamma = \frac{c_s}{c_p}.   (3)

Available well data can be used to estimate the relation between the P-wave velocity and the S-wave velocity. In the inversion algorithm, the NMO-corrected CMP data are converted to the reflection coefficients (scaled with the wavelet) for a number of angles using the P-wave macro model. The relative contrasts can be found by least-squares inversion. Using Equation (2) we estimated the relative contrast in acoustic impedance Z, in P-wave velocity cp, and in the shear modulus μ at each time sample by minimizing the difference between R(θ) given by Equation (2) and the NMO-corrected CMP data. The output of the inversion consists of three time sections: one for each estimated contrast. In our algorithm, a Bayesian approach is used in the statistical inversion. The available well data are used in the following relations to stabilize the inversion:

\frac{\Delta c_p}{c_p} = \eta_1\,\frac{\Delta Z}{Z}   (4)

and

\frac{\Delta\mu}{\mu} = \eta_2\,\frac{\Delta c_p}{c_p}.   (5)

Lithology and hydrocarbon indicators

Since the estimates are derived from a linear combination of the data, any linear combination of the original parameters can be taken. This means that we can, for example, estimate the relative P-wave velocity contrast Δcp/cp and the relative shear modulus contrast Δμ/μ and compute, from a linear combination of those two, a contrast deviation factor

D = \frac{\Delta c_p}{c_p} - \frac{k\sqrt{\rho}\,c_s}{2 c_p}\,\frac{\Delta\mu}{\mu}.   (6)

The contrast deviation factor D shows deviations from an empirical linear relation between P-wave and S-wave velocities (Castagna et al., 1985),

c_p = \left(k\sqrt{\rho}\right) c_s + c,   (7)

where k [(kg/m³)^-0.5] and c [m/s] are constants. Well-log velocities are used to determine the constants k and c. Differentiating this relation to obtain a relation in relative contrasts gives

\frac{\Delta c_p}{c_p} = \frac{k\sqrt{\rho}\,c_s}{2 c_p}\,\frac{\Delta\mu}{\mu}.   (8)

Note that interfaces satisfying Equation (7) will show a contrast deviation factor D equal to zero. Equation (6) needs the trend in the ratio k√ρ cs/cp at every point in the subsurface to compute deviations from Equation (7). In our algorithm, we compute this ratio α from the data using the relation

\frac{\Delta c_p}{c_p}(x,t) = \alpha(x,t)\,\frac{\Delta\mu}{\mu}(x,t)   (9)

and average the ratio in time and in the lateral direction to get <α(x,t)>. This averaged ratio is used to compute deviations from the trend in the ratio between the P-wave velocity contrast and the shear modulus contrast. This



anomaly indicator or contrast deviation factor section is defined by
\mathrm{IND}(x,t) = \frac{\Delta c_p}{c_p}(x,t) - \langle\alpha(x,t)\rangle\,\frac{\Delta\mu}{\mu}(x,t).   (10)
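A minimal, unconstrained sketch of the per-sample least-squares step behind Equation (2), followed by the indicator of Equations (9)-(10), is given below. It works on wavelet-scaled angle reflectivities for one CMP; the background cs/cp value, the smoothing window for the trend <α>, and all names are assumptions, and the Bayesian well-log constraints of Equations (4)-(5) are not included.

```python
import numpy as np

def avo_contrast_inversion(refl, angles_deg, gamma=0.5):
    """Least-squares solution of Eq. (2) at every time sample (sketch).

    refl       : (nt, na) angle-dependent reflectivity R(theta) for one CMP
    angles_deg : (na,) incidence angles [degrees]
    gamma      : assumed background cs/cp ratio in the sin^2(theta) weight
    Returns (dZ/Z, dcp/cp, dmu/mu), each an (nt,) time series.
    """
    th = np.radians(np.asarray(angles_deg, dtype=float))
    # columns = weights of Eq. (2)
    G = np.column_stack([0.5 * np.ones_like(th),
                         0.5 * np.tan(th) ** 2,
                         -2.0 * gamma ** 2 * np.sin(th) ** 2])
    m, *_ = np.linalg.lstsq(G, refl.T, rcond=None)   # (3, nt): all samples at once
    return m[0], m[1], m[2]

def anomaly_indicator(dcp, dmu, win=101, eps=1e-6):
    """Eqs. (9)-(10): local ratio alpha = (dcp/cp)/(dmu/mu), smoothed in time to a
    trend <alpha>, then IND = dcp/cp - <alpha>*dmu/mu. The paper also averages the
    ratio laterally; here only a running time average is used."""
    safe_dmu = np.where(np.abs(dmu) < eps, eps, dmu)
    alpha = dcp / safe_dmu
    trend = np.convolve(alpha, np.ones(win) / win, mode="same")
    return dcp - trend * dmu
```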

Inversion results

The depth migrated section was interpreted to show the main fault structures. The large reflector near the base Cretaceous or X unconformity, as well as the large fault blocks in the Jurassic section, are easily recognized in Figure 8. We also tried to follow the main reservoir sands. At well A, we made crossplots of the P-wave velocity cp and the shear modulus μ around the three hydrocarbon-bearing sands at 2000 m, 2300 m, and 2600 m. The crossplots are shown in Figures 9 and 10. From these figures we conclude that the relation between cp and μ is approximately linear for shales and nonhydrocarbon sandstones. The oil-bearing sandstones have a slightly lower cp/μ ratio. The gas-bearing sandstones have a larger deviation of the cp/μ ratio from the trend, which is clearest in Figure 9. From this well data analysis we conclude that it should be possible to detect the gas sands as an anomaly in the ratio of the relative P-wave velocity and shear modulus contrasts (Equation 10).

The first output section from the inversion is the estimated contrast in acoustic impedance. A poststack depth migrated section of this contrast from 900 m to 4000 m is shown in Figure 11. The estimated relative contrasts in cp and μ are combined as stated in Equation (10), and the result is shown in Figure 12. The overlaid interpretation is the same as in Figure 8. The indicator (Figure 12) clearly shows the strong AVO anomalies in the area below the X-reflector between wells A and B. From well data, we know that at well B this layer does not contain any hydrocarbons. The gas and oil sands at well A around 2600 m are not clearly present as an anomaly in the indicator. Comparing the crossplots of the well data in Figures 9 and 10, we could already have concluded that it would be difficult to see this reservoir as an anomaly.

The area indicated by Q at 2000 m around CMP 1375 also shows a very strong anomaly. The NMO-corrected CMP data are shown in Figure 13. Here we see a strong increase in amplitude versus offset, which might be related to a gas-filled sandstone. The area indicated by P at 3100 m around CMP 1234 shows an anomaly in the same layer in which a gas sand has been found at well B. The NMO-corrected CMP data are again shown in Figure 14. Here we can see a small acoustic impedance contrast with a polarity reversal in amplitude versus offset at about 2.7 s. The highly faulted area at CMP 1100 at 2800 m also shows anomalies. One should note that the assumptions of a (locally) flat layered medium are violated here.
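To connect the crossplots above to the trend relation of Equation (7) (as reconstructed here, equivalent to cp = k*sqrt(μ) + c), a straight-line fit to the well-log samples is enough. The sketch below is illustrative only; the log array names are assumptions.

```python
import numpy as np

def fit_cp_mu_trend(cp_log, cs_log, rho_log):
    """Fit the background trend c_p = k*sqrt(mu) + c (Eq. 7) to well-log samples.

    cp_log, cs_log : P- and S-wave velocity logs [m/s]
    rho_log        : density log [kg/m^3]
    Returns (k, c); k has units (kg/m^3)^-0.5 and c is in m/s.
    """
    sqrt_mu = np.sqrt(rho_log) * cs_log            # sqrt(shear modulus)
    k, c = np.polyfit(sqrt_mu, cp_log, deg=1)      # straight-line fit
    return k, c
```

Samples that fall off this line, such as the gas sands visible in the crossplots of Figures 9 and 10, are what the indicator of Equation (10) is designed to highlight.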

VSP
The Mobil data also contained three-component zero-offset vertical seismic profiles recorded in wells A and B. We used the VSP data for Well B to investigate how they tie to the surface data. In this section, we address:
1) Preprocessing of the raw VSP data (Well B)
2) Pseudo VSP generation from a shot record along the line near Well B
3) Comparison of the pseudo VSP data with the preprocessed real VSP data

VSP preprocessing
Some results will be shown on the preprocessing of the raw three-component VSP data (Well B, starting at the sea bottom at 500 m depth). Here we introduce a fast and efficient method to suppress the noisy and spiky parts in the VSP data registrations. The method will be applied to the zero-offset vertical seismic profile for Well B. The registration tool used to record Well B consisted of four detectors, each measuring three-component data. Figure 15 illustrates the raw VSP data registrations for the four detectors. Only the vertical component is shown. Figure 16 again shows the registrations for detector 1, together with a blowup of a selected part of the data. The blowup is shown here to give a better view of the noise in the VSP data.

As can be seen from the raw VSP data registrations, there are many bad traces. Furthermore, one can see that several registrations were made at each depth level, in which many traces are noisy. The objective is to remove the bad traces in a fast and efficient way before common depth level stacking, so that we obtain only one clean trace per depth level. In the following, we illustrate an efficient sorting and stacking method, the so-called alpha-trim stack. For an extensive discussion and applications of this process, the reader is referred to Schieck and Stewart (1991) and Frinking (1994).

Figure 17 illustrates the alpha-trim procedure of sorting and stacking at a certain depth level. For each time sample, the data are sorted in ascending order of amplitude. This is repeated for all time samples. Next a window


Fig. 8. Interpreted depth migration (well A and well B locations indicated).


Fig. 9. Crossplot of the P-wave velocity versus the shear modulus μ at Well A around 2000 m and 2300 m.


Fig. 10. Crossplot of the P-wave velocity versus the shear modulus μ at Well A around 2600 m.



Fig. 11. Depth migration of acoustic impedance contrast (from 900 m).



Fig. 12. AVO anomaly indicator (in depth) with structural interpretation and well locations overlaid.


Fig. 13. NMO-corrected CMP gather at CMP 1375 showing event Q at 2 s.


Fig. 14. NMO-corrected CMP gather at CMP 1222 showing event P at 2.7 s.




Fig. 15. Raw VSP data registrations (Well B, vertical component; detectors 1 to 4).


Fig. 16. (a) Data registrations for detector 1 and (b) partial blowup.

Fig. 17. Alpha-trim sorting and stacking procedure applied to VSP data (several registrations at one depth level).




Fig. 18. VSP data after alpha-trim stacking for different α values: (a) α = 0, (b) α = 0.7, (c) α = 1, (d) VSP data after manual trace editing.



is selected within which the data are stacked. The width of the window depends on the value of α, which varies between α = 0 and α = 1. α = 0 selects all traces at each depth level before stack (i.e., a mean stack of all traces per depth) and α = 1 corresponds to selecting only one trace (the median filtering technique). Increasing the parameter α reduces the number of traces (data points) used for stacking. The value of α must be chosen such that the stacked result contains at least one clean trace per depth level.

Figure 18 shows the result of applying the alpha-trim stacking for α values of 0, 0.7, and 1 (Figures 18a, b, and c, respectively). Afterward we regularized the data in depth and interpolated missing traces. With increasing values of α we remove more and more noisy data points. We chose α = 0.7 (Figure 18b) as optimal for removing noisy data points while preserving the amplitudes of the useful data.

The lower part of Figure 18 compares the alpha-trim stacking procedure for α = 1 (Figure 18c) with a conventional method for VSP preprocessing (Figure 18d). The conventional method for VSP preprocessing is manual removal of noisy traces before common depth level stacking, which is a very time consuming method. Finally, the alpha-trim result is compared with the median-filtered version, which shows that the alpha-trim result looks even better. Note that with manual trace editing a complete trace is removed and useful information within this trace is lost. The alpha-trim stacking procedure determines for each time slice which samples will be rejected. In this way useful information is preserved that would otherwise be rejected as part of a bad trace.

The same procedure for suppressing the noisy parts of the raw VSP data was applied to the two horizontal components recorded in Well B. Figure 19 illustrates the result of alpha-trim stacking on the horizontal-1 and horizontal-2 components of the data. The optimal value for α was found to be α = 0.6. The conversion from the direct P-wave to an S-wave can be clearly identified, although reflections are very difficult to distinguish.
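The sorting-and-stacking rule described above fits in a few lines of code. The convention below (α = 0 keeps all samples, α = 1 keeps only the middle one) follows the text; the function name is illustrative.

```python
import numpy as np

def alpha_trim_stack(traces, alpha):
    """Alpha-trim stack of all traces recorded at one depth level (sketch).

    traces : (ntrace, nt) repeated registrations at a single geophone depth
    alpha  : 0 -> plain mean stack, 1 -> median; in between, the sorted amplitudes
             at each time sample are trimmed symmetrically before averaging
    Returns one stacked trace of length nt.
    """
    ntrace, nt = traces.shape
    n_keep = max(1, int(round(ntrace * (1.0 - alpha))))   # samples kept per time slice
    lo = (ntrace - n_keep) // 2                            # symmetric trim
    srt = np.sort(traces, axis=0)                          # sort amplitudes per time sample
    return srt[lo:lo + n_keep].mean(axis=0)

# e.g. stacked = alpha_trim_stack(registrations_at_depth, alpha=0.7)  # value chosen in the text
```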
Pseudo VSP generation from surface data

So far we have described the preprocessing of the real VSP data. In the following, we will show the generation of pseudo VSP data from a real shot record (shot point 822; CDP No. 1572) at Well B. To avoid the influence of interpolated near-offset traces, the pseudo VSP was generated at 200 m offset. For an extensive overview and discussion of the generation of pseudo VSP data from surface data, see Ali and Wapenaar (1994).

First we will show the result of the pseudo VSP generation from the shot record with all multiples included (Figure 20). Acoustic two-way wavefield extrapolation operators are used in this pseudo VSP generation. (Only reflected wavefields in the shot record are used as input; the direct source wavefield is not included.) The seismic data are affected by very strong multiples, and the primaries are not clearly visible because of these strong multiples. Therefore the pseudo VSP was also generated from the same shot record after adaptive surface multiple elimination was applied. A blocked version of the true velocity log used for the generation of the pseudo VSP is displayed next to the pseudo VSP to show its relation in depth with the migrated section.

Figure 21 shows the pseudo VSP generated from the same shot record (822, i.e., at Well B) after surface-related multiple elimination. The primaries are more identifiable compared to the pseudo VSP in Figure 20. Note the downgoing multiple reflections from the sea bottom (at 500 m depth). The transformation of the surface data into pseudo VSP data provides a better understanding of complex events (e.g., internal multiples). Reference arrows show the relation of the different data sets in the different planes (x-t, z-t, x-z). This facilitates following an event from the shot record to the VSP data and tracing it back to the intersection with the direct source wavefield at the original reflector depth. In fact, the pseudo VSP data can be used as a tool to map a time event in the shot record into depth. The generation of the pseudo VSP data provides an unambiguous tie between seismic events on a time section and their geologic interface in depth.
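The redatuming idea behind the pseudo VSP can be sketched with a one-way phase-shift extrapolator in a v(z) medium. The actual generation described above uses acoustic two-way extrapolation operators and the laterally varying macro model, so the code below only illustrates how a shot record is turned into one trace per depth level; the sign convention, constant depth step, and names are assumptions.

```python
import numpy as np

def pseudo_vsp(shot, dt, dx, depths, vel, well_ix):
    """Pseudo VSP by one-way phase-shift downward continuation of a shot record (sketch).

    shot    : (nt, nx) reflected wavefield recorded at the surface (direct wave muted)
    dt, dx  : time and receiver sampling of the shot record
    depths  : (nz,) equally spaced output depth levels [m]
    vel     : (nz,) extrapolation velocity per depth step [m/s]
    well_ix : index of the receiver position closest to the well
    Returns an (nz, nt) panel with one pseudo-borehole trace per depth level.
    """
    nt, nx = shot.shape
    dz = depths[1] - depths[0]
    W = np.fft.fft2(shot)                                   # to the f-kx domain
    w = 2.0 * np.pi * np.fft.fftfreq(nt, dt)[:, None]       # angular frequency
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)[None, :]      # horizontal wavenumber

    panel = np.zeros((len(depths), nt))
    for iz, c in enumerate(vel):
        kz2 = (w / c) ** 2 - kx ** 2
        kz = np.sqrt(np.maximum(kz2, 0.0))
        phase = np.exp(1j * np.sign(w) * kz * dz)           # one depth step
        phase = np.where(kz2 > 0.0, phase, 0.0)             # drop evanescent energy
        W = W * phase
        panel[iz] = np.real(np.fft.ifft2(W))[:, well_ix]    # trace at the well position
    return panel
```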

Comparison pseudo VSP and real VSP

Here we give some final remarks regarding the value of the pseudo VSP data. First we would like to discuss the comparison between real VSP and pseudo VSP data (generated after surface-related multiple elimination). Note that only primaries should be compared. Because the acquisition of the real VSP data is completely different from that of the surface data, the data have different frequency content. The pseudo VSP data have a lower frequency band, and therefore a lower resolution, than the real VSP data. On the other hand, after proper preprocessing (this may include a thorough study of the sources and detectors that are used in both situations), the pseudo VSP may have a better signal-to-noise ratio. Hence, in practical situations, both VSPs may enhance each other significantly.




Fig. 19. (a) Original VSP data (Well B, horizontal-1 component) and (b) after alpha-trim stacking with α = 0.6; (c) original horizontal-2 component and (d) after alpha-trim stacking with α = 0.6.



Fig. 20. Integrated shot record / pseudo VSP / migrated section (all multiples included).



Fig. 21. Integrated shot record / pseudo VSP / migrated section (after surface-related multiple elimination).



Of fundamental importance in the pseudo VSP generation method is that we can walk away from the well with the optimally determined matching parameters at the well and extend our geologic knowledge laterally in all directions. We expect that the inherent simplicity of pseudo VSP data will allow a more detailed interpretation of lateral variations in the reservoir.

Discussion and Conclusions


The anomaly indicator used in this article shows AVO anomalies in the data. This means that anomalies in the cp/μ ratio, which are related to hydrocarbons, can be detected by inversion of prestack data, but nonhydrocarbon-related AVO anomalies are also present. The general trend in the relation between cp and cs must be determined to find the anomalies, but this trend can be estimated from the data. For AVO inversion of the deeper data, it is important to remove multiple reflections in order to obtain the correct angle-dependent reflectivity. We demonstrated the generation of pseudo VSP data. The pseudo VSP facilitates an accurate comparison between real VSP data and surface data. In addition, pseudo VSPs can be used for lateral prediction.

Acknowledgments
This research was performed under the direction of the international DELPHI consortium project. The authors would like to thank the participating companies for their financial support and the stimulating discussions at the DELPHI meetings.

References
Aki, K., and Richards, P. G., 1980, Quantitative seismology: W. H. Freeman and Co.

Ali, R., and Wapenaar, C. P. A., 1994, Pseudo VSP generation from surface measurements: A new tool for seismic interpretation: J. Seis. Expl., 3, No. 1, 79-94.

Berkhout, A. J., and Rietveld, W. E. A., 1994, Determination of macro models for prestack migration: Part 1, Estimation of macro velocities: 64th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1330-1333.

Castagna, J. P., Batzle, M. L., and Eastwood, R. L., 1985, Relationships between compressional-wave and shear-wave velocities in clastic silicate rocks: Geophysics, 50, 571-581.

Frinking, P. J. A., 1994, Integration of L1 and L2 filtering with applications to seismic data processing: Delft University of Technology.

Hampson, D., 1986, Inverse velocity stacking for multiple elimination: J. CSEG, 22, No. 1, 44-55.

Kabir, M. M. N., and Verschuur, D. J., 1993, Parallel computation of the parabolic Radon transform: applications for CMP-based preprocessing: 63rd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 193-196.

Kelamis, P. G., and Chiburis, E. F., 1992, Land data examples of Radon multiple suppression: First Break, 10, No. 7, 275-280.

Rietveld, W. E. A., and Berkhout, A. J., 1994, Determination of macro models for prestack migration: Part 2, Estimation of macro boundaries: 64th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1334-1337.

Schieck, D. G., and Stewart, R. R., 1991, Prestack median f-k filtering: 61st Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1480-1483.

Verschuur, D. J., Berkhout, A. J., and Wapenaar, C. P. A., 1992, Adaptive surface-related multiple elimination: Geophysics, 57, 1166-1177.

Verschuur, D. J., and Berkhout, A. J., 1993, Integrated approach to wavelet estimation and multiple elimination: 63rd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1095-1098.

Verschuur, D. J., and Berkhout, A. J., 1994, Multiple technology, Part 1: Estimation of multiple reflections: 64th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 1493-1496.

