An Introduction to: Velocity Model Building

Ian F. Jones

© 2010 EAGE Publications bv

All rights reserved. This publication or part hereof may not be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without the prior written permission of the publisher.

ISBN 978-90-73781-84-9

EAGE Publications bv
PO Box 59
3990 DB Houten
The Netherlands

Table of contents

Preface
Acknowledgements
1. Introduction: from recorded data to images
   What migration sets-out to do
   Time versus depth
   Classes of migration: waves versus rays
   Integral versus differential techniques
   Domains of application
   Evolution of migration schemes
   Anisotropy
   Multipathing
   One-way versus two-way wave propagation
   The migration operator and impulse response
   Algorithm noise in integral techniques
   Summary
2. Why do we need a detailed velocity model?
   The limitations of time migration and benefits of depth migration
   What does the migration algorithm 'see': honouring the velocity field
   What algorithm where?
   Summary
3. How detailed can we get in building a velocity model?
   Precision and accuracy
   Uncertainty, non-uniqueness, and ambiguity
   Limits on resolution
   Quantifying error
   Is it correct? - imaging pitfalls and velocity model QC
   Summary
4. Velocity Model Representation and Picking
   Layer-based, gridded, and hybrid models
   Density of picks and automation
   Picking methods
   Stack-power and semblance
   Differential semblance
   AVO-tolerant picking
   Horizon-correlation
   Locally coherent event picking
   CRS picking and multifocusing
   Picking pitfalls
   Summary
5. Inversion and tomography
   What is inversion?
   What is tomography?
   Why we need tomography: sub cable velocity variation
   Resolution scale length
   Types and domains of tomography
   Traveltime (ray) tomography
   Traveltime (ray) tomography in the migrated domain
   Waveform (diffraction) tomography
   Tomography issues
   Summary
6. Incorporating Anisotropy
   How is it manifested?
   Relationship to other parameters
   Building anisotropic models
   Starting from isotropic velocities
   Starting from scratch
   Azimuthal anisotropy and fracture detection
   Azimuthal heterogeneity and multi-azimuth tomography
   Summary
7. Velocity Model Update Through The Ages
   Isotropic model building and 'depthing'
   Model updating: picking and inverting
   Evolution of non-tomographic techniques
   Map migration
   Coherency inversion
   The Deregowski loop
   Wavefield extrapolation and focusing analysis
   CRP gather scanning and image scanning techniques
   Evolution of tomographic techniques
   Stereo Tomography
   CFP analysis
   Summary
8. Iterative tomographic update
   The iterative model update loop
   Including a well data base
   What does tomography need to accomplish?
   Layered, gridded, and hybrid tomography
   Styles of layer constraint for hybrid tomography
   Salt and basalt model building
   Finite offset versus parametric inversion
   Inverting for heterogeneity with azimuthal tomography
   Summary
9. Near-surface and near-seabed effects
   Land environments: topography and statics
   Traveltime sampling and short wavelength statics
   Marine environments
   Simple versus complex water layers
   Deep water near-sea-bed anomalies
   Geomechanical modelling
   Summary
10. The future?
   Addressing the 'full wavefield'
   Incorporating non-seismic data
   Full waveform inversion, waveform tomography, diffraction tomography
   Wavepath tomography and WEM-VA
   Model independent imaging
   Path integral migration
   Seismic interferometry
   Interferometric imaging
   Inverse scattering migration
   CRS picking and multi-focusing
   Summary
Glossary
Bibliography
Index

Dedication

To Dad: who always encouraged curiosity.

Preface

This book has its origins in the EAGE continuing education course on velocity model building I first taught in 2009. The objective was to give participants an intuitive (rather than mathematical) understanding of the kinematics of migration, and of how we go about building a model of the earth's subsurface, in terms of velocity and anisotropic parameters, for use in imaging seismic data. The book expands on the original course material, assessing why a detailed model is needed, the consequences of not getting it right, the sources of uncertainty, and the limits on resolution in velocity model building. An historical overview of velocity model building techniques over the past 30 years is presented to give the reader a feel for how the black art of model building has evolved in tandem with the increase in computer power, and the emergence of powerful interactive graphics. The movement from 1D vertical update to true 3D tomographic update is discussed, as is the evolution from a purely linear compartmentalized industrial process for velocity estimation and image creation, to a fully interactive multidisciplinary approach to iteratively building a reliable subsurface velocity model with prestack depth migration.

What is not covered? For the most part, the material considered relates to compressional (P) wave surface seismic data and its inversion to produce a velocity model for depth imaging. Shear wave velocity models, non-seismic techniques (such as EM and gravity), and non-surface seismic data are mentioned only briefly, and the overall bias in the techniques presented is towards marine data.

Acknowledgements

This book builds on a body of previous work, from which I have stolen liberally, especially from the editors' text of the SEG reprint series publication 'Prestack depth migration and velocity model building' (Jones et al., 2008). My thanks to the SEG and my co-editors John Etgen, Biondo Biondi, and Robert Bloor, for their blessings to recycle that material. If the reader wishes to further extend their knowledge in migration as well as model building by reading some of the key reference works, then this SEG reprint volume provides a good resource. My sincere thanks for their helpful suggestions and detailed proof-reading to: Børge Arntsen, NTNU, Trondheim; Mike Bacon, PetroCanada, London; Paul Farmer, ION-GXT, Houston; Mike Goodwin, ION-GXT, Aberdeen; Helmut Jakubowicz, Imperial College, London; Geoff King (and colleagues), Schlumberger Research, Cambridge; Tijmen Jan Moser, Zeehelden Geoservices, Delft; and also to Maud Cavalca, Schlumberger UK, and Jianxing Hu and Robert Bloor, ION-GXT, Houston, for review of the tomography chapter.
In addition, I'd like to thank the following colleagues for their kind help in seeking permission to show their results: Andrew Arnold, Chevron Australia; Guus Berkhout, Delft University, Netherlands; Biondo Biondi, SEP, USA; Sergey Birdus, CGGVeritas, France; Dimitri Bevc, Fusion, USA; Robert Bloor, ION GXT, USA; John Brittan, PGS, UK; Howard Crook, BG, UK; Paul Farmer, ION GXT, USA; Juergen Fruehn, ION GXT, UK; Bin Gong, ION GXT, USA; Robert Hardy, University of Dublin, Ireland; Craig Hartline, ConocoPhillips, Norway; Clare Goodall, ION GXT, UK; Patrice Guillaume, CGGVeritas, France; Keith Hawkins, CGGVeritas, UK; Charles Jones, BG, UK; Steve Kelly, PGS, USA; Jan Kommedal, BP Norway; Zvi Koren, Paradigm, Israel; Tijmen Jan Moser, Zeehelden Geoservices, Netherlands; Don Pham, CGGVeritas, Singapore; Gerhard Pratt, University of Western Ontario, Canada; Juergen Pruessmann, TEEC, Germany; Andrew Ratcliffe, CGGVeritas, UK; Paul Sava, Colorado School of Mines, USA; Paul Sexton, Total, France; Laurent Sirgue, BP, USA; Mick Sugrue, ION GXT, UK; James Sun, CGGVeritas, Singapore; Henning Trappe, TEEC, Germany; Ivan Vasconcelos, ION-GXT, UK; Eric Verschuur, Delft University, Netherlands; Vetle Vinje, CGGVeritas, Norway; Mark Wallace, ION GXT, USA; Phil Whitfield, WesternGeco, UK.

My thanks also to ION-GXT for the resources involved in preparing this material, and to the following companies and institutions for kind permission to show data used here: BG UK; BP Norway; BP USA; Chevron Australia; ConocoPhillips Norway; EON Norway; Gaz de France UK; Hess Denmark; KerrMcGee UK; Reliance Industries India; Shell UK; Spring Norway; Statoil Norway; Total Angola; Total Nigeria; Total UK; TU Delft (Delphi Consortium); University of Calgary (CREWES Consortium).

Special thanks to Gemma and the boys for their tolerance whilst I completed this work, and again to my wife Gemma for the book-cover artwork. Any errors or inaccuracies in this work are my own!

1. Introduction: from recorded data to images

Although meant to be an introduction to velocity model building, it is instructive to commence with a review of migration theory and a description of what migration sets out to do. Hence, we begin with a review of the background issues related to migration, including a description of the two main theoretical descriptions of wave propagation, namely ray theory and wave theory, together with the different migration techniques based on these two paradigms. These theoretical descriptions underpin the techniques we use both to create synthetic data and also to reposition recorded data to its 'true' subsurface position via migration. There are numerous migration schemes in widespread use, but for the most part the minutiae of their workings are not important here, except where their limitations affect model building. The glossary at the end of the book briefly defines some of the commonly used acronyms encountered throughout this work.

What migration sets-out to do

Figure 1.1 shows an experiment for sound waves reflecting from a dipping layer, for shots and receivers laid out on the surface of the earth. If we plot the recorded trace position at the mid-point between the shot and the receiver, we note that the reflection event in the seismogram does not appear at the location where the reflection actually came from. In this case, the real reflection position is located off to the right of its apparent location.
In order to move the reflected waveform back to its 'true' spatial location, we need to invoke a process known as migration. Migration is the process that builds an image from the recorded seismic data, by (ideally) repositioning the recorded data into its 'true' geological position in the subsurface. Migration should account for the dip of the reflector as seen from the take-off angles at the surface shot and receiver positions, and any subsequent ray-bending that the wavefront undergoes on its travel path down to the reflector and back to the surface.

There are two main approaches to performing migration: time migration and depth migration, both of which can be performed either after stack or before stack. As explained in the following chapters, both time and depth migration need an estimate of the subsurface velocity field in order for the migration process to proceed. We'll also note that as we move to the more demanding process of depth migration, we need a more accurate velocity model in order to produce a more accurate image. First the concepts involved in imaging will be briefly discussed, and the major differences between time imaging and depth imaging highlighted, in order to give interpreters and others with a geological background some insight into the reasons why depth imaging is important in providing a reliable image of the reservoir and surrounding structures.

Figure 1.1: The two-way raypath from the surface source location A to subsurface reflector position B, and then back to surface receiver location C, has total travel time t. Position D is located below the midpoint between A and C, and is where the recorded energy sits on our seismic field records after reflecting from true reflector position B.

Throughout this work, both depth migration and depth imaging will be referred to: by the former, we refer to the actual process of running a migration, whilst by the latter, we mean the (usually) iterative process of velocity model update which requires the use of migration.

Time versus depth

The significant difference between time and depth migration is that time migration ignores all of the lateral derivatives of velocity, whilst depth migration honours the lateral velocity changes to at least first order. In other words, on a length scale similar to the depth of the reflector, time migration assumes the velocity to be laterally invariant (which implies at least locally 1D stratification, which can include vertical compaction velocity gradients). Time migration only seeks to 'move' (migrate) images closer to their true spatial position, and we never pretend that time migrated positions represent 'ground truth'. Conversely, depth migration is ambitious and does claim to put images in their true spatial position. However, as we proceed it will be seen that we have to back off from that ambition somewhat, particularly when dealing with uncertainty in our ability to estimate the velocity model, and when the anisotropy of the velocity of the Earth is acknowledged. But for the moment the 'true spatial position' ambition is where we begin. Detailed analyses of the underlying assumptions behind time migration can be found in Hubral (1977), Tieman (1984; 1995) or Black and Brzostowski (1994).

Figure 1.2: Snell's law of refraction. The index 'i' denotes the incident medium, and 'r' denotes the medium into which the ray is refracting.
Think of the analogous situation of light bending (refracting) as it passes through an interface between two different materials with different refractive indices, say air and water. Sound also bends (changes direction) as it passes through an interface (at some angle away from normal) between two different materials that have different sound-speed. The degree of refraction is described by Snell's law (Figure 1.2):

    sin θ_i / v_i = sin θ_r / v_r        (1.1)

where θ_i and θ_r are respectively the angles of the incoming (incident) ray and emergent (refracted) ray, with respect to the normal (perpendicular to the surface), and v_i and v_r are the velocity of sound in the upper and lower media respectively.

Ray bending can take place wherever there is a velocity change, both across a boundary and where there is a velocity gradient. The process of depth migration is designed to compensate for the effects of this bending, so that the image of the subsurface appears in its correct (geological) position.

As noted earlier, on a length scale similar to the depth of the reflector, time migration assumes the velocity to be laterally invariant. Consequently, for a dipping interface, time migration treats Snell's law as if the interface were locally flat at the point where the ray hits the dipping interface (Figure 1.3). So, we note that it is not sufficient to say that time migration ignores Snell's law, but rather that it follows Snell's law for an incorrect version of the subsurface model (Robinson, 1983).

Figure 1.3: At a dipping interface, time migration treats rays as if the interface were locally flat. Time migration would 'see' the dipping interface as if it were composed of many step-like horizontal segments, 'correctly' invoking Snell's law for the 'locally flat' segments. The time migration result produces a refracted event with emergence angle θ_r.

Hence, for a ray impinging normal to a dipping interface (Figure 1.4), time migration would treat the ray as if it were impinging on a horizontal interface, and produce a refraction which was inappropriate, as follows:

    θ_r(time-mig) = arcsin((v_r / v_i) · sin θ_i)        (1.2)

where θ_i is now the dip of the refracting interface and θ_r(time-mig) is the angle of the emergent (refracted) ray resulting from time migration, with respect to the normal (perpendicular to the surface). Conversely, with depth migration for a normal incidence ray-path at the dipping interface, the correct ray path would emerge without refracting. Thus we would find:

    θ_r(depth-mig) = θ_i        (1.3)
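As a quick numerical illustration of equations 1.1 to 1.3, the following minimal Python sketch evaluates the angles directly (the velocities, incidence angle, and dip are arbitrary illustrative values, not taken from the text):

```python
import numpy as np

v_i, v_r = 2000.0, 3000.0    # sound speeds above and below the interface (m/s)
dip = np.radians(30.0)       # dip of the refracting interface

# Equation 1.1: Snell's law for a ray incident at 20 degrees to the normal
theta_i = np.radians(20.0)
theta_r = np.arcsin((v_r / v_i) * np.sin(theta_i))
print(f"Snell refraction of a 20 deg incident ray: {np.degrees(theta_r):.1f} deg")

# Equation 1.2: a ray hitting the dipping interface at normal incidence is
# nevertheless refracted by time migration, which treats the interface as flat
theta_time_mig = np.arcsin((v_r / v_i) * np.sin(dip))
print(f"time migration's spurious refraction: {np.degrees(theta_time_mig):.1f} deg")

# Equation 1.3: depth migration correctly leaves the normal-incidence ray unbent
theta_depth_mig = dip
print(f"depth migration: {np.degrees(theta_depth_mig):.1f} deg (no change in direction)")
```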
The above descriptions were given in the context of rays: a depth migration based on rays invokes Snell's law for the correct subsurface model. For wavefield extrapolation techniques, we have an equivalent issue to solve in dealing with lateral velocity variation. Many WE schemes deal with lateral velocity variation implicitly, but some have a specific correction term that accounts for ray bending, called the 'thin lens term', named after the corresponding descriptions from geometric optics (e.g. Claerbout, 1985).

Another difference between time and depth migration is that time migration outputs the image in two-way travel time (so we still need to perform some subsequent time to depth conversion of our mapped time horizons to estimate structural and reservoir depths: see for example Armstrong, 2001; Armstrong et al., 2001; Bartel et al., 2006; Cameron et al., 2008; Iversen and Tygel, 2008). Conversely, depth migration outputs its image in apparent vertical depth, and if all relevant earth parameters are used correctly in the migration (velocity, dip, and anisotropy), then the resulting geophysical image should correspond to the geological reality. However, a depth migration which does not account for anisotropy will not in general produce an image in true geological depth, and will still require some subsequent calibration stretch from geophysical to geological depth (anisotropy is discussed in Chapter 6). Likewise, a depth migration that does not correctly handle lateral velocity variation will result in both lateral and vertical positioning error, and still fail to match the true geological depth.

Figure 1.4: If a ray path encounters a dipping interface perpendicularly, then depth migration would not refract the ray, but perversely time migration would: θ_r(time-mig) = arcsin((v_r / v_i) · sin(dip)), whereas θ_r(depth-mig) = θ_i.

It was noted earlier that migration sets out to reconstruct an image from the recorded seismic data, by repositioning the recorded data into its 'true' geological position in the subsurface. Figure 1.5 shows the geometry of a reflector and the signal recorded at the surface from an incident sound wave with coincident source and receiver locations. After migration, the recorded signal from the dipping segment of reflector is moved (or migrated) back to its geological position (Chun and Jacewitz, 1981). During migration, a segment of input recorded surface seismic data (CD) is re-positioned to its correct subsurface position (AB). During migration (both time and depth) the process results in a shortening of the segment length (AB < CD), such that sin θ_AB = tan θ_CD, where θ_AB is the dip of the migrated segment and θ_CD the apparent dip of the recorded segment.

We can think of migration as swinging the recorded element at location C in Figure 1.5 up through an arc of radius r, to location A. In time migration, for zero offset recording, the arc shown in Figure 1.5 will be circular; for non-zero offsets it will be elliptical. This process within the migration creates what we call the migration operator. For 2D time migration, the operators are symmetric circular arcs for zero offset and symmetric elliptical arcs for other offsets (with the source and receiver at the foci of the ellipse). In 3D the response would be a hemispherical bowl and an elongated ellipsoid, respectively.
Classes of migration: waves versus rays ‘There are two broad categories of migration algorithm, the integral methods (includ- ing Kirchhoff, equivalent offset, common reflection angle, and beam techniques), and the differential methods, which use wavefield extrapolation to solve the migration equations (these include reverse time migration (RTM) which despite its name is a type of depth migration, and wavefield extrapolation migration (WEM), also referred toby some as being ‘wave equation migration’, which isa bit misleading as ll the meth- ods attempt to solve the wave equation). Both time and depth migration can be per- formed with either integral (ray) or differential (wavefield extrapolation) techniques. «7° Pre-migration length CD > post-migration length AB Figure 1.5: The raypath for zero-offset source and receiver separation from the surface location S$ to subsurface reflector position A has traveltime t: is also defines the time to the position C directly below S. Location C is where our recorded energy sits on our seismic field records after reflecting from true position A. For a medium with constant velocity V, we could draw the diagram for length ry, with r, = V.ty2. The recorded data segment length CD (shown in grey) is greater than the migrated (actual) reflector seg- ment length AB (in black). 18 Introduction: from recorded data to images However, the distinction can become blurred, as the Kirchhoff integral can be implemented with continuation techniques and visa versa. In the above introductory paragraph, we spoke of integral and differential techniques. This description can be expanded to encompass the concepts of rays and waves, As sound propagates through the earth, it does so along an expand- ing wavefront, which would look something like a hemispherical bowl that was continuously expanding, with the amplitude at the expanding wavefront decreas- ing as it spread-out. As with a ripple spreading-out on the surface of a pond, there will be a characteristic wavelet spanning the leading edge of the ripple. For a constant sound-speed medium the wavefront will be a hemisphere. When the velocity of sound in the medium is not constant, then the wavefront gets distorted in peculiar ways. Itis possible to model the ripples on a pond using wave theory. This requires starting with the source (e.g. a pebble hitting the water), and calculating how this disturbance changes in space and time. Similarly, it is also possible to calculate the reverse of this process: given the size and location of the ripples at any time, calculate them at earlier times or previous locations. Since the wave motion is described by differential equations, these methods are also referred to as “dif- ferential” techniques. Also, since the waves then appear to move backwards, this process is called a “backwards” or “reverse” extrapolation. Likewise, for waves propagating through the earth, modelling the propagation, or backing out propa- gation effects during migration, can be done by considering the difference in posi- tion and amplitude from one depth slice to the next in the earth, An alternative description of the expanding wavefront would be to consider the normal to the expanding wavefront and to plot (or track) the evolution in time of these normal vectors. 
However, the distinction can become blurred, as the Kirchhoff integral can be implemented with continuation techniques and vice versa.

In the above introductory paragraph, we spoke of integral and differential techniques. This description can be expanded to encompass the concepts of rays and waves. As sound propagates through the earth, it does so along an expanding wavefront, which would look something like a hemispherical bowl that was continuously expanding, with the amplitude at the expanding wavefront decreasing as it spread out. As with a ripple spreading out on the surface of a pond, there will be a characteristic wavelet spanning the leading edge of the ripple. For a constant sound-speed medium the wavefront will be a hemisphere. When the velocity of sound in the medium is not constant, then the wavefront gets distorted in peculiar ways.

It is possible to model the ripples on a pond using wave theory. This requires starting with the source (e.g. a pebble hitting the water), and calculating how this disturbance changes in space and time. Similarly, it is also possible to calculate the reverse of this process: given the size and location of the ripples at any time, calculate them at earlier times or previous locations. Since the wave motion is described by differential equations, these methods are also referred to as 'differential' techniques. Also, since the waves then appear to move backwards, this process is called a 'backwards' or 'reverse' extrapolation. Likewise, for waves propagating through the earth, modelling the propagation, or backing out propagation effects during migration, can be done by considering the difference in position and amplitude from one depth slice to the next in the earth.

An alternative description of the expanding wavefront would be to consider the normal to the expanding wavefront and to plot (or track) the evolution in time of these normal vectors. These vectors are described as 'rays' and give an indication of the direction of motion of the wavefront, and also the arrival times of the wavefront along the associated ray-path. Whereas waves can be described in terms of sine or cosine functions with a particular frequency, size (amplitude) and value at zero time (phase), in their simplest form rays do not inherently need information on either amplitude or phase. In other words, 'rays' are a simplified description of the process of wave propagation. Ray description can tell us how long it takes a wavefront to travel from one point to another, and/or the direction the wave moves in. This information is sufficient to perform forward modelling (i.e. to make synthetic data) and also to perform a rudimentary migration. However, ray descriptions can be extended to encompass dynamic (amplitude) effects by considering the behaviour of neighbouring rays as well as the main 'central' ray (Červený, 1981), and contemporary ray-based migration and modelling schemes do this.

Using rays to describe how a wavefront progresses is an acceptable approximation if the wavelength of the sound wave is several times smaller than the scale-length of the velocity variations encountered (Figure 1.6). Once the velocity anomaly is small in comparison to the seismic wavelength, then the wave scatters rather than refracting when it encounters the anomaly (Figure 1.7). For this reason, ray techniques are sometimes inadequate: once a small scale-length velocity anomaly acts more like a scatterer than a simple refracting surface element, describing the propagation behaviour as 'rays' obeying Snell's law is no longer appropriate.

Figure 1.6: Resolution scale length - velocity anomaly scale length greater than the seismic wavelength - ray theory works: the propagating wavefront can adequately be described by ray-paths bending at interfaces.

Figure 1.7: Resolution scale length - velocity anomaly scale length comparable to the seismic wavelength - ray theory fails, and diffraction (scattering) theory is better for describing the propagation of waves.
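A back-of-the-envelope check of this validity criterion, as a sketch with arbitrary but representative numbers (the factor of three used for 'several times' is just a rule of thumb, not a value from the text):

```python
v = 2500.0           # propagation velocity (m/s)
f = 25.0             # dominant frequency (Hz)
wavelength = v / f   # seismic wavelength (m): here 100 m
print(f"wavelength = {wavelength:.0f} m")

# Ray theory is adequate when the anomaly is several times larger than the
# wavelength; otherwise the anomaly acts as a scatterer (Figure 1.7)
for anomaly in (1000.0, 300.0, 50.0):        # anomaly scale lengths (m)
    regime = "rays adequate" if anomaly > 3.0 * wavelength else "scattering regime"
    print(f"anomaly scale {anomaly:6.0f} m: {regime}")
```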
Integral versus differential techniques

The terminology that has been used recently is also worth mentioning. Some authors have defined the abbreviation 'WE' to refer to the term 'wave equation' in a way so as to exclude integral schemes. However, all migrations are meant to be solutions of the 'wave equation', and so it is perhaps confusing to exclude all integral schemes from this terminology. Here we refer to the differential techniques as wavefield extrapolation (WE), as this abbreviation is interchangeable with the commonly used abbreviation for wave equation migration.

Integral techniques such as Kirchhoff, equivalent-offset, common reflection angle, and beam migration solve a high frequency approximation of the wave equation, whereby each arrival is treated as a spike-like event, and the summation of these events with appropriate amplitude scaling reconstructs the final image through superposition of stationary phase components. A fundamental feature of integral techniques is that the image can be computed for a subset (e.g., for a gather, a depth slice, an image line, etc.), and this makes them very cost-effective for producing gathers for velocity analysis (e.g. Gao et al., 2006). The dip limitation also is specified readily in integral techniques, during travel time computation or during the summation step, which is useful for reducing some classes of coherent noise and also for reducing cost. In addition, integral techniques are very well suited to imaging the steepest dips.

The most widely used integral technique is the single-arrival Kirchhoff integral, which usually is implemented in the time-space domain but can be implemented in the frequency-wavenumber domain (e.g., Etgen et al., 1997). In Kirchhoff migration, the migration process is separated into two stages: computation of the travel times along ray-paths through the velocity model (Nichols, 1994; Nichols et al., 1998), and summation of information associated with these travel paths. Although Kirchhoff schemes can deal with many possible raypaths from the source-receiver pair to a given reflecting element, it is more common for a single raypath to be used (described later in this chapter in the 'multipathing' section). Other techniques include the equivalent-offset scheme introduced by Bancroft (Bancroft and Geiger, 1994; Bancroft et al., 1998) and various beam migrations (see for example the September 2009 special edition of The Leading Edge). The common reflection angle migration technique (CRAM) introduced by Koren et al. (2007; 2008) computes ray paths from each point in the subsurface (rather than from the surface to points in the subsurface as with Kirchhoff migration). Weighting functions are designed to emphasise contributions that construct coherently to form the image; hence the technique resembles a beam scheme in some respects, but delivers many useful volume attributes, such as the local reflector dip and azimuth, and the emergence opening angle and azimuth for rays at the reflecting point.

For beam schemes, the objective is to link the surface take-off (emergence) angles at both the source and receiver locations to the possible ray paths that impinge on a given subsurface reflector segment. This is done for all subsurface segments, and an image computed using only contributions close to this ray corridor. To convert the initially measured time-slopes to angles, a velocity field is required, and this is updated in an iterative way, as for other migrations (although as mentioned later, the slope information can be inverted directly without the need for iteration of the migration step). The Gaussian beam technique pioneered by Popov in the Russian literature (Popov, 1982; Babich and Popov, 1989; Popov et al., 2007) and others (Červený, 1981; Červený et al., 1982; Hill, 1990, 2001; Červený, 2001) is more complicated to implement, but does have the advantages of dealing with multipath arrivals and of keeping costs down by computing operators only in the vicinity of a narrow trajectory (Wang and McClay, 1995). This technique also can be implemented in different domains (e.g., Lazaratos and Harris, 1990). Beam migration can be conceived of as having three stages: measurement of the time-slopes present in the input data on common-shot, receiver, or offset gathers; computation of travel paths associated with these time slopes; and summation of information associated with these travel paths. The most complete of these schemes is the Gaussian beam technique, but more approximate schemes have been developed, and go under various names such as 'fast beam', 'parsimonious beam', and 'controlled beam'.
In all integral techniques, once the travel times or ray angles have been computed, we then need to select samples that will contribute to each image point. For Kirchhoff migration, we collect the samples within some aperture and dip limit for which the travel times have been computed (Figure 1.8). For beam migration, we collect data samples in the vicinity of the computed ray tube (or 'beam'), such that the length of the raypath changes by less than a quarter of a wavelength across the beam width (i.e. a Fresnel zone), so only coherent energy is summed to form the image. In some beam schemes, a representative wavelet is used to emulate the data at each contributory picked dip segment, and these wavelet contributions are summed to form the image. For some beam implementations, the time, tau, and apparent velocity or time slope (often denoted by the parameter 'p') of locally coherent events is measured in the input shot and offset domains (Figure 1.9). The local tau-p measurements are then combined with coherency thresholding to select the dominant constituents of the data.

Figure 1.8: Kirchhoff migration copies energy from the input trace to all locations along the impulse response. The impulse response is computed up to some specified maximum dip and maximum lateral aperture (often referred to as the operator radius). However, only a small part of this - the zone around the raypath of the arrival in the subsurface - contributes anything useful to the image: the rest can produce noise.

Figure 1.9: Local time-slope measurements are made on shot, receiver, or common offset gathers. These slopes are related to the surface emergence angles at the source and receiver locations. Using the surface velocity, the time-slopes (p) are converted to angles (θ) to use for ray tracing. By ray tracing only in the vicinity of the emergent angle ray paths, an image contribution is computed only for the actual reflecting segment.

Ray paths are computed from the surface source and receiver positions, and travel times along these paths analysed to determine the intersections of the path from the source and receiver sides, so as to find the image point for this particular ray path. Energy associated with this image element is then summed into the output image space, taking account of the Fresnel zone (Figure 1.10).

Figure 1.10: Beam migration copies energy from the input trace only to locations near the actual reflecting segment (within the post-migration Fresnel zone). Thus less algorithm noise is produced than for a Kirchhoff scheme.
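The slope-to-angle conversion mentioned in the caption of Figure 1.9 follows from the plane-wave relation sin θ = v₀·p, where v₀ is the near-surface velocity and p the measured local time-slope. A minimal sketch with arbitrary values of my own choosing:

```python
import numpy as np

v0 = 1800.0        # near-surface velocity (m/s)
p = 250.0e-6       # measured local time-slope dt/dx of an event (s/m)

# sin(theta) = v0 * p links the slope of a locally coherent event on a shot
# or offset gather to its surface emergence angle, used to launch the beam rays
theta = np.degrees(np.arcsin(v0 * p))
print(f"surface emergence angle = {theta:.1f} deg")
```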
Table 1.1 summarizes the combinations of time and depth migration, performed using either ray or differential techniques. There are also various hybrid versions of the ray and differential migration schemes, with names such as 'phase screen', 'split-step Fourier plus interpolation', etc.

                                      Time Migration                  Depth Migration
Ray (integral) high frequency         Kirchhoff, Beam,                Kirchhoff, Beam, common
approximation techniques              Equivalent offset               reflection angle migration (CRAM)
Wavefield extrapolation               Finite difference, Phase        Finite difference, Phase shift
(differential) techniques             shift, Gazdag v(t) phase shift  plus interpolation, RTM

Table 1.1: Time and depth migration, rays and waves.

Domains of application

Time and depth migration techniques can be applied in various 'domains'. The domain of application is a separate issue from the type of description we are using (i.e. waves or rays). The common domains are time-space (t,x,y), frequency-space (f,x,y), frequency-wavenumber (f,kx,ky), and zero-offset-time and ray-parameter (tau-p). The reason for selecting one domain over another is simply to exploit some property of that domain that will save computation time or reduce a class of noise. For example, for data with a usable signal bandwidth of 5-55 Hz, an (f,x,y) implementation of wavefield extrapolation can reduce cost by migrating only up to 55 Hz, and ignoring all frequencies above this. If the same class of algorithm were implemented in the (t,x,y) domain, this cost saving could not be exploited, nor could any high frequency noise in the input data be readily excluded during migration.

In addition to the domain of application for the migration, there is also the issue of the input data ensemble to consider. Surface seismic data are usually acquired using a source 'shooting' into many receivers, and this process is repeated for many individual sources. The resulting large collection of data can be rearranged into various different ensembles, grouped on the basis of some sorting criterion. The algorithm in use (e.g. a wavefield extrapolation in the (f,x,y) domain) can be applied to the input data in these different sort orders: e.g. common shot, which is the way the data are usually acquired; common receiver, where we collect all traces contributing to a given receiver from all possible shot locations; common offset, where we collect all source-receiver pairs whose offset is similar; etc. There are various reasons why we might want to use one sort order over another: e.g. ease of throughput for data access, or adherence to the requirements of some algorithmic approximation.
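To make the idea of sort orders concrete, the sketch below groups a handful of trace headers into common-shot and common-offset ensembles. The header tuples and the 100 m offset binning are hypothetical illustrations, not taken from any particular data format:

```python
from collections import defaultdict

# Hypothetical trace headers: (shot_id, receiver_id, offset in metres)
traces = [(1, 10, 100), (1, 11, 200), (2, 10, 200), (2, 11, 100)]

common_shot = defaultdict(list)      # the order in which data are acquired
common_offset = defaultdict(list)    # all source-receiver pairs of similar offset

for shot, rcvr, offset in traces:
    common_shot[shot].append((rcvr, offset))
    offset_bin = round(offset / 100.0) * 100     # bin offsets to the nearest 100 m
    common_offset[offset_bin].append((shot, rcvr))

print(dict(common_shot))     # {1: [(10, 100), (11, 200)], 2: [(10, 200), (11, 100)]}
print(dict(common_offset))   # {100: [(1, 10), (2, 11)], 200: [(1, 11), (2, 10)]}
```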
Evolution of migration schemes

Full solution of the elastic wave equation is not something that we usually set out to achieve. In practice, various simplifying assumptions involving a progression of solutions ranging from the simpler to the more complex are made. Not surprisingly, this progression has moved in tandem with the increase in computer power, and the development of interactive velocity model update tools. An overview of these simplifications, and their relationships to the fundamental equations of elastic wave propagation, can be found in the SEG reprint series publication 'Classics of Elastic Wave Theory' (Pelissier et al., 2006), which charts the development of the equations of motion from the 17th to 20th century. In the more specific context of depth migration, these simplifications are also discussed in the SEG reprint series edition 'Prestack Depth Migration and Velocity Model Building' (Jones et al., 2008).

Commencing with Stokes' formulation of the Navier equations, which deal with elastic wave propagation in solids, and ending with some computationally tractable algorithms, the first simplification of these equations is to drop the shear terms. Stokes' formulation is a more general rendition of what Cauchy's relations obtain for the isotropic case, Christoffel's do for the anisotropic case, and Navier's do for a formulation with a single elastic constant (Pelissier et al., 2006). Current-day techniques solve what is called the acoustic wave equation: that is to say, they ignore all shear modes and mode conversion at interfaces (this is equivalent to treating all the rocks in the earth as liquids!).

This progression over the past three decades of industrially implemented solutions of the elastic wave equation can be summarized with the following approximations and restrictions:

1. Dropping the shear terms to restrict the problem to the P-wave only solution
2. Separating the solution into upgoing and downgoing parts and decoupling them to yield a one-way wave equation solution
3. Avoiding the need to measure vertical pressure derivatives at the surface (a requisite boundary condition for the solution of a second order partial differential equation), and seeking a solution for near vertical incident angle propagation by adopting a paraxial (parabolic) solution.

Prior to the early 1990's, the limitations on computer power effectively limited migration to the poststack time domain. Stacking is a process to reduce the volume of data by adding together the individual traces associated with a common mid-point location after correcting for the traveltime differences between the various offsets (moveout correction). Following stacking, the data are nominally at an equivalent zero-offset position, i.e. as if they were acquired with coincident source and receiver locations. Poststack time migration was used as the main migration tool until the mid 1990's, when depth migration came into use. At that time, a common means of performing poststack depth migration (postSDM) of 3D seismic data was via the use of frequency domain implicit finite difference (FD) algorithms, first introduced in a geophysical context by Claerbout (1976). To facilitate solution of the 3D wave equation with FD schemes, a technique called 'splitting' was invoked (Jakubowicz and Levin, 1983; Gibson et al., 1988), whereby an independent 2D solution was implemented for the in-line (x) and cross-line (y) directions. This involved separating a square-root equation (containing the spatial variables x and y) into two independent square root terms, one for each of the two spatial variables. It was this splitting, or separation of the x and y components in the data, which resulted in 'numerical anisotropy' - yielding an impulse response which did not possess the requisite circular x-y section for a constant velocity medium. (The name arises by analogy with physical anisotropy, which results in waves propagating at different velocities in different directions, resulting in a non-spherical wave front.)

Each resulting square root term was then approximated by a series expansion, the truncation of which led to an incorrect positioning of energy beyond a certain dip in the migrated output. A better dip response can be obtained by using higher-order expansions in approximating the square root term, but this greatly increases the cost of the migration. Such series expansion approximations do not have an inherent dip limiting cut-off for the steeper dips where the approximation is no longer valid, but simply misposition energy beyond this limit.
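Both of these approximations - the series truncation and the x-y splitting - can be illustrated with a few lines of arithmetic. The sketch below is my own schematic construction (real implementations split paraxial approximations rather than the full root): it compares the exact one-way vertical-slowness factor √(1 − s²), where s is the sine of the propagation angle from vertical, against a '15 degree' Taylor truncation and a '45 degree' continued-fraction expansion, and then shows the azimuthal error of a split operator at a fixed 45-degree dip:

```python
import numpy as np

# Part 1: dip error from truncating the square-root series expansion
for dip_deg in (15.0, 45.0, 70.0):
    s2 = np.sin(np.radians(dip_deg)) ** 2
    exact = np.sqrt(1.0 - s2)                     # full square root
    taylor15 = 1.0 - s2 / 2.0                     # '15 degree' Taylor truncation
    pade45 = 1.0 - (s2 / 2.0) / (1.0 - s2 / 4.0)  # '45 degree' continued fraction
    print(f"dip {dip_deg:4.1f} deg: exact {exact:.3f}, "
          f"15-deg approx {taylor15:.3f}, 45-deg approx {pade45:.3f}")

# Part 2: azimuthal error from x-y splitting ('numerical anisotropy');
# the split operator is exact in-line and cross-line, but wrong at 45 degrees
dip = np.radians(45.0)
s = np.sin(dip)
for az_deg in (0.0, 22.5, 45.0):
    az = np.radians(az_deg)
    sx, sy = s * np.cos(az), s * np.sin(az)
    exact = np.sqrt(1.0 - sx**2 - sy**2)                        # full 3D operator
    split = np.sqrt(1.0 - sx**2) + np.sqrt(1.0 - sy**2) - 1.0   # split x then y
    print(f"azimuth {az_deg:4.1f} deg: exact {exact:.4f}, split {split:.4f}")
```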
Consequently, a form of noise was introduced, appearing as energy travelling at impossibly high velocities for a given propagation angle (thus falling into the evanescent zone of the solution space - in other words, terms which would have given rise to a negative term within the square root).

For the most part, migration prior to the late 1990's was isotropic. However, if attempting to deal with non-elliptic anisotropic media, we face an additional problem with FD solutions to the acoustic approximation, as anisotropy can only be correctly described for elastic media. Hence another class of algorithm noise appears for non-elliptic anisotropy for the FD acoustic approximation (Bale, 2007).

Also, using finite differencing techniques to solve the second-order differential term of the wave equation results in a slight mispositioning of energy as a function of frequency, with respect to the sampling grid of the data. This gives rise to a phenomenon resembling dispersion, in that different frequencies appear to travel at different speeds. During migration a single dipping event will split into a suite of different events, each of different frequency content and dip (Diet and Lailly, 1984). However, the introduction of explicit continuation schemes, free from the FD artefacts, led to steep dip, high fidelity postSDM algorithms seen routinely in use by the mid 1990's (Hale, 1991a, b; Soubaras, 1992, 1996).

In expanding the contents of the square root term, we also have to consider the sign of the solution. A square root can give both a positive and a negative solution. In the majority of the migration schemes used industrially until very recently, only one of these two possible solutions was dealt with. This is what is known as a one-way solution of the wave equation. Physically, taking just one root corresponds to dealing with only upcoming energy, and ignoring propagation in the other direction. Hence multiples, source and receiver ghosts, double bounce arrivals (prism waves) and refracted reflections (turning rays or diving rays), all of which involve both upcoming and downgoing propagation either from the surface to the reflector, or from the reflector back up to the surface, cannot be imaged using a one-way scheme. For poststack migration, the 'exploding reflector model' explicitly deals with only the upcoming reflected energy. However, in the prestack migration case, the situation is a bit more complex in that the source wavefield is downgoing and the receiver wavefield is upcoming, but the interaction between them is simplified so as to ignore changes in direction from the source to the reflector and/or the reflector to the receiver. This is described in more detail later in this chapter in the discussions on one-way versus two-way propagation.
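The square-root sign discussion above can be made concrete in the frequency-wavenumber domain, where the vertical wavenumber is kz = ±√(ω²/v² − kx²): the two roots correspond to downgoing and upgoing propagation, and horizontal wavenumbers exceeding ω/v give an imaginary root, i.e. the evanescent energy mentioned earlier. A minimal numerical sketch with arbitrary values:

```python
import numpy as np

v, f = 2000.0, 30.0              # velocity (m/s) and frequency (Hz), arbitrary
k = 2.0 * np.pi * f / v          # magnitude of the wavenumber vector, omega/v

for kx in (0.5 * k, 0.9 * k, 1.2 * k):    # horizontal wavenumbers
    kz2 = k**2 - kx**2
    if kz2 >= 0.0:
        print(f"kx/k = {kx/k:.1f}: kz = +/-{np.sqrt(kz2):.4f} /m "
              "(the two roots: down- and upgoing waves)")
    else:
        print(f"kx/k = {kx/k:.1f}: kz = +/-{np.sqrt(-kz2):.4f}i /m (evanescent)")
```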
For the most part, these FD techniques fell into abeyance as postSDM was superseded by Kirchhoff preSDM in the mid to late 1990's, when 3D prestack depth imaging became computationally feasible due to the appearance of efficient first-arrival traveltime solvers. Furthermore, the ability of Kirchhoff (and other integral methods) to produce limited subsets of the image made industrial application affordable - especially given the fact that we had to apply the imaging methods iteratively to build velocity models - and made them very attractive. Limitations of the early Kirchhoff migration codes rapidly became evident for complex media, but at that time 3D prestack wavefield continuation migration remained unaffordable. So, industrial efforts went towards the improvement of Kirchhoff migration, both in terms of amplitudes and handling different branches of the arrival times (discussed later in this chapter in the 'multipathing' section). In the early 2000's, as computer costs became less of an issue, there was a resurgence in one-way WE techniques, but for the prestack domain. Given that we have access to a reasonable FD algorithm, wavefield extrapolation implementations of the one-way scalar wave equation are relatively simple to write compared to an integral scheme, but in principle are more costly if prestack migrated data have to be produced or if many iterations are needed for construction of the velocity model. With integral migrations, it is routine to output the data for model update sorted by source-receiver offset at each surface location, resulting in familiar-looking migrated gathers, variously referred to as common reflection point (CRP) or common image point (CIG) gathers. For shot domain and other wavefield extrapolation techniques, various additional approximations are required to produce gathers for velocity analysis.

Anisotropy

The next degree of complexity introduced into both time and depth migrations was the handling of anisotropy: this is discussed in more detail in Chapter 6. Anisotropy describes a phenomenon where the propagation velocity varies with the direction of propagation at a single location. In an anisotropic medium, migration needs to understand and account for anisotropy to ensure accurate positioning of events. For horizontally stratified media, the wave propagation velocity tends to be greater horizontally than vertically. Seismic waves recorded at the surface from sources also at the surface have a large component of lateral propagation: hence seismically derived velocities tend to be higher than the vertical velocities measured in a well. As a consequence, a depth migration that ignored such anisotropic effects would produce an image at a greater depth than is correct. (We could not simply reduce the migration velocity during isotropic migration, as this would fail to collapse diffractions and result in a blurred image, albeit at the 'correct' depth.) Other higher order moveout effects (due to refraction at interfaces and vertical velocity gradients) are dealt with in Kirchhoff prestack migration by using ray tracing for a 1D medium for preSTM, or ray tracing for a generally complex medium for preSDM.

Multipathing

Multipathing refers to the fact that energy can propagate from the surface to a reflecting element in the subsurface via several possible routes (Figure 1.11).

Figure 1.11: Multipathing - there is more than one possible route from the surface to the reflector when there are large velocity contrasts associated with irregularly shaped or lenticular geobodies. A Kirchhoff scheme usually only computes travel times for one ray path: what happens to the energy from the rest of the ray paths in the input data?
A conventional, commonly used single-arrival Kirchhoff migration scheme computes only one possible ray path associated with the velocity model (e.g., Nichols, 1994; Nichols et al., 1998), and hence is restricted in its ability to construct an accurate image in regions where multipathing occurs. The failure of a given migration technique to handle all arrivals also compromises the model building scheme. Multipathing often occurs beneath salt bodies. This is because salt typically has a complicated geometry together with a very different (usually higher) velocity than the surrounding medium. The resulting ray bending can then allow several paths through the salt to a reflector beneath it. Below a salt body, a single-arrival Kirchhoff migration is inappropriate as it will not capture all the required image energy, and part of the energy not correctly captured will appear in the gathers as a class of noise.

Remember that this multipathed energy is present in the input data and looks just like any other event in terms of its moveout behaviour. Migrating such data with a single-arrival migration scheme does not cause this energy to disappear; rather, it appears in the output gathers and image as spurious events and/or noise. Hence using such corrupted gathers as input to a model update scheme will yield an unreliable velocity model: the autopicking of these poorly behaved gathers will produce bizarre results, and the subsequent inversion will yield novel and unusual values of velocity! Figure 1.12 shows a Kirchhoff image of a North Sea salt dome, where the image below the overhanging flanks of the dome is poor. Conversely, in the wavefield extrapolation migration (WEM) image in Figure 1.13, there is better definition of the steep subsalt reflectors. A WEM technique is able to image all arrival paths because the entire input data set is used in the continuation process in the migration, rather than only the subset represented by the traveltimes in the single-arrival Kirchhoff scheme.

Figure 1.12: Anisotropic 3D Kirchhoff preSDM is inadequate below the salt body due to multipathing. In addition, some of the multipathed energy from the data appears as noise, as it is improperly handled by the migration.

One-way versus two-way wave propagation

Numerical solution of the wave equation involves solving a square root relationship of the wave equation. As for all square roots, there is a positive and a negative root. In the context of wave propagation, the physical interpretation of these roots corresponds to energy coming up and energy going down in the earth. A simple rendition of migration (the one-way propagation schemes) will deal only with energy that is upcoming. As seen later, this excludes many arrival paths; namely those that have undergone double bounces or have undergone continuous refraction (turning or diving rays). Two-way propagation refers to ray paths that change direction either on their way from the shot down to the reflector, or coming back up from the reflector to the receiver (Figure 1.14). Again, it is worth distinguishing between poststack migration, which inherently deals with upcoming primary reflection energy, and prestack migration, which involves only downgoing energy on the source side and upcoming energy on the receiver side.

Standard shot-based one-way wavefield extrapolation (WE) preSDM techniques image the subsurface by continuing (extrapolating) the source and receiver wavefields for each shot. At each continuation step (depth slice), the shot-side and receiver-side terms are combined to produce a contribution to the output image for this depth slice: this is referred to as the 'imaging condition'. The imaging condition is invoked by cross-correlating these two wavefields at each depth level, and then summing the contributions from all shots in the aperture to form the image.
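A minimal sketch of this cross-correlation imaging condition is given below. It is schematic only: the extrapolation itself is replaced by placeholder wavefield arrays, and the shot count and grid dimensions are hypothetical.

```python
import numpy as np

nshots, nz, nx, nt = 4, 50, 60, 100    # hypothetical shot count and grid sizes
rng = np.random.default_rng(0)

image = np.zeros((nz, nx))
for shot in range(nshots):
    # Stand-ins for the extrapolated wavefields: the source wavefield S(z,x,t)
    # continued downwards, and the receiver wavefield R(z,x,t) continued
    # upwards. A real scheme would build these depth slice by depth slice.
    S = rng.standard_normal((nz, nx, nt))
    R = rng.standard_normal((nz, nx, nt))
    # Imaging condition: zero-lag cross-correlation in time at each (z, x),
    # summed over all shots in the aperture
    image += (S * R).sum(axis=-1)
```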
One of the assumptions made in using this technique is that the wavefields travel along the direction of extrapolation only in one direction: downwards for the source wavefield, and upwards for the receiver or scattered wavefield. In practice, each of these wavefields will generally travel both up and down when the velocity model is complex, when turning (diving) ray-paths are involved, or when multiples are being generated. In addition, approximations in the one-way extrapolation techniques usually limit the dips in the final image to less than about seventy degrees. Steeper dips and reflections from turning rays are usually imaged using Kirchhoff techniques (turning-ray Kirchhoff migration solves for a subset of two-way propagation, namely the continuously refracted events), but these fail to deliver acceptable images once we have a multi-pathing problem.

Figure 1.14: Examples of one-way and two-way travel paths. Conventional one-way propagation, as assumed by standard migration schemes, does not change direction on the way from the surface down to the reflection point, nor from the reflection point back up. In two-way propagation, the direction of propagation changes either on the way down from the surface to the reflection point, or on the way from the reflection point back up to the surface: this requires a more complete solution of the wave equation to migrate such arrivals.

Recently there has been renewed interest in the two-way wave equation (Wapenaar et al., 1987), both with reverse time migration (Whitmore, 1983; Yoon et al., 2003; Bednar et al., 2003; Farmer, 2006; Zhou et al., 2006) and other more approximate wavefield extrapolation techniques (Shan and Biondi, 2004; Zhang et al., 2006). Reverse time migration (RTM) properly propagates the wavefield through velocity structures of arbitrary complexity, correctly imaging dips greater than 90 degrees. It even has the potential to image with internal multiples when the boundaries responsible for the multiples are present in the model. Although until recently considered economically impracticable, enhancements to computing capacity, both in terms of CPU speeds and highly efficient hardware infrastructure, have now made RTM commercially viable.

Perhaps the main difference between RTM and other migration techniques is in the way the propagator handles the data. For the better-known one-way migrations, a WE algorithm will take the recorded data, model the source with a band-limited wavelet, and propagate them both through the supplied earth model. The extrapolation step required for this is only dependent on the trace spacing of the recorded data and the chosen output sample rate (and the sample rate depends on the maximum frequency). RTM includes time domain forward modelling of data for the source. Extrapolating a given frequency typically requires several samples per wavelength (Alford et al., 1974; Levander, 1988); doubling the frequency being modelled requires twice as many samples per unit length, and as there are three spatial dimensions that a wave propagates in, the cost of modelling will increase in proportion to the cube of the frequency. A similarly fine time sampling rate must be used as well, so the overall cost increases in proportion to the fourth power of frequency, unless some computational efficiencies are exploited (e.g. Holberg, 1988).
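The fourth-power scaling quoted above is worth making explicit with a toy calculation (the 30 Hz baseline is an arbitrary reference of my own):

```python
base_f = 30.0                       # reference maximum frequency (Hz)

for f in (30.0, 40.0, 60.0):
    # three spatial axes each need sampling proportional to f, plus the time
    # axis: relative cost ~ (f / base_f) ** 4
    print(f"modelling to {f:.0f} Hz costs ~{(f / base_f) ** 4:.1f}x the "
          f"{base_f:.0f} Hz run")
```

Doubling the maximum frequency from 30 Hz to 60 Hz thus implies roughly a sixteen-fold increase in modelling cost.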
An intermediate route to addressing turning wave energy using a one-way WE scheme is to employ the two-pass one-way scheme. In this approach, first one of the square root solutions is downward continued, but saving the evanescent wavefield that corresponds to the complex square root solution. The saved complex root terms are then migrated, but reversing the direction of propagation. Such a scheme uses roughly double the CPU time of a one-way scheme, and can image turning waves and prism waves (double bounce arrivals: Bernitsas et al., 1997; Cavalca and Lailly, 2005). However, it cannot handle multiple bounce events. Alternatively, a conventional dip-limited one-way WE algorithm can be solved in a tilted co-ordinate reference frame. A sum over migration components run in several tilted frames can successfully image turning wave arrivals (Shan and Biondi, 2004).

Figure 1.15: 3D WEM (one-way migration) image of the salt body.
Figure 1.16: RTM image using the same input data and velocity model.

Solving the full (acoustic) two-way wave equation, using for example RTM, could in principle image multiples and double bounce arrivals, if the velocity model were accurate enough and boundary conditions could be adequately dealt with. From a model building perspective, it would thus not make much sense to use RTM in a complex environment with a model built using one-way wave propagation assumptions. Figure 1.15 shows a conventional one-way WEM image from a deep water West African salt province, while Figure 1.16 shows the corresponding RTM image. Both images used the same model. It is clear that the WEM result is missing the steeply dipping salt flank, since this is illuminated only with turning and double bounce arrivals. Also, the WEM has a class of noise in the image which is probably the result of the two-way arrivals in the input data being mispositioned in the one-way image. In this case, the model was built using ray-based tomography and WEM images to pick the salt geometry. If an approach more in keeping with what RTM can achieve had been used, then the RTM result could probably have been further improved.

The migration operator and impulse response

It is instructive to introduce some more terminology here: namely the migration operator and the migration impulse response. For example, for a 2D time migration, the basic migration operator is a symmetric arc, which for zero-offset source and receiver separation is circular, and for non-zero separation is elliptical. If this time migration were performed on a zero-offset plane with laterally varying velocity, at each surface location the operator would be a circular arc, but the radius of this arc would change for each surface location, as the arc's radius is proportional to the velocity at that surface location. The impulse response would be the sum of all such arcs, producing the envelope of the arcs, which for laterally varying velocity would have an asymmetric shape. So, it is important to note that although the time migration operators are individually symmetric, the overall impulse response will not be if the velocity varies laterally.

Alternatively, the process of building the impulse response could be described as a sum along a diffraction trajectory which places the result of this sum at the apex. The curvature of this diffraction trajectory would also change shape if the velocity varies laterally. The surface location is usually denoted in terms of the position of the common mid point (CMP) between the source and receiver pairs. In Figure 1.17 we see a single live input sample on a 2D zero-offset plane at CMP location 200 and two-way time 2 s. The output migrated image of this single live sample will be formed by summing along all possible hyperbolic diffraction trajectories that fit in this 2D offset plane: in the simplest time migration case the shape of these trajectories is determined by the rms velocity to the apex of the diffraction. For example, the trajectory with apex at CMP 50 would collect and add all samples along this hyperbolic corridor, and place the result at the apex at 800 ms two-way travel time. The majority of such trajectories for this particular input data will only add zeroes together: the only live contributions resulting from this process will be for hyperbolic diffraction trajectories whose diffraction tails happen to intercept the single live sample. When all possible output sums have been computed, the locus of the results constitutes the migration impulse response, which would be symmetric for a constant velocity medium (and also for a 1D laterally invariant velocity function which changes only vertically), but would be asymmetric for laterally varying velocity.

Figure 1.17: Schematic showing the principle of summing along a suite of hyperbolic corridors to form the migration output. A single live sample is present in this offset plane (the blue star at CMP 200, and time 2 s). Any energy captured in a given hyperbolic corridor is placed at the vertex of that corridor (denoted by a circle) to constitute its output contribution. The sum of all such contributions forms the final image.
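The Figure 1.17 construction translates almost directly into code. The sketch below is a schematic constant-velocity, zero-offset, Kirchhoff-style time migration of my own (arbitrary grid parameters; no amplitude weighting, interpolation, or anti-alias protection): each output trace is formed by summing the input along hyperbolic diffraction trajectories and placing the result at the apex, so a single live input sample spreads out along the migration impulse response.

```python
import numpy as np

v, dt, dx = 2000.0, 0.004, 25.0   # rms velocity (m/s), sample rate (s), CMP spacing (m)
nt, ncmp = 500, 200               # samples per trace and number of CMPs

data = np.zeros((ncmp, nt))
data[100, 250] = 1.0              # one live sample: CMP 100 at t = 1.0 s

image = np.zeros_like(data)
t0 = np.arange(nt) * dt           # two-way time at the apex of each trajectory
for apex in range(ncmp):          # loop over output (apex) CMP locations
    for cmp_in in range(ncmp):    # sum along each hyperbolic corridor
        h = (apex - cmp_in) * dx  # lateral distance of the input trace from the apex
        t = np.sqrt(t0**2 + (2.0 * h / v) ** 2)   # hyperbolic diffraction traveltime
        it = np.rint(t / dt).astype(int)          # nearest-sample lookup
        live = it < nt
        image[apex, live] += data[cmp_in, it[live]]

# The non-zero samples of 'image' trace out the impulse response of the
# single input spike (semicircular when plotted in depth via z = v*t/2).
```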
So, it is important to note that although the time migration operators are individually symmetric, the overall impulse response will not be if the velocity varies laterally. Alternatively, the process of building the impulse response could be described as a sum along a diffraction trajectory, which places the result of this sum at the apex. The curvature of this diffraction trajectory would also change shape if the velocity varies laterally. The surface location is usually denoted in terms of the position of the common mid point (CMP) between the source and receiver pairs.

In Figure 1.17 we see a single live input sample on a 2D zero-offset plane at CMP location 200 and two-way time 2s. The output migrated image of this single live sample will be formed by summing along all possible hyperbolic diffraction trajectories that fit in this 2D offset plane: in the simplest time migration case, the shape of these trajectories is determined by the rms velocity to the apex of the diffraction. For example, the trajectory with apex at CMP 50 would collect and add all samples along this hyperbolic corridor, and place the result at the apex at 800ms two-way travel time. The majority of such trajectories for this particular input data will only add zeroes together: the only live contributions resulting from this process will be for hyperbolic diffraction trajectories whose diffraction tails happen to intercept the single live sample. When all possible output sums have been computed, the locus of the results constitutes the migration impulse response, and would be symmetric for a constant velocity medium (and also for a 1D laterally invariant velocity function which changes only vertically), but would be asymmetric for laterally varying velocity.

Figure 1.17: Schematic showing the principle of summing along a suite of hyperbolic corridors to form the migration output. A single live sample is present in this offset plane (the blue star at CMP 200, and time 2s). Any energy captured in a given hyperbolic corridor is placed at the vertex of that corridor (denoted by a circle) to constitute its output contribution. The sum of all such contributions forms the final image.

Figure 1.18: Kirchhoff preSDM of deep water 2D synthetic data using an 80 degree dip response.

All migration algorithms are implementations of an approximate solution of the wave equation, and one or more of these approximations usually have the effect of limiting the maximum dip that can be accurately reconstructed in the output image. For example, in a Kirchhoff scheme, a maximum dip is specified for performing the ray tracing. If there was signal in the input data emanating from reflectors with steeper dips, this information would be effectively filtered out of the resulting image by the dip limitation in the ray tracing. An example of this effect is shown in the following figures for deep water synthetic data. Figure 1.18 shows the preSDM image of a steep-sided sea-bed canyon, with ray tracing performed for dips up to 80 degrees, whereas Figure 1.19 shows the result for ray tracing only performed for dips of about 10 degrees. Two effects are noticeable: the steeply dipping segments of the canyon walls are absent, and also a type of migration noise, caused by the now inadequate cancellation of the migration operators, is present. Conversely, schemes such as finite differencing do not explicitly limit the dip of the operators, but are progressively more in error at steeper dips.
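The diffraction-summation view of migration can be captured in a few lines. Below is a deliberately simplified zero-offset, constant-velocity Kirchhoff-style summation in Python; real implementations add obliquity and amplitude weights and operator anti-aliasing, which are omitted here.

```python
import numpy as np

def diffraction_sum(data, dt, dx, v):
    """Toy zero-offset time migration of a 2D section by summation along
    hyperbolic diffraction trajectories.

    data: (nt, nx) zero-offset section; dt: sample rate (s);
    dx: trace spacing (m); v: constant rms velocity (m/s).
    Returns the migrated (nt, nx) image. Amplitude/phase (obliquity)
    factors and anti-alias protection are deliberately omitted."""
    nt, nx = data.shape
    image = np.zeros_like(data)
    x = np.arange(nx) * dx
    for ix_apex in range(nx):                  # output (apex) location
        offs = x - x[ix_apex]                  # distance to each input trace
        for it_apex in range(nt):
            t0 = it_apex * dt                  # apex two-way time
            # hyperbolic trajectory: t(x) = sqrt(t0^2 + 4 x^2 / v^2)
            t = np.sqrt(t0**2 + 4.0 * offs**2 / v**2)
            it = np.rint(t / dt).astype(int)
            ok = it < nt                       # keep samples inside the data
            image[it_apex, ix_apex] = data[it[ok], np.nonzero(ok)[0]].sum()
    return image
```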
So in that case, the steeper events may be visible in the output image, but would be systematically mispositioned.

Figure 1.19: Kirchhoff preSDM of deep water 2D synthetic data using a 10 degree dip response: the steep slopes of the canyon are severely degraded, and the image corrupted with algorithm noise (indicated in the ellipses) resulting from incomplete cancellation of the migration operators.

A comparison of migration impulse responses is shown for a very simple case in Figure 1.20. Creating an 'impulse response' is a simple way of assessing the general behaviour of a processing step, whether it be a band-pass filter or a more complex procedure such as a migration. To create an impulse response, an input trace containing a single spike-like wavelet is input to the migration. For a simple bandpass filter, the amplitude spectrum of the output impulse response would, for example, show the spectral content of the output from the filter. For migration, inputting an ensemble of mostly blank traces, but containing one trace with a single wavelet (or several wavelets at increasing times down this trace), will help to assess the output dip range of the migration algorithm, and also show how closely the migration conforms to the expected theoretical output behaviour.

The impulse responses in Figure 1.20 are computed for a constant velocity of 2km/s (hence a time and depth image will appear the same, and the response should be semi-circular), with a 50Hz wavelet at 4ms input sampling and a 10m trace spacing. On each of the eight responses shown are denoted the 45° and 70° dip angles. For the responses which significantly deviate from the correct semi-circular shape, the correct semi-circular response is superimposed in yellow. Figure 1.20a is the result of a phase shift algorithm, so gives a perfect semi-circular shape: the dips extend up to 90°. However, this would be degraded in an unusual way if we have lateral velocity variation (in which case some algorithms would interpolate between responses for a suite of laterally invariant results). Figure 1.20b is the Kirchhoff result with an explicit dip limit of 70°. Figure 1.20c is a high-order FD RTM result which is near-perfect, whilst Figure 1.20d is a lower order RTM result which has compromised the dip response somewhat, and is beginning to show some dispersion, indicated in the black ellipse. In this case, rather than a clean waveform along the impulse response, an extended ripple is visible, as the different frequencies within the input wavelet have been (incorrectly) separated by numerical dispersion. This effect is caused by the FD scheme not honouring each frequency component correctly on the regularly sampled output grid, giving the appearance of dispersion, whereby each frequency travels at a different speed. The 70° and 50° explicit FD results are shown in Figures 1.20e and 1.20f respectively, and finally results for 80° and 15° implicit FD are seen in Figures 1.20g and 1.20h. This latter class of algorithm was common throughout the 1980s, but is no longer used due to its assorted artifacts. The results shown for these particular 2D FD migrations clearly show that the equations used to model the 2D semi-circular wavefronts are not very circular, and do not simply end at the requested dip limit, but continue to create output contributions beyond the useful parts of the response. Hence it can be seen how impulse responses can be of use in revealing imperfections in a migration algorithm.
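A useful rule of thumb connected to this dispersion behaviour is the 'points per wavelength' criterion: the shortest propagated wavelength must be sampled by several grid points, or the FD operator starts to disperse the wavelet. The small Python helper below is a schematic version of that criterion; the factor of about five points is the commonly quoted value for low-order schemes (high-order operators need fewer), and is an assumption here rather than a universal constant.

```python
def max_undispersed_frequency(v_min, dx, points_per_wavelength=5.0):
    """Rule-of-thumb grid-dispersion limit for finite-difference wave
    propagation: the shortest wavelength v_min/f_max must be sampled by
    roughly 'points_per_wavelength' grid points (about 5 for a low-order
    scheme, fewer for high-order operators; cf. Alford et al., 1974;
    Levander, 1988). Returns the highest usable frequency in Hz."""
    return v_min / (points_per_wavelength * dx)

# e.g. a 1480 m/s water layer on a 10 m grid:
# max_undispersed_frequency(1480.0, 10.0) -> 29.6 Hz before visible
# dispersion sets in for a 2nd-order scheme.
```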
Algorithm noise in integral techniques

All algorithms will create some kind of noise in the output image, as they are not implementing perfect solutions of the wave equation. Mostly this created noise will be insignificant, but for some algorithms it will be worse than for others. For example, Kirchhoff migration builds an image by copying a sample of input data out along the 3D impulse response curve for the velocity model associated with the corresponding part of the subsurface, modifying the sample with an amplitude and phase factor that depends on the location in the subsurface relative to the original trace location, and summing all such responses to build the output image (Figure 1.8). Some of the energy spread along this impulse response will interfere constructively if within the Fresnel zone of the actual reflector (this is the principle known as stationary phase). This part of the response generates the output image, but the remainder of this energy does not contribute if the correct amplitude and phase factors are used, and should destructively interfere, so as to cancel out. Substantial protection against this possible noise contamination is afforded by filtering out aliased energy from the migration operators prior to summing to form the image (e.g. Gray, 1992; Lumley et al., 1994; Abma et al., 1999).

Figure 1.20: Comparison of migration impulse responses for different approximations: a) is the result of a phase shift algorithm; b) is the Kirchhoff result with an explicit dip limit of 70°; c) is a high-order FD RTM result which is near-perfect; d) is a lower order RTM result which has compromised the dip response somewhat, and is beginning to show some dispersion, indicated in the black ellipse; e) 70° explicit FD result; f) 50° explicit FD result; g) 80° implicit FD result; h) 15° implicit FD result. The latter two (implicit results) are highly dispersive.

In practice, some of the migration operator remains in the output image as a form of steeply dipping (sometimes aliased) noise. Different techniques have different characteristics for the residual noise: wavefield extrapolation techniques will leave less noise, and a beam migration (although similar to Kirchhoff in that it uses ray tracing) will also have less noise, as the beam technique only computes a contribution to the output image in the vicinity of the Fresnel zone (Figure 1.10).

Figure 1.21: Kirchhoff migration with steeply dipping noise.

In Figures 1.21 and 1.22 we see a real example comparing a Kirchhoff image and a beam image: the former has a type of dipping noise that tends to make the reflectors look 'choppy' or 'broken-up'. A similar class of noise is created by irregularly sampled input data: this can be mitigated by interpolating and regularizing the input offset volumes to produce regularly sampled input data. A second real example from offshore Vietnam (Figures 1.23 and 1.24), courtesy of Don Pham and James Sun, CGGVeritas (Pham et al., 2008), shows similar results, with the beam migration being less noisy. In this case, previously obscured faults below the basement are more clearly visible. The velocity model is slightly different from that used for the Kirchhoff migration, but the noise aspect is what is being considered here.
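A sketch of the operator anti-aliasing idea mentioned above: the steeper (more curved) the operator locally, the lower the maximum frequency that can be summed without aliasing. One common formulation limits the frequency so that the operator time shift between adjacent traces stays below half a period; the function below is a schematic version of that criterion, not any specific implementation from the cited papers.

```python
import numpy as np

def operator_antialias_fmax(t_op, dx):
    """Maximum unaliased frequency along a migration operator, following
    the usual criterion that the operator time shift between adjacent
    traces must not exceed half a period (in the spirit of Gray, 1992;
    Lumley et al., 1994; Abma et al., 1999).

    t_op: operator traveltimes (s), one per trace at spacing dx (m).
    Returns f_max per trace; the summation should low-pass each input
    sample to this frequency before adding it into the image."""
    slope = np.abs(np.gradient(t_op, dx))        # local |dt/dx| in s/m
    return 1.0 / (2.0 * dx * np.maximum(slope, 1e-12))  # avoid divide-by-0
```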
The final real example is from offshore mid-Norway. The input data are very noisy and contaminated with remnant multiple, hence the steep faulting below the main unconformity is difficult to identify in the conventional Kirchhoff preSDM (Figure 1.25). In the beam migration, the reduction in algorithm noise results in the steep faults being visible (Figure 1.26). In addition, the corresponding CRP gathers are cleaner, making autopicking of the gathers for model update easier.

Summary

The discussions here tried to familiarize the reader with the main limitations of various migration algorithms (Robein, 2003). Some of these limitations, for both the integral (ray based) and differential (wavefield-continuation) techniques, are summarized in Table 1.2. It should be noted that the algorithm we intend to use for producing the final image should be linked in performance to how we build the velocity model. For example, if we have steep geological structures, and we correctly selected a migration algorithm with a good dip response for the final imaging step, then it would be foolish to use a model building route that was in some way dip limited, as it could not correctly represent the steep structures in the velocity model. Hence the subsequent migration would be in error, even though the migration algorithm itself had the potential to image the structures. Or, for noisy data, where picking of residual moveout in the CRP gathers for model update becomes difficult, a beam migration might be more advantageous than a Kirchhoff scheme. As more complete migration algorithms have evolved over the years, so the associated complexity in the model building has also evolved (but invariably with some considerable lag). Table 1.3 outlines the evolution of industrially available migration algorithms over the past few decades. It should still be kept in mind, however, that all the schemes in use within the industry today are solutions of the acoustic wave equation, hence none of them deal in any meaningful way with mode conversion, or correctly treat the partitioning of transmission and reflection energy, and for the most part they also ignore absorption (Q), although some recent developments have begun to address absorption compensation during migration (see for example Mittet, 2007; Wang, 2008; Xie et al., 2009).

Figure 1.23: Kirchhoff migration with horizons showing a 'choppy' appearance due to algorithm noise. Image courtesy of CGGVeritas (Pham et al., 2008).

Figure 1.24: Corresponding beam migration with less noise. In this case, previously obscured faults below the basement are more clearly visible. The velocity model is slightly different from that used for the Kirchhoff migration, but the noise aspect is what is being considered here.

Figure 1.25: Kirchhoff migration of very noisy data from offshore mid-Norway, with heavy remnant multiple contamination. Imaging of steep faults below the unconformity is poor.

Figure 1.26: Corresponding beam migration for the very noisy offshore mid-Norway data. The image has less algorithm noise and the steeply dipping faults are better imaged.

Table 1.2: Integral versus differential methods (adapted from Jones and Lambaré, 2008).

Integral methods:
- Kirchhoff, Gaussian beam, and fast (controlled) beam are the best known; the less well known common reflection angle migration scheme is also in this category. Usually implemented in the time domain, but can be in the frequency domain.
- Distinguishing feature: the calculation of travel times is separated from the imaging, thus a subset of the image can be computed without needing to image the entire volume.
- Strengths: delivers sub-sets of the imaged volume, including offset or angle gathers, enabling target-oriented imaging (thus cost effective for iterative model building); good dip response; can handle irregularly sampled data, but needs careful amplitude and antialiasing treatment for this.
- Weaknesses: inherently kinematic (but can be readily adapted to include amplitude treatment); Kirchhoff ray tracing must be performed for each arrival path of interest, but is usually only performed for one arrival path (beam migration and CRAM inherently handle multi-pathing); the velocity field is coarsely sampled for travel time computation, then arrival times are interpolated back to seismic spacing, which can mis-represent rugose high velocity contrast boundaries (such as top salt).

Differential, extrapolation, or continuation methods:
- Finite difference wavefield continuation is the best known, in conjunction with 'phase shift plus corrections'. Each depth slice of the wavefield is computed from the previously computed slice, thus the entire image volume needs to be formed. Dip response is dependent on the order of the expansion used (thus potentially costly).
- Strengths: images all arrivals; simpler amplitude treatment, but still involves an approximate treatment of amplitudes; can be extended to two-way solutions of the wave equation (e.g. RTM).
- Weaknesses: images the whole volume (thus costly); obtaining good dip response is expensive; does not readily produce prestack data, thus difficult to achieve cost-effective iterative model building without 'restrictive' assumptions (e.g. mono-azimuth); requires uniformly/regularly sampled data, and the data usually needs to be padded out to a rectangular box.

Table 1.3: Time-line for evolution of industrial techniques (adapted from Jones et al., 2008). Each entry gives the period, the common domain and type of application, and the algorithm:

- 2D postSTM | Finite difference (FD), (x,t) and (x,f); initially with 15°, then 45° and later 60° dip limits.
- 1978-1988 | 2D DMO + 2D postSTM | Dip moveout (DMO) introduced to remove some aspects of the dip dependence of velocity prior to stacking (Sherwood et al., 1976).
- 1980-1988 | 2D postSDM | FD (x,t); initially 45° and later 60° dip limits.
- 1985-1995 | 3D DMO + 3D postSTM | 3D DMO (Jakubowicz et al., 1984; Jakubowicz, 1990); (x,y,f) time migration, 45° and later 60° dip limits.
- 1990-2001 | 3D DMO + 3D zero-offset constant velocity preSTM, followed by a de-migration of the stack and then 3D postSTM | 3D DMO + constant velocity phase shift (Stolt) zero offset preSTM, and subsequent de-migration, in conjunction with FD (x,y,f) postSTM.
- 1990-1995 | 2D full-offset preSDM | FD focusing analysis, interactive.
- 1993-1997 | DMO + 3D zero-offset constant velocity preSTM, followed by a de-migration of the stack and then 3D postSDM | Constant velocity phase shift (Stolt) zero offset preSTM, and subsequent poststack de-migration, in conjunction with FD postSDM.
- 1995-present | Full-offset v(x,y,z) 3D isotropic preSDM | Kirchhoff (x,y,z).
- 2000-2003 | Full-offset v(x,y,t) 3D preSTM | Kirchhoff (x,y,t), straight ray.
- 2002-present | Full-offset v(x,y,t) 3D preSTM | Kirchhoff (x,y,t), curved and turning ray, and anisotropic.
- 2000-present | Full-offset v(x,y,z) 3D isotropic preSDM | Wavefield extrapolation (WE), with for example FD or SSFPI, and non-WE beam.
- 2000-present | Full-offset v(x,y,z) 3D anisotropic preSDM outputting gathers | Kirchhoff (x,y,z), anisotropic turning ray.
- 2005-2008 | Full-offset v(x,y,z) 3D VTI preSDM outputting gathers | Wavefield extrapolation, with for example FD or SSFPI, and alternatively non-WE beam.
- 2006-present | Full-offset v(x,y,z) 3D VTI preSDM | Two-way wavefield extrapolation using reverse time migration, or two-pass one-way extrapolation.
- 2008-present | Full-offset v(x,y,z) 3D VTI preSDM outputting gathers | Beam or two-way wavefield extrapolation using reverse time migration.
- 2009-present | Full-offset v(x,y,z) 3D TTI preSDM outputting gathers | Beam or two-way wavefield extrapolation using reverse time migration.

2. Why do we need a detailed velocity model?

The limitations of time migration and benefits of depth migration

Time migration does not correctly honour Snell's law, leading to lateral mispositioning of energy. As noted earlier, a consequence of this approximation is that, on a length scale similar to the depth of the reflector, time migration assumes the velocity to be laterally invariant. Depth migration is meant to overcome these limitations by honouring the refraction caused by lateral velocity variation. However, to employ a depth migration to correctly deal with ray bending, the velocity field of the subsurface must first be estimated, so that the depth migration can deal appropriately with the ray paths.

In this chapter, several examples of time migration will be shown to highlight the limitations of the time migration method in comparison to depth migration. Also, the limitations of some depth migration schemes will be described. As was mentioned in Chapter 1, ray-based techniques face limitations on what scale of velocity anomaly they can resolve, either due to the breakdown of ray theory, or due to inadequate parameterization of ray spacings or slope estimation. In addition, some WE techniques also face restrictions in the degree of lateral velocity change they can correctly handle (e.g. Stoffa et al., 1990; Li, 1991; Ristow and Ruhl, 1994).

In terms of the impact on exploration, the sketch in Figure 2.1 shows the true location of a target lying beneath a lens-shaped high velocity body. A time migration could misposition the target such that it would appear at 'B'. Depth migration would correctly position the target at location 'C'. This lateral positioning error in time migration is the result of improperly handling Snell's law ray bending in the overburden. There are software packages available that will attempt to correct for this ray distortion effect on a time migrated image, but this is better done by simply using an actual depth migration.
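The size of this mispositioning can be gauged with elementary ray bending. The following Python sketch, using illustrative numbers only, compares the lateral displacement of a ray crossing a high-velocity layer when Snell's law is honoured with the straight-ray path that time migration effectively assumes.

```python
import numpy as np

def refracted_lateral_shift(theta1_deg, v1, v2, layer_thickness):
    """Horizontal displacement accumulated by a ray crossing a layer,
    with and without honouring Snell's law at its top.

    theta1_deg: incidence angle from vertical above the interface;
    v1, v2: velocities above/below (m/s); layer_thickness in m.
    Returns (dx_refracted, dx_straight); the difference is a rough
    measure of the lateral mispositioning sketched in Figure 2.1."""
    theta1 = np.radians(theta1_deg)
    sin2 = (v2 / v1) * np.sin(theta1)      # Snell's law
    if abs(sin2) >= 1.0:
        raise ValueError("post-critical incidence: no transmitted ray")
    theta2 = np.arcsin(sin2)
    return (layer_thickness * np.tan(theta2),
            layer_thickness * np.tan(theta1))

# e.g. 20 degrees incidence onto a 1 km thick 3000 m/s lens under a
# 2000 m/s overburden:
# refracted_lateral_shift(20.0, 2000.0, 3000.0, 1000.0)
# -> (~597 m, ~364 m): over 200 m of lateral shift that the straight-ray
#    (time migration) assumption ignores.
```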
Trying to estimate the amount of error analytically is very difficult if more than one dipping layer is involved and if the velocity in each layer is not constant, so the most practical way to assess such error involves forward modelling using a representative velocity model. Some examples of these procedures will be shown in Chapter 7, as part of the historical review.

To contrast the behaviour of time and depth migrations in a quantitative sense, consider the simple synthetic deep water example, with sea-bed canyons, constant velocity layers, and a flat deep event, shown in Figure 2.2. In the prestack time migration (preSTM) result (Figure 2.3), the image is acceptable for the sea bed and the shallow reflectors. However, passing through the modest velocity contrast dipping layers, significant ray bending occurs, and the Kirchhoff preSTM becomes unacceptably distorted, especially for the deepest flat layer in the model. In contrast, the Kirchhoff prestack depth migration (preSDM) image is a reasonable representation of the modelled geology (Figure 2.4). Both these migrations used the 'known' velocity model shown in Figure 2.2 (i.e. that used to create the synthetic data).

Figure 2.1: Recorded energy needs to be relocated to its 'true' position using an appropriate approximate solution to the elastic two-way wave equation (and what is 'appropriate' depends on the objectives). But first an estimate of the interval velocity is required.

Figure 2.3: Kirchhoff preSTM of deep water synthetic data (converted to depth) with the RMS version of the correct model. The deepest event should be flat.

Figure 2.4: Kirchhoff preSDM of deep water synthetic data with the correct interval velocity model.

Figure 2.5 shows the velocity model and preSDM result from a real North Sea example (courtesy of Tuscan Energy; Goodall et al., 2004) for relatively simple geology, with flat-lying overburden sediments over a structural high on an unconformity. For this simple structure, the 3D preSTM result in Figure 2.6 looks acceptable, and would be suitable for a regional appraisal. Using a slightly more detailed velocity model (namely that derived during an iterative preSDM project), the preSDM image in Figure 2.7 shows clearer fault imaging, and the faults are displaced about 200m laterally with respect to the time image. So if a well was to be located adjacent to the fault indicated in the figures, then the preSTM would be an unacceptable product. From this comparison it is clear that our objectives should dictate what level of technology is appropriate for the solution of a given problem. It is not that the time migration is 'wrong'; it has its place for many applications, but could be inappropriate for others.

The velocity model from a second real example, this time from the Norwegian sector of the North Sea (courtesy of ConocoPhillips Norway; Farmer et al., 2006), is shown in Figure 2.8. The Ekofisk field, renowned for its seismically obscured area (due to gas leakage from a major oil field in the Ekofisk chalk), is difficult to image with surface seismic data, and the time migration shown in Figure 2.9 typifies this issue, showing a large seismically obscured region over the crestal structure. However, careful and detailed velocity model building, incorporating several small-scale low-velocity gas-charged sand pods, followed by depth migration, can improve the image (Figure 2.10). It could well be that a time migration with the improved depth migration model would give a better result than the preSTM of Figure 2.9 (which used the original model), but using only time migration techniques would be unable to build the more reliable model in the first place.

Figure 2.5: 3D preSDM of North Sea data with interval velocity superimposed.

Figure 2.6: Kirchhoff 3D preSTM of North Sea data with simple flat-lying overburden. The velocity model used was a smoothed version of the preSDM model. The vertical line indicates the location of a fault. Data courtesy of Tuscan Resources (Goodall et al., 2004).

Figure 2.7: Kirchhoff 3D preSDM of North Sea data converted to time for comparison with the preSTM. The faults are sharper and displaced about 200m laterally compared to the preSTM, even for this simple structure (as indicated by the shifted position of the vertical line).

Figure 2.8: Interval velocity model for Norwegian North Sea data.

Figure 2.9: Kirchhoff 3D preSTM of Norwegian North Sea data with gas leakage obscuring the target. Data courtesy of ConocoPhillips Norway (Farmer et al., 2006).

What does the migration algorithm 'see': honouring the velocity field

Lateral velocity variation is addressed for the most part by abandoning time migration and moving to a depth migration approach to the imaging problem. In the discussion of time versus depth migration, it was noted that if the effects of significant lateral changes in the velocity of the subsurface are to be honoured, then depth migration must be used. However, some depth migration schemes are themselves limited in the degree to which they can honour lateral velocity change. The issue of honouring the velocity field is a subtle one. The velocity field we supply to the migration algorithm might be very detailed and, having spent a lot of time and effort building a representative velocity model, perhaps picking very detailed constraint horizons, it would be nice to think that our velocity model survived the transition from model building software to migration algorithm. This wish is indeed fulfilled for many wavefield extrapolation techniques (finite difference, WEM, RTM, etc.). However, for ray-based high frequency approximation migration schemes, such as Kirchhoff and beam, the migration algorithm does not in fact 'see' a velocity model, and there are various intermediate steps which convert velocity information to travel times or slopes associated with the ray-paths used during the migration.

Figure 2.10: Kirchhoff 3D preSDM of Norwegian North Sea data with the gas leakage problem partly resolved, as detailed model building followed by preSDM can address some of the ray bending issues.

It is not only ray-based schemes that can face limitations. For example, a phase-shift technique has a very good dip response, so forms the basis of several migration techniques, but is only valid for a laterally invariant velocity field. To adapt this scheme to laterally varying media requires interpolation between sets of individually laterally invariant results. Conversely, finite difference schemes are well able to handle lateral variation, but one-way schemes are in general more dip-limited (as they use a truncated series expansion for the square root terms in the migration operator).
Kirchhoff and beam techniques handle lateral velocity variation very well, as long as the spatial wavelength of these changes is much longer than the seismic wavelength (and both have a good dip response as well). However, for lateral velocity variation on a length scale similar to the seismic wavelength, ray techniques are no longer appropriate; this will be discussed in more detail in the chapter on tomography, and was shown pictorially in Figures 1.6 and 1.7.

The most widely used ray-based technique is the single-arrival Kirchhoff integral migration, which is usually implemented in the time-space domain. In Kirchhoff migration, the migration process is separated into two stages: computation of the travel times along ray-paths through the velocity model, and summation of information associated with these travel paths. In practice, the travel time calculation is performed by considering a 2D surface acquisition position grid sampled at about 125m x 125m, representing both the source and receiver positions, and a 3D subsurface output volume sampled at about 75m x 75m x 50m, depending on the complexity of the velocity field. From each surface location on the 2D grid, one-way travel times are computed to each of the nodes in the 3D subsurface volume (Figure 2.11). Given that an input trace's shot and receiver locations will not generally lie on the surface nodes used for calculation, the travel time tables associated with the nearest neighbours must be accessed and then interpolated. Also, given that the desired output samples will not lie on the 3D volume nodes, we must also interpolate those values between nearest neighbours. These interpolations introduce errors and, in addition, if the ray spacing is too coarse, we could conceivably miss some detail from the velocity model.

The ensuing Kirchhoff migration uses these computed travel times from a set of (usually pre-computed) tables for this velocity model. However, if the input velocity model contains some detailed interpreted horizons with features at a scale length less than the surface travel time grid spacings, then we can lose resolution during our travel time computation if we are not careful (Jones and Fruehn, 2003). Also, there is an inherent limitation in ray tracing concerning the degree of spatial variation in velocity (and its first derivative) that can be honoured by the ray tracing. Consequently, ray tracing algorithms usually employ some degree of smoothing to 'precondition' the model. Once the migration algorithm has read in the appropriate travel times, it then performs assorted interpolations to try to get times representative of the actual recorded data's shot and receiver locations (rather than those computed on the regular surface grid), and also to get the times to the desired subsurface points (on the actual output spacing, say 12.5m x 12.5m x 5m, rather than the computed locations). These interpolations can introduce further error.

Figure 2.11: Rays are traced in the velocity model from a coarse grid of surface locations to a finer mesh of subsurface image points, BUT the actual shot and receiver locations are not on the surface grid nodes, and the subsurface points are not the image points.

Figure 2.12 shows the velocity model for some relatively flat-lying data with a series of small gas-charged lenses in the overburden of the Ekofisk field (courtesy of ConocoPhillips Norway).
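The interpolation step itself is simple in principle, which is why its error contribution is easy to underestimate. A schematic bilinear interpolation of pre-computed traveltime tables to an off-grid shot or receiver position might look like the following; the array layout and names are assumptions for illustration, not a description of any particular production code.

```python
import numpy as np

def interp_traveltime(tt_table, surf_xy, grid_x, grid_y):
    """Bilinear interpolation of one-way traveltime tables to an actual
    shot/receiver position falling between the coarse surface grid nodes
    (e.g. 125m x 125m) used for the ray tracing.

    tt_table: (ny, nx, ...) traveltimes from each surface node to the
    subsurface image points; surf_xy: (x, y) of the real source/receiver;
    grid_x, grid_y: 1D node coordinates. A similar interpolation is then
    needed from the coarse subsurface mesh to the output image points."""
    x, y = surf_xy
    ix = np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2)
    iy = np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2)
    wx = (x - grid_x[ix]) / (grid_x[ix + 1] - grid_x[ix])
    wy = (y - grid_y[iy]) / (grid_y[iy + 1] - grid_y[iy])
    # Weighted sum of the four surrounding nodes' traveltime tables:
    return ((1 - wy) * (1 - wx) * tt_table[iy, ix]
            + (1 - wy) * wx * tt_table[iy, ix + 1]
            + wy * (1 - wx) * tt_table[iy + 1, ix]
            + wy * wx * tt_table[iy + 1, ix + 1])
```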
Figure 2.13 shows a 3D anisotropic Kirchhoff preSDM result obtained using this velocity model, and Figure 2.14 shows the corresponding result obtained using a wavefield continuation migration with the same velocity field. The latter technique has honoured the small scale, high velocity contrast features in the model. The gas lenses, which are about 200m wide, have an interval velocity of about 1400m/s in a background velocity of about 2000m/s, hence a ray tracing procedure has difficulty preserving this detail.

Figure 2.12: Interval velocity model for Norwegian North Sea data with small scale details based on dense well control.

Figure 2.13: Flat data with gas lenses: the Kirchhoff ray tracing cannot honour the short wavelength velocity anomaly. Data courtesy of ConocoPhillips Norway.

Figure 2.14: Wavefield extrapolation migration with the same input data and model is better able to image the small features.

Converted mode arrivals can be migrated using a simple variation on the Kirchhoff scheme described above. For a conventional Kirchhoff scheme, the ray paths from the surface to the subsurface are computed from a grid of regularly spaced surface locations. The one-way travel times derived in this way are then used to represent both the shot-to-reflector and the reflector-to-receiver travel times. Under the assumption that we know where conversions will take place, this scheme can be modified slightly by using two velocity models and performing ray tracing for each model. A P-wave model is used to compute the downgoing travel path times for all sources, and then an S-wave velocity model is used to compute representative upcoming travel path transit times for the converted leg of the overall travel paths. If we know which path segments to combine, then a travel time response for a PS converted arrival can be computed to drive the migration.

For beam migration, there are three basic stages in the process: 1) measurement of the time-dips present in the input data (related to the source and receiver surface emergence angles) for all shot and receiver locations, for all locally coherent events present in the gathers; 2) use of the current velocity model to compute the surface location take-off (emergence) angles and, via ray-tracing in conjunction with the travel times associated with these time dips, to find the output locations of the associated image contributions; and 3) summation of information associated with these travel paths just within the Fresnel zone associated with the output location. In comparison with Kirchhoff migration, beam techniques have the advantages of dealing with multi-path arrivals and of keeping costs down by computing operators only in the vicinity of a narrow trajectory. However, depending on the beam scheme employed, this dip representation may be sparse (designed to characterize only the significant features of the data), and the ray tracing associated with the selected sub-set of events might not encompass all the detail in the velocity model, and indeed may be missing some events completely if they are not characterized in the slope tables.

The first stage of a beam migration scheme commences by picking time-slopes on the input data gathers. To determine these slopes, say on a shot gather, the gather could be divided into narrow vertical strips, and a slant-stack (tau-p) analysis performed to estimate all the locally coherent slopes at the central trace of the current narrow vertical strip.
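A toy version of this local slant-stack scan, with assumed parameter choices, is sketched below: for each trial slowness, traces in the strip are time-shifted and stacked, and the slopes with the highest stack power are retained. A real beam scheme would additionally reject low-coherence picks by thresholding, as described next.

```python
import numpy as np

def local_slopes(strip, dt, dx, p_scan, top_n=3):
    """Estimate locally coherent time-dips at the centre of a narrow
    vertical strip of a shot gather by slant-stack (tau-p) scanning.

    strip: (nt, ntr) traces; dt: sample rate (s); dx: trace spacing (m);
    p_scan: trial slownesses (s/m). Returns the top_n slopes ranked by
    stack power."""
    nt, ntr = strip.shape
    offsets = (np.arange(ntr) - ntr // 2) * dx   # relative to central trace
    power = np.zeros(len(p_scan))
    for k, p in enumerate(p_scan):
        shifts = np.rint(p * offsets / dt).astype(int)
        stack = np.zeros(nt)
        for j in range(ntr):
            # np.roll wraps samples around; adequate for a sketch where
            # the shifts are small compared with the trace length.
            stack += np.roll(strip[:, j], -shifts[j])
        power[k] = np.sum(stack**2)
    best = np.argsort(power)[::-1][:top_n]
    return p_scan[best]
```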
Parameters are specified to govern the width of the vertical strip for the slant-stacking, and also the number and overlap of these vertical strips. Picks of low coherence events can also be rejected, based on coherence thresholding, to leave a sparse slope field representation. These parameters will govern how well the data are represented in terms of the slope field measurements. Subsequent ray tracing at each shot and receiver position is performed for this table of slopes, so if some slopes are missing (due to sub-optimal slant-stack fitting or over-aggressive threshold rejection), then the data space is inadequately sampled for constructing a fully representative image.

What algorithm where?

It is instructive to question what kind of depth migration is required for a given geological environment. Once it is decided that time migration may not be appropriate for the complexity of the problem in hand, the kind of preSDM algorithm itself needs to be selected. Table 2.1 summarizes the options in terms of the complexity of the distant overburden and also the structure in the vicinity of the target. The method of velocity model building should ideally be tailored to the type of depth migration algorithm to be used: the commonly available approaches to depth migration nowadays include Kirchhoff, beam, one-way 'wave equation' extrapolation (WEM), and two-way reverse-time migration (RTM). In Table 2.1, a simple overburden refers to strata that might be flat-lying with slowly varying velocities, whereas a complex overburden might be one exhibiting steep dips and/or rapid lateral velocity change. Deeper in the earth, and associated with the region of interest for the imaging, the notion of a simple or complex target refers to the immediate area having smoothly varying velocity and low relief structure, or conversely rapidly varying velocity and complex structure. Table 2.2 shows an outline of the different migration issues (dip response, multi-pathing, etc.) versus each type of algorithm's performance. This table is only meant as a rough guide. Naturally, if more effort is put into the development of an algorithm, then its performance can be enhanced: e.g. higher-order expansions for FD will provide a better dip response, and better amplitude behaviour can be achieved with Kirchhoff with more computational effort.

Summary

If velocities vary laterally on a scale length comparable to the depth of the target or the length of the acquisition cable, then depth migration will produce a more accurate image than time migration, provided the velocity model is representative of the subsurface. However, the decision is not simply a matter of time migration versus depth migration: it must first be decided if such lateral positioning accuracy is required, or whether cost and time considerations predominate. Also, even if a depth migration approach has been selected, the type of depth migration scheme should also be considered vis-a-vis the level of geological complexity.

Apart from the technical aspects related to how algorithms function, there is also a difference in the way we need to work. Historically, when time migration was being used, the oil company interpreters would at best monitor the processing, and wait until the preprocessing was finished, the velocities picked, and the time migration run, before beginning the interpretation process. What was then passed on to the interpreter was the final product from the viewpoint of the geophysicist. Interpretations of layers from the time migrated volume would be made and later converted to depth using wells for calibration. Thus, the process was purely sequential. Conversely, depth imaging is an iterative multi-disciplinary effort, involving ongoing input from the oil company interpreter during several of perhaps many iterations of model update and (depth) migration. The interpretation may evolve during this process, as understanding of the prospect changes and is refined. Conversion from geophysical depth (i.e. the depth seen in the final preSDM image) to geological depth (i.e. the depth actually measured in a borehole) may still need to be made, either on the interpreted depth horizons or on the depth volume, depending on whether anisotropic effects and localized heterogeneities have been adequately addressed.

Hence the complexity of the velocity model can evolve not simply because of the inversion update process being used, but also due to changes in any preconceptions that the interpreters might have; additionally, their practical geological insight may also rule out implausible inversion results. Due to the various limiting assumptions of the migration schemes available, it is important to couple the complexity of the algorithm to the complexity of the geological problem. As a final comment, we need to be aware that different migration algorithms are based on varying mathematical simplifications of the acoustic wave equation, and make differing assumptions about the behaviour of the subsurface. These limiting assumptions may have unacceptable consequences if we are using a given algorithm as part of the model update loop in an imaging project. We need to match the performance of the algorithm we select to the complexity of the subsurface model we expect to build.

Table 2.1: When to use what algorithm?
- Simple target, simple overburden: Kirchhoff preSTM.
- Simple target, complex overburden: WEM or beam preSDM.
- Complex target, simple overburden: Kirchhoff or beam preSDM.
- Complex target, complex overburden: WEM or beam preSDM, but RTM is preferred.

Table 2.2: Algorithm performance, rating each scheme (Kirchhoff preSTM; single-arrival Kirchhoff preSDM; fast beam; Gaussian beam; FD, phase shift + interpolation, and phase shift + FD WEM; RTM) against the various imaging criteria (dip response, multi-pathing, cost, etc.). 0 = not available; 1 = worst or hardest; 3 = best or easiest.

3. How detailed can we get in building a velocity model?

In this chapter the background for our expectations will be set: what can be obtained from measurements made on surface (or other) seismic data, and how 'precise', 'accurate', and 'certain' is the model built from these measurements? As a corollary, some of the limitations of algorithms used to construct the image were already mentioned. It was noted in the previous chapter that, to accomplish depth imaging (to put events in their correct lateral positions), a relatively detailed and accurate velocity model was required. Thus the question naturally arises as to how accurate a model can be made (Landa et al., 1998; Clapp, 2008; Glogovsky et al., 2009). How accurate are our measurements, how accurate is the inversion of those measurements, and finally, how accurate are the migration algorithms that use the parameters that were derived?
Inversion theory tells us that for incomplete or inaccurate data, a derived model is non-unique, and the sparser or more erroneous the measurements being inverted, the less unique the resulting model will be. The degree of parameter uncertainty is characterized by the 'null space' in inverse theory, and its extent is determined by the set of observations that we don't have, but would need, in order to uniquely define our resulting model. The 'null space' contains all those models which explain the observed data equally well. The degree of uncertainty can be quantified in a general sense using some of the by-products of inversion theory (namely the resolution matrix), which will be discussed in Chapter 5.

Precision and accuracy

Terminology is also important in this context, the terms accuracy, precision, robustness, etc. being often heard. The difference between accuracy and precision can be distinguished via the simple analogy of a dart board, as shown in Figure 3.1. If the darts are uniformly scattered around the bulls-eye, then the attempt at hitting the centre is accurate, but not precise (the average position of this collection of hits is centred on the bulls-eye, but the scatter of hits is large, giving a large variance). Conversely, if all the darts are clustered together, but off to one side of the bulls-eye, then the positioning is precise, but not accurate. In this case the results are suffering from a bias. Now, in the context of velocity model building, when the velocity is estimated with many measurements, only the precision of that estimate is improved, but not the accuracy. Thus, if the values coming out from a velocity estimator were all erroneous, but consistently erroneous, then we would simply have a very precise estimate of that inaccurate result. For example, a second-order moveout curve can (incorrectly) be fitted to data displaying anisotropic (higher-order) behaviour, yielding very precise velocity estimates. But these estimates are all wrong (although consistently wrong, in other words biased), as the wrong shape has been fitted to the moveout curves.

Figure 3.1: Dart-board analogy for explaining the difference between precision and accuracy, and the associated properties of bias and variance. Precise but inaccurate shooting has a large bias; accurate but imprecise shooting has a large variance. What is the meaning of an 'error bar' in this context?

This is demonstrated in the following synthetic anisotropic example, with moveout behaviour that exhibits 4th order ('hockey-stick') effects. Fitting a hyperbola to the arrival time curves (i.e. using conventional 2nd order velocity analysis over the full offset range for these data) would be trying to fit a hyperbola to a non-hyperbolic trajectory, which is inappropriate. Increasing the density of velocity analysis locations would in no way make the inappropriate curve fitting more appropriate, but would simply provide a very precise estimate of the wrong answer! Figure 3.2 shows such a gather after NMO with the near-vertical velocity, hence it is 'flat' to second order (over the first half of the cable), leaving the 4th order effects visible to be assessed for 4th order parameter estimation. This is an appropriate treatment for these data. Conversely, Figure 3.3 shows the gather with 2nd order NMO using a velocity estimated over the full offset range (thus having allowed the 4th order effects to bias the 2nd order fitting).
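The 'hockey-stick' behaviour can be reproduced directly from the Alkhalifah-Tsvankin non-hyperbolic moveout expression. The short Python sketch below, with illustrative values for the zero-offset time, velocity, and eta, shows how the residual after hyperbolic NMO stays negligible at near offsets but grows rapidly at far offsets, which is why long offsets are needed to estimate eta.

```python
import numpy as np

def nonhyperbolic_t(x, t0, v_nmo, eta):
    """Alkhalifah-Tsvankin non-hyperbolic moveout, the 4th-order
    'hockey-stick' behaviour illustrated in Figures 3.2-3.4."""
    x2 = x**2
    t2 = (t0**2 + x2 / v_nmo**2
          - 2.0 * eta * x2**2
          / (v_nmo**2 * (t0**2 * v_nmo**2 + (1.0 + 2.0 * eta) * x2)))
    return np.sqrt(t2)

x = np.linspace(0.0, 6000.0, 121)            # offsets out to 6 km
t = nonhyperbolic_t(x, 2.0, 2500.0, 0.10)    # t0 = 2 s, eta = 10%
t_hyp = np.sqrt(2.0**2 + x**2 / 2500.0**2)   # 2nd-order (hyperbolic) curve
residual = t_hyp - t
# residual is only ~9 ms at an offset equal to the reflector depth
# (~2.5 km here), but grows to ~100 ms by 6 km offset.
```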
Figure 3.4 shows 4% order correction moveout applied to the gather, so that it is now more correctly flattened over all offsets (the data are not migrated, so events in the gather are still not truly flat). As an aside, the effects of measurement error for the anisotropic parameter n (Alkhalifah, 1997, discussed in Chapter 6) can be noted on Figure 3.2. The 4% order curvature is more pronounced on the far offsets, and thus 9 is more readily estimated when large offsets are available. An estimate of can be made by meas- ureing the arrival time of the far offsets following 2 order moveout correction. If At is the deviation of the moveout trajectory from t, following NMO (as indi- cated on Figure 8.2), then: How detailed can we get in building a velocity model? Synthetic anisotropic data | |2"4 order moveout correction | estimated using only 3km offsets. e J i : nm lat over near offs: iii 1 of Measured from At? at 6km = 14% 1 eft Measured from At? at 5km = 11% Figure 8.2: 2 order NMO of anisotropic data with the near-vertical velocity (Vu) correctly flattens the near offset, leaving the higher-order moveout curvature clearly visible and measurable. Synthetic anisotropic data [i= Ee To r n ras m :|2°¢ order moveout correction estimated using 6km offsets i = (i.e. ignoring anisotropy) iW tim tm Full offset (2"4 orde: cannot get it flat I Figure 3.3: 2” order NMO of anisotropic data with velocity estimated over all offsets: this isan incorreet thing to do, as the near offsets are now biased (curving down) giving an overall characteristic ‘U’ shaped appearance. 61 Chapter 3 Synthetic anisotropic data eee mde mee ss. = 4 order moveout correction applied re ene _ eme- t i = mee — we Figure 3.4: 4" order moveout correction of anisotropic data. 1g = ae i ine #27) GB.) 28 (8 — Ar?) where x is the offset where the measurement is being made, v,.,, is the measured stacking velocity, n,,is the cumulative (or effective) value of 1, (if 7 is the interval value of the 4" order parameter, then nq is analogous to the rms velocity in that it is the cumulative quantity that is actually measured for a given reflector, being an average of all preceding interval 1 values). Making the measurement of At’ at an offset of 5km yields » = 11%, whereas measuring At’ at 6km offset yields » = 14%. These differences are primarily due to measurement error in picking the arrival time of the wavelet after NMO correction. “Figure 3.5 shows a real example of anisotropic data after migration with an iso- tropic model with velocities designed to flatten the gather over its full offset range (left) and after anisotropic migration using a more correct anisotropic model (right). We can clearly see the manifestation of the bias in the isotropic migration as the events in the gather are not flat, but rather have a slight ‘U’ shape to them, bowing down slightly at the mid-offset range. Uncertainty, non-uniqueness, and ambiguity When we derive a velocity field for migration, we assume that the derived values are. meaningful in some way (Landa et al., 1998; Kosloff and Sudman, 2002). However, 62 How detailed can we get in building a velocity model? Isotropic preSDM Anisotropic preSDM Figure 3.5: Real data CRP afier isotropic preSDM (left) show the characteristic “U’ shaped appearance instead of being flat. The anisotropic preSDM (right) is correctly flat- tened over all offsets. 
Both gathers have been converted back to time for comparison before we can assess the ‘meaning’ of these values, either in terms of accuracy or ision, we need to understand what assumptions and approximations have been made in both their derivation, and in the underlying physical model we have taken to represent the process of elastic wave propagation. It should be noted that there are two components to the uncertainty: there is a bias, due to the nature of the numerical approximations we make, plus the variance in the measurements we have made. In general we can thus say that: uncertainty = bias + variance. We mentioned in the previous chapters some of the limitations involved in migration schemes, and the impact of choice of migration algorithm on the resultant image, in terms of what set of simplifications the algorithm is based on, and the physi- cal consequences of using such a scheme, vi the images it will produce. The uncertainties resulting from various approximations in our algorithms are forms of bias. There are also uncertainties introduced by imperfect forward modeling, such as the artifacts of ray tracing, grid noise and diffractions for finite-differences, and so on (for example, ‘Tirantola (1987) and ‘Tarantola and Valette (1982) give a gen- eral framework to incorporate all sources of uncertainty in geophysical inversion). At the outset of any imaging project, as we set-out to build a velocity-depth model of the subsurface, there is an expectation that we are measuring something meaningtul in order to produce a reliable image. It is instructive to step back and assess what we are actually measuring in our ‘velocity’ estimation procedures, what the limits on its accuracy are, and how the subsequent imaging algorithm sets-out to 63 Chapter 3 use these values to construct an image. The work of Al-Chalabi (1973, 1994, 1997) gives great insight into the meaning of the term ‘velocity’ and how the measured quantity relates to both the underlying rock properties, and the processing param- eters we need for routines such as stacking and also for migration. Itis clear that what we call ‘velocity’ as measured from surface seismic data bears little direct resemblance to the speed of sound within a localized volume of rock. In Figure 3.6, we see a well sonic profile and an overlay of the seismically derived migration velocity function, The well-logging tool measures transit times (in microseconds per foot) on a length scale of a few centimeters, in the direction of the well bore (assume for simplicity that it is vertical). The seismically derived velocity on the other hand, is based on a large scale nergy propa- nd average over many cubic kilometers of rock through which the seismic gated: the direction of propagation changes continuously as the wavefield refracts. reflects; hence the perceived velocity as a function of direction also changes. For ar sotropic media (Thomsen, 1986, Alkhalifah 1997, Alkhalifah and ‘Tsvankin 1995), we are thus at worst averaging vector quantities to derive a scalar, and at best deriving a simplified version of the directional properties of ‘velocity’ (via Thomsen’ anisotropy coeflicients). Hence it should not be surprising that well-log velocities and seismically derived velocities differ, and it can also be seen that the ‘migration velocity’ in the anisotropic case is not simply the ‘interval velocity’ Interval velocity 2000m Depth Figure 3.6: Well velocity and tomographically derived seismic during several iterations of model update. 
‘The latter is not sion of the former. We observe bulk sI jerval velocity obtained imply a smoothed ver- 's due to directional anisotropic effect almost certain that the propagating seismic wave will not have passed through the rock traversed by the well track. How detailed can we get in building a velocity model? In its simplest form, velocity analysis involves fitting a hyperbolic curve to a moveout trajectory observed on a CMP gather. If the data were continuous (analog) measurements, and spanned a large offset range, then this curve fitting might be very precise. However, contemporary industrial practice works with digitally sampled (discrete) data: we thus need to consider the consequences of moving from an analog form of the signal, to the digital representation of that signal, and the inherent limitations on resolution brought about by this discrete sampling. Sampling theory tells us about the effects of discretizing analog data: we limit the resolution, and introduce the transfer functions of the sampling procedures into our data (Bracewell, 1978). These put limits on the precision of what we can measure. A cartoon of a parabolic Radon transform (Yilmaz, 200) is shown in Figure 3.7 prior to sampling and in Figure 3.8 after sampling and offset truncation, ‘This is similar in nature to the transforms involved in velocity analysis where we look at moveout behavior as a function of offset (or angle). Having sampled: we smear. Having smeared, we limit resolution. In certain parts of the data, the window functions severely limit our resolving power: at carly arrival times in a gather, direct arrivals obscure the events of interest, so if we use a harsh mute (which is a lateral windowing function) to remove the unwanted events we also limit the number of traces available to analyze, unless we are able to remove the dispersive direct arrivals with pre-processing. In the deeper data, the acquisition lateral aperture window limits the angular coverage, so the resolving power of the velocity analysis decays with depth (or time), as will be seen next. Curvature analysis problem (analogue data) the forward and the inverse ry ? o Figure 3.7: In an analogue measurement with compete continuous representation, the transfer function to an alternate domain will produce a well resolved representation, Chapter 3 Curvature analysis problem (discrete data): the forward and the inverse X=om 4000m ot w mo) — é ' Figure 3.8: Once we have discretized the data in one domain, we introduce the influ- ence of the sampling processes: typically a window function to truncate the data and a discretization ‘comb’ function to sample discretely. In the transform domain used for velocity analysis this results in smearing. Limits on resolution Our ability to specify velocity, as measured by stacking velocity analysis, will be lim- ited by the frequency content of the seismic wavelet, as well as offset and arrival time (Ashton et al., 1994; Chen and Schuster, 1999; Tom Armstrong, pers. comm.). Such analysis can be employed to assess potential stacking velocity error. Additionally, the influence of time-picking errors can also be assessed on a statistical basis (Powell, 1984; Roy White, pers. comm.). The nature of the error for time as opposed to depth migration also differs (Liu and Bleistein, 1995, Chen and Schuster, 1999). 
Where velocity analysis takes place following migration (which will have dealt with ray bending at interfaces), it should be unnecessary to use higher order expansions _ in the derivation of the error equations to cater for long offsets thus residual migra- tion error can be approximately assessed as a second order phenomenon, at least in terms of intrinsic resolution. If we perturb the NMO velocity for an event from V,. t0 (V,.-+AV,,J, and assess the time difference AT,,., on the far offset trace, x, resulting from the velocity change, AV gy Such that: More = Terry ~ Ternary (3.2) then for V,,,>>AV,,, we obtain to 2" order in offset x: How detailed can we get in building a velocity model? AV Tg = : (3) T,Vinw vhere: 1, is the zero offset arrival time of the moveout trajectory being analysed : is the maximum offset, for the event commencing at time I, 1... the approximation of the stacking velocity for the event Ifwe now adopt a resolution criterion, say the thin bed approximation for a wave- et of time duration (period) + ms, using w/4 as the discernable time shift at a maxi- num offiet x, and given that t ~ 1/f, (the dominant frequency of the signal), then we obtain the following expression for intrinsic rms velocity error to 2 order in x: av, = Tobin, (3.4) Af, Where: is the dominant frequency of the event under analysis And AV,,, is the velocity difference being resolved Figure 3.9 shows a CMP gather of synthetic data created using a 30Hz Ricker vavelet, with a reflection event at 2400ms arrival time, with moveout velocity 2500nys. Figure 3.10 shows these data following NMO correction with a velocity vhich is in error so as to produce a moveout shift AY, of 1/4 on the far offsets (we only show the near and far traces in this plot of the NMOd gather, so that the wave- ets can be seen more clearly). Here NMO using Vrms=2513m/s has been applied, ather than the correct value of 2500m/s. From the expression for AT,,., given in (3.3) his velocity error results in an 8ms shift on the far trace. In terms of picking resolu- ion in a velocity analysis spectrum computed using a semblance scan (which in one ‘orm or another will be representative of what is done by velocity analysis routines Taner and Cook, 1969), it can be appreciated that it will be difficult to resolve such small errors. This is shown in Figures 3.11 and 3.12 where the velocity spectrum and SMP gather are shown for the event NMO’d with the correct velocity and also with he erroneous velocity. It would be difficult to distinguish this magnitude of error if eal (noisy) data were being analysed. A North Sea salt dome preSTM example is shown in Figure 3.13, and the intrin- ic resolution estimate of rms velocity error as defined in (3.4) shown in Figure 3.14. fo obtain the result in Figure 3.14, a frequency analysis was run on the imaged data © determine the dominant frequency, f,, in a sliding window corresponding to all ‘alues of time T,, In conjunction with the maximum available offset (determined vy the fold of coverage at each 7, resulting from the pre-stack muting) and the corresponding rms velocity, the value of AV, was computed. The shallow section is roorly resolved due to the effects of the mute. It can clearly be seen that the central dlank zone (resulting from the presence of a salt dome) has enormous uncertainty: n fact the result here is meaningless as there are no significant reflection events to |wantify the error for (and the frequency estimate used in the error equation is thus only representative of noise). 
It is clear from the result in Figure 3.14 that we cannot use velocity analysis of shallow marine data to estimate the sound speed in water, as the intrinsic error in the estimate is much too large. Consequently, a value for this velocity is usually assumed (typically between 1480 and 1500 m/s).

Figure 3.9: Moveout trajectory in a synthetic CMP gather (panel labels: Ricker, 1s, 50Hz, 300ms, 1900m/s; Ricker, 2.4s, 30Hz, 300ms, 2500m/s). The wavelet for the arrival at 2.4s has a peak frequency of 30Hz and rms velocity of 2500m/s.

Figure 3.10: Near and far traces only, from the gather in Figure 3.9, after NMO correction with the incorrect velocity of 2513m/s instead of the correct value (2500m/s).

Figure 3.11: Velocity spectrum and CMP after 2nd order NMO with the correct rms velocity of 2500m/s.

Figure 3.12: Velocity spectrum and CMP after 2nd order NMO with the maximum semblance pick velocity (2513m/s): the small moveout error will be difficult to pick, especially on real noisy data.

Figure 3.13: North Sea salt dome preSTM image after bandpass filtering to remove high frequency noise.

Figure 3.14: preSTM Vrms error, governed by offset and fold. Poor in the deep section, as there is insufficient offset in the gather to provide velocity sensitivity. Meaningless in the centre of the salt dome and below, as there is little useful data.

In the shallow part of the section, the uncertainty is large (due to the limited offset following muting), and the deep section's uncertainty is also large, as there is limited velocity resolution due to the available offset range coupled with higher velocity and lower frequency content of the data (due to anelastic absorption and other propagation effects). However, in practice the situation is not as bad as it looks, as geological constraints are often applied to the velocities derived. For example, the sea-bed reflector might be picked and combined with a reasonable velocity in the water layer, rather than relying on velocity picks for a low-fold sea-bed arrival.

These limits on resolution and their associated measurement errors, especially in the shallow overburden, have an effect on vertical and lateral positioning in depth migration. Figure 3.15 shows a suite of migrations from a simple three-layer model with a high velocity near-surface layer, such as chalk, where the velocity of just the first layer has been altered by about 3% and then 8%. The 3% error would not be atypical for the very shallow section, where the mute reduces the useful offset range, and the effects of the velocity error can be significant. Figure 3.16 shows the migration with the 'correct' model, and Figures 3.17 and 3.18 are the results with the 3% and 8% velocity scaling in the shallow layer. These scalings result in both vertical and lateral shifts in the positions of the 'target event'. In general, it can be concluded that there will always be positioning uncertainty in any migrated image, and often the best that can be done is to minimize and understand the errors made in the measurements and the subsequent migrations.

Figure 3.15: Synthetic input time data, for a three-layer model with a 30 degree dipping shallow layer and a flat 'faulted' target layer at 3s (5km depth).
Figure 3.16: Migration with the correct model (v1=3500, v2=2500, v3=4000 m/s).

Figure 3.17: Migration with a 3% velocity error in the shallow layer only (v1 +3%, v2=2500, v3=4000 m/s), which results in positioning shifts of 50m vertically and 50m laterally.

Figure 3.18: Migration with about 8% velocity error in the shallow layer only (v1=3800, i.e. +8%; v2=2500, v3=4000 m/s), which results in positioning shifts of 120m vertically and 150m laterally.

Once measurements representing the velocity of the subsurface have been made, with all the limiting assumptions mentioned above, and if these measurements are successfully inverted to obtain interval velocities, then we are faced with the approximations inherent in the migration algorithms used: each of these approximations will introduce a bias into the migrated image, which contributes to the overall uncertainty in image position.

Quantifying error

In addition to the underlying component of velocity uncertainty, in order to quantitatively assess the positioning uncertainty in the migrated image, a complete understanding of the bias resulting from the migration approximations would be required, as well as inclusion of the residual velocity errors unresolved during the tomographic inversion of the residual moveout error measurements. These errors would then need to be translated into a (migrated) position error. This process is non-trivial, which to a large extent explains why we don't see error bars on migrated images.

Various studies have been conducted over the years to quantify the degree of uncertainty in both parameter estimation (Al-Chalabi, 1997) and the quality of the final migrated image, as well as the more general issue of the limiting factors on spatial resolution (Vermeer, 1997). For example, Hajnal and Sereda (1981) extended the simple rms analysis described in the previous section to the case of interval velocity estimation, and Rathor (1997) looked at the effect of reflector dip on velocity uncertainty. Cognot et al. (1995) developed a ray-tracing based approach that assessed potential positioning error by ray-tracing through a suite of perturbed models incorporating well data to constrain structural uncertainties, thus giving a range of possible positions with associated variance (see also Thore and Haas, 1995; Thore and Juliard, 1999; Thore et al., 2002). For example, we might have performed a depth migration with an estimate of the velocity field and have picked surfaces and intersecting fault planes from this imaged volume, but may also want to know where the surfaces and faults would have appeared if we had migrated with a different velocity field. To address this question we could perform a suite of re-migrations of the data, or map-migrations of picked surfaces associated with the data (described in more detail in Chapter 7). This latter map-migration approach has been widely used as a sensitivity analysis tool: e.g. after depth migration with our 'best' velocity field, we assess what would happen to the position of events of interest if we perturbed the overburden velocities by say +/- 1%. This can be achieved by de-map-migrating the picked horizons with the original migration velocity field, and then re-map-migrating with the perturbed velocity field many times to produce a scan over perturbed velocities.
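The perturbation-scan idea can be illustrated with a much simpler stand-in: vertical-incidence depth conversion through a perturbed overburden. The sketch below (layer values invented; real sensitivity scans use map migration or full re-migration, as described above) scales the overburden velocities over +/- 1% and reports the resulting corridor of target depths.

```python
import numpy as np

# Hypothetical layered overburden: interval velocities (m/s) and two-way
# times (s) at the base of each layer, down to the target horizon.
v_int = np.array([1500.0, 2200.0, 3500.0, 2800.0])
t_base = np.array([0.8, 1.6, 2.2, 3.0])
dt = np.diff(np.concatenate(([0.0], t_base)))      # layer two-way thickness, s

def target_depth(scale):
    """Vertical-incidence depth of the target for a bulk velocity scaling."""
    return np.sum(scale * v_int * dt / 2.0)        # m

# Scan the overburden velocity over +/- 1% to build an uncertainty corridor.
scales = np.linspace(0.99, 1.01, 21)
depths = np.array([target_depth(s) for s in scales])
print(f"best-estimate depth: {target_depth(1.0):7.1f} m")
print(f"+/-1% corridor     : {depths.min():7.1f} to {depths.max():7.1f} m")
```

For these invented values the corridor is roughly +/-37m around a 3650m target; lateral position shifts additionally require the map-migration machinery proper.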
The change in positions associated with such a perturbation scan would give a corridor of uncertainty surrounding the original fault position (as shown in the example in Figure 3.19).

Figure 3.19: Map of a horizon showing the position of a fault intersecting the horizon: migration using high ('fast estimate') and low ('slow estimate') velocity estimates produces different fault positions, which serve as a sensitivity guide to the fault position in the presence of velocity variation. The proposed location of the well should be adjusted in light of the observations made in this respect.

A similar approach was developed by Chitu et al. (2008), which also tried to put error bars on velocity-depth models using tomographic inversion: this leads into the use of attributes derived from inversion theory (Jackson, 1972; Etgen, 2008), which will be discussed in more detail in Chapter 5. In the specific context of the sensitivity of migration to velocity error, the works of Grubb et al. (2001), Zhang (1989), and Pon and Lines (2005) are of interest.

Is it correct? - imaging pitfalls and velocity model QC

Once we accept that no particular velocity model will be unique, we would still like to know if the model we obtain is reasonable, in the sense that it explains the observed data, and makes geological sense as well. Hence we need to consider what QC procedures can be put in place to check that we meet these criteria.

Current industrial practice relies heavily on automated procedures, whether they be the picking of residual moveout, or the tomographic back projection of measured error in order to update the interval velocity. The danger with any such 'black box' approaches is that the user loses sight of what they are trying to achieve, and of what physical behaviour actually makes sense. Hence the adoption of rigorous QC procedures is essential. There are several criteria on which to base our acceptance of a velocity model and the resulting gathers and images, and they include:

a) Flat gathers emerging from the migration: if the velocity model belongs to the set of possible models that explain our observed data, then the gathers should be flat.

b) Good well ties for the key structural markers, especially at high velocity contrast horizons.

c) Spatial coincidence of the model and the image produced using it. In other words, does the velocity model overlay the seismic image?

d) Sensible RMO measurements from the CRP gathers: we need to check that the autopicker is not selecting multiples. It may be possible to spot multiples on the basis of unusual interval velocities being found in the autopicking. QC of the autopicked RMO velocity field is essential, as it is this information that is fed in to the tomography.

e) Believable spatial distribution of the velocity field output from the tomography: unless we have a pronounced geological trend with a specific direction (e.g. a series of parallel channels or structural ridges), we do not expect a depth slice through the velocity field to show parallel stripes of velocity. In general, the degree of structure in the velocity field should be similar in both the x and y directions, i.e. we should not accept a preferred directivity in the velocity field unless there is good geological reason to do so. Filtering techniques such as geostatistics are well suited for this kind of QC.
A geostatistical approach separates the low spatial frequency trend from the velocity field, and then analyses the residual high spatial-frequency component for unacceptable trends. If we have striping or very rapid variation on a depth slice in the high spatial-frequency component of velocity, we filter it out, and then add this filtered component back to the low frequency component.

f) Realistic structure: depth migration works with the interplay of arrival time and velocity to produce the resulting image in depth, and there is an inherent ambiguity in the possible combinations we may arrive at, sometimes giving rise to solutions which are possible but highly unlikely (Lines, 1993). For example, if a previous time migration shows a well behaved flat reflector, then it is highly unlikely that in reality the reflector will undulate significantly in the earth. So, if a depth migration were to show this same event as being undulating, we would need a 'conspiracy' between the real earth lateral velocity distribution and geological structure, which would combine so as to give a constant arrival time producing the flat event in the time migration. This could happen, but it is not likely. In general, we should expect a depth migration to simplify the image, and not to make it more complex than seen in a time migration.

The simplest velocity model QC is to verify that the model overlies the horizons in the associated migration. Figure 3.20 shows a correctly migrated synthetic data set with the velocity model overlain, whilst Figure 3.21 shows a comparable image after the velocity of the first two sediment layers has been increased by a few percent. In this latter case, the velocity model and migrated image do not overlay each other correctly.

When near-surface velocity anomalies such as incised channels are present, a pull-up or pull-down might be seen in the time migrated image (or in a depth migration using a smooth model, as in Figure 3.22). However, if the velocity anomaly is correctly incorporated into the velocity model, then a depth migration has the capability of removing this image distortion (Figure 3.23), so verification that such pull-up or pull-down has been resolved in the depth image is another useful QC step. However, there can be an element of subjectivity in this analysis, as we are assuming, for example, that the pull-up features are indeed artifacts and not real structure that just happens to lie beneath the channel.

Figure 3.20: Synthetic deep water model and superimposed preSDM result. The model horizons and migrated horizons coincide correctly.

Figure 3.21: Synthetic deep water model incorporating a velocity error superimposed on the corresponding preSDM result. The model horizons and migrated horizons no longer coincide due to the velocity error.

The acquisition pattern from a seismic survey can often leave a 'footprint' on both the seismic data and the associated seismically derived velocity field. If we see apparent structure in the velocity field which coincides with the acquisition pattern, then this 'structure' can sometimes be an artifact resulting from the acquisition imprint. This is often manifest for marine data as lineations on a time (or depth) slice through the 3D velocity volume paralleling the acquisition direction. Spatial filtering of time slices can help remove such apparent structure, and geostatistical analysis is one possible way of analyzing and removing these undesirable features.
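By way of illustration, here is a minimal Python sketch of the trend/residual separation idea (synthetic slice, invented values; a broad Gaussian smoother stands in for the trend estimator, and a simple directional median filter stands in for the variogram-guided filtering used in practice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

# Synthetic depth slice through a velocity volume (m/s): a smooth regional
# trend plus acquisition-parallel 'striping' and random noise.
ny, nx = 200, 200
yy, xx = np.mgrid[0:ny, 0:nx]
v_raw = (2000.0 + 2.0 * xx + 1.0 * yy                  # geological trend
         + 30.0 * np.sin(2 * np.pi * yy / 8.0)         # acquisition striping
         + np.random.default_rng(1).normal(0, 5, (ny, nx)))

# 1) Estimate the low spatial-frequency 'trend'.
trend = gaussian_filter(v_raw, sigma=25.0)

# 2) The 'residual' holds the short-wavelength part, including the striping.
residual = v_raw - trend

# 3) Filter directional artifacts from the residual (median filter elongated
#    across the stripe direction), then add the filtered residual back.
residual_filt = median_filter(residual, size=(15, 1))
v_final = trend + residual_filt

print(f"rms of residual before: {residual.std():5.1f} m/s, "
      f"after filtering: {residual_filt.std():5.1f} m/s")
```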
An example of geostatistical analysis for velocity QC, and subsequent filtering of structural lineament anomalies, is shown in the following images. The time slices in Figure 3.24 outline the geostatistical work-flow. Starting from the initial velocity field (Figure 3.25), the lower spatial-frequency content is extracted (the 'trend', Figure 3.26) and subtracted from the input to produce the 'residual' (Figure 3.27). This residual is analyzed using geostatistical variogram analysis (which indicates the scale-length and directionality of uncorrelated features in the data), and filtered to remove undesirable components. The filtered residual (Figure 3.28) is then added back to the trend information to produce the filtered velocity field (Figure 3.29).

Figure 3.22: A near surface channel with high-velocity fill (indicated by the arrow) causes severe pull-up in the migrated result when the associated velocity anomaly is not incorporated into the velocity model, or if a time migration is used.

Figure 3.23: A near surface channel with high-velocity fill (indicated by the arrow): once the velocity anomaly is adequately incorporated into the velocity model, a depth migration can correct for the pull-up effects, producing a more realistic image.

Figure 3.24: Synopsis of geostatistical filtering on a time slice through a velocity field. An analysis of the variation in velocity as a function of azimuth is performed to detect 'striping' in the input ('raw') velocity field. The geologically plausible low spatial-frequency trend (the 'trend') is first removed from the input (raw) velocity to yield the 'residual' velocity field. This residual velocity time slice is then filtered with a 2D operator designed to remove non-geological rapid lateral changes to obtain the 'filtered residual'. The filtered residual is then added back to the 'trend' to produce the 'final' filtered velocity field. The difference between the input raw and final filtered result is the 'removed noise'. The location map at the bottom of the figure shows where the inline and crossline vertical slices of the following figures are from.

Summary

At the outset of an imaging project when model building commences, we would like to think that for a given set of input seismic data any geoscientist using any model building software would obtain more or less the same velocity model. However, this is not at all evident: differences in the pre-processing, remnant multiple content, autopicker behavior and migration parameterization can easily give rise to slight differences in estimated velocity and anisotropy parameters above and beyond the expected inherent resolution limitations. Hence to some extent, it can be appreciated that velocity model building is of more importance than any specific migration algorithm. It can thus be appreciated how slight differences in image position will arise, given even small differences in velocity estimation methodology.

QC procedures should be adopted so as to assess the spatial variability of the velocity field, and to verify wherever possible the veracity and plausibility of the derived model.

Figure 3.25: Inline and crossline through the input (raw) migration velocity cube. On the crossline vertical section, the degree of lateral velocity variation is more pronounced than on the inline and is probably unrealistic.
This 3D velocity field is ill suited for use in migration, as there is probably false ('apparent') structure in the velocity field that will create false structure in the migrated image.

Figure 3.26: Inline and crossline through the low spatial frequency component of the velocity cube (the 'trend').

In conclusion, an assessment should be made of what parameters are needed for imaging versus what are actually measured. An understanding is required of both the limits on accuracy in the measurements made, and the limitations of the migration algorithms vis-à-vis the parameters provided. This knowledge must be set against the objectives and expectations made for a given project, in order to assess what velocity estimation and migration techniques are 'fit for purpose' for the project in-hand. A mismatch between expectations and techniques used will result in dissatisfaction and disappointing results.

Figure 3.27: Inline and crossline through the high spatial-frequency component of the velocity cube (the 'residual'). On the crossline vertical section, the degree of lateral velocity variation is more pronounced than on the inline and is probably unrealistic.

Figure 3.28: Inline and crossline through the filtered residual. The inline and crossline vertical sections now show the same degree of spatial variability in velocity.

Figure 3.29: Inline and crossline through the final filtered velocity cube output from the geostatistical procedure. Both the inline and crossline vertical sections now show the same degree of spatial variability in velocity: this 3D velocity field is suitable for use for migration.

4. Velocity Model Representation and Picking

Independently of the scheme used to update the model, or the measurements made to feed into this scheme, the issue of how the velocity model is represented has to be considered. Ideally, this representation should in some way mirror the behaviour of the earth being studied. The model representation underpins, but also invariably limits, our ability to update the velocity field in a completely flexible way.

The model representation is linked to the style of velocity-error picking employed. For example, a layer-based model will work more easily with an horizon-based velocity picker. If a picker that delivered a sparse 'cloud' of picks over an entire 3D volume had been used, then some subsequent interpolation of these picks to obtain values on the horizons of the layer-based model would be needed. Conversely, a grid-based model does not require velocity information to be delivered along horizons, so can profit from a dense 'cloud' of autopicked velocity values scattered within the 3D volume.

Layer-based, gridded, and hybrid models

Models themselves fall into two major categories, reflecting the underlying geological environments: layer-based, and gridded (non-layer-based). Examples of these representations and the associated problems can be found in Wyatt et al. (1992) and in Wiggins et al. (1993). In layer-based models, the velocity and vertical compaction gradients are bounded by sedimentary interfaces. Here, it is sufficient to pick seismic reflection events as the partitions between the velocity regions in the model.
In other words, the 3D velocity model is represented with a series of 2D maps of interval velocity; or, in a vertical compaction gradient regime ($V_{int}(z) = V_{int_0} + zK_{int}$), a map for $V_{int_0}$ and a corresponding map for the compaction gradient $K_{int}$ would be required. Conversely, some marine geological environments have velocity regimes dominated by compaction gradients that start from, and sub-parallel, the sea bed. These environments are often associated with relatively recent rapid deposition where water remains trapped in the sediments. In these environments, a gridded non-layer-based model is more appropriate (where the subsurface is then represented by a cloud of values, distributed in small cells dividing up the subsurface velocity model into compartments). In the case of salt or shale tectonics in these young marine environments, the scenario is complicated by the presence of such irregular bodies set within the background compaction-gradient driven velocity field. Also, for complex chalk layers, as found in the North Sea, gridded models can be of use to capture subtle lateral changes in vertical velocity compaction gradients. In overthrust tectonic regimes such as those found in the Canadian Foothills and Rockies, the problem is further complicated by anisotropy with a tilted axis. In this case it can be difficult to represent the polar axis of the anisotropy, as a gridded model has no inherent layering to define surface normals.

Picking can be quite simple in a layer-based medium when continuous coherent reflectors are visible, as the update information can simply be picked or autotracked along reflector boundaries. The situation is less evident when the velocity field does not follow visible reflectors. In this case, we need an a-priori assumption of how the velocity field behaves. For example, for young sediments that have not de-watered (such as found in contemporary deltaic environments), the velocity distribution is dominated by the hydrostatic pressure gradient, usually sub-paralleling the sea-bed. In this case, we may estimate a compaction gradient, which commences from a given depth (usually the sea bed), with the compaction gradient and the starting velocity being in general spatially variant.

In order to update a gridded velocity field, we still need to pick information associated with reflectors, but the understanding is now that the update derived from a pick is not constrained to follow an horizon: we need only a locally coherent segment of reflector to pick on. Thus a scatter of picks is made, and the resulting 'cloud' of values input to the inversion scheme. The cloud of values includes the dip of the locally coherent reflector segment, and the associated moveout or residual moveout curvature estimate.
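As a concrete illustration of the compaction-gradient parameterization just described, the sketch below (names and values invented) evaluates a gridded velocity field stored as 2D maps of starting velocity and gradient, with the gradient commencing at the sea bed:

```python
import numpy as np

# v(x, y, z) = v0(x, y) + k(x, y) * (z - z_seabed(x, y)), with v0, k and
# the sea-bed depth held as 2D maps (constant here for simplicity).
nx, ny = 100, 100
v0 = np.full((ny, nx), 1600.0)           # starting velocity map, m/s
k = np.full((ny, nx), 0.45)              # compaction gradient map, (m/s)/m
z_seabed = np.full((ny, nx), 350.0)      # sea-bed depth map, m

def interval_velocity(ix, iy, z):
    """Interval velocity (m/s) at depth z (m) at map location (ix, iy)."""
    dz = max(z - z_seabed[iy, ix], 0.0)  # gradient starts at the sea bed
    return v0[iy, ix] + k[iy, ix] * dz

print(interval_velocity(50, 50, 1350.0)) # 1600 + 0.45 * 1000 = 2050 m/s
```

In a hybrid representation, maps like these would apply within a layer bounded by picked horizons (e.g. a salt top), with a separate constant or gridded velocity inside the anomalous body.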
These diverse geological environments present difficult challenges in model building, and in addition present challenges in the design of model updating software, as the assumptions for layer-based and gridded techniques are quite different (for example, in a layered technique, the model representation in the software may require a continuous horizon to be present across the whole survey). In addition to how the geological aspects of the model are addressed, there are also the issues of computational representation of topology: whether a tetrahedral mesh is used, or some polynomial function to represent a surface. These aspects constitute an entire field in themselves (see for example the work of the GOCAD consortium: Mallet, 1989), and are not dealt with here.

The layer-based and the purely gridded are the two extremes of model representation. However, a more flexible route is to adopt a hybrid approach, where the benefits of both schemes are combined: the ability of a gridded route to capture the subtle lateral or vertical velocity variation inherent to some strata, whilst keeping the sharp vertical breaks occasionally present in the earth, such as at chalk and salt boundaries (Jones et al., 2007). An example of these three possible model representations for data taken from the South Arne field (courtesy of Hess Denmark) is shown in Figures 4.1 to 4.3. This particular comparison is described in more detail in Chapter 8.

Figure 4.1: Velocity model representation for flat lying strata over a chalk ridge using a layer-based representation (2002 layer-based isotropic model).

Figure 4.2: Velocity model representation for flat lying strata over a chalk ridge using a purely gridded representation (2004 gridded anisotropic model).

Figure 4.3: Velocity model representation for flat lying strata over a chalk ridge using a hybrid gridded representation, combining a gridded velocity field with picked layers for key horizons (2006 hybrid-gridded anisotropic model).

Density of picks and automation

Regardless of the technique employed, another limitation in the past was the spatial sampling of the information used to perform the velocity estimation. Prior to about 2000, prestack migrated velocity information (usually in the form of common reflection point, CRP, gathers) was output on a coarse grid, often 500m by 500m. In order to improve on the limitation of spatial sampling, automated techniques for increasing the statistical reliability of the velocity information to be input to the chosen velocity update scheme have been introduced (Doicin et al., 1995; Jones et al., 1997; Woodward et al., 1998; Jones and Baud, 2001). The automated nature of these techniques addresses the problem of the unreasonably high manpower time needed to pick very dense velocity grids (Jones et al., 2000). It is this high manpower time that has really limited us in the past in obtaining dense velocity grids (Robein et al., 2002). However, it should be kept in mind that with automation, the limitations of the underlying technique are in no way improved, whether that be vertical update or tomography: we merely make the best possible use of the information already available, by looking at a very dense sampling of information. In other words, when an estimate of velocity is made with many values, only the precision of that estimate is improved, and not the accuracy (as was discussed in Chapter 3). Thus, if the values coming out from the velocity estimator were all erroneous, but consistently erroneous, then the result would simply be a very precise estimate of that inaccurate result. In other words, the bias remains. This was seen earlier with the example of ignoring anisotropic behaviour: fitting a second-order NMO curve to the anisotropic data can produce very consistent results, but they are all in error due to the bias of treating the fourth order curve as if it was a second order curve during residual velocity analysis.
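The precision-versus-accuracy point can be made numerically: in the sketch below (invented values), averaging ever more picks shrinks the standard error of the velocity estimate, but the estimate converges to the biased value, not to the true one.

```python
import numpy as np

rng = np.random.default_rng(42)
v_true, bias, scatter = 2500.0, 40.0, 60.0   # m/s; bias e.g. from anisotropy

for n_picks in (10, 1_000, 100_000):
    picks = v_true + bias + rng.normal(0.0, scatter, n_picks)
    stderr = picks.std(ddof=1) / np.sqrt(n_picks)
    print(f"{n_picks:7d} picks: {picks.mean():7.1f} +/- {stderr:5.2f} m/s "
          f"(true value {v_true:.0f} m/s)")
# Precision improves (smaller +/-), but every estimate clusters around the
# biased 2540 m/s: denser picking cannot remove a systematic error.
```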
When picking velocities sparsely, we run the risk of having large holes in the data: if picking is performed on a coarse grid and a 'bad' pick is encountered, which is then rejected on the basis of some picking criterion, then a gap will remain in the picking grid that might be unacceptably large (leading to a form of aliasing in the velocity representation). The following example compares sparse manual picking with dense continuous picking for a North Sea salt structure (courtesy of what was then Kerr-McGee UK; Jones et al., 1998a). The model update procedure in this case was based on CRP-scans: a series of independent migrations were run for a suite of velocity models, and the resulting associated CRP gathers assessed for best gather flatness (described further in Chapter 7). Figure 4.4 shows on the left the interval velocity on a key horizon from manual picking of model perturbations from CRP scans (with analysis locations at 250m x 600m superimposed). On the right of this figure is the corresponding automated picking result (with picks every 25m along the velocity lines). After tomographic update of these velocity perturbation results, using the same tomography algorithm for both cases, and application of a low-pass spatial filter, the interval velocity maps shown in Figure 4.5 are obtained. The dense automated velocity field has a smaller central low velocity zone (associated with a salt dome feature). The automated picking was able to track meaningful values of velocity further onto the salt flanks. The difference between these two interval velocity fields is significant (Figure 4.6), showing differences in excess of 200m/s, which would translate into a significant positioning error after migration.

Figure 4.4: Horizon slice through a volume of RMO velocity error picks, with the picking locations superimposed for: (A) sparse and (B) dense picking locations.

Figure 4.5: Horizon slice through a volume of interval velocity for a horizon draped over a salt dome, resulting from inversion of sparse (A) and dense (B) velocity picking. The results are very different, indicating that the corresponding images would differ significantly.

Figure 4.6: Velocity difference between the sparse and dense picking results.

Picking methods

With dense autopickers, there are several options for implementing the picking scheme. For example, a specific horizon throughout the 3D volume could be tracked for a set of offset cubes, and the associated arrival time maps of the offset cube surfaces used to produce a dense RMO estimate for this single horizon. Alternatively, an autopicker that worked gather-by-gather could be used, picking all strong events in the gather. The individual gather scheme could also be extended to consider ensembles of gathers, so that the lateral (geological) consistency of picks could be evaluated as well, for locally coherent event segments.

Perhaps the most restrictive assumption in many of these approaches is that of hyperbolicity for non-migrated data, or parabolicity in residual moveout (after NMO or an initial migration). These assumptions are common to most residual velocity analysis techniques, but from inspection of the data, the assumption is seen to be frequently violated to some degree. This can be an important drawback, as occasionally application of the residual moveout can degrade the stack for areas where the residual moveout was not parabolic.
This assumption can be relaxed if we have long-offset data, by employing a continuous higher order analysis. This was often performed in two steps, firstly determining the near-vertical NMO velocity, and thereafter the fourth order terms, but more current software fits the fourth order moveout curve directly. More general techniques measure the individual arrival times (or depth errors) at each offset, so as to supply a complete set of measurements for a non-parametric tomographic update (e.g. Brittan et al., 2006 - described further in Chapter 8). In non-parametric techniques we do not make the assumption that the residual moveout behaviour can be described by a simple function (e.g. a parabola, characterized by a single curvature parameter), but instead measure the irregular static-like jitter of the moveout behaviour trace by trace. Measuring the trace-by-trace variation in residual moveout is difficult, but can be achieved, for example, by first fitting a parabolic curve to all offsets, and then using cross correlation of individual traces against the parabola to determine each trace's deviation from the simple function. An example of this is seen in Figure 4.7 (courtesy of John Brittan, PGS). For the event with zero offset depth denoted by the yellow dashed line, we see two fitted curves: the green is a parametric-fit curve, which smoothly represents the gross residual moveout behaviour of the data, whilst the red curve more accurately represents the actual RMO behaviour.

Figure 4.7: Parametric versus non-parametric RMO picking in a depth-migrated gather (depth against offset). The green curve is a parametric curve, which smoothly represents the gross residual moveout behaviour of the data, whilst the red curve more accurately represents the actual RMO behaviour. Example courtesy of Brittan et al., 2006.

Although the parametric approach does impose a limit on resolution by ignoring small scale variations in travel time, it does have the advantage of robustness in fitting a curve over an offset range whilst minimizing fitting error, whereas the non-parametric measurement is more prone to introducing errors resulting from noise bursts, crossing remnant multiples, and cycle skipping.
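A minimal sketch of the two-stage measurement (all numbers invented): fit a smooth parabola to per-trace depth picks in offset-squared, then treat the per-trace deviations from that parabola as the non-parametric, static-like component. With real traces the deviations would come from windowed cross-correlation against the parabola-aligned reference rather than from picks.

```python
import numpy as np

rng = np.random.default_rng(7)
offsets = np.linspace(0.0, 4000.0, 40)                  # m
z0, curv = 2000.0, 6.0e-6                               # m, curvature in 1/m
z_picks = z0 + curv * offsets**2                        # smooth parabolic RMO
z_picks = z_picks + rng.normal(0.0, 4.0, offsets.size)  # static-like jitter

# Parametric stage: least-squares parabola, linear in offset-squared.
curv_fit, z0_fit = np.polyfit(offsets**2, z_picks, 1)
z_param = z0_fit + curv_fit * offsets**2

# Non-parametric stage: per-trace deviation from the smooth curve.
jitter = z_picks - z_param
print(f"fitted z0 = {z0_fit:7.1f} m, curvature = {curv_fit:.2e}")
print(f"per-trace jitter rms = {jitter.std():4.2f} m")
```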
Stack-power and semblance

The basic principles behind velocity estimation are shown in Figure 4.8. For each arrival time at zero-offset, a scan over a range of hyperbolic corridors is made. Within each of these corridors a stack could be performed, and the hyperbolic corridor corresponding to the maximum stack-power would indicate the stacking velocity associated with the event at that zero-offset time. In other words, the velocity spectrum produced in this way is a low-resolution hyperbolic Radon transform. If the analysis is made following an initial time or depth migration, then the residual moveout behaviour might be approximately parabolic, and the analysis corridor would likewise then be parabolic (and in this case the velocity spectrum would be a low-resolution parabolic Radon transform).

Figure 4.8: The basic principles of velocity analysis: a) a CMP gather showing two events - a multiple with 3.8s zero offset two-way travel time, and a primary reflection at 4.7s. b) corridor analysis in a sliding window - for the multiple at 3.8s, the yellow corridor will have maximum analysis power; for the event at 4.7s, the grey corridor will be maximal. c) the corresponding velocity analysis spectrum.

However, simple summation of events (stacking) doesn't always perform very well, especially if the amplitude of the event is small, or in the presence of significant noise, so other similarity methods have been proposed (e.g. semblance: Taner and Koehler, 1969; eigenvector ratio: Jones, 1985). Semblance is related to the square of the stack divided by the stack of the squares, for the amplitudes within an analysis corridor (see for example Sheriff, 2002):

$$S = \frac{\sum_{j=k-N/2}^{k+N/2}\left(\sum_{i=1}^{M} f_{ij}\right)^{2}}{M\sum_{j=k-N/2}^{k+N/2}\sum_{i=1}^{M} f_{ij}^{2}} \qquad (4.1)$$

where $f_{ij}$ is the $j$th sample of the $i$th trace when we have M traces in the gather being analysed. The summation corridor slides down the data records and is N samples wide, centred on the $k$th sample.
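A short sketch of the semblance measure of equation (4.1), with a toy test (invented data) showing its two limiting behaviours: a value near 1 for a perfectly coherent event, and near 1/M for incoherent noise.

```python
import numpy as np

def semblance(gather, k, n_win):
    """Eq. (4.1): gather is (n_samples, M); k: centre sample; n_win: window."""
    window = gather[k - n_win // 2 : k + n_win // 2 + 1, :]
    num = np.sum(np.sum(window, axis=1) ** 2)        # square of the stack
    den = window.shape[1] * np.sum(window ** 2)      # M * stack of squares
    return num / den

rng = np.random.default_rng(3)
wavelet = np.sin(np.linspace(0.0, np.pi, 21))
coherent = np.tile(wavelet[:, None], (1, 24))        # same wavelet, 24 traces
noise = rng.normal(size=(21, 24))
print(f"coherent event: {semblance(coherent, 10, 21):.3f}")   # ~1.0
print(f"random noise  : {semblance(noise, 10, 21):.3f}")      # ~1/24
```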
Differential semblance

Following migration, the validity of the velocity model used is assessed on the basis of gather flatness: if the gathers are equally flat everywhere, then the model used adequately describes the data. In order to assess gather flatness, an assessment of the amplitudes is often made by summing the traces in the gather, as was described for semblance analysis. However, for WE migration, the gather is formed by cross correlating the source-side downward continued wavefield with the receiver-side upward continued wavefield. This correlation constitutes the imaging condition of the WE migration, but it can also be used to assess the 'flatness' of the gathers being formed. Rather than only considering the amplitudes of the wavelets for a reflection event across a gather, an alternative approach is to assess the rate of change of amplitudes, or of attributes of the data in the gather, or of some weighted version of it. One such approach, which works with the gathers being created after WE migration, is referred to as differential semblance optimization (DSO), introduced by Symes and Carazzone (1991). Here the change in imaged depth as a function of offset in the CRP gather, or of angle in the common angle gather (CAG), is assessed for a given perturbation to the velocity model. The rate at which the depth of the image changes moving from one angle to the next in the angle gather is used to assess model error and to form the basis of the update scheme.

AVO-tolerant picking

A simple velocity analysis based on stack power or semblance would not identify a polarity reversal AVO effect (class II AVO): simply stacking the traces in the moveout corrected gather might annihilate the event completely. Hence in cases where this class of AVO effect is expected, a technique is required that can accommodate the phenomenon. Swan (1991) proposed one such technique, commonly known in the industry as AVEL. Essentially this is a residual moveout (RMO) estimation technique which uses AVO attributes as the objective function in the RMO parameter scan. The objective function $F(V_{rms})$ used is a combination of the gradient and the zero-offset reflectivity obtained during AVO fitting:

$$F(V_{rms}) = \mathrm{Im}\{\, An[R_0] \cdot cnj[G] \,\} \qquad (4.2)$$

where: $An[R_0]$ is the analytic trace (Taner et al., 1979) of the zero-offset reflectivity $R_0$; $cnj[G]$ is the complex conjugate of the AVO gradient trace $G$; and $\mathrm{Im}\{...\}$ denotes the imaginary part of their product, which constitutes the objective function $F(V_{rms})$.

Similar techniques have also been presented, based on other variants of AVO objective functions (e.g. Ratcliffe and Adler, 2000; Fomel, 2009). Figure 4.9 shows a synthetic CMP gather which shows a polarity reversal on the far traces (courtesy of Andrew Ratcliffe, CGGVeritas). A stack of this CMP would produce a very weak event, whereas an AVO analysis would correctly identify the reflector in the zero-offset reflectivity section and its associated AVO gradient. Figure 4.10 shows the semblance-based velocity spectrum, which does not identify this event correctly, and also the RMO correction based on this semblance spectrum. Conversely, the AVO-type spectrum (Figure 4.11) does correctly identify the event, and results in a correct flattening after RMO correction.

Figure 4.9: a) NMO corrected CMP gather showing a polarity reversal towards the far traces and its associated AVO behaviour. b) the gather after RMO with a semblance-based analysis: the AVO behaviour is corrupted. Example courtesy of Ratcliffe and Adler, 2000.

Figure 4.10: a) NMO corrected CMP gather showing polarity reversal and b) following RMO with velocity derived from picking on a semblance velocity spectrum (c). Semblance does not correctly assess a polarity reversal, as it relies on the stack power. Rather than showing one clearly defined peak in the semblance velocity spectrum corresponding to the reflection event (indicated by the arrows), we can see two peaks associated with the wavelet side-lobes being misaligned due to cycle skipping in the semblance alignment.

Figure 4.11: a) NMO corrected CMP gather showing polarity reversal and b) following RMO with velocity derived from picking on an AVO-consistent velocity spectrum (c). The AVO technique correctly identifies the polarity reversal, and the spectrum now shows only one clear peak associated with the reflection event.

Horizon-correlation

Using the near and far trace stacks from a migrated data volume in conjunction with the interpreted time horizons for a set of key marker events and the RMS stacking velocities associated with these markers, and using the interpretation as the centre for a windowed operator, we perform a cross-correlation of the near and far stacks. This yields an estimate of the residual time shift between the near and far traces. Values corresponding to low correlation coefficients are eliminated (as they are associated with noise), and replaced by interpolation from acceptable neighbouring values. Using an approximation for the parabolic residual moveout equation (e.g. Castle, 1994), this time shift map is converted to an associated RMS velocity map for the horizon of interest. If the data have been moveout corrected with an initial rms velocity $V_{ini}$, then the observed near-to-far-trace time shift $\Delta T$ is related to the rms velocity required to align the near and far traces; under the hyperbolic moveout assumption this relationship can be written:

$$V_{rms} = \frac{x}{\sqrt{(T_0+\Delta T)^2 - T_0^2 + x^2/V_{ini}^2}} \qquad (4.3)$$

where: $T_0$ is the zero offset arrival time of the moveout trajectory being analysed; $x$ is the maximum offset, for the event commencing at time $T_0$; and $\Delta T = (T_x - T_0)$, where $T_x$ is the far offset arrival time after NMO for the event commencing at time $T_0$.

The horizon correlation approach is limited to working 'correctly' only for the target horizons that have been picked, but the computed RMS velocity correction values can be interpolated between picked horizons to produce a volume update. This technique can have advantages over whole CMP ensemble autopickers in avoiding multiples.
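The sketch below applies the hyperbolic form of equation (4.3) given above (a reconstruction; the text's own expression follows Castle's parabolic approximation) to convert a measured near-to-far residual shift into a corrected rms velocity. All values are illustrative.

```python
import numpy as np

def vrms_from_shift(t0, x, v_ini, dt_shift):
    """Corrected Vrms from a far-trace residual shift after NMO, eq. (4.3).

    t0: zero-offset time (s); x: far offset (m); v_ini: initial rms
    velocity (m/s); dt_shift: observed near-to-far time shift (s)."""
    return x / np.sqrt((t0 + dt_shift) ** 2 - t0 ** 2 + (x / v_ini) ** 2)

t0, x, v_ini = 2.4, 3500.0, 2500.0
for dt in (-0.008, 0.0, 0.008):                  # far-trace shift after NMO
    print(f"dT = {1e3 * dt:+5.1f} ms -> Vrms = "
          f"{vrms_from_shift(t0, x, v_ini, dt):7.1f} m/s")
# A positive shift (under-corrected far trace) maps to a lower velocity;
# zero shift returns the initial 2500 m/s.
```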
The horizon-correlation technique can also be generalized to avoid having to pick specific horizons, by applying the correlation in a small sliding window that moves continuously down the pair of traces from the near and far stack volumes, although the benefit of explicit lateral continuity of an horizon is then lost. Figure 4.12 shows a segment of synthetic data where the near and far traces after NMO with an incorrect velocity are displayed in pairs (hence the image looks like a stacked section with trace-to-trace jitter). Correlating the near and far trace pairs in a sliding time window produces the correlation-lag plot shown in Figure 4.13, with the lags being mostly about 40ms (as for this simple synthetic example, the velocity error was simply a percentage of the correct value).

Figure 4.12: Synthetic data: displayed in pairs are the near and far trace stacks for all CMP gathers across the section. This type of display is useful for quick visual identification of RMO.

Figure 4.13: Correlation of the near and far trace stack pairs in a small sliding window.

In the following example, the horizon correlation approach using auto-tracked horizon maps is compared with the AVO-consistent CMP approach of Swan (1991), using data courtesy of BP Norway from a 3D preSTM project (Jones and Folstad, 2002). A motivation for comparing different techniques was to ensure that spurious conclusions based on limitations of an individual technique could be taken into account. In other words, an attempt was made to isolate some aspects of bias. In the horizon correlation approach, the near and far trace stacks were used in conjunction with the interpreted time horizon for a key reservoir marker and the RMS stacking velocity field associated with this marker. Results from this horizon cross-correlation approach are labelled as 'HCC' in the figures. Results from the AVO-tolerant technique are labelled as 'AVEL' in the figures. (Note: it is not that we expect class II AVO in this instance, but simply that the analysis tool was available to perform the comparison.)

The example shown is from the Ula oil field, in the Vestland Arch in the Norwegian-Danish basin. The main reservoir is capped by the Top Ula event. Above this is the base Cretaceous unconformity (BCU) horizon. To give an aerial perspective of the prospect, Figure 4.14 shows the two-way time contour map for the preSTM volume at the BCU horizon, and in Figure 4.15 is shown its RMS velocity. In Figure 4.16 (left and right) is an enlargement of the high resolution RMS RMO velocity maps for the BCU, estimated with a trace sampling of 50m x 50m, produced using the 'HCC' method (left) and the AVEL method (right). The similarity of the two estimation techniques gives us confidence that any bias in the different estimators is acceptable. The typical difference between these two estimates at points on this surface is less than about 30m/s. Referring back to the equation for $\Delta V_{rms}$ in Chapter 3, it can be seen that this variation is similar to the expected resolvability for measurements made on these data (i.e. for $V_{rms}$ ~ 2500m/s, dominant frequency $f_d$ ~ 40Hz, $T_0$ ~ 2.5s, $x_{max}$ = 3500m, then $\Delta V_{rms}$ ~ 20m/s).

Figure 4.14: Pick of the BCU time horizon from the preSTM.

Figure 4.15: RMS velocities for the BCU time horizon from the preSTM.
Figure 4.16: Left graphic shows a high-resolution RMO estimate based on near-versus-far trace stack cross-correlation for a picked horizon (HCC). Right graphic shows a high-resolution RMO estimate based on gather ensemble AVO velocity analysis. Both velocity estimates are very similar, indicating no significant differential bias between the two velocity estimation methods for these data.

Locally coherent event picking

Many variations of this technique exist, perhaps the best known being based on localized plane-wave destructors (Claerbout, 1992; Fomel, 2002; Hardy, 2003). Advances in this class of technique were reviewed during the 2009 EAGE annual conference workshop on 'locally coherent events' (Alerini and Costa, 2009). These procedures mostly work by analyzing the prestack data after an initial pass of prestack (time or depth) migration using an initial velocity model. The algorithm moves a sliding window through the data in the offset direction to track coherent residual moveout events, and in the in-line and cross-line directions to search for events with geological continuity on a small scale length. Figure 4.17 shows an inline migrated section of data, and a small window enlargement: in the window we can see that the geological structure can be represented by a linear dipping segment. Also displayed is a corresponding CRP gather indicating residual velocity error, as there is still residual moveout. The autopicker can also assess this RMO behaviour, either as a simple parametric fit (such as a parabola) or as a more general non-parametric behaviour. From the in-line and cross-line analysis for a stacked volume (Figure 4.18), the structural dip elements are determined (Figure 4.19), along with a coherency estimate (Figure 4.20). A byproduct of this procedure that can also be created is an 'event skeleton' (Figure 4.21), which is simply a plot showing where some spatially coherent information was detected and retained after thresholding to remove noise.

Figure 4.17: Gulf of Mexico example. In a small window (left) a seismic event looks like a linear dipping segment. We solve for the dip of this segment, also estimating the coherency of the fitting, and on the CRP gather (right) we solve for a fit to the residual moveout, either piecewise for each offset, or as a general parametric fit across all offsets.
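An illustrative sketch of the local dip estimation step (invented toy data): scan trial dips in a small window of a migrated section and keep the dip giving the highest semblance-like coherency. Real implementations use plane-wave destructors or similar operators; this brute-force scan is for clarity only.

```python
import numpy as np

def local_dip(window, dx, dt, trial_dips):
    """Return (best dip in s/m, coherency) for a (n_t, n_x) data window."""
    n_t, n_x = window.shape
    t_axis = np.arange(n_t, dtype=float)
    best_dip, best_coh = 0.0, -1.0
    for p in trial_dips:
        aligned = np.empty_like(window)
        for ix in range(n_x):                    # shift traces along trial dip
            shift = p * ix * dx / dt             # time shift in samples
            aligned[:, ix] = np.interp(t_axis + shift, t_axis, window[:, ix])
        stack = aligned.sum(axis=1)
        coh = np.sum(stack**2) / (n_x * np.sum(aligned**2))   # semblance
        if coh > best_coh:
            best_dip, best_coh = p, coh
    return best_dip, best_coh

# Toy window: spikes along an event dipping 0.2 samples per 25 m trace,
# i.e. a true dip of 0.2 * 0.004 s / 25 m = 3.2e-5 s/m.
n_t, n_x, dx, dt = 64, 12, 25.0, 0.004
window = np.zeros((n_t, n_x))
for ix in range(n_x):
    window[int(round(32 + 0.2 * ix)), ix] = 1.0
dips = np.linspace(-5e-5, 5e-5, 51)              # trial dips, s/m
print(local_dip(window, dx, dt, dips))           # best dip ~ 3.2e-5 s/m
```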
