2D and 3D Land Seismic Data Acquisition and Seismic Data Processing

Kiran Kumar Talagapu
M.Sc.(Tech.) Geophysics Department of Geophysics College of Science and Technology Andhra University Visakhaptanam - 530003 Andhra Pradesh, India


This is to certify that Mr. Kiran Kumar Talagapu, a final year student of M. Sc. (Tech.) Geophysics, Department of Geophysics, Andhra University, Waltair, Visakhapatnam has participated in the project work “2-Dimensional and 3Dimensional Land Seismic Data Acquisition and Seismic Data Processing” from 10th December 2004 to 31st January 2005, at Oil and Natural Gas Corporation (ONGC) Chennai.



This is to certify that this training project report is bonafide work of Mr. T. Kiran Kumar, submitted in partial fulfillment of M.Sc. (Tech.) degree in Geophysics during the final year degree course.


Every work we do is linked directly or indirectly to many different aspects, circumstances and people. Aspects which we try to understand, work on and come to a conclusion, circumstances which motivate us and people who help us and guide us to achieve what we are intend to. Recollecting the near past events of my training period I am deeply indebted to the people who were responsible for the successful completion of my work. To begin with I am thankful to our Head of the Department Prof. A. Lakshmipathi Raju for the initiation. He took all the pains of shuffling students and assigning the projects. My equivocal thanks are due to the General Manager – Head Geophysical Services, Chennai, Mr. D. Dutta who considered our request and allowed us to go through the training in this organization. I am thankful to Mr. R. V. S. Murthy who has prepared a flawless schedule. It was him who has insisted us to go through the rigorous field work and gave us an insight of what is called “the maximum utilization of resources”. I express my special sense of gratitude to the party chiefs of the geophysical parties we have visited. The immense cooperation given by them is unforgettable. Not to forget, the party members who have been equally helpful. I am also grateful to Mr. C.M. Varadarajan ,DGM(GP) who prepared the schedule of the second phase of the project (Seismic Data Processing). My sincere thanks are due to Mr. Kailash Prasad and his group members for guiding me all through the different dimensions of the “processing” aspects. They are pin point right when they say, “Information is Power”. This was the punchline which motivated me towards going through the excellent library facility of ONGC. My heartfelt thanks are due to my other project colleagues also as they had been great all through the period of project work. Last but not the least, I am thankful to my beloved parents and my brother. The fact that I am a part of my family forms firm good reason to be thankful. 
– Kiran Kumar Talagapu

This is a project report submitted to the Department of Geophysics, Andhra University in partial fulfillment of the M. Sc. (Tech.) degree in Geophysics. The project work forms a paper which is evaluated for a maximum of 100 marks as a part of the academic curriculum. Under this programme, the students of M. Sc. (Tech.) Geophysics undergo training at premier organizations like, Oil and Natural Gas Corporation (ONGC), National Geophysical Research Institute (NGRI), National Institute of Oceanography (NIO), Indian Institute of Geo-Magnetism (IIGM), etc., engaged in geophysical activities. Under the above program, I had training at Oil and Natural Gas Corporation (ONGC) Chennai, CMDA Towers, Egmore, Chennai, from 10th December, 2004 to 31st January, 2005. During my training at ONGC Chennai, I had associated with the 2D and 3D Land Seismic Data Acquisition undertaken by three of the geophysical parties (identity not revealed) exploring Oil and Natural Gas in the operational areas of the Krishna-Godavari basin. One of the three field parties GP ‘X’ is acquiring 2D land seismic data in West Godavari Sub-basin and. The second and third parties (GP ‘Y’ and GP ‘Z’) are deployed in the East-Godavari sub basin of the KG basin, acquiring 3D land seismic data. I have been associated with all these parties on a tentative schedule of 10 days (from 13th to 22nd of December, 2004) with GP ‘X’, 10 days (from 23rd to 31st of December, 2004) with GP ‘Y’ and 6 days (from 1st to 6th January, 2005) with the GP ‘Z’. Further I had training in Data Processing at Regional Computer Center, ONGC Chennai, from 10th to 31st of January, 2005 thus completing the full schedule of 7 weeks training. All through these 7 weeks, I was exposed to many aspects of the seismic data acquisition and processing. 
With first hand information on certain technical terms, I was taken through the deeper aspects of seismic survey designing, parameters considerations based on the previous data, experimental surveys for finalizing the parameters from stages of the production work, uphole survey technique, regular survey and through various steps of processing like De-convolution, Stacking, Migration etc. This report consists of all these aspects in a brief. Starting from the Introduction to the Seismic Methods in Chapter 1, the Next chapter deals with the Modern Seismic Data Acquisition. In Fundamentals of Seismic Prospecting (Chapter 3), I tried to give a brief introduction about the basics of the different types of waves. Chapter 4 is about the

Reflection Field Equipment where I described about the seismic sources, seismic receivers and the instrumentation. Chapters 5 and 6 deal with the 2D Survey Designing and 3D Survey Designing respectively. Reflection Field Layouts are discussed in chapter 7 while chapter 8 focuses on the Reflection Field Method for the land survey. In this I have included the field parameters that were assumed in the field with a brief theoretical background of each parameter. The end part of this chapter deals with the starting of the production work with parameters decided based on the experimental studies. The next chapter (chapter 9) deals with some of the basic concepts of Seismic Data Processing. The subsequent chapters (chapter 10, 11, 12 and 13) deal with some of the important aspects of the seismic data Processing in detail. In these I have given a detailed description of the various stages of data processing with the results shown in the form of seismic records. At the end of the report there is an appendix touching some of the important aspects which could not be explained in the due course of the chapters.

– Kiran Kumar Talagapu

1 Land Energy sources 4.8 Ground Waves Characteristics of Seismic Events Refractions Reflection Field Equipment Seismic Sources 4.2 Incoherent noise Chapter 4 4.4 Mode-Converted Waves 3.1.2 Critical Reflection 3.1 Reflections 3.1 Coherent noise 3.3 Seismic Wave Fundamentals 3.1 Milestones in Seismic Industry Chapter 2 2.1.7 Direct and Head Waves 3.2 Explosive Sources Modern Seismic Data Acquisition Land Data Acquisition Marine Data Acquisition Transition – Zone Recording Chapter 3 3.1.5 Rayleigh Waves 3.4 Diffractions 3.6 Love Waves 3.2 2.1 Charge Size 4.2 3.1 Compressional Waves (P-waves) Introduction Introduction Historical Perspective Fundamentals of Seismic Prospecting 3.2 Charge Depth .2 Shear Waves (S-waves) 3.1.Contents Chapter 1 1.3 Air Wave 1.2.5 Multiples Seismic Noise 3.1 2.

2.2 4.1 Electrical Characteristics 4.1 Basics of 2D Survey Design Basic Concepts in 2D Surveys Chapter 6 6.1.2 7.1 Preliminary Parameters 3D Survey Design Sequence Land 3-D Layouts Chapter 7 7.4 Reflection Field Layouts Split-Dip and Common Midpoint Recording Spread Types Arrays Resolution Chapter 8 8.3 7.3 Multiplexer 4.2 Telemetry System Tape Drive 4.4 Basics of 3D Survey Design Why 3D Seismic Survey? Basic Concepts in 3D Surveys 6.2.3 4.1.1 6.7 Formatter 4.3 Vibrators 4.1 Geophones 4.1 Basic components 4.1 Uphole Survey (Depth Optimization) .4 7.5.2 Pre-amplifier 4.4 Main Amplifier 4.1.3 Storage Chapter 5 5.3.2 Hydrophones 4.3.3 8.1 Roll-along switch 6.5 Reflection Field Method for Land Survey The Seismic Field Party Seismic Data Acquisition 2D Survey Parameters (Before the Experimental Survey) (GP ‘X’) 3D Survey Parameters (Before the Experimental Survey) Experimental Survey A/D Converter 4.4 Other sources Seismic Receivers 4.6 Gain Controller 4.3.3 Dual Sensors Seismic Instrumentation 4.2.1 8.3.1.

8 Muting 10.6 8.7 Amplitude Recovery (Geometric Spreading Correction) 10.5.5 Seismic Data Processing Introduction Why Processing? Seismic Data Processing Objectives of Data Processing Basic Data Processing Sequence Chapter 10 Seismic Data Processing Stage I (Pre-Processing) 10.1 9.3 Fold Back Experiment (Element Spacing Determination ) 8.2 Statistical De-convolution Chapter 12 Seismic Data Processing Stage III (Velocity Analysis.6 Static Corrections 10.1.4. NMO.1.1 De-Multiplexing 10.1 Preprocessing 10.2 Velocity Analysis Normal Moveout Correctons (NMO) .2 9.5.3 11.2 Reformatting 10.4 11.7 8.1 12.2 Noise Experiment (Determination of NTO and Array Length) 8. DMO and Residual Static Corrections) 12.1.5 Geometry Merging (Labeling) 10.3 Filtering Chapter 11 Seismic Data Processing Stage II (De-convolution) 11.3 Re-sampling 10.1.1 Deterministic De-convolution 11.4 Introduction: Convolutional Model De-convolution De-convolution Methods Editing 10.4 Shot Depth and Charge Size Optimization 2D Survey Parameters (After the Experimental Survey) (GP ‘X’) 3D Survey Parameters (After the Experimental Survey) (GP ‘Y’ and GP ‘Z’) Chapter 9 9.2 Sorting 10.1 11.

2 Time Variant Filtering 13.4 Dip Moveout Correctons (DMO) Residual Statics Corrections Chapter 13 Seismic Data Processing Stage IV (Stacking. Time Variant Filtering and Migration) 13.3 12.12.1 Stacking 13.3 Migration Appendix Bibliography .

3D Cable Configuration used by GP “Y”. 3D Layout.List of Figures Figure 1(a) Figure 1(b) Figure 2(a) Figure 2(b) P – Wave Motion. … G108 represent the Geophones. Refraction of plane compress ional wave across interface. t-d plot drawn for the data obtained by the first three receivers of uphole A of GP ‘X’. The first four geophones are at an offset of 1m. Diffraction from the edge. Surface Wave Motion. S – Wave Motion. The particles move in retrograde sense around an ellipse that has its major axis vertical and minor axis in the direction of wave propagation. In a Love wave the particle motion is horizontal and perpendicular to direction of propagation. Noise Section for Shot point 2 at GP ‘X’. Noise Section for Shot point 1 at GP ‘X’. Reflection of plane compress ional wave at interface. G1. 25m. Radial lines with arrows are ray paths. 5m. Surface Geometry and Sub surface Nature and Behavior in Layout. The particle motion in the wave front of a Rayleigh wave consists of a combination of P – Wave and SV – vibrations in the vertical plane. Field Geometry and the Structure of the Bore Hole dug for doing the UPHOLE Survey (at GP X and GP Y). and 50m. Shows the generalized stratigraphy of KG Basin. Internal structure of a moving magnet geophone. Field Geometry and the Structure of the Bore Hole dug for doing the UPHOLE Survey (at GP Z). circular arcs are wave-fronts. The amplitude of the wave decreases with depth below the free surface. G2. Fresnel Zone. Figure 2(c) Figure 3 Figure 4 Figure 5 Figure 6 Figure 7(a) 2D Figure 7(b) Figure 8(a) Figure 8(b) Figure 8(c) Figure 9 Figure 10 Figure 11 Figure 12 Figure 13(a) Figure 13(b) Figure 14 Figure 15(a) Figure 15(b) . Shows a field record as obtained by the uphole survey team of GP ‘X’ for a source (1m of detonating cord) at a depth of 60m into a spread of geophones. The source a of diffracted radiation has been set into oscillation by waves generated on surface. Noise Spread (Transposed Spread). 3m. 
Geometric Correction for the Uphole Data. t-d plot drawn for the data obtained by the last two receivers of uphole A of GP ‘X’.

… G216 represent the Geophone strings (12 Geophones per string). Selection of Velocity Function. by GP “Y”. Surface – consistent statics model to establish the travel time model equation. Spiking Deconvolution. Seismic data volume in processing coordinates – midpoint. Seismic Data Merging. Decon and Residual Stack. after the application of filter. Raw Field record in SEG – D format. Seismic Section obtained by conducting Fold Back Experiment. offset and time. Noise Section prepared by GP ‘Y’. . Time – Variant Filtering (Record without the application of filter).Figure 15(c) Figure 15(d) Figure 16 Figure 17 Figure 18 Figure 19(a) Figure 19(b) Figure 20(a) Figure 20(b) Figure 21(a) Figure 21(b) Figure 22(a) Figure 22(b) Figure 23(a) Figure 23(b) Figure 24 Figure 25 Figure 26(a) Figure 26(b) Figure 26(c) Figure 26(d) Figure 27(a) Figure 27(b) Figure 28(a) Figure 28(b) Figure 29 Figure 30 Figure 31 Figure 32(a) Figure 32(b) Figure 33 Noise Section for Shot point 3 at GP ‘X’. Uncorrected Record. by GP “Y”. Seismic Record obtained after doing the Amplitude Correction. Seismic Record obtained after doing the Amplitude Correction and applying filter. 1st Shot gather obtained during the regular production work by GP ‘X’ 2nd Shot gather obtained during the regular production work by GP ‘X’. Seismic Section obtained by conducting Fold Back Experiment. NMO Stack. Specturm of the Raw Data and the Decon Data. Seismic Section obtained by conducting Fold Back Experiment by GP “X”. Time – Variant Filtering (Application of High Pass Filter – 816Hz). Noise Section for Shot point 4 at GP ‘X’. Amplitude Decay with time/depth. Record Obtained after Editing. Conventional processing flowchart. without the application of filter. G2. Layout for the Fold Back Experiment. Picking travel time deviations from NMO corrected gathers. G1. Raw Field record in SEG – Y format. Editing (Raw Field Record). Velocity Analysis. Seismic Record obtained after doing the Spherical Divergence Correction.

Migration Stack.Figure 34 Figure 35 Figure 36 Figure 37 Figure 38 Figure 39(a) Figure 39(b) Brute Stack. Geometrical representation of Migration. A Hypothetical stacking chart – Each dot represents a single trace with the time axis perpendicular to the plane of the page. Different types of Gathers. Aerial Network Principle (Master Slave Relation). . Final Stack.

it was used for locating enemy artillery during World War I. Mintrop devised the first seismograph. Investigation of the earth’s crustal structure within a depth of up to 100 km: the seismic method applies to the crustal and earthquake studies is known as earthquake seismology. . called the seismoscope.D. Delineation of exploration within a depth of up to 1km: the seismic method applied to the near – surface studies is known as engineering seismology. • 1914 – In Germany.1 Introduction three important/principal applications The seismic method hasnear-surface geology for engineering studies. 1. b. Fessendon patented a method and apparatus for locating ore bodies. This science developed into earthquake seismology. 100 – The earliest known seismic instrument.2 • Historical Perspective A. Hydrocarbon exploration and development within a depth of up to 10 km: seismic method applied to the exploration and development of oil and gas fields is known as exploration seismology. c. • 1848 – In France. was produced in China to indicate the direction form which the tremor came during an earthquake motion. especially by creating seismic waves with artificial sources and observing the arrival time of the waves reflected from acousticimpedance contrasts or refracted through high velocity members. Mallet began studying the Earth’s crust by using Acoustic waves. and coal and mineral a.Chapter 1 Introduction 1. • 1920 – The introduction of “refraction methods” for locating salt domes in the Gulf Coast region of the United States began. Definition by Robert E. Sheriff: Seismic survey is a program for mapping geologic structure by observation of seismic waves. • 1917 – In the United States. solid earth or crustal geophysics.

We are able to sometimes better image the sub-salt and sub-basalt targets with the 4C seismic method. and the top or base of the reservoir unit that we sometimes could not delineate using only p-waves. the technique of using reflected seismic waves. The technology improved during the 1980’s. This had a tremendous impact on the seismic exploration industry. Using the converted s-waves. it also created new challenges for itself. The late 1970’s saw the development of the 3D seismic survey. Using the multi-component seismic method. in which the data imaged not just a vertical cross-section of earth but an entire volume of earth. known as the “seismic reflection method”. not only greatly improved the productivity of seismic crews but also greatly improved the fidelity with which the processed data imaged earth structure. then process that data in a computer. Now we record not just p-waves but also converted s-waves for a wide range of objectives. we are now able to see through gas plumes caused by the reservoir below. The ability to record digitized seismic data on magnetic tape. In 1990’s depth section preparation got focused from the prevailing time section preparation after processing the data.• 1923 – A German seismic service company known as Seismos went international (to Mexico and Texas) using the refraction method to locate oil traps.1 Milestones in Seismic Industry As the search for oil moved to deeper targets.2. became more popular during World War II. 1. leading to more accurate and realistic imaging of earth. because it aided delineation of other structural features apart from simple salt domes. This is called 4D data acquisition. we are able to detect the oil-water contact. During 1960’s the so-called digital revolution ushered in what some historians now are calling the Information Age. Modern Seismic Data Acquisition could not have evolved without the digital computer. As the seismic industry made one breakthrough after another during its history. 
commonly known as the 4-C seismic method. In 2000’s data is being acquired with an additional parameter of “time” as the 4th dimension of the existing 3D data acquisition system. .

If a number of parallel sources and/or streamers are towed at the same time. then a three-dimensional (3D) image is possible (the third dimension being distance. The vessel moves along and fires a shot. the result is a number of parallel lines recorded at the same time. If many . a single seismic profile may be recorded in like manner to the land operation.1 Land Data Acquisition: In land acquisition.Chapter 2 Modern Seismic Data Acquisition S sea. land operations often are conducted only during daylight thus making it a slow process. If a single streamer and a single source are used. When the source is in-line with the receivers – at either end of the receiver line or positioned in the middle of the receiver line – a two-dimensional (2D) profile through the earth is generated. with reflections recorded by the streamers. If the source moves around the receiver line causing reflections to be recorded form points out of the plane of the in line profile.. So there is a land data-acquisition method and a marine data-acquisition method.2 Marine Data Acquisition: In a marine operation. 2. orthogonal to the in-line receiver-line). a shot is fired (i. The or two methods have a common-goal. imaging the earth. a ship tows one or more energy sources fastened parallel with one or more towed seismic receiver lines. so each required unique technology and terminology. Hence. the receiver lines take the form of cable called Steamer containing a number of hydrophones. 2. The majority of land survey effort is expended in moving the line equipment along and / or across farm fields or through populated communities. energy is ubsurface geologic structures containing hydrocarbons are found beneath either land transmitted) and reflections from the boundaries of various Lithological units within the subsurface are recorded at a number of fixed receiver stations on the surface. In this case. But because the environments differ.e. 
These geophone stations are usually in-line although the shot source may not be.

In this report.3 Transition – Zone Recording: Because ships are limited by the water depth in which they safely can conduct operations. More than one vessel may be employed to acquire data on 24-hour basis. a 3D data volume is recorded. transition-zone recording techniques have been developed to provide a continuous seismic coverage required over the land and then into the sea. marine and TZ are discussed. Techniques have been developed to use both Geophones and hydrophones in the surface area where the shore line / water edge is likely to migrate towards land and sea depending on the tide of sea a day. or shore lines. The combination of such hydrophone / geophones is called a “Dual Sensor”. Geophones that can be placed on the sea bed or used with both marine and land shots fired into them.closely spaced parallel lines are recorded. though the principle of all sorts of seismic operations like land. The advantage of why this is to see that either of the receiver of Dual Sensor pickups the surveyed from the slots recorded using a land or marine source and data gaps all along the coast within the area of prospect. the ultimate emphasis is given on the land acquisition only as the training has been in this regard. 2. . and because land operations must terminate when the source approaches the water edge. since there is no need to curtail operations in nights.

3. Particle motion in a P-wave is in the direction of wave propagation. 3. Particle motion of a shear wave is at right angles to the direction of propagation.1. a compress ional force causes an initial volume decrease of the medium upon which the force acts. The P-wave velocity is a function of the rigidity and density of the medium. The elastic character of rock then caused an immediate rebound or expansion. there is no shear wave possible because shear stress and strain cannot occur in liquids. while in spongy sand. . followed by a dilation force as shown in figure 1(a). Several kinds of wave phenomenon can occur in an elastic solid. In dense rock.1.1 Compress ional Waves (P-waves): On firing an energy source.1 Seismic Wave Fundamentals earth can be by assuming T he transmission of energy ainto the The Earth’s explainedconsidered as that the Earthelastic has the elastic properties of solid. They are classified according to how the particles that make up the solid move as the wave travels through the material. crust is completely (except in the immediate vicinity of the shot).Chapter 3 Fundamentals of Seismic Prospecting 3. form 300 to 500 m/sec. and hence the name given to this type of acoustic wave transmission is elastic wave propagation. it can vary from 2500 to 7000 m/sec. This response of the medium constitutes a primary “compress ional wave” or P-wave.2 Shear Waves (S-waves): Shear strain occurs when a sideways force is exerted on a medium.(figure 1(b)) a shear wave may be generated that travels perpendicularly to the direction of the applied force. A shear wave’s velocity is a function of the resistance to shear stress of the material through which the wave is traveling and if often approximately half of the material’s compress ional wave velocity. In liquids such as water.

6 Love Waves: The Love wave (figure 2(c)) is a surface wave borne within the LVL. This degradation causes problems during data processing. which itself can set up an air-coupled wave.7 Direct and Head Waves: The expanding energy wave front that moves along the air-surface interface outward form shot commonly is observed as the direct wave and has . such waves often propagate by multiple reflection within the LVL. 3. 3. traveling horizontally with retrograde elliptical motion and away from the energy source (shot). than the compress ional wave. This point is in the vicinity of the base of the weathering layer.607C m/sec where C = Celsius temperature. Also known as the horizontal SH-wave.1. 3.1. degrade the signal-to-noise ratio. If such waves undergo mode conversion. obscuring reflected energy content even further.4 Mode-Converted Waves: Each time a wave impinges on a boundary. a portion of the energy is reflected and the remaining transmitted. dependent upon the LVL material. a secondary wave-front in the surface layer. as shown below: V = 1051 + 1. eventually reversing in direction. Q-wave. Such converted waves sometimes. Lq-wave or G-wave in crustal studies. 3. Raleigh waves are of low frequency nature.1.This medium is known as weathering layer or low-velocity layer (LVL). Depending upon the elastic properties of the boundary. this wave is commonly known as ground roll (figure 2(b)). This wave generally travels by about 350 m/sec velocity slower.3 Air Wave: On land the energy source (shot) generates an airwave known as the air blast. theoretically no vertical motion. the speed of the airwave depends mainly on temperature and humidity.5 + 0.3.5 Raleigh Waves: It is a type of seismic surface wave propagated along the free surface of a semi-infinite medium. which has horizontal motion perpendicular to the direction of propagation with. Because the motion of the ground appears to roll.1.1. The particle motion of this wave reduces (amplitude) with increase in depth. 
a number of noise trains appear across the seismic record. Figure 2(a) shows surface wave motion. the incident P-wave or S-wave may convert to one or the other or to a proportion of each.1F ft/sec where F = Fahrenheit temperature V = 331.

The amplitude and polarity of reflections depend on the acoustic properties of the material on both sides of discontinuity. back up again. In that case. a wave traveling along the layer may undergo internal reflection (i. is: . and reflection coefficient Rc. Head waves are the portions of the initial wavefront that are transmitted down to the base of the weathering layer or the water bottom and are refracted along the weathering base. refracted head waves appear in the mid-to-faroffset traces before arrival of the direct wave.1 Reflections: The phenomenon in which the energy or wave from a seismic source has been returned from an interface having acoustic impedance contrast (reflector) or series of contrasts within the earth is called reflection. Huygen’s principle is commonly used to explain the response of the wave. They appear as short shingled waves. This phenomenon is pictorially represented in figure 3.2. Every point on an expanding wave front can be considered as the source point of a secondary wave front. reflected amplitude Ar. The relationship among incident amplitude Ai. The envelope of the secondary wave fronts produces the primary wave fronts after a small time increment. stay within the layer. where. Such waves are called guided waves and exhibit mainly vertical particle motion..the velocity of the surface layer through which it travels. and so on). . repeating on the shot record. Acoustic impedance is the product of density and velocity.e.2 Characteristics of Seismic Events Seismic wave created by an explosive source emanate outward from the shot point in a 3D sense. reflecting from upper interface to lower.1. The trajectory of a point moving outward is known in optics as a ray. 3. Sometimes the refracted velocity is higher than the velocity of propagation in the surface layer. 
They return to the surface as refracted energy or refractions.8 Ground Waves: When a layer of the Earth has an extreme density or velocity contrast at both its upper and lower boundaries. 3. 3. and hence in seismics as a raypath.

In other words. less transmission occurs and. 3. then critical reflection occurs.3 Refractions: The change in direction of a seismic ray upon passing into a medium with a different velocity.2 Critical Reflection: When an impinging wave arrives at such an angle of incidence that energy travels horizontally along the interface at the velocity of the second medium.2. a density contrast will cause a reflection and vice versa. and part is transmitted or refracted (figure 4) with a change in the direction of propagation occurring at the interface.2. divided by the initial medium velocity V1 equals the sine of the refracted angle of a ray (sin r). 3.Where velocity is constant. divided by the lower medium velocity V2. that is: when a wave encounters an abrupt change in elastic properties. . Snell’s law describes how waves refract. part of the energy is reflected. (sin i). With a large Rc. The incident angle ic. any abrupt change in acoustic impedance causes a reflection to occur. at which critical reflection occurs can be found using Snell’s Law. Energy not reflected is transmitted. hence signal-to-noise ratio reduces below such an interface. It states that the sine of the incident angle of a ray. is called refraction.

in which specialized data processing techniques are used (i. 3.2.e. However. The signal-to-noise ratio (S/N). but much of it is reflected. The reflected wave front arrives at the receivers get aligned along the trajectory of a parabola on the seismic record..4 Diffractions: Diffractions (figure 5) occur at sharp discontinuities. When the wave front arrives at the edge. a portion of the energy travels through into the higher velocity region. including coherent events that interfere with the observation and measurement of signals. travel direction and repeatability – form the basis of most methods of improving record quality. out-of-the plane diffractions events are considered part of the signal. Such diffractions are considered as noise and reduce the signal-to-noise ratio. thereby enhancing the subsurface image. Seismic noise may be either a) Coherent or b) Incoherent Another important distinction is between a) noise that is repeatable and b) noise that is non repeatable. diffractions may arrive from out of the plane of the seismic line / profile. Everything else is “noise”.2. Hence in 3D surveys. In conventional in-line recording. fault. The important distinction between long-path and short-path multiples is that a long-path multiple arrives as a distinct event whereas a short-path multiple arrives soon after the primary and changes the wave shape. the 3D seismic migration).. 3. Poor records result whenever the signal-to-noise ratio is small. such as at the edge of a bed.5 Multiples: Seismic energy that has been reflected more than once is called multiple while virtually all seismic energy involves some multiples.3. We use the term “signal” to denote any event on the seismic record from which we wish to obtain information. the diffractions are considered as useful scattered energy because the data-processing routines transfer the diffracted energy back to the point from which is generated. 
The properties – coherence.3 Seismic Noise The reliability of seismic mapping is strongly dependent on the quality of the records / data. is the ratio of the signal energy in a specified portion of the record to the total noise energy in the same portion. . or geologic pillow. in 3D recording.

Incoherent noise is due to scattering from near-surface irregularities and in homogeneities such as boulders. Non repeatable random noise may be due to wind shaking a geophone or causing the roots of trees to move. distant earthquakes.3. a person walking near a geophone.3. which generates seismic waves.3. refractions carried by high-velocity stringers.2 Incoherent noise is often referred to as random noise (spatially random). multiples and so forth. stones ejected by the shot and falling back on the earth near a geophone. noise caused by vehicular traffic or farm tractors. All the preceding except multiples travel essentially horizontally and all except vehicular noise are repeatable on successive shots. ocean waves beating on a seashore. Coherent noise is sometimes subdivided into: a) energy that travels essentially horizontally and b) energy that reaches the spread more or less vertically 3.1 Coherent noise includes surface waves. and so froth. . and so on. small-scale faulting. which implies not only non-predictability but also certain statistical properties. reflections or reflected refractions from near-surface structures such as fault planes or buried stream channels.

Chapter 4 Reflection Field Equipment

4.1 Seismic Sources: Seismic sources can be broadly divided into two categories: land energy sources and marine energy sources.

4.1.1 Land Energy Sources: Land energy sources are of two types: explosive sources and non-explosive sources. The choice of energy source is critical in land data acquisition because resolution and signal-to-noise ratio quality are limited by the source characteristics. A geophysicist should select a source based on the following five criteria:
• Penetration to the required depth: Knowing what the exploration objectives are, the geophysicist should select a source that has adequate energy to illuminate the target horizons. For very shallow targets, a detonator may possess adequate energy and frequency bandwidth.
• Bandwidth for the required resolution: If high-resolution reflections are required to delineate subtle geological features such as stratigraphic traps, the source must transmit a broad range of frequencies, both high and low. For deeper reflections, the longer travel path to a deep reflector requires the selection of a source that has enough energy at the higher frequencies to maintain a broad reflection bandwidth.
• Signal-to-noise characteristics: Different areas have different noise problems. Past experience can help here.
• Environment: When working in populated areas, there are special safety requirements to which geophysicists must adhere. These may dictate the source selection.
• Availability and cost: The time of arrival of a crew can be extremely important.

4.1.2 Explosive Sources: Explosive sources produce robust P-waves. The explosive source consists of a detonator and an explosive charge. In the seismic industry, the explosive charge is commonly referred to as ‘powder’ and the detonators are referred to as ‘caps’ or ‘primers’. The selection of explosives as the source of choice depends primarily on near-surface conditions and the accessibility of other energy sources. If drilling is fast and efficient, a single shot hole filled with explosives might be the most economical source option.

4.1.2.1 Charge Size: The choice of charge size depends largely on the depth to the horizon of interest. Deeper targets usually require larger charge sizes. Generally, smaller charge sizes mean higher frequency content; deconvolution enhances the frequency content such that the bandwidth will be higher and have an improved S/N ratio compared to a record with a smaller charge size. On the other hand, larger charge sizes cause more ground roll and air-blast contamination of the record. The best charge size is that which achieves the maximum signal-to-noise ratio (S/N) at the target depth.

4.1.2.2 Charge Depth: The charge depth depends on the depth of the weathering layer and the level of noise interference one encounters when testing. Generally, the shallower the source, the stronger the air-blast and the ground-roll, but the less energy going into the ground. If the drilling is really tough and expensive, one may have to limit the shot hole depth to as little as 2 m, or a surface shot may be used instead. Alternatively, it is usually not economical to go much beyond 50 m depth.

4.1.3 Vibrators: Vertical vibrators produce an asymmetric radiation pattern of P-waves and S-waves. Horizontal vibrators produce weak P-waves and robust S-waves. Vibrators are designed in two basic groups: buggy-mounted and truck-mounted units. If multiple dynamite patterns do not pump enough energy into the ground, vibrators may be preferred on technical grounds regardless of relative cost.

4.1.4 Other Sources: Although dynamite and Vibroseis are used in the majority of surveys, other sources can be and are used in field 3D surveys, such as:
• Airguns and mud guns (used in transition zone surveys)
• Shotgun (Betsy)
• Mini-Seis (Thumper)
• Land air gun
• Dinoseis
• Elastic wave generator (EWG)
• Mini-vibes

4.2 Seismic Receivers

4.2.1 Geophones: Conventional geophones are based on Faraday’s law of electromagnetic induction. This law states that relative motion of a conductor through a magnetic field induces an electromotive force (EMF), which causes a current to flow through the conductor if the conductor is an element of an electrical circuit. The two types of geophones widely used in geophysical surveys are 1. the moving coil geophone and 2. the moving magnet geophone (figure 6). The essential ingredients to make a geophone are a permanent magnet, a conductor and a spring, which positions either the conductor in the magnetic field space (in the moving coil geophone) or the permanent magnet in the electric field space (as in the moving magnet geophone). The conductor in reality is a length of copper wire wrapped into a cylindrical coil shape; it is often referred to as the coil or element. The conductor’s or the magnet’s motion through the magnetic/electrical field causes an EMF to be induced that, according to Faraday’s law, is proportional to the velocity of the earth’s motion. Hence such a geophone is called a velocity phone, because its output is proportional to the velocity of the earth’s motion. The large amount of subsurface information carried by the seismic signal would be fully available for interpretation only if the geophones follow ground movement faithfully with minimum distortion. A large variety of modern geophones are available today to meet the specific requirements of the user.

4.2.1.1 Electrical Characteristics:
• Sensitivity: Geophones are available with a wide range of sensitivities. For example, at one end of the sensitivity scale a geophone can produce 0.4 mV output for a tiny movement of 2.5 × 10 m/sec, while another geophone can produce as much as 0.1 V output for a 2.5 cm/sec velocity.
• Tolerances: Geophones have typical tolerances on natural frequency, sensitivity and damping, which are as follows:
o Natural frequency within ±0.5 Hz of the manufacturer-stated value
o Natural frequency distortion within ±0.1 Hz at a maximum 20° tilt
o Sensitivity within ±5% of the manufacturer-stated value
o Damping within a tolerance of 2–2.5%
Close-tolerance digital-grade geophones have distortions as low as 0.03% and very high geophone-to-geophone uniformity.
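Away from its resonance, a velocity phone's behavior can be sketched with a single transduction constant: output voltage proportional to ground velocity. This is a simplified, idealized illustration; the sensitivity value used below is an assumed example, not a manufacturer specification:

```python
def geophone_output_volts(ground_velocity_m_per_s, sensitivity_v_per_m_per_s):
    """Output EMF of an idealized velocity phone: proportional to ground velocity."""
    return sensitivity_v_per_m_per_s * ground_velocity_m_per_s

# Assumed sensitivity of 28.8 V/(m/s): 2.5 cm/s of ground motion yields about 0.72 V.
v_out = geophone_output_volts(0.025, 28.8)
```

A real geophone is a damped spring-mass system, so near the natural frequency the response deviates from this simple proportionality.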

The design of modern geophones has done away with the shunt resistance, resulting in very low distortion and a high spurious response up to 250 Hz. These geophones maintain their natural frequency specifications with high tilt angles. In normal land operations, geophones have a resonant frequency of 10 or 14 Hz, but in some parts of the world it is still normal practice to use 6 to 8 Hz phones. Since shear-wave reflections contain a lower frequency bandwidth, phones with lower resonant/natural frequencies are used for shear-wave recording. However, geophones with resonant frequencies up to 40 Hz are also being manufactured.

Receivers are usually wired in groups of 1, 6, 9, 12 or 24. While the trend is towards higher numbers of phones (e.g. 24, or even 72 in the Middle East), lower numbers (e.g. 6) are still used in certain areas, e.g. South America. The particular receiver type depends on the characteristics of the data to be recorded and the environment where the data are acquired. In hilly terrain, geophones may be clustered in a small area. In steep terrain (over 5 m elevation difference), where the height difference between the ends of any receiver group exceeds 2 m, one can spread the phones out parallel to topographic contours to minimize inter-array statics smear.

4.2.2 Hydrophones: The hydrophone is an electro-acoustic transducer that converts a pressure pulse into an electrical signal by means of the piezoelectric effect. If mechanical stress is applied on two opposite faces of a piezoelectric crystal, electrical charges appear on some other pair of faces. If such a crystal is placed in an environment experiencing changes in pressure, it will produce a voltage proportional to the variations in pressure.

4.2.3 Dual Sensors: For ocean bottom cable (OBC) applications, combining the output of a geophone and a hydrophone is now a widely accepted technique for reducing the ghosting effect caused by the water/air interface. To overcome the disadvantage of using two separate sensors, both geophone and hydrophone are available in a single unit known as a dual sensor; the 4-component (4C) receivers consist of a hydrophone, two horizontal geophones and a vertical geophone installed in a single waterproof enclosure for recording P, SV and SH waves. Three-component 3D recording requires three times the number of channels of recording capacity, since each component is recorded separately. This increased number of channels may make it difficult to create a patch that creates sufficient fold.

4.3 Seismic Instrumentation: Once a seismic signal is transmitted and received, it must be recorded. The different types of signals are as follows:
• Source Signal: The pressure field created by the seismic source.
• Reflectivity Signal: The earth’s reflection sequence convolved with the source wavelet.
• Seismic Signal: Everything received as a result of the source firing. The seismic signal includes the reflectivity signal as well as ground roll, refractions, diffractions, sidesweep, channel waves, etc.
• Received Signal: The electrical output of the receiver group. This is the seismic signal plus all environmental noise.
• Recorded Signal: The data which go onto the tape; that is, the instrument-filtered signal plus any additional instrument noise.

A reflection is a physical event caused by a change in the acoustic impedance of the earth. On the record, that event is represented by a wavelet that has two components – the earth filter and the acquisition wavelet. The wavelet can be described in the time domain or, alternately, in the frequency domain. The amplitude and phase spectra of a wavelet contain all its spectral information, each separate frequency having its own phase value. These spectra are called the frequency-domain representation of the wavelet, whereas the wavelet in time is considered to be in the time domain. The Fourier transform can be used to move from one representation to the other. If a wavelet has a short extent in time and appears like a spike, it is likely to be composed of a broad band of frequencies.

Signal-to-noise ratio, Bandwidth and Duration: The information contained in a signal can be characterized by three quantities: signal-to-noise ratio, bandwidth and duration. In seismic exploration, the recorded signal bandwidth is usually 0-250 Hz or lower. Often, data are processed in a narrower band, say 5-80 Hz, even though they may be recorded in a broader band. The duration of recorded signals depends on the nature of the source and the target depth.

When seismic recording first began in the 1920s, the recording systems consisted of heavy, metal-cased geophones connected by wire cables to a recording truck. The signal was recorded on a rotating photographic drum.
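The claim that a spike-like wavelet is composed of a broad band of frequencies can be checked with a small discrete Fourier transform. The sketch below is a minimal pure-Python DFT for illustration only (real processing systems use optimized FFTs):

```python
import cmath
import math

def dft(samples):
    """Discrete Fourier transform: time-domain samples -> frequency-domain coefficients."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A unit spike at time zero has a perfectly flat amplitude spectrum:
spectrum = dft([1.0, 0.0, 0.0, 0.0])
amplitudes = [abs(c) for c in spectrum]      # all 1.0: every frequency equally present
phases = [cmath.phase(c) for c in spectrum]  # all 0.0: a zero-phase spike
```

The amplitude and phase lists together are exactly the frequency-domain representation described above; the inverse transform would move back to the time domain.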

Drums were replaced by analog magnetic tape recorders during the late 1950s, but these often failed to operate well. In the early 1960s they were replaced by digital tape recorders, each of which had an analog-to-digital converter at the input stage to the tape drive. The individual analog amplifiers also were unreliable, and by the late 1960s they were being replaced in recording devices by a single multiplexed analog amplifier. In the late 1970s, distributed systems were introduced that performed amplification, filtering, digitization and multiplexing at or near the receiver stations. By the mid 1980s distributed systems were in wide use throughout the industry.

4.3.1 Basic components: The basic components of the land recording systems are:

4.3.1.1 Roll-along switch: It allows the observer to record a selected subset of the geophones connected to the recording truck. It minimizes the need to move the recording truck.

4.3.1.2 Pre-amplifier: This is a fixed-gain amplifier that raises the incoming seismic signal above the background instrument noise level. The preamplifier has low noise, high input impedance and low distortion. Its input impedance is equal to or greater than the cable impedance to the farthest station, so that no signal amplitude is lost because of mismatching of impedances. The amplifier must be completely linear throughout its operating range.

4.3.1.3 Multiplexer: This is an electronic switch that time-shares data from multiple channels. It changes multiple parallel inputs to a serial output relay for amplification, digitization and recording. The multiplexer cycles through all of the inputs during each digital sampling interval.

4.3.1.4 Main Amplifier: This amplifier receives all analog signals input to it and passes them on to the A/D converter with an amount of gain determined by the gain controller.

4.3.1.5 A/D Converter: Analog signals are converted to digital signals with this device. The received incoming signal must be filtered to prevent aliasing prior to conversion to a digital form. It allows the analog stream of data to be recorded in digital form.
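The multiplexer's time-sharing can be pictured as interleaving: during each sampling interval it visits every channel once, turning parallel inputs into one serial stream. A toy sketch (the sample values are hypothetical):

```python
def multiplex(channels):
    """Interleave parallel channel samples into a single serial stream.

    channels: list of per-channel sample lists, all the same length.
    The output visits every channel once per sampling interval.
    """
    return [sample for interval in zip(*channels) for sample in interval]

# Two channels, three sampling intervals:
serial = multiplex([[11, 12, 13], [21, 22, 23]])   # [11, 21, 12, 22, 13, 23]
```

A demultiplexer in the processing center performs the inverse operation, regrouping the serial stream back into traces.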

4.3.1.6 Gain Controller: The received signal includes reflections, refractions, diffractions, ground roll and environmental noise, all of which may have amplitudes varying in a range from microvolts to volts. A fixed form of amplification with only a relatively small number of data bits cannot handle that range without some clipping at the most significant bit end of the converter. Instead, a variable or automatic gain control (AGC) level is determined for application by the main amplifier in a feedback loop with the A/D converter, to reduce or amplify the incoming signal and keep signal levels within the desired converter range. The controller sets the amount of gain while the amplifier applies it to the incoming signal. The AGC level set at each sample is recorded on tape as part of the gain word. Today, the majority of acquisition systems provide 24-bit recording technology. A 24-bit technology system offers high fidelity because it records data over a large dynamic range.

4.3.1.7 Formatter: The formatter arranges the data stream (in the form of voltage and gain levels) into a binary code for writing onto magnetic tape. In addition, instrument operational commands are distributed by the formatter to all the other components, making the formatter the “brain” of the recording operation.

4.3.1.8 Tape Drive: Data finally are recorded on tape in digital form, ready to be passed on to the processing center for further processing. Magnetic tape may be replaced by floppy disks, depending upon the system in use.

In land recording using a non-distributed system, an analog seismic signal travels from the geophones along electrical conductors (the cable) to a roll-along switch in the recording truck (or “doghouse” or “dog box”), after which it is converted to a digital signal and recorded on tape or disk. In contrast, in a distributed system, the seismic signal passes from the geophone string directly into an amplifier and/or A/D converter, after which it travels in digital form along a cable to the recording truck. Because digital transmission of multiplexed data uses many fewer cables than analog transmission, layout of the large receiver spreads often used for 3D acquisition became considerably simpler. In land operations, these recording units are usually truck or buggy mounted and can, therefore, travel easily to areas of data acquisition. Lower channel count systems with higher sampling rates, such as the DMT/SUMMIT and the 24-bit OYO DAS, can be used for small, near-surface 3D surveys. Peculiarities of each system need to be examined for the task at hand. In the case of very low channel count systems (e.g. less than 120), it is normal practice for several recorders to be used together in a master-slave pattern to reach sufficient channel capacity even for small 3D surveys.
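The AGC principle can be sketched in a few lines: estimate the signal level in a sliding window around each sample and scale the sample so levels stay within the desired range. This is a schematic illustration of the idea, not any particular instrument's algorithm; window length and target level are arbitrary choices here:

```python
def agc(trace, window=5, target_rms=1.0):
    """Scale each sample by the inverse RMS of a sliding window around it."""
    out = []
    for i in range(len(trace)):
        lo = max(0, i - window // 2)
        hi = min(len(trace), i + window // 2 + 1)
        seg = trace[lo:hi]
        rms = (sum(s * s for s in seg) / len(seg)) ** 0.5
        out.append(trace[i] * target_rms / rms if rms > 0 else 0.0)
    return out

# A trace whose amplitude spans a wide range is brought to a uniform level:
balanced = agc([0.001, -0.001, 0.001, 100.0, -100.0, 100.0])
```

Note that, just as the text describes for the recording instrument, the applied gain must be carried along (the "gain word") if true amplitudes are to be recovered later.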

If more than one recorder is used, amplitude and phase matching will be required to compensate for the recorder differences. If a 3D survey crosses a variety of terrains (e.g. plain, mountain, transition zone), it is desirable to use one type of recorder to cover the whole survey area. Thus shots of different types in the mountains or in the swamp can be recorded by the same instrument. “Seamless” receiver coverage from a variety of sources enables the application of surface-consistent processes such as deconvolution, statics, and amplitude correction.

4.3.2 Telemetry Systems: A true telemetry system has no physical connection between the station recording unit and the control system in the recording truck. The SAR (Seismic Acquisition Remote unit) records the signal and sends it via radio frequencies to the CRS (Central Recording Station). These systems should be used where access is limited due to rugged terrain, permit problems, or any other reason. Some telemetry systems can receive data in real time. Other telemetry systems have a disadvantage over distributed systems in that the radio transmission of the data from the boxes to the recording unit takes longer than real time, which may slow down the shooting crew. For some systems, data transmission time may be on the order of minutes per source point. Tree cover may also cause a problem for the signal transmission, and FM interference may be significant in populated areas. Mixed systems may be used to cross rivers or roads at select locations. The Sercel Eagle system is an example of such systems.

Different Types of Seismic Recorders:
Manufacturer: Sercel; System: SN388; Type: Distributed System; Boxes: Station Unit (SU), Crossing Station Unit; Stations per Box: 1-6; Central System: Central Control Unit
Manufacturer: Sercel; System: 408UL; Type: Telemetry System / Distributed System / Remote Seismic Recording; Boxes: Field Digitizer Unit (FDU), Line Acquisition Unit Crossline (LAUX); Stations per Box: 1; Central System: Central Module Unit (CMU)

4.3.3 Storage: The data obtained in the seismic field survey are stored on magnetic tapes or cartridges. The data are stored in SEG D format; previously they used to be recorded in SEG B or SEG C formats. While the conventional storage devices are tape drives, the latest equipment uses cartridges with 10 GB memory capacity for storing the data.

Chapter 5 2D Survey Design Basics

5.1 Basic Concepts in 2D Surveys: The guiding principle should be to design a survey that will image the selected target in the most economical way, in terms of both costs and time. Resolution parameters, such as the frequency required to image the target, are starting design factors. This information can be obtained in approximate form from existing well logs or seismic data in the area. If the area is a frontier area, then noise tests, experience, or geologic theory can be the source of this information. The definition of the representative horizons is thus the beginning of the design; a shallow horizon of interest and deeper horizons may be interpretational needs. The velocity and maximum dip of each layer are initial parameters.

5.1.1 Near-Surface Layer: The velocity of the surface layer is used as a factor in computing offsets and determining the effect of ground roll. Usually the weathered layer has very low velocity because of exposure and erosion, but it may be quite complex and have several layers of variant velocity.

5.1.2 Shallow Layer: While the target layer is most important for imaging, a shallow layer may be necessary for processing or interpretation. Good data in the shallow part are needed to use the velocity analysis with confidence. The velocity Vs and the approximate arrival time are the needed parameters. These parameters allow computation of the depth of the layer Zs by the familiar time-distance formula

Zs = (Vs × t) / 2

where:
t = two-way travel time to the shallow horizon
Vs = average velocity to the layer
Zs = depth to the shallow layer

This formula provides the information needed for the near offset.
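The time-distance relation above is easily sketched in code; a minimal illustration with assumed example values:

```python
def layer_depth_m(t_twoway_s, v_avg_m_per_s):
    """Depth from two-way travel time: Zs = Vs * t / 2."""
    return v_avg_m_per_s * t_twoway_s / 2.0

# e.g. a shallow horizon at 1.0 s two-way time with an 1800 m/s average velocity:
z_shallow = layer_depth_m(1.0, 1800.0)   # 900.0 m
```

The factor of two simply converts the two-way time into a one-way travel time before multiplying by the average velocity.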

5.1.3 Target Layer: This layer is the horizon of primary interest for the survey. For imaging the target horizon, geologic knowledge of the expected thickness and reflectivity is needed to estimate the frequency range. The velocity of the surface layer is involved because of the initial angular influence on the downgoing seismic waveform at the depth of the horizon. The two-way record time of the target horizon, tz, is also a needed parameter.

5.1.4 Group Interval: The group interval is the basic sampling of the earth’s surface by the survey; it is the distance on the ground between receiver stations. The maximum group interval represents the largest spatial sampling that shall prevent aliasing during migration:

Maximum group interval = Vmin / (4 × Fmax × sin θ)

where:
Vmin = minimum velocity
θ = maximum dip of the target horizon in degrees
Fmax = maximum frequency expected

5.1.5 Fresnel Zone: The Fresnel zone (figure 7(b)) is the smallest part of the reflector making an unambiguous image of the individual event; it is circular at zero offset but elliptical with offset. The Fresnel zone radius is given approximately by

R = (V / 2) × sqrt(tz / Fmax)

where V is the average velocity and tz and Fmax are as above.

5.1.6 Far Offset: The far offset is a function of the depth, modified by the velocity field. The well-founded rule of thumb says that the spread length should be equal to or a little greater than the depth of the reflection being imaged. The far offset required should be computed first for the target horizon and then for the deep horizon. Once the maximum offset
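The anti-alias group interval and the Fresnel-zone radius can be sketched together. The factor of 4 in the group-interval form and the use of the maximum frequency in the Fresnel radius follow common textbook conventions and should be treated as assumptions of this sketch:

```python
import math

def max_group_interval_m(v_min_m_per_s, f_max_hz, dip_deg):
    """Largest spatial sampling that avoids aliasing of a dipping event."""
    return v_min_m_per_s / (4.0 * f_max_hz * math.sin(math.radians(dip_deg)))

def fresnel_radius_m(v_avg_m_per_s, t_twoway_s, f_hz):
    """Approximate radius of the zero-offset Fresnel zone."""
    return (v_avg_m_per_s / 2.0) * math.sqrt(t_twoway_s / f_hz)

gi = max_group_interval_m(2000.0, 50.0, 30.0)   # about 20 m
rf = fresnel_radius_m(3000.0, 2.0, 30.0)        # about 387 m
```

Note how the two formulas pull in opposite directions: steeper dips and higher frequencies force a finer group interval, while higher frequencies shrink the Fresnel zone and so improve lateral resolution.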

is computed, in combination with the near offset, the group interval, and other factors, the ideal parameters can be evaluated within the framework of the available equipment. Too large an offset range for a given number of receiver stations may result in inadequate fold for the shallow layer or even the target layer. Data processing often requires considerable muting of the shallow data on the greater offsets because of NMO stretch; data processing will probably form an automatic mute, and the target horizon should be protected by the survey from the mute. The custom is to automatically mute below the “20 to 30 percent stretch factor”. The formula most used for this step is

Tm = (sqrt(Tz² + (H/V)²) − Tz) / Tz

where H = offset distance, V = velocity at time Tz, and Tz = arrival time of the event at H = 0. When Tm exceeds 0.3, the data are muted.

If the horizon is dipping, then the distance Hmax should be extended by

D = Z × tan θ

where Z = depth of horizon and θ = dip. This extension is quite important in 3D exploration (migration aperture). Neglect of this factor can result in underestimating field costs. The far-trace distance should preserve full fold on the target horizon. On the other hand, if the offset
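The stretch-factor mute can be sketched numerically. The hyperbolic moveout form below is a standard reconstruction of the stretch criterion; treat the exact expression as an assumption of this sketch:

```python
import math

def nmo_stretch(offset_m, velocity_m_per_s, t0_s):
    """Fractional NMO stretch Tm = (T(H) - Tz) / Tz at offset H, zero-offset time Tz."""
    t_offset = math.sqrt(t0_s ** 2 + (offset_m / velocity_m_per_s) ** 2)
    return (t_offset - t0_s) / t0_s

def is_muted(offset_m, velocity_m_per_s, t0_s, stretch_limit=0.3):
    """Mute the sample when stretch exceeds the 30 percent factor."""
    return nmo_stretch(offset_m, velocity_m_per_s, t0_s) > stretch_limit

kept = is_muted(1000.0, 2000.0, 1.0)    # stretch ~0.118 -> kept
muted = is_muted(2000.0, 2000.0, 1.0)   # stretch ~0.414 -> muted
```

As the example shows, for a fixed zero-offset time the stretch grows rapidly with offset, which is why the shallow, far-offset corner of the record is the first to be muted.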

range is not long enough, accurate velocity analysis and the suppression of multiples during processing can be endangered.

5.1.7 Record Length: Part of the survey design is to determine the required sampling rate in time and the record length, which is a function of the depth and velocity of the deepest horizon. The rule is

Tr = Td + L

where Td = two-way arrival time of the deepest horizon of interest at the maximum offset, Tr = required record length in time, and L = length in time of the longest processing filter. Normally 200 ms is adequate for the filter length. Some allowance should also be made for migration. Signal length becomes more important when the source is vibratory in nature. The extra time in recording is balanced against the possible benefits from data from very deep horizons.

5.1.8 Sample Rate: The sampling rate in time is more or less standard, ranging from 2 to 4 ms depending on the resolution needed. A sample rate of 2 ms is used for most seismic surveys.

5.1.9 Group Interval and Field Equipment: The group interval possible with particular recording equipment is given by

Group interval = (Hmax − Hmin) / (NC − 1)

where Hmax = far offset, Hmin = near offset, and NC = the number of channels available for recording.
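These two bookkeeping formulas are simple enough to sketch. The (NC − 1) divisor assumes the NC channels span the offset range end-to-end and is an assumption of this sketch, as are the example values:

```python
def record_length_s(t_deepest_s, filter_length_s=0.2):
    """Required record length: deepest two-way time plus the longest processing filter."""
    return t_deepest_s + filter_length_s

def group_interval_m(far_offset_m, near_offset_m, n_channels):
    """Group interval achievable with NC channels spanning the offset range."""
    return (far_offset_m - near_offset_m) / (n_channels - 1)

tr = record_length_s(4.0)                  # 4.2 s with the default 200 ms filter
gi = group_interval_m(3000.0, 100.0, 30)   # 100.0 m
```

In practice the record length would also be padded for migration, as the text notes.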

5.1.10 Fold Coverage: Each source position yields a certain amount of subsurface coverage. For flat layers, the sampled point is half the distance from source to receiver; the subsurface sampling is thus half the interval of the surface coverage. Fold is defined as the number of times a particular subsurface sampling point (CMP) is covered by different source-receiver locations. The maximum fold of coverage is given by

F = NC / (2 × S)

where S is the number of units of group interval in the source spacing and NC the number of channels available.

5.1.11 Source Interval: The source interval is the distance between source positions. The source interval is a function of the desired fold coverage and the number of channels available.

5.1.12 Source Power: In some cases a decision must be made on the source power; target depth is the primary consideration. As the earth has a natural attenuation, the power needed is a function of target depth and the environmental noise. Noise is also involved, since more power has the potential to generate more noise. For dynamite, the amount of energy generated in a shot hole is proportional to the quantity of dynamite, the charge size in kilograms being the unit. It is well known that an explosive source in a cylindrical enclosure generates pressure waves and shear waves of both polarizations. For vibratory sources, the available power (in pounds per square inch) is specified by equipment model; for instance, a large vibrator can generate 50,000 psi. Marine sources such as air guns and water guns are defined in terms of their volume and peak-to-peak strength.

5.1.13 Line Location and Orientation: The geometry of the survey is not independent of the target. The location, direction, and length of the lines are important
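The maximum-fold relation can be checked with a one-line function; a minimal sketch with assumed example values:

```python
def max_fold(n_channels, source_spacing_in_group_intervals):
    """Maximum CMP fold: F = NC / (2 * S)."""
    return n_channels / (2.0 * source_spacing_in_group_intervals)

# 240 channels with a source every 2 group intervals gives 60-fold coverage:
fold = max_fold(240, 2)   # 60.0
```

The factor of two comes from the midpoint geometry above: subsurface sampling is half the surface interval, so each source advance of S group intervals shifts the CMP coverage by only S half-intervals.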

considerations in the survey design. Sometimes a small shift in the line location can avoid a troublesome obstacle. Some of the basic concepts generally accepted for line locations/orientations are:
• The lines should, when possible, be perpendicular to fault planes, since definition of the fault plane is best on the seismogram when the lines are perpendicular to the plane. Dip lines, for instance, are favored over strike lines.
• Line ties are important to interpretation. When there is existing seismic data nearby, new lines are planned in such a manner that they can be tied with the existing data. One very helpful tie is to a well. The closer the line can approach the well, the more useful the tie of the seismic data to the well log.
• When there is no conflict with other needs, lines should be planned to minimize elevation and terrain problems.
Figure 7(a) shows the surface geometry and the subsurface nature and behavior of a 2D layout.

Chapter 6 3D Survey Design Basics

6.1 Why 3D Seismic Survey? Sub-surface geological features of interest in hydrocarbon exploration are 3-dimensional in nature. A 2D seismic section is a cross-section of a 3D seismic response. Despite the fact that a 2D seismic section contains signal from all directions, including out-of-plane of the profile, 2D migration normally assumes that all the signal comes from the plane of the profile itself. Although out-of-plane reflections (side-sweeps) are often recognizable by the experienced seismic interpreter, the out-of-plane signal sometimes causes 2D migrated sections to mistie. These misties are due to inadequate imaging of the subsurface resulting from the use of 2D rather than 3D migration. On the other hand, 3D migration of 3D data provides an adequate and detailed 3D image of the subsurface, leading to a more reliable interpretation. Also, 3D data permit reservoir characterization when integrated with well logs, core and other petrophysical and production data. The integrity of any 3D data set leans heavily upon the suitability of the acquisition geometry.

6.2 Basic Concepts in 3D Surveys: 2D surveys are as linear as the terrain allows. Source and receiver are normally in-line with each other; arrays may be multi-dimensional, but most often are also in the line of survey. The analysis of 2D designs centers on the subsurface coverage in the form of common-depth points (CDPs). For 3D surveys, this is seldom the case. The layout may not be lines but circles, checkerboards and other patterns developed for 3D surveys. Thus the simple parameters that defined the traditional 2D line now must be extended to include more geometry. The receiver line becomes the receiver lines: as many receiver lines are laid out as the equipment for acquisition allows. The source interval of a 2D survey must be extended to include a definition of the source line; for 3D surveys, a source line must be defined, since for most common designs the source line is orthogonal to the receiver lines. For 3D surveys, the CDP becomes two-dimensional and is

termed a “bin”. These bins may be square or rectangular and define the spatial resolution of the data sampling. Subsurface sampling will be, as with the CDP, half the surface size. Deciding the bin size will be the first step in designing a 3D template. The accent of 2D lines is on the fold of coverage and the offset range; in 3D, the azimuth range is added to the offset range as a parameter. The range of azimuths in the bin is also a consideration. If structure is complex, then a good azimuth range becomes more important; where structure is complex, the velocity analysis must include an azimuthal property. Another new factor is the use of computers to do the design. The multiple source and receiver lines, and the difficulty of computing fold coverage, azimuthal distribution, and offset ranges in the bins, make the use of a computer program to aid in 3D design almost a necessity. Indeed, interpretation is usually conducted on workstations.

The Fresnel zone assumes some new characteristics in three dimensions. Essentially, the theoretical point source expands as it propagates in depth, “illuminating” a circular area at vertical incidence. In a seismic context, this is the reflecting surface constructively contributing to the reflection. A good approximation to the radius of the zone is

R = V × sqrt(T × t)

where:
V = average velocity to the event
T = arrival time
t = peak-to-zero crossing time of the wavelet

which shows that the zone increases in radius with depth but decreases for higher frequency wave fronts. It should be noted that when the reflecting point is offset, the circle becomes elliptical. Dip and structure also are factors in the actual response; this angular effect actually reduces the size of the zone along the minor axis of the elliptical response. An important aspect of 3D data and Fresnel zones is the extra dimension of focusing possible with migration. Migration serves to reduce the zone to some minimal size when accurately done and the data fit the assumption. Because of this extra focusing, the fold for a 3D survey may be less than that of a comparable 2D survey.

6.2.1 Preliminary Parameters: There are some parameters that need to be estimated as input when designing the 3D survey. The physics and concepts are somewhat independent of whether the survey is to have two or three dimensions.

6.2.1.1 Offset: The imaging of shallow targets and deep horizons still requires certain offsets of source and receiver. New factors include the fact that the offset may now be measured at an angle, and that the depth is now that of a plane rather than a line. An approximation to the required offset for a given horizon is very simple and is used often when surveys are designed in the field: offset = depth of the horizon.

6.2.1.2 Fold: The fold required for noise suppression is a function of the S/N conditions. Field tests or existing 2D seismic data can yield an estimate of the needed fold for the 3D survey. This translates in 3D to the number of traces in a bin. Because of the extra focusing by migration and the flexibility of binning, fold can be less than that required in 2D surveys.

6.2.1.3 Frequency: The temporal frequency required is not much different from that of 2D surveys. The general rule is that the resolution of a thin bed requires it to be sampled twice within a quarter wavelength of the highest frequency. The rules for the resolution of a layer of given thickness are best determined by modeling. As a field approximation, the maximum frequency expected can be estimated from T, the two-way time of the horizon.

6.2.1.4 Objectives of the Survey: The most important information is the definition of the objectives of the survey. Although this seems a rather obvious comment, many times the objectives of the survey, except for the aerial extent and approximate spatial sampling, are not part of the input to the design. Requirements of good fold on a shallow reference layer or a deep reflection should be clearly stated.

6.1.5 Migration Aperture: When the beds are dipping, the extent of the survey must be increased by

    D = Z tan θ

where Z is the depth and θ is the dip, so that dipping events are properly recorded and migrated.

6.1.6 Seismic Data Input: The most useful direct input is existing seismic data. The seismic sections give information about many of the design parameters, such as noise, weathering problems, quality of reflections at depth, and general structure. Field records and final stacks should be checked for environmental and source-generated noise conditions; there are areas where neither source nor environmental noise is a problem, which greatly simplifies the survey design and makes the whole project less expensive. The type of source, the power used in earlier surveys, and the frequency content of shallow data are some of the key factors in deciding the source power to be used. Existing array designs should be studied for possible use in the 3D survey. If extensive static corrections made during processing indicate problems in the near surface, this should be noted in the survey design; in many cases the design of the survey can reduce processing problems.

6.3 3D Survey Design Sequence: There are many ways to begin and complete a survey design. The specific sequence of steps that follows is a general guideline; some design templates will dictate a different sequence of the parameters. A summary of the proposed sequence for developing a design is:
• Determine the subsurface bin size. Twice the chosen bin size gives the source and receiver station spacings.
• Compute the number of source stations per square kilometer required to achieve the desired fold with the available equipment. The number of stations per square kilometer allows computation of the source line spacing.
• Compute the receiver line spacing, constrained by the required offset ranges.
• Find the number of receiver lines allowed by the field equipment. The result is the template.
• Decide the in-line and cross-line roll-alongs.
• Allow for obstacles and run analyses of the offset distributions and azimuthal properties of the bins.
• Estimate the time and costs of the script and iterate until attributes, costs, and time are satisfied.
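The migration-aperture rule D = Z tan θ can be sketched directly; the example numbers below are illustrative, not taken from this report.

```python
import math

def migration_aperture(depth_m, dip_deg):
    """Extra surface extent D = Z * tan(dip) needed to image a dipping event."""
    return depth_m * math.tan(math.radians(dip_deg))

# Illustrative: a 15-degree dip at 3000 m depth needs roughly 800 m of
# additional aperture on the relevant side of the survey
extra = migration_aperture(3000.0, 15.0)
```

For flat beds (dip = 0) no extension is needed, which the formula reproduces.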

6.3.1 Bin Size: For 3D data the bin is the basic building block for the rest of the survey. The traces whose subsurface reflection points fall within a bin are treated as a CDP gather and are corrected and summed to represent that bin position by a point. A bin can be any size, but rectangles and squares are the most popular. Bin size depends on target size, the spatial resolution needed, and economics, and the basic sampling theorem applies to the bin. For dipping events the bin size b must satisfy

    b ≤ Vmin / (4 Fm sin θ)

where Fm is the maximum frequency expected, θ is the maximum dip in degrees, and Vmin is the minimum velocity.

6.3.2 Source Line Spacing: The bin size will, however, allow more design calculations if the fold and the number of channels on the equipment are known. With F the desired fold, R the number of channels, and B the subsurface bin size (in meters), the number of shots per square kilometer is

    NS = (F x 10^6) / (R x B^2)

Determining NS allows the computation of the next important parameter, the source line spacing.
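These relations chain into a quick field calculation. The sketch below assumes the commonly used forms b ≤ Vmin/(4·Fm·sin θ), NS = F·10^6/(R·B²), and a source line spacing obtained from NS together with the along-line source interval; all numeric inputs are illustrative, not survey values from this report.

```python
import math

def max_bin_size(v_min, f_max, dip_deg):
    """Largest alias-free bin (m): b <= Vmin / (4 * Fmax * sin(dip))."""
    return v_min / (4.0 * f_max * math.sin(math.radians(dip_deg)))

def shots_per_sq_km(fold, channels, bin_size_m):
    """NS from fold = NS * R * B^2 / 1e6 (bin area in m^2, 1e6 m^2 per km^2)."""
    return fold * 1.0e6 / (channels * bin_size_m ** 2)

def source_line_spacing_m(ns, source_interval_m):
    """SLS from NS = (1000 / SI) * (1000 / SLS), i.e. shot density on a grid."""
    return 1.0e6 / (ns * source_interval_m)

# Illustrative: 36-fold, 1008 channels, 20 m bins, 40 m source interval
ns = shots_per_sq_km(36, 1008, 20.0)    # about 89 shots per square km
sls = source_line_spacing_m(ns, 40.0)   # 280 m between source lines
```

The chain makes the dependencies explicit: a smaller bin or higher fold raises the shot density, which in turn tightens the source line spacing.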

6.3.3 Receiver Line Spacing: The new information required is the minimum offset and the offset ranges needed. The minimum offset has been previously established with preliminary calculations and modeling; the controlling parameter is the largest minimum offset within a bin. The receiver line spacing a follows from

    a = (c^2 - b^2)^1/2

where c is the largest minimum offset and b is the source line spacing. A smaller near offset would require a smaller receiver line spacing and be more expensive. As with the source line spacing, the receiver line spacing obtained from this calculation may be reduced.

6.3.4 Number and Length of the Receiver Lines in the Template: The problem is to determine the number of receiver lines possible in the template; the target parameters are the number and the length of the receiver lines. The number of lines is constrained by the required maximum offset, which sets the length of the lines. The maximum offset found in the preliminary 2D calculations or 3D modeling is a function of the deepest horizon to be imaged: the approximation is that the maximum offset needs to be at least as long as the depth of the deepest reflector to be imaged, and the field estimate is that it should be a little greater than the depth of the deep horizon. The second constraint is the number of channels available on the equipment.

6.3.5 Determining the Template Movement: Usually the field crew prefers to roll along the direction of the receiver lines. The increment is at the source line spacing, and the source line shift is an adjustable variable. At the end of the coverage in the in-line direction, the next swath is shot in the same manner, incremented in the source-line direction, and this continues until the coverage is completed.

6.3.6 Estimation of Nominal Fold: Stacking fold is the number of field traces that contribute to one stack trace. Fold controls the signal-to-noise ratio and should be decided by looking at previous 2D and 3D surveys in the area. The approximations below are adequate for design purposes; exact formulas include dip.
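The receiver-line-spacing relation is simply a Pythagorean decomposition of the largest minimum offset tolerated in a bin. A minimal sketch with illustrative numbers:

```python
import math

def receiver_line_spacing(largest_min_offset_m, source_line_spacing_m):
    """a = sqrt(c^2 - b^2): receiver line spacing a from the largest minimum
    offset c allowed in any bin and the source line spacing b."""
    c, b = largest_min_offset_m, source_line_spacing_m
    if c <= b:
        raise ValueError("largest minimum offset must exceed source line spacing")
    return math.sqrt(c * c - b * b)

# Illustrative: c = 500 m, b = 280 m gives a receiver line spacing near 414 m
a = receiver_line_spacing(500.0, 280.0)
```

The guard clause reflects the geometry: the diagonal of the line grid (the largest minimum offset) can never be shorter than either of its sides.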

6.3.6.1 In-line Fold: For an orthogonal straight-line survey, in-line fold is defined similarly to the fold on 2D data:

    In-line fold = (no. of receivers x station interval) / (2 x source interval along the receiver line)

or, equivalently,

    In-line fold = (number of receivers x receiver interval) / (2 x source line interval)

6.3.6.2 Cross-line Fold: Similar to the calculation of in-line fold, the cross-line fold is:

    Cross-line fold = source line length / (2 x receiver line interval)

6.3.6.3 Total Fold: The total 3D nominal fold is the product of the in-line fold and the cross-line fold:

    Total nominal fold = (in-line fold) x (cross-line fold)

6.4 Land 3D Layouts: Numerous layout strategies have been developed for land 3D surveys, and the designer has to decide which aspects of a 3D design are absolutely necessary and which can be compromised. One has to establish which features are important in the area of the survey in order to select the best design option.

6.4.1 Full Fold 3D: A full fold 3D survey is one where source points and receiver stations are distributed on an even two-dimensional grid with station spacings equal to the line spacings; the two grids are offset by one bin size. A full fold 3D survey has outstanding offset and azimuth distributions as long as one can afford to record with a large number of channels. All other 3D designs are basically subsets of such full-fold surveys.

6.4.2 Swath: The swath acquisition method was used in the earliest 3D designs. In this geometry, source and receiver lines are parallel and usually coincident. While source points are taken on one line, receivers record not only along the source line but also along neighboring parallel receiver lines, creating swath lines halfway between pairs of source and receiver lines.
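The three fold formulas above translate directly into code. The template numbers below are hypothetical, chosen only so that the product reproduces a 6 x 6 = 36 nominal fold:

```python
def inline_fold(n_receivers_per_line, receiver_interval_m, source_line_interval_m):
    """In-line fold = (receivers x receiver interval) / (2 x source line interval)."""
    return (n_receivers_per_line * receiver_interval_m) / (2.0 * source_line_interval_m)

def crossline_fold(source_line_length_m, receiver_line_interval_m):
    """Cross-line fold = source line length / (2 x receiver line interval)."""
    return source_line_length_m / (2.0 * receiver_line_interval_m)

def total_fold(inline, crossline):
    """Total nominal fold is the product of the in-line and cross-line folds."""
    return inline * crossline

# Hypothetical template: 168 receivers per line at 40 m, source lines 560 m
# apart, 3360 m source lines, receiver lines 280 m apart
il = inline_fold(168, 40.0, 560.0)   # 6.0
xl = crossline_fold(3360.0, 280.0)   # 6.0
nominal = total_fold(il, xl)         # 36.0
```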

The operational advantages of the swath method are attractive, and keeping track of station numbering is straightforward, but they are achieved at the cost of a poor azimuth mix and poor cross-line sampling: the azimuth mix is very narrow and depends on the number of live receiver lines in the recording patch and on the line spacing, and many bins are left empty. This inadequate sampling in the cross-line direction makes the design a "poor man's 3D". Parallel swaths are sometimes considered on land when severe surface restrictions exist or when costs have to be minimized.

6.4.3 Orthogonal: Generally, source and receiver lines are laid out orthogonal to each other. Usually all the source points between adjacent receiver lines are recorded; then the receiver patch is rolled over one line and the process is repeated. In an orthogonal design, the active receiver lines form a rectangular patch surrounding each source point location, creating a series of cross spreads that overlap each other; because the receivers cover a large area, this method is sometimes referred to as the patch method. Most companies prefer to have the source points at the half-integer positions. This geometry is particularly easy for the survey and recording crews: it is easy to lay out in the field, can accommodate extra equipment and roll-along operation, and allows more surface area to be acquired before the receiver stations have to be moved. The azimuth distribution for the orthogonal method is uniform as long as a wide recording patch is used, and the offset distribution in all occupied bin lines is excellent. Figure 8(a) shows a 3D layout and its subsurface nature, while figure 8(b) shows the 3D cable layout used by GP "Y".

Chapter 7 Reflection Field Layouts

7.1 Split-Dip and Common-Midpoint Recording: Virtually all routine seismic work consists of continuous coverage (profiling): the cables and source points are arranged so that there are no gaps in the data other than those due to the fact that the geophone groups are spaced at intervals rather than continuously. Single coverage implies that each reflecting point is sampled only once, in contrast to common-midpoint, or redundant, coverage, where each reflecting point is sampled more than once. Areal or cross coverage indicates that the dip components perpendicular to the seismic line have been measured as well as the dip components along the line.

7.2 Spread Types: By spread we mean the relative locations of the source point and the centers of the geophone groups used to record the energy from the source. In split-dip shooting the source point is at the center of a line of regularly spaced geophone groups. The groups nearest the source often yield noisy traces (because of ground roll, truck noise with a surface source, or gases escaping from the shot hole and the ejection of tamping material), so these groups are often not used, which creates a sourcepoint (shotpoint) gap; hence the source may be moved 15 to 50 m along the line from the nearest active geophone group. Often the source is at the end of the spread of active geophone groups to produce an end-on spread, and in areas of exceptionally heavy ground roll the source point is offset by an appreciable distance along the line to produce an in-line offset spread. Alternatively, the sourcepoint may be offset in the direction normal to the cable, either off one end of the active part to produce a broadside-L spread or opposite the center to give a broadside-T spread. End-on and in-line offset spreads often employ sources off each end to give continuous coverage and two records for each spread. The in-line and broadside offsets permit recording reflection energy before the ground-roll energy arrives at the spread. Cross spreads, which consist of two lines of geophone groups roughly at right angles to each other, are used to record 3D dip information. Each of these methods can employ various relationships between sources and geophone groups.

7.3 Arrays: The term array refers either to the pattern of geophones that feeds a single channel or to a distribution of shotholes or surface energy sources that are fired simultaneously; it also includes the different locations of sources for which the results are combined by vertical stacking. Arrays provide a means of discriminating between waves arriving from different directions. A wave approaching the surface in the vertical direction will affect each geophone of an array simultaneously, so that the outputs add constructively, whereas a wave traveling horizontally will affect the various geophones at different times, so that there is a certain degree of destructive interference. Similarly, waves traveling vertically downward from a source array add constructively, whereas waves traveling horizontally away from the source array arrive at a geophone with different phases and are partially cancelled. Arrays are linear when the elements are spread along the seismic line, and areal when the group is distributed over an area; the two popular types of array design are thus the linear array and the areal array.

7.4 Resolution

7.4.1 Vertical Resolution: Resolution refers to the minimum separation between two features such that we can tell that there are two separate features. The Rayleigh criterion of resolution states that two events can be resolved if their separation is a half cycle; since events are recorded in terms of two-way time, the real separation of the features need only be a quarter cycle. Thus the resolvable limit is wavelength/4. If seismic wavelets were spikes, resolution would not be a problem.

7.4.2 Horizontal Resolution: Horizontal resolution depends on the radius of the first "Fresnel zone". A Fresnel zone is that portion of a reflector which sends back energy to the receiver within a half-cycle delay, so that it produces constructive interference. For a point source, the effective radius of the first Fresnel zone is

    R = (V/2) sqrt(t/f)

where V is the average velocity to the reflector, t is the two-way time, and f is the frequency. (The effective radius is half of the actual radius.) The size of the zone thus depends on frequency, two-way time and velocity: the higher the frequency, the smaller the zone.
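The quarter-wavelength resolvable limit follows directly from velocity and dominant frequency. A minimal sketch with illustrative values:

```python
def resolvable_limit_m(velocity_ms, dominant_freq_hz):
    """Rayleigh vertical resolution limit: wavelength / 4 = V / (4 * f)."""
    return velocity_ms / (4.0 * dominant_freq_hz)

# Illustrative: 2500 m/s at a 40 Hz dominant frequency -> 15.625 m
limit = resolvable_limit_m(2500.0, 40.0)
```

Doubling the frequency halves the resolvable limit, which is why preserving high frequencies in acquisition matters so much for thin-bed work.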

Seismic Data Acquisition

Chapter 8 Reflection Field Method for Land Survey

8.1 The Seismic Field Party: The land seismic data acquisition team is divided into the following groups:

Survey Crew
• For fixing the control points for the line based on the GPS points given beforehand.
• Ranging/filling team, for putting in the pickets at the specified intervals along the line based on the control points on the line.
• Leveling team, for giving the elevations at the shot point and receiver point locations.

Shot Hole Drilling Crew
• For drilling the holes up to the specified depth for placing the charge for blasting.

Uphole Survey Crew
• For measuring the velocity and thickness of the weathered layer (low velocity layer) and the velocity of the sub-weathered layer (in crude terms, for depth optimization).

Recording Unit
• Shooting crew: for filling the drilled holes with the charge of specified quantity and detonating it.
• Jug hustlers (the cable-laying crew): for laying the cable and planting the geophones at the specified pickets, and for watching them all through the recording time for further corrections.
• Recording crew: for recording the seismic signals received by the geophones after the charge is blasted.

8.2 The Program

8.2.1 The Program: Usually the seismic crew receives the program in the form of lines on a map that indicate where data are to be acquired. Before beginning a survey, the following questions should be asked.

"Is it possible that the proposed lines will provide the required information?" Data migration may require that lines be located elsewhere than directly on top of features in order to measure critical aspects of a structure. Where the dip is considerable, merely running a seismic line to a wellhead may not extend sufficiently beyond faults and other features to establish the existence of such placements. Lines may cross features such as faults so obliquely that their evidence is not readily interpretable. Lack of cross control may result in features located below the seismic line being confused with features to the side of the line. The structures being sought may be beyond seismic resolving power. Crustal areas may be so extensively faulted that lines across them are nondefinitive. Near-surface variations may be so large that the data are difficult to interpret, whereas moving the seismic line a short distance may improve data quality. Obstructions along a proposed line may increase difficulties unnecessarily, whereas moving the line slightly may achieve the same objectives at reduced cost.

Objective of the Survey: The objective of the surveys done by GP 'X', GP 'Y' and GP 'Z' is to map strati-structural features within the specified formation in an area of the Krishna-Godavari Basin of Andhra Pradesh, which lies within the lower to upper Cretaceous section. The seismic equivalents of these geological objectives are as under:

For GP 'X'
Area      Depth (m)      Two-Way Travel Time (ms)   Dip
Area 1    1500 to 4300   1250 to 3000               10 to 15
Area 2    1700 to 4200   1400 to 2900               10 to 15

For GP 'Y'
Area      Depth (m)      Two-Way Travel Time (ms)   Average Velocity (m/sec)   Dip
Area 1    1800 to 3600   1500 to 2500
Area 2    1800 to 3400   1500 to 2500

For GP 'Z'
Area      Depth (m)      Two-Way Travel Time (ms)   Average Velocity (m/sec)   Dip
Area 1    1800 to 3400   1500 to 2500               2400 to 2720               100 to 120

Reasons for the Survey: Of the wells drilled in the area, some have proved the presence of gaseous hydrocarbons from the formation and some have been dry. The area is thereby considered important for exploration of these targets.

Geology of the Area: The Krishna-Godavari basin has been subdivided into three sub-basins: the Krishna Sub-basin, the West Godavari Sub-basin and the East Godavari Sub-basin. The area under investigation by GP 'X' and GP 'Y' lies in the West Godavari Sub-basin, while that of GP 'Z' lies in the East Godavari Sub-basin. Figure 8(c) shows the generalized stratigraphy of the KG Basin.

8.2.2 Permitting: Once the seismic program has been decided on, it is usually necessary to secure permission to enter the land to be traversed. Permission to enter may involve a payment, often a fixed sum per source location, as compensation in advance for "damages that may be incurred". Even where the surface owners do not have the right to prevent entry, it is advantageous to explain the nature of the impending operations. Of course, a seismic crew is responsible for damages resulting from their actions whether or not permission is required to carry out the survey.

8.2.3 Layout of Line: The survey crew lays out the lines to be shot, usually by using an Electronic Total Station (refer Appendix), a Compass Theodolite, and transit-and-chain surveying, to determine the positions and elevations of both the source points and the centers of the geophone groups. Usually the survey crew is given a few GPS stations beforehand in the area of operation. The survey crew divides itself into three main groups. The first group fixes the control points (using the Electronic Total Station) which control the direction of the source line or the receiver line; usually these control points are given at an interval of about 1 km along the line, on either side of the line.
The second group does the ranging and filling (using the Compass Theodolite) along the line at the specified intervals, placing the pickets (made of flat bamboo sticks with the picket number marked on

them) at those stations. The third group does the leveling, i.e., gives the elevation value at each picket. Thus the survey crew lays out the grid of source lines and receiver lines on the ground with the specified picket intervals, receiver line intervals and source line intervals; in effect, they project the details of the given map onto the ground very precisely.

8.2.4 Shothole Drilling: The next team to start its activities is the drilling crew (when explosives are used as the energy source). Depending on the number and depth of holes required and the ease of drilling, a seismic crew deploys the drilling crews. Whenever conditions permit, the drills are truck-mounted; water trucks are often required to supply the drills with water for drilling. In areas of rough terrain, the drills may be mounted on tractors, or portable drilling equipment may be used. Usually the drilling crew places the explosive in the holes before leaving the site.

Seismic surveying is divided into two main, interlinked classes: Experimental Surveys and Regular/Production Surveys.

8.3 2D Survey Parameters (Before the Experimental Survey) (GP 'X')

Instrument                 : 408 UL
Source Type                : Dynamite
Group Interval             : 20 m
Field Season               : 2004-05
Type of Shooting           : Asymmetrical spread (216 + 40)
Channel/Foldage            : 256/64
Spread Length              : 4300 + NTO
Shot Interval              : 40 m
No. of Geophones per group : 12
Geophone Pattern           : Linear
Shot Hole Pattern          : Single
Record Length              : 6 s
Sample Rate                : 2 ms

Gain Mode                  : 24 bit
K – Gain (dB)              : 0, 12
Low Cut Filter (Hz/dB)     : Out
High Cut Filter (Hz/dB)    : 200/370
Notch (50 Hz)              : NA


3D Survey Parameters (Before the Experimental Survey) (GP 'Y' and GP 'Z')

Parameters                      GP 'Y'                        GP 'Z'
Instrument                      408UL                         SN388
Source Type                     Dynamite                      -
Group Interval                  40 m                          40 m
Field Season                    2004-05                       2004-05
Type of Shooting                Asymmetric Split Spread       Asymmetric Split Spread
Channel/Foldage                 1008 (168 per line)/6 x 6     1008 (168 per line)/6 x 6
Spread Length (m)               6680 (each line)              6680 (each line)
Shot Interval (m)               40                            40
No. of Geophones per group      12                            12
Geophone Pattern                Areal                         Areal
Shot Hole Pattern               Orthogonal – Single           Orthogonal – Single
Record Length (sec.)            6                             5
Sample Rate (msec.)             2                             2
K – Gain (dB)                   0                             -
Low Cut Filter (Hz/dB)          Out                           Out
High Cut Filter (Hz)            200                           125
Notch (50 Hz)                   Out                           Out
Receiver Line Interval (m)      280                           280
Source Line Interval (m)        -                             -
Bin size (m x m)                20 x 20                       20 x 20

8.5 Experimental Survey

8.5.1 Uphole Survey (Depth Optimization): Sheriff defines an uphole survey as follows:
• Successive sources at varying depths in a borehole, in order to determine the velocities of the near-surface formations, the weathering thickness, and (sometimes) the variations of record quality with source depth.
• Sometimes a string of geophones is placed in a hole of the order of 200 feet deep to measure the vertical travel times from a nearby shallow source.

The objective of an uphole survey is to estimate the thicknesses and times, and hence the velocities, of the near-surface layers, and specifically their lithologies. To obtain accurate time estimates, the source should ideally be a short time-duration pulse, and the source and receiver must be as broadband as possible, with the data having a good signal-to-noise ratio; this implies that the recording filters must be left open whenever possible, apart from the anti-alias filters used for digital recording. No unaccounted delays should occur in the recording system: checks must be made on the whole timing system, from the time break through to the display, to ensure that any delays are understood and accounted for in the interpretation.

8.5.1.1 Data Acquisition Method: The locations of the uphole surveys are decided based on line intersections, at regular spacing along the lines; in an anomalous area, any necessary paperwork must be completed prior to drilling. The depth of the hole to be drilled depends on the area and on the problem to be solved. Unless there are unusual problems in the area, it is likely that a depth of 50-100 m will be adequate, although in extreme cases uphole depths have exceeded 500 m. The type of drill used must be appropriate for the proposed depths and the type of drilling in the area. During drilling, it is important that information be obtained about the penetrated geologic formations; normally this is done from cuttings taken at various depths in the borehole, along with comments about hard or easy drilling or a loss of circulation at a specific depth.

8.5.1.2 Source in Borehole and Receivers at the Surface: The basic field setup is as shown in figure 9. A succession of charges detonated at different depths is recorded by one or more receivers at the surface, located a few meters away from the hole. The two basic approaches for conducting uphole surveys are: (i) the source in the borehole and the receivers on the surface, and (ii) the source at the surface and the receivers in the borehole. The first is the generally preferred method, and dynamite is used by the production crew. Charges can be loaded and detonated independently, or a wiring harness can be used to load many shots at one time. Regardless of which method is used, the deepest shot must be detonated first. This mode of operation can also be used in transition-zone or shallow-water survey areas where it is practical and safe to drill and load charges into the borehole.

8.5.1.3 Source: If dynamite is used, the size of the charge depends on the near-surface geology and the depth of the shot. The charge size should be kept as small as possible while still allowing the signal recorded at the surface to have a sufficient signal-to-noise ratio; Woods and Patterson showed that the times are influenced by the charge size, with larger sizes leading to anomalous times. As a guide, caps (detonators) are normally sufficient to at least 20 m depth and primers to at least 50 m, although tests must be conducted in any new area. If detonators (caps) are used, their delay must either be very small or be estimated for each shot so that the detonation time is known. For low-power surface sources, and possibly in the future for nondestructive sources in the borehole, the recording equipment should have the capability of stacking the data to enhance the signal-to-noise ratio. Many systems now use a magnetic storage device, which allows several displays to be made at different gains. Picks should normally be estimated to an accuracy of 0.5 ms, meaning that the recording speed must be fast enough to allow the picks to be interpreted with this precision; in a high-resolution survey with a small depth increment between observations, the accuracy should be better than this.

The wiring harness is composed of many pairs of wires, each of which is used for one of the charges. The charges are attached to the harness, with consecutive charges a preset distance (the shot interval) apart, and a weight is attached beneath the deepest charge, normally resting at the base of the borehole. The whole assembly is then carefully loaded into the borehole to the correct range of depths.

8.5.1.4 Receiver: A number of receivers are positioned close to the top of the borehole; a normal minimum is four, located in a cross arrangement to record data from four azimuths. Each receiver should be located several meters away from the top of the borehole. If a receiver is too close to the borehole, the recording will be contaminated by arrivals through the drilling fluid and the invaded zone, where the drilling fluid has entered the rock formation close to the borehole. In addition, the drilling process disturbs the ground near the borehole, which can delay the arrival of an upcoming wavefield by as much as several milliseconds. The type of geophone used should have good low- and high-frequency response to obtain the desired broadband recording; a low-frequency geophone, with a natural frequency of less than 10 Hz, is generally required.

8.5.1.5 Sample Interval: The near-surface detail required is related to both the objectives of the survey and the complexity of the near surface. Areas vary from those with a complex near surface, where the targets of the main survey have limited area and closure, to those in which the near surface changes slowly along the line and the targets have appreciable time relief, or in which the exact attitude of the target formations is not critical. With respect to uphole survey sampling requirements, three aspects need to be considered:
• the sampling over the area and along any one line,
• the depth sampling of any one survey, and
• the digital sample rate for surveys that are recorded digitally.
The other technical factor that impacts the spacing of uphole surveys is the method used to interpolate the near-surface layers between the uphole survey locations to define a near-surface model for the computation of datum static corrections.

The spacing of uphole surveys depends on several factors and on the problems to be solved, and is often set on an irregular basis; in critical areas, an uphole survey may be needed as often as every spread-length in extreme cases. The near-surface formations change both vertically and horizontally away from the borehole, so that having precise measurements at one location will be of little practical value in interpolating a value midway between two uphole locations. It is generally desirable to locate uphole surveys at line intersections, so that the information can be used on the two or more intersecting lines, and to sample different near-surface lithologies. Overall, a systems approach should be used, in which an analysis is made of which components of the near-surface problem are to be solved by the various techniques available, such as upholes, refraction, residual static corrections and interpretation.

The depth sampling must be sufficient to allow the time-depth picks to define each of the geologic formations adequately; each formation requires a minimum of three, and preferably more, picks for a reasonable velocity estimate. A depth sample interval of a few meters (2-3 m) is generally adequate for most areas and allows velocity estimates over intervals of 5-10 m; for detailed shallow high-frequency surveys, an even smaller spacing may be appropriate. Where the data are recorded digitally, the sample interval should be small enough to retain as much high-frequency signal as possible so that a good uphole break is obtained: sample intervals of 1 ms, and in some areas 2 ms, are generally appropriate, 4 ms should be more than adequate when dynamite is used as a source, and for non-explosive sources and detailed high-resolution work a much smaller value, down to 0.5 ms or less, may be required.

8.5.1.6 Parameters in the Uphole Survey: The parameters used by the three parties in conducting the uphole surveys are as under.

8.5.1.6.1 Used by GP 'X' and GP 'Y'
Source Used           : Caps (detonators)
Receiver              : Geophone
Spread Length         : 50 m
No. of Geophones      : 5 (one at each of 5 offset distances)
Offset Distances      : 1st geophone at 1 m from the shotpoint
                        2nd geophone at 3 m from the shotpoint
                        3rd geophone at 5 m from the shotpoint
                        4th geophone at 25 m from the shotpoint
                        5th geophone at 50 m from the shotpoint
Depth of Shot Hole    : 68 m
Shot Interval (depth) : 2 m

8.5.1.6.2 Uphole Parameters Used by GP 'Z'
Source Used           : Caps (detonators)
Receiver              : Geophone
Spread Length         : 50 m
No. of Geophones      : 9
Offset Distances      : four geophones at 1 m from the shotpoint, at different azimuths
                        5th geophone at 3 m from the shotpoint
                        6th geophone at 5 m from the shotpoint
                        7th geophone at 10 m from the shotpoint
                        8th geophone at 25 m from the shotpoint
                        9th geophone at 50 m from the shotpoint
Depth of Shot Hole    : 68 m
Shot Interval (depth) : 2 m

Table (I) shows the description of the uphole used by GP 'X'. Figure 10 shows the field layout used for the uphole survey by GP 'Z'.

8.5.1.7 Interpretation: The main components of uphole survey interpretation are:
• picking the first arrivals from each depth level,
• applying any necessary corrections to these times, and
• plotting the data and estimating the velocities and thicknesses of the various layers identified.
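Once the first-break times have been picked and corrected, the interval velocity between successive depth levels follows from the slope of the time-depth pairs. A minimal sketch; the picks below are invented for illustration and are not from any uphole in this report:

```python
def interval_velocities(depths_m, times_ms):
    """Velocity (m/s) of each interval between successive time-depth picks."""
    vels = []
    for (z1, t1), (z2, t2) in zip(zip(depths_m, times_ms),
                                  zip(depths_m[1:], times_ms[1:])):
        vels.append((z2 - z1) * 1000.0 / (t2 - t1))  # convert ms to s
    return vels

# Invented picks: a slow weathered layer over a faster sub-weathered layer
depths = [4.0, 8.0, 12.0, 16.0]
times = [8.0, 16.0, 18.0, 20.0]            # vertical uphole times in ms
vels = interval_velocities(depths, times)  # [500.0, 2000.0, 2000.0]
```

The jump in interval velocity between the first and second intervals is the kind of break that marks the base of the weathered layer on a time-depth plot.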

8.5.1.7.1 Picking and Timing Data: The picked times should be estimated to the nearest 0.5 ms or better. Prerequisites for picking data at this accuracy are a sufficiently broad signal bandwidth, an adequate signal-to-noise ratio, a fast paper-speed display, and sufficient gain to show a good break on the display. The gain of the display is important; with respect to the picking of the first arrival, Ricker stated, "There is no sudden takeoff of the trace when the disturbance arrives. The motion begins gradually, and if a first kick arrival time is attempted, the time picked will depend upon the over-all magnification of the seismograph." Figure 11 shows a field record as obtained by the uphole survey team of GP 'X' for a source (1 m of detonating cord) at a depth of 60 m into a spread of geophones at offsets of 1 m, 3 m, 5 m, 25 m and 50 m. In most acquisition systems (Smartseis, refer Appendix, in this case), data are recorded on a digital tape or disk.

Uphole times are picked from a peak, trough or zero crossing; these cannot give absolute times, but the interval times can be used for interval-velocity estimates. For this approach to be sufficiently accurate, the waveform must not change from one depth level to the next; in practice, the pulse width typically broadens with the distance traveled. For a conventional uphole survey analysis, picks should be made for all near-offset displays, and traces out to an offset of about 15-25 m should also be picked. When several traces are recorded at the same offset but with different azimuths, variations in the times are often observed. These can be due to near-surface variations close to the receivers, variations in the invaded zone between the source and receiver, the ground coupling of the receivers, or disturbance around the borehole from the drilling process.

8.5.1.7.2 Arrival-Time Corrections: The two major corrections applied to uphole survey data are conversion to absolute time and conversion to vertical time. The correction to absolute time uses information obtained from the time-break test, which is designed to measure minor delays in the total system due to filters and other components of the recording instruments. To investigate these effects, a time-break test must be conducted with the signal shown on the display to obtain absolute time information; any observed delay is measured and then removed from each of the times picked from the display. These times are used not only to measure times to specific depths but also to estimate interval velocities over fairly small depth ranges.

The measured times are from a known depth in the borehole to a specific offset from the top of the borehole. A simple geometric correction is applied routinely to correct from the inclined raypath to a vertical one; all the depths and times then normally refer to a reference system perpendicular to the ground. The relationship between the vertical uphole time and the measured (inclined) uphole time for the general case is given by

T = t (z1 + ∆E - z2) / sqrt[x^2 + (z1 + ∆E - z2)^2]

where t is the measured uphole time corrected for any time delays, T is the vertical uphole time, z1 is the depth of the shot, z2 is the depth of the receiver, x is the offset of the receiver from the top of the borehole, and ∆E is the difference in elevation between the receiver and the top of the shot hole. The parameters are shown in figure 12, which assumes that the borehole is vertical. When ∆E = 0 and z2 = 0, the above equation simplifies to

T = t z1 / sqrt(x^2 + z1^2)

In some cases, the surface elevation at the top of the borehole is different from the receiver (or source) elevation located a few meters away. Corrections for this elevation change and for the offset from the borehole need to be applied to the picked times, to simulate a set of arrival times that would be measured at the top of the borehole. When the drill is on a slope, or in other non-vertical situations, additional corrections are required.
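The vertical-time correction above can be sketched numerically. The function below is a minimal illustration of the reconstructed relation, using the same symbols as the text; the example values (a 40m shot depth and 30m offset) are hypothetical.

```python
import math

def vertical_uphole_time(t, z1, z2, x, dE):
    """Convert a measured (inclined) uphole time t to vertical time T.

    t  : measured uphole time, already corrected for system delays (s)
    z1 : shot depth below the top of the borehole (m)
    z2 : receiver depth (m); zero for a surface geophone
    x  : receiver offset from the top of the borehole (m)
    dE : elevation of the receiver minus elevation of the hole top (m)
    """
    vertical = z1 + dE - z2            # vertical source-receiver separation
    slant = math.hypot(x, vertical)    # straight-ray slant distance
    return t * vertical / slant

# Hypothetical example: shot at 40 m, surface geophone 30 m away
T = vertical_uphole_time(0.050, 40.0, 0.0, 30.0, 0.0)  # -> 0.040 s
```

With ∆E = 0 and z2 = 0 this reduces to the simplified form given above, and a zero-offset geophone (x = 0) returns the measured time unchanged.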

Table I shows a worked example of the application of the second equation. The underlying assumption in the geometric correction of the above equations is that no refraction of rays occurs at any velocity interfaces; it also implies that the dips of the interfaces are small. This is a reasonable approximation when the offset distance of the receiver (or source) from the top of the borehole is only a few meters. In some cases, however, the receivers are at different offsets from the top of the borehole.

8.5.1.7.3 Time-Depth Display: The absolute arrival times, corrected to vertical travel, are plotted at the appropriate depths on a time-depth display or plot. The most commonly used convention is for depths to be plotted vertically and times horizontally. Any information about near-surface geology from the driller or geologist, as well as other relevant information, should be included on the display. This can be used to help define the various interfaces present and to provide an independent check of the depths noted.

The time-depth display is then interpreted and interval velocities are estimated for the layers identified. This is often a subjective procedure, and several different interpretations can often be made from one data set. The geologic information is often useful in deciding where an interface is located; however, not all changes in geology give rise to a change in velocity, and the velocity can change within a geologic unit. A major point to consider is the error associated with each plotted point, and one must also remember that the objective is normally to define the simplest model consistent with the data. Figures 13 (a) and 13 (b) show the t-d plots drawn for the data obtained at uphole A of GP 'X'.

8.5.1.7.4 Spatial Consistency: Each uphole survey is initially interpreted on its own, but consideration must also be given to other uphole surveys in the area. It is then often possible to adjust the velocities for a specific uphole survey to be more consistent with other values along the line or in the area for a specific layer or formation. Any adjustment must be within the error of the survey.

8.5.1.8 Conclusion: From figures 13 (a) and 13 (b) we can thus conclude that the optimum depth to place the charge for uphole A is 36m.
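The interval-velocity estimation on a t-d plot amounts to fitting a straight line to the time-depth points within one layer; its slope is the interval velocity. The sketch below shows this with a least-squares fit on illustrative (not field) values.

```python
import numpy as np

def interval_velocity(depths_m, times_s):
    """Least-squares slope of depth vs. vertical time within one layer.

    The fitted slope (m/s) is the interval velocity; the intercept
    absorbs the time to the top of the layer.
    """
    slope, _intercept = np.polyfit(times_s, depths_m, 1)
    return slope

# Illustrative layer: 2000 m/s below a thin weathering layer
depths = np.array([12.0, 20.0, 28.0, 36.0])
times = 0.020 + depths / 2000.0          # 20 ms delay + linear moveout
v = interval_velocity(depths, times)     # recovers ~2000 m/s
```

Scatter about the fitted line gives a feel for the error associated with each plotted point, which is exactly the consideration raised above.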

8.5.2 Noise Experiment (Determination of Near Trace Offset and Array Length): At the start of the survey, a noise spread is shot to measure the level of the noise, and the geophysicist must decide how to handle the noise problem. One approach is to ignore the noise problem in the field and to assume that the various data processing steps will subsequently remove the high noise levels. However, if the dynamic range between the amplitude of the noise and that of the underlying seismic signal exceeds the dynamic range of the recording system, the signal will not be recorded and the data processing methods have nothing to recover. If the noise is severe enough to hinder the survey objectives, it is best to select survey parameters that attenuate the noise prior to recording in the field. The most common methods of reducing noise in the field are frequency filtering in the recording instruments and wavelength filtering through the use of directional source and receiver arrays.

The wavelength data needed to optimize array parameters may be measured by performing noise-spread tests. For each kind of spread, the offsets between the geophones and the source should extend from zero to the maximum offset that will be used in production recording. The receiver interval used during a noise test must be short enough to avoid spatial aliasing of the short-wavelength noise. There are four methods of noise analysis:
• The normal spread,
• The transposed spread,
• The double-ended spread and
• The expanded spread.

8.5.2.1 Transposed Spread: The spread remains fixed in one location and the shot moves away from the receiver spread one spread length after each shot is fired. It is often called a walkaway noise test because the shot or vibrator literally walks away from the receiver spread during the recording. This method is more popular than the normal spread because it is easier to move the source than the receivers; a problem with this method is that a shot static difference misaligns noise and reflection traces, but even so it is still the most popular type of noise analysis. Of all the above spread types, the transposed spread is the most suitable for land surveys.
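The spatial-aliasing requirement on the noise-test receiver interval can be checked with a one-line wavenumber Nyquist test: a wavelength shorter than two receiver intervals is aliased. The values below are illustrative (the ground-roll figures come from the noise analysis later in this section).

```python
def is_spatially_aliased(velocity_mps, frequency_hz, receiver_interval_m):
    """True when the wavelength v/f is shorter than two receiver
    intervals, i.e. when it violates the wavenumber Nyquist criterion."""
    wavelength = velocity_mps / frequency_hz
    return wavelength < 2.0 * receiver_interval_m

# Ground roll at 198 m/s and 9 Hz (22 m wavelength), 5 m receiver interval:
aliased = is_spatially_aliased(198.0, 9.0, 5.0)   # 22 m > 10 m -> not aliased
```

The same 22 m wavelength would alias if the noise-test receiver interval were stretched beyond 11 m, which is why the test spread uses a much finer interval than production recording.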

8.5.2.2 Instrument parameters used in the noise test:

8.5.2.2.1 (At GP 'X', GP 'Y')
Instrument: CM 408 UL
No. of Channels: 108
Profile Length: 2160m
Group Interval: 20m
Number of Shots: 6
Record Length: 6sec
Sampling Interval: 2ms
Pre-Amplification Gain: 0 dB
Filters: Out

8.5.2.2.2 (At GP 'Z')
Instrument: SN 388
No. of Channels: 108
Profile Length: 2160m
Group Interval: 20m
Number of Shots: 6
Record Length: 5sec
Sampling Interval: 2ms
Pre-Amplification Gain: 12 dB
Filters: Out

8.5.2.3 Noise Analysis: From the field record, a time-distance (t-x) graph is plotted for each noise wave observed in the record. Each wave will align along a straight line, and waves of different apparent velocities will align along different straight lines having different slopes; the slope of each straight line gives the apparent velocity of the corresponding wave. This can also be done with the help of a computer, by simply plotting all the noise records in order of their offset distances. The time period of a wave can be found by measuring the time difference between two consecutive peaks or troughs where the wave shape stands out clearly and with least interference.

The inverse of the time period gives the frequency, and the apparent velocities of signal at different offsets and times can also be measured from the record, as can the amplitudes of signals and noise. Analysis of amplitude spectra for different trace offsets, and of F-K plots in different transform windows, helps in calculating noise wavelengths and amplitudes and also in deciding the near offset. The field monitors, simulated plots and frequency-amplitude spectra are evaluated with the emphasis on the standout of events in the zone of interest.

Figure 14 shows the geometry of the noise test carried out by the field party I visited. The spread length for this test is about 535m, with a geophone interval of 5m; the geophones are placed as bunches (a bunch contains 12 geophones). The shots are placed at an interval of 400m and 8 shots were taken. Figures 15 (a), (b), (c) and (d) show the noise sections obtained using the recording instrument Sercel CM408UL by GP 'X' after recording the data for four shot points, and figure 16 shows the noise section obtained at GP 'Y'. On the sections in figures 15 (a) and (b) we can observe the noise trends marked as A, B, C and D. The various noise characteristics deduced from the record / plot are:

Trends / Events   Velocity (m/sec)   Frequency (Hz)   Wavelength (m)
A                 198                9                22
B                 210                10               21
C                 220                11               20
D                 252                12               21

Based on the maximum wavelength obtained from the noise analysis, the array length is fixed; thus the array length is fixed as 22m.

8.5.3 Fold Back Experiment (Element Spacing Determination): The fold back experiment is conducted to select the suitable element spacing in an array which can effectively suppress the source-generated noise. Four spreads, each with a different array, are shot with constant charge size and depth; different types of geophone arrays are tested depending upon the noise characteristics. Actually, the entire spread length decided for the regular survey is folded into four arms, as shown in figure 17 - thus the name "fold back" experiment.
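The wavelength column of the table above follows directly from wavelength = velocity / frequency, and the array length is taken as the maximum of these values. A quick sketch of that arithmetic:

```python
# Noise trends from the t-x analysis: (apparent velocity m/s, frequency Hz)
trends = {"A": (198.0, 9.0), "B": (210.0, 10.0),
          "C": (220.0, 11.0), "D": (252.0, 12.0)}

# Wavelength of each trend: lambda = v / f
wavelengths = {name: v / f for name, (v, f) in trends.items()}

# Array length is fixed at the longest noise wavelength
array_length = max(wavelengths.values())   # 22.0 m, set by trend A
```

An array at least as long as the longest noise wavelength spans a full cycle of the slowest trend, so summing its elements attenuates that trend.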

As per the fold back experiment conducted by GP 'X', the length of each arm is about 1060m, with 54 channels per arm: the first arm consists of channels 1 to 54, the second arm contains channels from 55 to 108, the third from 109 to 162 and the final limb from 163 to 216. Along the first arm the geophones are placed as bunches at an interval of 20m. The second arm contains the 12 geophones of each string spaced at 1.5m, with the same group interval of 20m. The geophones in the third line are spaced at 1.75m, and the last arm has the 12 geophones of a string spaced at an interval of 2m, again with a group interval of 20m. The distance between the 1st arm and the 2nd arm is 5m, while the distance between the 2nd and 3rd arms is 10m, and the distance between the 3rd and 4th limbs is again 5m. The shot line is placed exactly at the center of the four arms, in the direction perpendicular to the receiver lines. Four shots, with a shot interval of 400m, are taken, with the first shot placed as shown in figure 17.

Figure 18 shows the section obtained by conducting the fold back experiment at GP 'X', and figures 19 (a) and 19 (b) show the sections obtained by conducting the fold back experiment at GP 'Y'; 19 (a) shows the section without the application of any kind of filter, while 19 (b) shows the section after the application of a band pass filter (10 to 80 Hz). As seen from figure 18, the limb with 1.75m element spacing appears to give a better section, with less interference and better signal preservation compared to the other arms; thus the element spacing is fixed as 1.75m. From figures 19 (a) and (b) we can fix the element interval for GP 'Y' as 2m.

8.5.4 Shot Depth and Charge Size Optimization: After optimizing the geophone array, shot depth and charge size optimization is conducted using the normal spread, with
• Constant charge size and varying shot hole depth, and
• Constant shot hole depth and varying charge size.
At the first shot location, three shot holes are drilled with different depths, in the order of the optimum depth suggested by the uphole survey (i.e., if the depth suggested by the uphole survey is 36m, then the three holes are dug with depth variations of 34, 36 and 38m), separated by a distance of 5m. An equal amount of charge is placed in all three holes, and they are recorded one after another. The records are observed, and the depth with which the best response is obtained is fixed as the depth for the regular survey. Similarly, at the second location, shot holes of the same depth (as optimized previously) are drilled, different charge sizes are placed in them, and the shots are recorded.

Again, the charge size with which the best response is seen on the recorded sections is fixed as the charge size for conducting the regular survey. The analysis of the processed outputs is done in a similar manner as for the other experiments. The acquisition parameters optimized on the basis of the above experiments are adopted for production work. A set of 2D acquisition parameters optimized on the basis of the experimental work is given below.

8.6 2D Survey Parameters for production work by GP 'X'
Instrument: CM 408 UL
Field Season: 2004-05
Type of Shooting: Asymmetrical spread (216 + 40)
Channel/Foldage: 256/64
Spread Length: 4500m
Group Interval: 20m
Shot Interval: 40m
No. of Geophones per group: 12
Geophone Pattern: Linear
Shot Hole Pattern: Single
Record Length: 6 secs
Sample Rate: 2ms
Gain Mode: 24 bit
K - Gain (dB): 12
Low Cut Filter (Hz/dB): Out
High Cut Filter (Hz/dB): 200/370
Notch (50 Hz): NA
Shot Hole Depths: 36 + 2m
Charge Size: 2.5Kg
Near Trace Offset: 200m
Element Spacing: 1.75m
Array Length: 18m

8.7 3D Survey Parameters for production work by GP 'Y' and GP 'Z'

Parameters                    GP 'Y'                       GP 'Z'
Instrument:                   CM 408UL                     SN388
Source Type:                  Dynamite                     Dynamite
Group Interval:               40m                          40m
Field Season:                 2004-05                      2004-05
Type of Shooting:             Asymmetric Split Spread      Asymmetric Split Spread
Channel/Foldage:              1008 (168 per line)/6 x 6    1008 (168 per line)/6 x 6
Spread Length:                6680m (each line)            6680m (each line)
Shot Interval:                40m                          40m
No. of Geophones per group:   12                           12
Geophone Pattern:             Areal                        Areal
Shot Hole Pattern:            Orthogonal - Single          Orthogonal - Single
Record Length:                6sec                         5sec
Sample Rate:                  2ms                          2ms
Gain Mode:                    0dB                          0dB
K - Gain dB:                  12dB                         12dB
Low Cut Filter (Hz/dB):       Out                          Out
High Cut Filter (Hz):         200Hz                        125Hz
Notch (50 Hz):                Out                          Out
Receiver Line Interval:       280m                         280m
Source Line Interval:         560m                         560m
Bin size:                     20 x 20                      20 x 20
Migration Aperture:           4500m                        4300m
Charge Size:                  2.5Kg                        5Kg
Shot Hole Depth:              36 + 2m                      36 + 2m
Element Spacing:              1.5m                         2.5m
Array Length:                 20m                          20m

With all the above parameters the regular survey is carried out. Figures 20 (a) and 20 (b) show the first two shot gathers obtained during the regular production work by GP 'X'.

Chapter 9 Seismic Data Processing

9.1 Introduction: The seismic method has been greatly improved in both the areas of data acquisition and processing. Digital recording, along with CMP multifold coverage, was introduced during the early 60's. With the advent of high-end computing systems, modern-day processing has become a lot easier than it used to be; turnaround times have therefore come down, with a lot of processing taking place in-field or onboard. Data acquired from the field are prepared for processing by the field party itself and then sent to the processing centre.

9.2 Why Processing? Processing is required because the data collected from the field are not a true representation of the subsurface, and hence nothing of importance can be inferred from them directly. A field record contains:
• reflections,
• coherent noise, and
• random ambient noise.

9.2.1 Reflections: Reflections are recognized by their hyperbolic travel times. If the reflecting interface is horizontally flat, the reflection hyperbola is symmetric with respect to zero offset. If, on the other hand, it is a dipping interface, then the reflection hyperbola is skewed in the updip direction.

9.2.2 Coherent Noise: Under the coherent noise category there are several wave types.
• Ground roll is recognized by its low frequency, strong amplitude and low group velocity. It is the vertical component of dispersive surface waves, i.e. Rayleigh waves. Typically we try to eliminate ground roll in the field itself by array forming of receivers.

• Guided waves are persistent, especially in shallow marine records in areas with a hard water bottom; guided waves also are found in land records. Because of their prominently linear moveout, in principle they can be suppressed by dip-filtering techniques; one such filtering technique is based on the 2D Fourier transformation of the shot record. These waves are largely attenuated by CMP stacking.
• Side-scattered noise commonly occurs at the water bottom, where there is no flat, smooth topography. It appears on shot records as late arrivals.
• Cable noise is another form of coherent noise; it is linear and low in amplitude and frequency.
• Another form of coherent noise is the air wave, which has a velocity of about 300 m/s. It can be a serious problem when shooting with surface charges, and notch muting is the only way of removing it.
• Power lines also give rise to noisy traces in the form of a mono-frequency wave. The mono-frequency wave may be at 50 or 60 Hz, depending on where the field survey was conducted. Notch filters often are used in the field to suppress such energy.
• Multiples are another type of coherent noise. They are secondary reflections having inter- or intra-bed raypaths, and they propagate in both the sub- and supercritical regions.

9.2.3 Random Noise: Random noise has various sources: poor planting of geophones, wind, transient movements in the vicinity, wave motion in the water (marine) and, finally, the electrical noise of the recording instrument.

The objective of seismic data processing is to convert the information recorded in the field to a form that can be used for geological interpretation. One important aspect of data processing is to uncover genuine reflections by suppressing all unwanted energy (noise of various types). Through processing we enhance the signal-to-noise ratio, remove the seismic impulse from the trace (inverse filtering) and reposition the reflectors to their true locations (NMO, DMO and migration), thereby making the data into a more palatable form.
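The mono-frequency power-line noise mentioned above is the simplest coherent noise to attack numerically. The sketch below is a crude FFT-domain notch (zeroing bins around 50 Hz) on a synthetic trace; it illustrates the idea only, and is not how a field notch filter is implemented.

```python
import numpy as np

def notch_50hz(trace, dt_s, width_hz=2.0):
    """Suppress mono-frequency hum by zeroing the FFT bins within
    +/- width_hz of 50 Hz (an illustrative sketch, not a field tool)."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, dt_s)
    spec[np.abs(freqs - 50.0) <= width_hz] = 0.0
    return np.fft.irfft(spec, n=trace.size)

# Synthetic trace: 30 Hz signal plus 50 Hz hum, 2 ms sampling
t = np.arange(1000) * 0.002
trace = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = notch_50hz(trace, 0.002)
```

A brick-wall notch like this rings in time; real recording systems use recursive notch filters with a controlled bandwidth instead.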

9.3 Seismic Data Processing: Seismic data processing is composed of basically five types of corrections and adjustments:
• Time,
• Amplitude,
• Frequency-phase content,
• Data compression (stacking), and
• Data positioning (migration).
These adjustments increase the signal-to-noise ratio, correct the data for various physical processes that obscure the desired (geologic) information, and reduce the volume of data that the geophysicist must analyze.

The geologic information desired from seismic data is the shape and relative position of the geologic formations of interest. Knowing the shape of the structures at depth allows oil company explorationists to assign probabilities of finding commercially exploitable hydrocarbons in the area surveyed. In areas of good data quality it is possible to produce estimates of the lithology based upon velocity information. From the amplitudes of reflections, it is even possible to make estimates of the pore constituents, since gas accumulations often generate amplitude anomalies.

9.3.1 Time Adjustments: Time adjustments fall into two categories: static and dynamic. Dynamic time corrections (normal move-out) are a function of both time and offset; they convert the times of the reflections into coincidence with those that would have been recorded at zero offset, that is, to what would have been recorded if source and receiver were located at the same point. Static corrections, in contrast, are constant time shifts applied to whole traces. The velocities of seismic waves in the earth can be derived from seismic data or measured in wells, and they are used to convert the known reflection times into estimated reflector depths.
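The dynamic (normal-moveout) correction can be illustrated numerically: the sketch below computes the hyperbolic traveltime of a flat reflector and the offset-dependent shift that must be removed to bring the event to its zero-offset time. The values are illustrative.

```python
import math

def reflection_time(offset_m, t0_s, v_mps):
    """Hyperbolic traveltime of a flat reflector: t(x) = sqrt(t0^2 + (x/v)^2)."""
    return math.sqrt(t0_s**2 + (offset_m / v_mps)**2)

def nmo_shift(offset_m, t0_s, v_mps):
    """Dynamic correction: the time- and offset-dependent shift t(x) - t0
    that moves a reflection to coincidence with the zero-offset time."""
    return reflection_time(offset_m, t0_s, v_mps) - t0_s

# Reflector at t0 = 1.0 s, NMO velocity 2000 m/s:
t_far = reflection_time(1500.0, 1.0, 2000.0)   # 1.25 s at 1500 m offset
dt = nmo_shift(1500.0, 1.0, 2000.0)            # 0.25 s removed at that offset
```

Because the shift varies with both t0 and offset, the trace is stretched rather than bulk-shifted, which is exactly why this correction is called dynamic rather than static.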

9.3.2 Amplitude Adjustments: Amplitude adjustments correct the amplitude decay with time due to spherical divergence and energy dissipation in the earth. There are two broad types of amplitude gain programs: structural amplitude gaining, or automatic gain control (AGC), and relative true amplitude gain correction. The first scales amplitudes to a nearly uniform level and is generally chosen for structural mapping purposes. The second attempts to keep the relative amplitude information, so that the amplitude anomalies associated with facies changes, porosity variations, and gaseous hydrocarbons are preserved.

9.3.3 Frequency-Phase Content: The frequency-phase content of the data is manipulated to enhance signal and attenuate noise. Appropriate band-pass filters (one-channel filtering) can be selected by reference to frequency scans of the data, which aid in determining the frequency content of the signals. De-convolution is the inverse filtering technique used to compress an oscillatory (long) source waveform into as near a spike (unit-impulse function) as possible. Ghosts, seafloor multiples (often seen in marine data) and near-surface reverberations can often be attenuated through de-convolution approaches. Many de-convolution techniques use the autocorrelation of the trace to design an inverse operator that removes undesirable, predictable energy.

9.3.4 Data Compression (Stacking): The data compression technique generally used is the common midpoint (CMP) stack. It sums all offsets of a CMP gather into one trace; forty-eight to 96-fold stacks are common.

9.3.5 Data Positioning (Migration): The data positioning adjustment is migration. Migration moves energy from its CMP position to its proper spatial location. Conventional 2D seismic data initially exist in a 3D space: the three axes are time, offset and a coordinate x along the line of survey. Three-dimensional data consist initially of a 4D data set, the coordinates being time, offset and two horizontal spatial coordinates, x and y. In the presence of dip, the CMP position, which lies on the midpoint axis, is not the true subsurface location of the reflection.
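The noise-suppression benefit of the CMP stack can be shown with synthetic data: summing N traces that share the same signal but carry independent random noise improves the signal-to-noise ratio by roughly sqrt(N). This is a sketch with made-up numbers, assuming the traces have already been moveout-corrected.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * np.linspace(0.0, 4.0, 500))

# 64 NMO-corrected traces of one CMP gather: same signal, independent noise
gather = signal + rng.normal(0.0, 0.5, size=(64, 500))

# The stack: average all offsets of the gather into one trace
stack = gather.mean(axis=0)

noise_single = np.std(gather[0] - signal)   # roughly 0.5
noise_stack = np.std(stack - signal)        # roughly 0.5 / sqrt(64)
```

For 64-fold data the residual noise drops by about a factor of 8, which is why 48- to 96-fold coverage is worth the acquisition effort.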

Migration techniques have been developed for application pre-stack, post-stack, or a combination of both. Migration collapses diffractions to foci, increases the visual spatial resolution, and corrects amplitudes for geometric focusing effects and spatial smearing.

9.4 Objectives of Data Processing: The objectives of data processing may be summarized as follows:
• To enhance the signal to noise ratio (S/N),
• To produce seismic cross sections representative of the geology, and
• To meet the exploration objectives of the client.

9.5 Basic Data Processing Sequence: Since the introduction of digital recording, a routine sequence in seismic data processing has evolved. There are three primary steps in processing seismic data:
1. De-convolution,
2. Stacking, and
3. Migration,
in their usual order of application. De-convolution assumes a stationary, vertically incident, minimum-phase source wavelet and a white reflectivity series that is free of noise. Stacking assumes hyperbolic moveout, while migration is based on a zero-offset (primaries only) wave field assumption. Conventional processing of reflection seismic data yields an earth image represented by a seismic section, usually displayed in time.

All other processing techniques may be considered secondary in that they help improve the effectiveness of the primary processes, and many of the secondary processes are designed to make data compatible with the assumptions of the three primary processes. The secondary processing steps include corrections (statics, geometric), NMO, DMO, velocity analysis, filtering etc. Figure 21 (a) represents the seismic data volume in processing coordinates - midpoint, offset and time. A conventional processing flowchart is shown in figure 21 (b) on the next page.


Chapter 10 Seismic Data Processing Stage I (Pre-Processing)

10.1 Preprocessing: Preprocessing is the first and foremost step in the processing sequence, and it commences with the reception of the field tapes and the observer's log. The field tape contains the seismic data, and the observer's log contains the geographical data (shot/receiver numbers, elevations, latitude, longitude etc).

10.1.1 De-Multiplexing: Field data are recorded in multiplexed mode (time sequential) using a certain type of format. So first, de-multiplexing of the data has to be done, i.e. changing the time-sequential form into a trace-sequential form. Mathematically, de-multiplexing can be envisioned as transposing a big matrix so that the rows of the resulting matrix can be read as seismic traces recorded at different offsets. De-multiplexing is not done on data recorded in SEG-D format.

10.1.2 Reformatting: In this stage the data are converted to a convenient format which is used throughout processing. Data formatting defines how the data are arranged and what information is stored on the magnetic media (tapes or drives), which will usually follow an industry standard convention. There are many standards available for data storage; the format differs with the manufacturer, the type of recording instrument and also with the version of the operating system. The formats generally used for data recording are SEG-D (multiplexed/demultiplexed data) and SEG-B (multiplexed format); hence they are called field formats. Data from the field will not usually be in the format required by the processing centre, and since the processing software cannot operate directly on the above-mentioned formats, the system internally converts its input data into a format which is compatible with it. The output of processing is in SEG-Y format.
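The "matrix transpose" picture of de-multiplexing described above can be made concrete with a small numpy sketch; the channel and sample counts are illustrative.

```python
import numpy as np

# Multiplexed recording: each row holds one time sample across all channels
n_samples, n_channels = 1500, 108
multiplexed = np.arange(n_samples * n_channels).reshape(n_samples, n_channels)

# De-multiplexing = matrix transpose: each row now holds one full trace
traces = multiplexed.T

# Row k of `traces` is the seismic trace recorded by channel k
assert traces.shape == (n_channels, n_samples)
```

On tape the transpose cannot be done in place, which is why de-multiplexing was historically a distinct, I/O-heavy processing step.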

Figures 22 (a) and 22 (b) show two different field records, one in SEG-D format as obtained in the field and the other in SEG-Y format readied for processing.

10.1.3 Re-sampling: Processing can be done at a sample rate different from that of recording (e.g. 2 or 4ms). Usually processing is performed at 4ms, as the processing time and cost are less. If we are looking for an improvement in resolution, or if we want more accuracy in measurements (velocity analysis, static corrections), a sample rate of 2ms or even 1ms can be taken, provided that recording was at this rate. Frequency aliasing effects can be avoided by a high frequency (HF) filter adapted to the new sample rate:

Sample rate 4ms ----- cutoff 125Hz
Sample rate 2ms ----- cutoff 250Hz
Sample rate 1ms ----- cutoff 500Hz

For example, to go from a sample rate of 1ms to 4ms it is necessary to filter out all the information at frequencies greater than 125Hz.

10.1.4 Editing: Editing involves leaving out the auxiliary channels and NTBC traces, and detecting and changing dead or exceptionally noisy traces. Noisy traces, those with static glitches, and those with mono-frequency high amplitude signal levels are deleted; polarity reversals are corrected. Bad data may be replaced with interpolated values, for use in later processes such as de-convolution and velocity analysis. The output after editing usually includes a plot of each file, so that one can see what data need further editing and what type of noise attenuation is required. Figures 23 (a) and (b) clearly show the effect of editing: figure 23 (a) shows the raw record and 23 (b) shows the record after editing, wherein the removal of the occasional noisy traces leaves the signal unmasked.

10.1.5 Geometry Merging (Labeling): No matter how meticulously processing parameters are chosen, a bad quality stack section is often due to an incorrect field geometry set-up. So an important step in preprocessing is to apply the field geometry to the seismic data. The field geometry is obtained from the observer's log and has to be incorporated with the seismic traces; this procedure is called labeling or merging. This was previously done manually; nowadays the work is done by a module of the processing software. Figure 24 shows the process of merging, with the record being the CMP gather and the window with numbers being the index which gives the details of the field parameters using which the data on the record were collected.
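The re-sampling rule above (filter to the new Nyquist, then decimate) can be sketched as follows. The brick-wall FFT low-pass here stands in for a proper anti-alias filter and is for illustration only.

```python
import numpy as np

def resample_trace(trace, dt_old_s, dt_new_s):
    """Resample to a coarser rate after a brick-wall low-pass at the new
    Nyquist frequency (a sketch; production systems use proper AA filters)."""
    factor = int(round(dt_new_s / dt_old_s))
    nyq_new = 0.5 / dt_new_s                      # e.g. 125 Hz for 4 ms
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, dt_old_s)
    spec[freqs > nyq_new] = 0.0                   # remove energy above new Nyquist
    filtered = np.fft.irfft(spec, n=trace.size)
    return filtered[::factor]                     # keep every factor-th sample

# A 30 Hz trace recorded at 1 ms, resampled to 4 ms (cutoff 125 Hz)
trace_1ms = np.sin(2 * np.pi * 30 * np.arange(4000) * 0.001)
trace_4ms = resample_trace(trace_1ms, 0.001, 0.004)
```

A 30 Hz signal is far below the 125 Hz cutoff, so it passes through unharmed; anything above 125 Hz would otherwise fold back into the seismic band after decimation.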

10.1.6 Static Corrections: Sheriff's definition of static corrections, often shortened to statics, is as follows: "Corrections applied to seismic data to compensate for the effects of variations in elevation, weathering thickness, weathering velocity, or reference to a datum." The term 'static' is used to denote a constant time shift applied to whole data traces, as opposed to the variable time shifts applied by NMO corrections, which are dynamic.

Statics are time shifts applied to seismic data to compensate for:
• Variations in elevation on land,
• Variations in velocity/thickness of the near-surface layers,
• Changes in data reference times,
• Tidal effects (marine), and
• Variations in source and receiver depths (marine gun/cable, land source).

The objective is to determine the reflection arrival times which would have been observed if all measurements had been made on a (usually) flat plane with no weathering or low-velocity material present. The elevation corrections (also called datum corrections) may be used to bring all times in a seismic record to a fixed level in the subsurface, the final processing datum (FPD). The FPD could be any arbitrary level (depending on the client requirement) or msl (mean sea level). The elevation needed for the shot/receiver time correction is obtained from the labeling records, and the velocity needed for calculating the time shift is obtained from shot uphole times.

• Uphole-based statics involve the direct measurement of vertical travel-times from a buried source. This is usually the best static correction method where feasible.
• First-break based statics are the most common method of making field (or first-estimate) static corrections; these corrections are based on uphole data, refraction first-breaks, and/or event smoothing.
• Data-smoothing statics methods assume that the patterns of irregularity that most events have in common result from near-surface variations, and hence the static-correction trace shifts should be such as to minimize such irregularities. Most automatic statics-determination programs employ statistical methods to achieve the minimization.
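A minimal sketch of the datum (elevation) static described above: the station is moved from its surface elevation to the FPD along a vertical ray through a replacement-velocity medium. The elevations and velocity below are hypothetical.

```python
def elevation_static_ms(surface_elev_m, datum_elev_m, replacement_vel_mps):
    """Static shift (ms) that moves a station from its surface elevation
    to the final processing datum, assuming vertical raypaths through a
    medium of the given replacement velocity. Negative = times reduced."""
    return -(surface_elev_m - datum_elev_m) / replacement_vel_mps * 1000.0

# Receiver 12 m above a 0 m datum, 1600 m/s replacement velocity:
shift = elevation_static_ms(12.0, 0.0, 1600.0)   # -7.5 ms
```

The same constant shift is applied to the whole trace for that station, which is precisely what distinguishes a static from a dynamic correction.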

10.1.7 Amplitude Recovery (Geometric Spreading Correction): A field record represents a wave field that is generated by a single shot. Conceptually, a single shot is thought of as a point source that generates a spherical wave field:
• In a homogeneous medium, energy density decays proportionately to 1/r^2, where r is the radius of the wave front. Wave amplitude is proportional to the square root of energy density, so it decays as 1/r; that is, the seismic pressure amplitude decreases as the reciprocal of the distance traveled.
• The frequency content of the initial source signal changes in a time-variant manner as it propagates. In particular, high frequencies are absorbed more rapidly than low frequencies, because of the intrinsic attenuation of the rocks.

10.1.7.1 Spherical Divergence: For a spherically spreading wave in a 'lossless' material, the amplitude decays as 1/r. Newman's formula: the factor 1/r that describes the decay of wave amplitudes as a function of the radius of the spherical wave front is valid for a homogeneous earth without attenuation. In practice, velocity usually increases with depth, which causes further divergence of the wave front and a more rapid decay in amplitudes with distance. For a layered earth, the amplitude decay can be described approximately by 1/[v^2(t) · t], where t is the two-way travel time and v(t) is the rms velocity of the primary reflections averaged over the survey area. Therefore the gain function for geometric spreading compensation is defined by

g(t) = [v^2(t) · t] / [v0^2 · t0]

where v0 is the reflection velocity at a specified time t0. Figure 25 shows a graph relating the amplitude decay with time/depth.
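The geometric-spreading gain above can be sketched directly; the reference time, reference velocity and rms velocities below are hypothetical values chosen only to show that deeper (later, faster) samples receive a larger gain.

```python
def spreading_gain(t_s, v_rms_mps, t0_s=0.002, v0_mps=1600.0):
    """Geometric spreading compensation g(t) = (v_rms(t)^2 * t) / (v0^2 * t0),
    normalised so that g(t0) = 1 when v_rms = v0 (reference values assumed)."""
    return (v_rms_mps**2 * t_s) / (v0_mps**2 * t0_s)

# Later samples with higher rms velocity get more gain:
g_shallow = spreading_gain(0.5, 1800.0)
g_deep = spreading_gain(2.0, 2500.0)
```

Because v(t) normally increases with time, the gain grows faster than linearly with t, matching the stronger-than-1/r decay noted above for a velocity-layered earth.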

10. Hence attenuation is lesser for low frequencies and higher for higher frequencies. Absorption can be expressed as a function of the distance traveled by the seismic wave.But in the ‘layer cake’ model used in CMP stacking. yielding This property is used when compensating for amplitude decay. implying it is also time variant. Ax = amplitude at distance x A0 = amplitude at reference point α = attenuation factor (absorption coefficient) The key point here is that amplitude decay due to absorption is exponential with distance.7. the velocity increases between the layers. which represents raw data. Absorption is very much a function of geology.1. this results in a TV2 relationship. Loss due to absorption seems to be nearly constant per cycle.2 Exponential Gain: Absorption is a process where by the energy of a seismic wave is converted to heat while passing through a medium. . Figure 26(b) gives a representation of the spherical divergence correction doing which we can see the clear recovery of the amplitudes at the later times of the section that were not comparable with those in figure 26(a) . figure 26(d) shows the record which is obtained by applying the filter the record 26(c). and in practice it increases with depth within layers. where. The loss is a result of the elastic movement. Again applying this gain correction we can see the amplitudes recovering in figure 26(c) which were not available on figure 26(b).

10.2 Sorting: Seismic data acquisition with multifold coverage is done in shot-receiver (s, g) coordinates. Seismic data processing, on the other hand, conventionally is done in midpoint-offset (y, h) coordinates. The required coordinate transformation is achieved by sorting the data into CMP gathers: based on the field geometry information, each individual trace is assigned to the midpoint between the shot and receiver locations associated with that trace. Those traces with the same midpoint location are grouped together, making up a CMP gather. Albeit incorrectly, the terms common depth point (CDP) and common midpoint (CMP) often are used interchangeably. Figures 39(a) and 39(b) show the superposition of shot-receiver (s, g) and midpoint-offset (y, h) coordinates, and the raypath geometries for various gather types. For most recording geometries, the fold of coverage nf for CMP stacking is given by

nf = ng · Δg / (2 · Δs)

where Δg and Δs are the receiver-group and shot intervals, respectively, and ng is the number of recording channels. By using this relationship, the following rules can be established:
a. The fold does not change when alternating traces in each shot record are dropped.
b. The fold is halved when every other shot record is skipped, whether or not alternating traces in each record are dropped.
10.3 Filtering: Filtering is done to remove unwanted frequencies from the seismic data. Seismic reflection frequencies typically have a range of 12–72 Hz, and frequencies outside this band are attenuated using various filtering techniques.
10.8 Muting: The field data do not always contain only the reflected signal. A record may also contain first arrivals, supercritical reflections, ground-coupled air waves, surface waves (ground roll), etc. These effects have to be removed to improve the data quality. For this purpose muting is done, which involves assigning zero values to traces over a desired interval selected by the processor.
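The fold relationship and the two rules above can be checked numerically. A minimal sketch, with an assumed spread of 240 channels, 25 m group interval and 50 m shot interval (illustrative numbers, not a specific ONGC geometry):

```python
def cmp_fold(num_channels, group_interval, shot_interval):
    """Nominal CMP fold: nf = (ng * group_interval) / (2 * shot_interval)."""
    return (num_channels * group_interval) / (2.0 * shot_interval)

fold = cmp_fold(240, 25.0, 50.0)    # -> 60.0
# Rule (b): skipping every other shot doubles the shot interval and
# halves the fold, whether or not traces are dropped within records.
half = cmp_fold(240, 25.0, 100.0)   # -> 30.0
```

Rule (a) follows because dropping alternate traces halves ng and doubles the group interval at the same time, leaving the product unchanged.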

The following tables give an idea of the various types of noise and the methods used to attenuate them.

Noise Attenuation Techniques
Random noise: band-pass filtering, trace/shot summation, stacking, de-spike, F-X filtering, editing (e.g. trace kill)
Coherent noise: notch filtering, K-filtering, velocity (i.e. F-K) filtering, muting, coherency filtering

Land Data – Additional Types of Noise
Noise/problem | Nature | Solution
Hi-line | Coherent | Notch filter
Ground roll | Coherent | F-K filter, surgical mute
Air wave | Coherent | Hi-cut filter, mute
Correlation noise | Coherent | Filter, stack
Traffic (vehicles, people, animals) | Random | Kill, stack
Falling debris | Random | Kill, stack
Wind noise | Random | Filter, stack

Chapter 11 Seismic Data Processing Stage II (De-convolution)

11.1 Introduction: De-convolution compresses the basic wavelet in the recorded seismogram, attenuates reverberations and short-period multiples, and thus increases temporal resolution and yields a representation of subsurface reflectivity. The process normally is applied before stack; however, it is also common to apply de-convolution to stacked data. De-convolution sometimes does more than just wavelet compression; it can remove a significant part of the multiple energy from the section. The fundamental assumption underlying the de-convolution process (in the usual case of an unknown source wavelet) is that of minimum phase.
Wavelet compression can be done using an inverse filter as a de-convolution operator. An inverse filter, when convolved with the seismic wavelet, converts it to a spike; when applied to a seismogram, the inverse filter should yield the earth's impulse response. An accurate inverse filter design is achieved using the least-squares method. The Wiener filter differs from the inverse filter in that it is optimal in the least-squares sense, and it can convert the seismic wavelet into any desired shape. For example, much like the inverse filter, a Wiener filter can be designed to convert the seismic wavelet into a spike. However, converting the seismic wavelet into a spike is like asking for perfect resolution, and in practice, because of noise in the seismogram and the assumptions made about the seismic wavelet and the recorded seismogram, spiking de-convolution is not always desirable. Finally, the resolution (spikiness) of the output can be controlled by designing a Wiener prediction error filter – the basis for predictive de-convolution; the prediction error filter can be used to remove periodic components – multiples – from the seismogram.

11.2 Convolutional Model: The recorded seismic trace may be modeled as a series of interactions between the source signature (a finite, band-limited wavelet) and the earth. The convolutional model postulates that the wavelet is the superposition of several responses (the source wavelet, earth filter, ghosting, multiples, instruments, etc.) forming a complex pulse, which then convolves with the reflectivity function to give the actual seismogram. In the time domain, a seismic trace x(t) is given by the convolution of the basic seismic wavelet w(t) with the reflectivity series r(t), plus random noise n(t):

x(t) = w(t) * r(t) + n(t) ----------(1)

The basic seismic wavelet w(t) is itself made up of the convolution of the source signature with the propagation effects in the earth and the recording system response:

w(t) = s(t) * e(t) ----------(2)

where s(t) is the waveform component associated with the source location and e(t) represents the earth's (filtering) impulse response. Under the assumption that the noise can be ignored, equation (1) becomes

x(t) = s(t) * e(t) * r(t) ----------(3)

In the frequency domain,

X(f) = S(f) × E(f) × R(f) ----------(4)

where X(f), S(f), E(f) and R(f) represent the amplitude spectra of the corresponding time functions (ignoring the phase for now).

11.3 De-convolution: The objective of de-convolution is to remove the effect of the convolution of the basic wavelet with the reflectivity; in practice, it is to arrive at a better estimate of the reflectivity function. The de-convolution operator is an inverse filter: de-convolution involves finding an inverse of the wavelet which, when convolved with the seismic trace, outputs the reflectivity series. We can remove the effect of the (S(f) × E(f)) term in equation (4) by making it equal to one (or any constant value); the function which has a constant amplitude spectrum over all frequencies is a SPIKE. In theory, then, the seismic wavelet is converted to a spike and we resolve the reflectivity r(t) from the equations given above.

11.4 De-convolution Methods: Generally, de-convolution methods fall into one of the following two categories.
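Equation (1) is easy to demonstrate forwards. The sketch below builds a synthetic trace from a hypothetical three-sample wavelet and a sparse reflectivity series; the wavelet and reflector values are invented for illustration, and the noise term is set to zero as in assumption 4 later in the chapter.

```python
def convolve(a, b):
    """Plain 1-D convolution, output length len(a) + len(b) - 1."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

w = [1.0, -0.5, 0.25]           # basic wavelet w(t) (hypothetical)
r = [0.0, 1.0, 0.0, -0.8, 0.0]  # reflectivity series r(t)
n = [0.0] * 7                   # noise n(t), taken as zero here

# x(t) = w(t) * r(t) + n(t): each reflector leaves a scaled copy of w
x = [xi + ni for xi, ni in zip(convolve(w, r), n)]
```

De-convolution is the inverse problem: recover r from x when w is only known statistically.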

11.4.1 Deterministic De-convolution: De-convolution where part of the seismic system is known; no random elements are involved. For example, where the source wavelet is accurately known we can do source-signature de-convolution. This is done when vibroseis is used as the source.
11.4.2 Statistical De-convolution: Statistical de-convolution is a process where we:
• Have no prior knowledge of the wavelet.
• Make certain assumptions about the data which justify the statistical approach.
• Derive information about the wavelet (either 'source', 'system', or combined wavelets) from the data itself, specifically from the autocorrelation of the data.
• Need not use it in conjunction with deterministic de-convolution.
Assumptions: To perform statistical de-convolution, the algorithm(s) used rely on the following assumptions:
1. The earth is made up of horizontal layers of constant velocity.
2. The source generates a compressional plane wave that impinges on layer boundaries at normal incidence; under such circumstances, no shear waves are generated.
3. The source waveform does not change as it travels in the subsurface – it is stationary (i.e. within the operator design window the shape of the wavelet is consistent; multi-window design/application may be required to get optimum results for particular data sets where frequency content etc. varies greatly with time).
4. The noise component is low enough to be ignored.
5. The source waveform is known.
6. Reflectivity is a random process. This implies that the seismogram has the characteristics of the seismic wavelet, in that their autocorrelations and amplitude spectra are similar.
7. The input wavelet is minimum phase (i.e. it has a minimum-phase inverse; if the data are not minimum phase, a minimum-phase conversion (source de-signature) step may be required before de-convolution).
Assumptions 1, 2 and 3 allow formulating the convolutional model of the 1D seismogram by equation (1). Assumption 4 eliminates the unknown noise term in equation (1) and reduces it to equation (3). Assumption 5 is the basis for deterministic de-convolution – it allows estimation of the earth's reflectivity series directly from the 1D seismogram defined by equation (3).

Assumption 6 is the basis for statistical de-convolution – it allows the autocorrelogram and amplitude spectrum of the normally unknown wavelet in equation (3) to be estimated from the known recorded 1D seismogram. Finally, assumption 7 provides a minimum-phase estimate of the phase spectrum from the amplitude spectrum, which is estimated from the recorded seismogram by way of assumption 6. In practice the result is never perfect, mainly because of:
• limited bandwidth;
• assumptions that are not strictly valid (e.g. noise not zero).
'Spiking' compresses the wavelet (by enhancing frequency content) but will never result in the true reflectivity series being output. Statistical de-convolution attempts to 'spike' the data and/or remove repetitive energy (e.g. multiples), and can be:
• Spiking de-convolution
• Predictive de-convolution (also 'gap' de-convolution)
11.4.2.1 Spiking De-convolution: The process by which the seismic wavelet is compressed to a zero-lag spike is called spiking de-convolution. Once the amplitude and phase spectra of the seismic wavelet are statistically estimated from the recorded seismogram, its least-squares inverse – the spiking de-convolution operator – is computed using optimum Wiener filters. When applied to the seismogram, the filter yields the earth's impulse response. The filters that achieve this goal are the inverse and the least-squares inverse filters; their performance depends not only on filter length, but also on whether the input wavelet is minimum phase. The spiking de-convolution operator is strictly the inverse of the wavelet. The Wiener filter applies to a large class of problems in which any desired output can be considered, not just the zero-lag spike. Five choices for the desired output are:
Type 1: Zero-lag spike
Type 2: Spike at arbitrary lag
Type 3: Time-advanced form of input series
Type 4: Zero-phase wavelet
Type 5: Any desired arbitrary shape

The general form of the normal equations for a Wiener filter of length n is

Σ (i = 0 … n−1)  a_i · r_|j−i|  =  g_j ,    j = 0, 1, 2, …, n−1 ----------(5)

where r_i, a_i and g_i (i = 0, 1, 2, …, n−1) are the autocorrelation lags of the input wavelet, the Wiener filter coefficients, and the cross-correlation lags of the desired output with the input wavelet, respectively. The process with the type 1 desired output is called spiking de-convolution. Cross-correlation of the desired spike, say (1, 0, 0, …, 0), with the input wavelet, say (x0, x1, x2, …, xn−1), yields the series (x0, 0, 0, …, 0). The generalized form of the normal equations then takes the special form with right-hand side g = (x0, 0, 0, …, 0), and this equation is scaled by (1/x0) ----------(6). The least-squares inverse filter has the same form as matrix equation (6); therefore, spiking de-convolution is mathematically identical to least-squares inverse filtering. The autocorrelation matrix on the left side of equation (6) is computed from the input seismogram (assumption 6) in the case of spiking de-convolution (statistical de-convolution), whereas it is computed directly from the known source wavelet in the case of least-squares inverse filtering (deterministic de-convolution).
11.4.2.2 Predictive De-convolution: The type 3 desired output (time-advanced form of the input series) suggests a prediction process: given the input series x(t), we want to predict its value at some future time (t + α), where α is the prediction lag. Predictive de-convolution 'predicts' the repetitive elements within the seismic trace (multiples, ringing, etc.) and generates an operator which removes them, leaving only the random element, i.e. the reflection series.
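The normal equations (5)-(6) can be set up and solved in a few lines. This is a minimal sketch of the idea (Toeplitz autocorrelation matrix, spike desired output, small pre-whitening on the zero lag), using a tiny hand-rolled solver rather than the Wiener-Levinson recursion a production system would use; the two-sample wavelet is hypothetical.

```python
def autocorr(x, nlags):
    """Autocorrelation lags r_0 … r_{nlags-1} of a series."""
    return [sum(x[i] * x[i + k] for i in range(len(x) - k))
            for k in range(nlags)]

def solve(A, b):
    """Tiny Gauss-Jordan elimination, enough for these small systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def spiking_operator(trace, n, white=0.001):
    """n-term spiking-decon operator from the trace's own autocorrelation
    (assumption 6); desired output = zero-lag spike, as in equation (6)."""
    r = autocorr(trace, n)
    r[0] *= 1.0 + white                       # pre-whitening (see 11.4.2.3.3)
    A = [[r[abs(i - j)] for j in range(n)] for i in range(n)]
    g = [1.0] + [0.0] * (n - 1)               # spike cross-correlation (scaled)
    return solve(A, g)

# For the minimum-phase wavelet (1, 0.5) the operator approaches the
# exact inverse (1, -0.5, 0.25, ...):
op = spiking_operator([1.0, 0.5], 5)
```

Convolving `op` back with the wavelet gives a near-perfect spike; for a mixed-phase wavelet the same machinery produces a much poorer spike, which is why assumption 7 matters.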

Since the desired output x(t + α) is the time-advanced version of the input x(t), we need to specialize the right side of equation (6) for the prediction problem. Wiener showed that the filter used to estimate x(t + α) can be computed by using a special form of the matrix equation (5); the matrix equation for an n-long prediction filter with an α-long prediction lag has autocorrelation lags on both sides ----------(7). Design of the predictive filters therefore requires only the autocorrelation of the input series. There are two approaches to predictive de-convolution:
• The prediction filter may be designed using equation (7) and applied on the input series.
• Alternately, the prediction error filter can be designed and convolved with the input series.
Based on assumption 6, the following statement can be made: "Given an input wavelet of length (n + α), the prediction error filter contracts it to an α-long wavelet, where α is the prediction lag." When α = 1, the procedure is called spiking de-convolution. In general, predictive de-convolution is a general process that encompasses spiking de-convolution.
11.4.2.3 Predictive De-convolution in Practice
11.4.2.3.1 Operator Design: We start with a single, isolated minimum-phase wavelet; assumptions 1 through 5 are satisfied for this wavelet. The ideal result of spiking de-convolution is a zero-lag spike, and an increasingly better result is obtained as more and more coefficients are included in the inverse filter. The action of spiking de-convolution on a seismogram derived by convolving the minimum-phase wavelet with a sparse-spike series is similar to the case of the single isolated wavelet. Now consider the real situation of an unknown source wavelet: based on assumption 6, the autocorrelation of the input

seismogram, rather than that of the seismic wavelet, is used to design the de-convolution operator. In other words, statistical de-convolution filters (or operators) are most commonly derived from the autocorrelation of the input data using the Wiener-Levinson algorithm. Following are some of the implications in designing the de-convolution operator:
• The operator should be long enough to predict the multiples targeted, but not too long (less than 500 ms) – dependent on the objective.
• The design window should cover data representative of the design criteria, and is usually at least five times the operator length.
• Derivation windows slide below the first-break noise.
• Separate operators may be derived from multiple windows – one or two derivation windows at the most (multi-window de-con).
11.4.2.3.2 Prediction Gap Length: The gap length will have an effect on:
• Pulse stabilization – equalizing the basic wavelet throughout the data.
• Wavelet compression – the degree of spiking.
• Which multiple system is targeted – a long gap length with a short active operator to straddle long-period multiples. (Too long a gap may result in short-period reverberations remaining.)
If the predictive gap or delay is only one sample, we have spiking de-convolution; in other words, spiking de-convolution may be considered a special case of predictive de-convolution where the 'gap' is one sample.
11.4.2.3.3 Pre-Whitening: White noise is added to the data (autocorrelogram) during operator design to prevent:
• operator instability (divisions by zero when calculating the wavelet inverse);
• equalizing the amplitude of noise in addition to the signal.
The amount of white noise to add will generally be in the range of 0.1% to 1%.
Autocorrelation Analysis: The autocorrelation of the data is a zero-phase waveform with a maximum at zero lag; if a waveform is perfectly random, its autocorrelation is a spike. From the autocorrelation we can decide the point on the wavelet where the de-convolution operator begins to operate, via the prediction 'lag' or 'gap'.
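The role of the autocorrelation in choosing the gap can be illustrated with a toy trace. The spike positions and decaying amplitudes below are invented; the point is only that a repeating multiple leaves a secondary autocorrelation peak at its period, and that lag is what guides the prediction-gap choice.

```python
def autocorr_norm(x, nlags):
    """Autocorrelation lags r_k / r_0 (zero lag normalised to 1)."""
    r0 = sum(v * v for v in x)
    return [sum(x[i] * x[i + k] for i in range(len(x) - k)) / r0
            for k in range(nlags)]

# Hypothetical trace: primary at sample 5, then multiples repeating
# every 10 samples with alternating, decaying amplitude.
trace = [0.0] * 60
for pos, amp in [(5, 1.0), (15, -0.6), (25, 0.36)]:
    trace[pos] = amp

ac = autocorr_norm(trace, 20)
# The strongest non-zero-lag peak sits at the multiple period:
period = max(range(2, 20), key=lambda k: abs(ac[k]))   # -> 10
```

A gap shorter than this period attacks the wavelet itself (spiking); a gap just below the period with a short active operator targets the multiple system, as described above.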

Too little white noise may:
• cause the de-convolution operator to become unstable;
• decrease the S/N ratio of the data.
Too much white noise may:
• decrease the effectiveness of the de-convolution process;
• narrow the bandwidth of the data.
11.4.2.3.4 De-convolution Panel: The operator length and the amount of pre-whitening are decided by a trial-and-error method, applying different operator lengths and pre-whitening values to a CDP gather. The values of operator length and pre-whitening which yield the sharpest output are taken as the optimum, and de-convolution is done using these values. All these predictive de-convolution parameters are fixed by running de-convolution panels by trial and error. Figure 28(a) shows the effect of applying spiking de-convolution on the raw data: the events clearly mark their differences from the neighbouring random reflections, the temporal resolution is increased, and events show continuity in their behaviour. Figure 28(b) shows the spectra of the raw data and the decon data; we can clearly see the removal of the incoherent noise caused by the electric power lines in the decon spectrum.

Chapter 12 Seismic Data Processing Stage III (Velocity Analysis, NMO, DMO and Residual Static Corrections)

12.1 Velocity Analysis: Velocity analysis is an interactive tool used to interpret stacking or normal-moveout velocities on 2D and 3D prestack seismic data. The velocities can be a series of time-variant velocity functions. There are many methods for determining the correct velocities for the NMO equation; several techniques utilize the variation of normal moveout with record time to find velocity. Velocity analysis is usually done on common-midpoint gathers, where the hyperbolic alignment is often reasonable. Where dips are large, a common reflecting point is not achieved. Typically, the analysis procedure involves comparing a series of stacked traces to which a range of velocities has been applied in NMO; the test range is small at shallow times and larger at deep times, due to the nature of the NMO effect. The methods that are being used by RCC are given below.
Multi-Velocity-Function Stacks: The multi-velocity-function stack (MVFS) panel displays a series of side-by-side stacked traces for a set of CDPs. These traces are corrected for NMO with a series of different velocities, and the panel is used to pick velocities by visually locating the maximum stacked response. In practice, velocity analysis is done as follows: a reference velocity function is taken from the well data of the nearest well. A number of velocity functions are then generated (in practice usually six); one half of them contain lower velocity values and the other half higher velocity values (as compared to the reference velocity function), with a constant increment or decrement from one velocity function to the next.
Velocity Spectrum Analysis: Velocity spectrum analysis provides a means to interactively pick the velocity which is correct for applying NMO corrections. Figure 29 shows a record displaying a section to be analysed for velocity.
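The "maximum stacked response" idea can be sketched as a velocity scan. This toy models each offset's trace as a single reflection on the true hyperbola and counts how many traces a trial velocity would align at t0 within one sample; the geometry (24 offsets at 100 m, true velocity 2200 m/s) is invented for illustration, and a real implementation would use semblance over amplitude samples instead of hit counting.

```python
import math

def nmo_time(t0, x, v):
    """Hyperbolic travel time t(x) = sqrt(t0^2 + x^2 / v^2)."""
    return math.sqrt(t0 ** 2 + (x / v) ** 2)

def stack_power(v_trial, t0, offsets, true_v, dt=0.004):
    """Count offsets where the trial hyperbola lands on the (synthetic)
    reflection within one sample interval dt."""
    return sum(
        1 for x in offsets
        if abs(nmo_time(t0, x, v_trial) - nmo_time(t0, x, true_v)) < dt
    )

offsets = [100.0 * i for i in range(1, 25)]
trials = [1800.0, 2000.0, 2200.0, 2400.0, 2600.0]
best = max(trials,
           key=lambda v: stack_power(v, t0=1.0, offsets=offsets, true_v=2200.0))
```

Wrong trial velocities still "hit" the near offsets (hyperbolas are similar there), which is exactly why the pickable test range must widen with depth.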

A group of CDPs (usually 21) which fall in the full-foldage area is then taken, and each of these CDPs is stacked applying each one of the seven velocity functions. The output is seven strips with 21 traces each, each strip corresponding to one velocity function and each trace to one CDP. This is called a multi-velocity-function stack (MVFS) panel. Alternately, a velocity spectrum is also generated; from this we can interactively pick the correct velocity function. MVFS panels are generally used to fine-tune the velocity picked using the velocity spectrum. Figure 30 shows the velocity function selection and thus how the velocity analysis is done.
12.2 Normal Moveout Corrections (NMO): NMO is the difference between the reflection arrival time at a geophone situated at a certain distance from the shot point and the arrival time at a geophone situated at the shot point. This is not due to any anomalies in the subsurface, but to the additional distance traveled by the seismic wavelet: as offset increases, the wavelet arrives later at the geophone. So a time correction has to be applied according to offset. NMO correction is the time correction which ideally linearises the alignment of primary reflected signals in the CDP gather. NMO is applied according to the formula

Tx = sqrt( T0^2 + x^2 / v^2 )

where Tx is the actual reflection time of the seismic event due to normal-moveout effects, T0 is the zero-offset reflection time of the event, x is the actual source-receiver distance, and v is the normal-moveout velocity or stacking velocity of the reflection event. While applying NMO, the trace undergoes a slight non-linear stretch, called NMO stretch; as a result, a frequency distortion occurs, particularly for shallow events and at large offsets. The maximum permissible stretch is 10%, and signals where more stretch is observed are muted. The stretch is quantified by

Δf / f = T_NMO / T0 ,  where  T_NMO = Tx − T0

and f is the dominant frequency, Δf the change in frequency. Figure 31 shows the NMO stack obtained after stacking the NMO-corrected traces.
12.3 Dip Moveout Corrections (DMO): In the case of a dipping reflector, another correction, which takes the dip of the reflector into account, must be applied in addition to NMO. This follows from the fact that the moveout is greater when the reflector is dipping. The DMO correction is applied according to the formula

Tx^2 = T0^2 + x^2 · cos^2(φ) / V^2

where Tx = two-way travel time, T0 = zero-offset travel time, V = velocity above the reflector, and φ = dip angle. After applying the DMO correction, the data in a CMP gather from a dipping-interface model do have common reflection points; the terms CRP and CRP gather are accurate descriptions of the data post-DMO. Because DMO is a geometric correction that repositions seismic data in the sense of a migration scheme, the alternate name for DMO is pre-stack partial migration.
12.4 Residual Statics Corrections: 'Field' statics do not generally solve all delays within the data, for a variety of reasons, for example:
• Velocities vary both laterally and vertically within the layers.
• Weathering thickness varies rapidly.
• There are undetectable thin, low-velocity layers.
• Local anomalies exist (e.g. 'lenses' of low-velocity material near the surface).
• The vertical-ray approximation is incorrect.
Residual statics correction attempts to fine-tune the field statics; residual statics can be, at times, destructive. Residual statics may be applied to the data as they are, and are then known as 'trim' statics; this results in non-surface-consistent static values for every trace. The typical procedure is to measure time-shifts between the traces within a CMP and a 'pilot' trace (usually the stacked CMP itself) and solve for the source and receiver statics in a surface-consistent manner.
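The NMO equation above maps each zero-offset output time to an input time on the hyperbola. A bare-bones sketch with nearest-sample lookup (real systems interpolate and apply the stretch mute); the trace, offset and velocity values are invented for the example.

```python
import math

def nmo_correct(t0_axis, trace, x, v, dt):
    """For each output time t0, read the input trace at
    tx = sqrt(t0^2 + x^2/v^2) (nearest sample) - flattening the hyperbola."""
    out = []
    for t0 in t0_axis:
        tx = math.sqrt(t0 ** 2 + (x / v) ** 2)
        i = round(tx / dt)
        out.append(trace[i] if i < len(trace) else 0.0)
    return out

# Synthetic trace: reflection at tx = sqrt(1 + (1000/2000)^2) ~ 1.118 s,
# sampled at 2 ms -> sample 559.
trace = [0.0] * 600
trace[559] = 1.0
flat = nmo_correct([1.0], trace, x=1000.0, v=2000.0, dt=0.002)  # -> [1.0]
```

Because tx varies non-linearly with t0, neighbouring output samples pull from input samples that are further apart at shallow times and large offsets; that differential pull is the NMO stretch quantified above.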

Residual statics corrections involve three phases:
1. Picking travel-time deviations tij, based on cross-correlation of the traces in a CMP gather with a reference or pilot trace that needs to be defined in some fashion.
2. Modelling tij and decomposing it into its components: source and receiver statics, and structural and residual-moveout terms.
3. Applying the derived source and receiver terms, sj and ri respectively, to the travel times on the pre-NMO-corrected CMP gathers.
The various terms are defined in figure 32(a), while figure 32(b) shows how to pick travel-time deviations from NMO-corrected gathers. The most common methods of deriving the time-shifts and the resultant static values are:
• the cross-correlation method;
• stack-power optimization;
• a combination of the above.
The time-shifts produced using the cross-correlation technique may be decomposed into shot and receiver statics by solving a set of simultaneous equations. Stack-power optimization, in simple terms, may mean applying multiple sets of surface-consistent values to the data and choosing the set giving the maximum stack power. Alternatively, stack-power optimization may be used to determine the best correlation coefficient prior to solving for the final time-shifts using the simultaneous-equations or similar techniques. Figure 33 shows a field record representing the de-convoluted stack and the residual stack.

Chapter 13 Seismic Data Processing Stage IV (Stacking, Time Variant Filtering and Migration)

13.1 Stacking: Stacking is basically the summing of all the traces which have a common reflection point. The main point in recording multifold data is to stack all the traces together: by summing, the S/N ratio is increased, as the signal is enhanced while the random noise stays at the same level. Considering all the noise to be random, the S/N improvement by stacking will be √n, where n is the foldage. Stacking is, however, ineffective in suppressing multiples and diffractions. Before final stacking, all the corrections, viz. NMO, DMO, statics etc., have to be made. Generally, before decon and velocity analysis, a gather is stacked to get a rough idea of the different horizons; the velocity applied for the NMO correction in this case is a reference velocity obtained from VSP data. This stack is called a BRUTE STACK. Figure 34 shows a real field record with brute stacking, while figures 31 and 35 show the NMO-corrected stack and the final stack.
13.2 Time Variant Filtering: Owing to the attenuation of seismic energy by the earth, the shallow reflections have high frequencies and the deeper reflections have lower frequencies. Any departure from this trend (i.e. high frequencies in the lower part of the trace or low frequencies in the upper part of the trace) indicates noise, which has to be removed so as to improve the S/N ratio. This is done using time-variant filtering, which is usually applied on stacked data. Figure 27(b) represents the filtered record obtained by applying a high-pass filter (8 to 16 Hz) to the raw field record shown in figure 27(a). This record clearly shows the elimination of various noise components from the raw field record; the air waves, which are clearly visible at the mid portion of the raw record, are eliminated in the filtered record.
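The √n claim is easy to verify numerically: stacking n traces multiplies a coherent signal by n while the RMS of summed random noise grows only by √n, so dividing the stack by n leaves noise reduced by √n. A small Monte-Carlo sketch with assumed fold 16 and unit noise RMS:

```python
import math
import random

random.seed(42)
n = 16            # fold
noise_rms = 1.0   # per-trace random noise level

# RMS of the (noise part of the) normalised stack over many trials;
# theory predicts noise_rms / sqrt(n) = 0.25 for n = 16.
trials = 2000
acc = 0.0
for _ in range(trials):
    stacked_noise = sum(random.gauss(0.0, noise_rms) for _ in range(n))
    acc += (stacked_noise / n) ** 2
post_noise_rms = math.sqrt(acc / trials)
```

With the coherent signal passing through unchanged after the 1/n normalisation, the S/N improvement is indeed ~√16 = 4 here.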

13.3 Migration
Migration is a process which attempts to correct the positions of the geological structures inherent in the seismic section. After NMO and stack, the seismic section represents the theoretical acquisition configuration of coincident source and receiver, which gives an inexact, deformed and displaced image of the subsurface as soon as there is any dip or velocity variation:
1. The apparent dip of an event on a zero-offset stack section is less than the true dip of the event.
2. A zero-offset stack section gives a false picture of dipping reflectors, as events A`` and B`` are plotted at the trace positions A` and B` respectively in figure 36.
Migration is done to rearrange the seismic data so that reflection events are displayed at their true subsurface positions. Migration redistributes energy in the seismic section to better image the true geological structures: it collapses diffractions back to their points of origin, improves resolution by collapsing the Fresnel zone, and provides a more accurate depth section. Geophones record the vertical component of the movement of the ground, and hydrophones record a pressure wave, whatever the incidence of the wavefront.
Migration is based on a 2D scheme with the following assumptions:
1. Reflections originating anywhere are brought into the vertical plane of section (X, T).
2. This plane of incidence is vertical.
3. Structures can be represented by cylinders whose principal axes are perpendicular to the plane of section, so that all depth points of seismic horizons lie in a single plane passing through the seismic line.
The ideas behind these constraining assumptions also underlie the production of multifold-coverage stack sections, which only allow for travel paths perpendicular to the reflectors.
13.3.1 Restrictions of 2D Migration: Migration must normally be carried out in the plane of incidence relative to each horizon; it is only valid if this plane of incidence is fixed for each horizon considered. The final section assembles all these planes of incidence to carry out the migration. Migration requires that the velocity function in each of these planes of incidence be known.

Further assumptions:
4. Times are measured vertically along the CDP traces; the CDP is situated perpendicularly below the midpoint on the surface.
5. The velocity is taken as the RMS velocity, which can be allowed to vary laterally.
Geophysicists know well the simple examples of time images deformed and/or displaced in relation to the depth model:

Depth model | Time representation
Diffracting point | Diffraction hyperbola
Dipping reflector | Dipping reflector, shifted down-dip with dip decreased
Tight syncline | 'Bow-tie' shape

13.3.2 Migration in the Fourier Domain: Migration in the Fourier domain works with the dispersion relation, which links the horizontal wavenumber and the vertical wavenumber for any temporal frequency:

Kx^2 + Kz^2 = (ω/v)^2

If we consider the seismic section as a sum of monochromatic plane waves, then all plane waves of the same frequency and dip are mapped onto a single point in the F-K domain, irrespective of their location in the original time section. However, lateral variation in velocity distorts the hyperbolic nature of the diffraction pattern and somehow must be considered; so any operation in the F-K domain must be localized to account for lateral or vertical velocity variations.
13.3.3 Kirchhoff Summation: The diffraction summation that incorporates the obliquity, spherical spreading and wavelet-shaping factors is called the Kirchhoff summation, and the migration method based on this summation is called Kirchhoff migration. To perform this method: multiply the input data by the obliquity and spherical spreading factors; then apply the wavelet-shaping filter and sum along the hyperbolic path; finally, place the result on the migrated section at the time corresponding to the apex of the hyperbola. The wavelet-shaping filter is designed with a 45-degree constant phase spectrum and an amplitude spectrum proportional to the square root of the frequency for 2D migration; for 3D migration the phase shift is 90 degrees and the amplitude is proportional to frequency. In practice, the order of the filter application and the summation can be interchanged without sacrificing accuracy, because the summation is a linear process and the filter is independent of time and space. The velocity is taken as the RMS velocity, typically that of the output time sample.
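The core of the summation step can be sketched on a synthetic zero-offset section. This toy omits the obliquity, spreading and wavelet-shaping factors described above and only demonstrates the hyperbola sum; the diffractor position, velocity and grid spacings are invented for the example.

```python
import math

def kirchhoff_sum(section, t0, apex_x, v, dx, dt):
    """Sum zero-offset amplitudes along the diffraction hyperbola
    t(x) = sqrt(t0^2 + 4*(x - apex_x)^2 / v^2) and return the value to be
    placed at the apex time t0 (bare summation, no weighting factors)."""
    total = 0.0
    for ix, trace in enumerate(section):
        x = ix * dx
        t = math.sqrt(t0 ** 2 + 4.0 * (x - apex_x) ** 2 / v ** 2)
        i = round(t / dt)
        if i < len(trace):
            total += trace[i]
    return total

# Build a section containing exactly that hyperbola for a diffractor at
# apex_x = 500 m, t0 = 0.8 s, v = 2000 m/s (21 traces, 50 m apart, 4 ms):
dx, dt, v, t0, apex = 50.0, 0.004, 2000.0, 0.8, 500.0
section = [[0.0] * 400 for _ in range(21)]
for ix in range(21):
    x = ix * dx
    t = math.sqrt(t0 ** 2 + 4.0 * (x - apex) ** 2 / v ** 2)
    section[ix][round(t / dt)] = 1.0

energy = kirchhoff_sum(section, t0, apex, v, dx, dt)   # -> 21.0
```

Summing along the correct hyperbola collapses all 21 unit samples onto the apex; summing with a wrong velocity would miss the samples and yield a much smaller value, which is why Kirchhoff migration is velocity sensitive.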

13.3.4 Phase Shift Migration: Phase-shift migration is due to Gazdag and works in the F-K domain. For downward continuation, the phase-shift operator is computed at every depth step, allowing variation of velocity with depth. At every depth step, an inverse Fourier transform is taken to convert from the F-K domain for imaging at t = 0. This technique can handle vertical velocity variation for all dips while preserving amplitude and phase, but cannot account for lateral velocity variations.
Phase Shift Plus Correction: This is an extension of phase-shift migration to account for lateral velocity variations. Here, the downward continuation is performed with a constant average velocity function. We cannot simply downward-continue the previous F-K-domain data, because at each step an additional phase shift is applied before imaging: after converting from F-K to F-X, an additional phase shift is applied to account for the difference between the average velocity function and the actual velocity function at each X, before applying the imaging principle (which is equivalent to summing over all frequencies). The data must therefore be phase-shifted back into F-K for the next depth step, and the method is hence much more expensive. Still, it can account for only mild lateral velocity variations.
13.3.5 Omega-X Migration (F-X Migration or Hybrid Migration): This is similar to finite-difference migration in the T-X domain and was developed by Kjartansson. It is based on the 45-degree approximation to the one-way scalar wave equation and is formulated in the F-X domain. There are two terms in the computation: the diffraction term, which collapses the energy along the hyperbolic path to its apex, and the thin-lens (shift) term, which places the collapsed energy at its actual spatial position in the subsurface. The thin-lens term is velocity dependent for depth migration and velocity independent for time migration. Figure 37 shows a migration stack.

Bibliography

Name of the Book | Author/Publisher
Introduction to Geophysical Prospecting | Dobrin M. B.
Designing Seismic Surveys in Two and Three Dimensions | Dale G. Stone
A Handbook for Seismic Data Acquisition | Evans B. J.
Static Corrections for Seismic Reflection Surveys | Mike Cox
ONGC Project Reports on KG Basin | ONGC
Seismic Data Processing Quality Manual | ONGC
Seismic Data Analysis | Oz Yilmaz
Acquiring Better Seismic Data | Peter Carr Perchett
Encyclopedic Dictionary of Applied Geophysics | Robert E. Sheriff
Applied Geophysics | Telford W. M.
