
Objects fall to Earth with an acceleration of about 32 ft/s2 (980 cm/s2). The unit "centimeter per second per second" (cm/s2) is known as a gal in honor of Galileo. In gravity exploration, the acceleration of gravity is the fundamental quantity measured, and the basic unit of acceleration is the milligal (mGal). Thus, the acceleration of a body near the Earth's surface is about 980,000 mGal. For borehole gravity, the microgal (µGal) is used as the basic unit, with 1 µGal = 10-3 mGal.

Inverse Square Law and the Principle of Superposition

The magnitude F of the gravitational force between two point masses is given by

F = G M1 M2 / R2     (1)

where

G = universal gravitational constant = 6.670 × 10-11 N-m2/kg2
M1, M2 = masses 1 and 2, respectively
R = distance between centers of mass
F = force

This equation is also known as the inverse square law, since F varies with 1/R2. The acceleration or attraction a1 on M1 is

a1 = F / M1 = G M2 / R2     (2)

Note that a1 is independent of M1. The acceleration vector is

a1 = -(G M2 / R2) r̂     (3)

where r̂ is the unit vector directed from M2 toward M1.

The minus sign indicates that the acceleration is directed toward the attracting mass; its magnitude decreases as R increases. The principle of superposition indicates that the attraction of a group of point masses is equal to the vector sum of the individual attractions:

a = Σ ai = -G Σ (Mi / Ri2) r̂i     (4)

Vertical Component Concept

The Earth's gravitational acceleration is approximately 980,000 mGal in an essentially vertical direction (i.e., roughly perpendicular to the Earth's surface). Gravity accelerations of local geologic disturbances may range up to approximately 300 mGal. These local accelerations are not necessarily vertical, but they are measured by gravity meters that are leveled with respect to the Earth's total gravity field. Since the Earth's main vertical gravity field is much stronger than any local disturbance, these local anomalies have little effect on the direction of the Earth's gravity field. Thus, from a practical standpoint, a gravity meter detects the "vertical" component of local geologic gravity accelerations, which is the component parallel to the "plumb-bob" direction (that is, the orientation of a weight hanging from a string at a fixed point). Figure 1 ( Vertical component of gravitational attraction of a buried sphere ) illustrates the vertical component concept for a buried spherical mass distribution.
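The superposition and vertical-component ideas lend themselves to a short numerical sketch (illustrative Python, not part of the original text; the mass and distance are invented for the example):

```python
# Illustrative sketch: superposition says the attraction of a group of point
# masses is the vector sum of the individual attractions, and a gravity meter
# senses only the vertical component of each.
import math

G = 6.670e-11          # universal gravitational constant, N·m²/kg² (value used in the text)
MS2_TO_MGAL = 1.0e5    # 1 m/s² = 100,000 mGal

def vertical_attraction_mgal(point_masses, station=(0.0, 0.0, 0.0)):
    """Vertical attraction at a station from (mass_kg, x, y, z) tuples;
    coordinates in meters, z positive downward (below the station)."""
    sx, sy, sz = station
    az = 0.0
    for m, x, y, z in point_masses:
        dx, dy, dz = x - sx, y - sy, z - sz
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        az += G * m * dz / r**3   # vertical component of G*m/r², cf. Equation 5
    return az * MS2_TO_MGAL

# A buried sphere attracts like a point mass at its center, e.g. 1e12 kg of
# excess mass 500 m directly below the station:
print(round(vertical_attraction_mgal([(1.0e12, 0.0, 0.0, 500.0)]), 2))   # ≈ 26.68 mGal
```

Because a sphere attracts as if its mass were concentrated at its center, the same routine also handles the buried-sphere case of Figure 1.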

Figure 1

We can show mathematically that the gravity attraction of a sphere is equivalent to the gravity attraction of a point mass, with the sphere's mass concentrated at its center. Therefore, the vertical gravitational attraction az for a simple point mass body is

az = G M z / R3     (5)

where

G = universal gravitational constant
M = mass of the body
R = distance from the sphere's center of mass to the gravity observation point
z = vertical component of R

Attraction of Complex Bodies

For a constant density, mass is equal to density times volume. Thus, for an element of volume dV, we may express the increment of vertical acceleration daz as follows:

daz = G ρ (z / R3) dV     (6)

where ρ = density of the attractive mass. For a complex body made up of many small elements dV, the total vertical component of gravitational attraction az is

az = G ∫ ρ(V) (z / R3) dV     (7)

where ρ(V) is the density contrast as a function of location within the volume, and the integration is carried out throughout the volume of the density contrast ( Figure 1 and Figure 2 , Attraction of distributed body ).
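Equation 7 can be approximated numerically by replacing the integral with a sum over small cells, as in this hedged sketch (the prism geometry and density contrast are invented for illustration):

```python
# Numerical sketch of Equation 7: sum G * rho * z / R^3 over small cells dV.
import math

G = 6.670e-11          # N·m²/kg², as quoted earlier in the text
MS2_TO_MGAL = 1.0e5    # 1 m/s² = 100,000 mGal

def vertical_attraction_of_prism(rho, x0, x1, y0, y1, z0, z1, n=20):
    """Vertical attraction in mGal at the origin of a uniform rectangular
    prism; rho is the density contrast in kg/m³, coordinates in meters,
    z measured positive downward from the observation point."""
    dx, dy, dz = (x1 - x0) / n, (y1 - y0) / n, (z1 - z0) / n
    dV = dx * dy * dz
    az = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * dx
        for j in range(n):
            y = y0 + (j + 0.5) * dy
            for k in range(n):
                z = z0 + (k + 0.5) * dz
                R = math.sqrt(x * x + y * y + z * z)
                az += G * rho * z / R**3 * dV   # the increment da_z of Equation 6
    return az * MS2_TO_MGAL

# A small, deep cube behaves like a point mass at its center (a useful check):
approx = vertical_attraction_of_prism(300.0, -50.0, 50.0, -50.0, 50.0, 950.0, 1050.0)
point = G * 300.0 * 100.0**3 / 1000.0**2 * MS2_TO_MGAL
print(f"prism: {approx:.4e} mGal, point mass: {point:.4e} mGal")
```

For realistic bodies the cell size (here controlled by n) must be small compared with the distance to the station, or the sum will be inaccurate.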

Figure 2

Geologic Application

In gravity exploration, we define a density anomaly as a deviation from the "corrected" gravity values that would be produced by a uniform, layer-cake, subsurface geologic setting (see Section 4.0). To produce a measurable gravity anomaly, there must be (1) a lateral density contrast between the geologic body of interest and the surrounding rocks, and (2) a favorable relationship between the gravity station locations and the geometry (including the depth) of the geologic body of interest. In this discussion on gravity, we will examine methods for determining density contrasts, calculating gravity effects of simple geologic bodies, and measuring and reducing gravity field data. We will also look at interpretation: the art of deriving a subsurface density distribution that could geologically and mathematically explain an observed gravity field.

Densities of Rocks

Density Computation

In surface gravity exploration, we look for lateral density contrasts that we can relate to subsurface geologic features, such as stratigraphic changes, faults and folds. A layer-cake geologic section with no local structure or stratigraphic changes will not produce local gravity anomalies, because there is no lateral density contrast present. In this section, we look at rock densities and practical methods for determining them. The bulk density of any rock is the sum of each constituent mineral's volume fraction multiplied by its density ρi:

ρb = Σ Vi ρi     (1)

where

Vi = volume fraction of the ith mineral, expressed as a dimensionless decimal
ρi = density of mineral i, g/cm3
ρb = bulk density, g/cm3
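A minimal sketch of Equation 1 (the mineral mix, names, and fractions below are assumptions made up for illustration):

```python
# Hedged sketch of the composition method: bulk density is the sum of each
# constituent's volume fraction times its density; fractions must sum to 1.
def bulk_density_from_composition(components):
    """components: list of (volume_fraction, density_g_cc) pairs."""
    total = sum(v for v, _ in components)
    if abs(total - 1.0) > 1e-6:
        raise ValueError("volume fractions must sum to 1, got %g" % total)
    return sum(v * rho for v, rho in components)

# 60% quartz (2.65 g/cm3), 25% calcite (2.70), 15% water-filled pore space (1.00):
print(round(bulk_density_from_composition(
    [(0.60, 2.65), (0.25, 2.70), (0.15, 1.00)]), 3))   # 2.415 g/cm3
```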

The sum of the volume fractions should equal 1. Mineral densities can range from near zero for air or shallow natural gas, through 2.65 g/cm3 for quartz, 2.70 g/cm3 for calcite and 2.87 g/cm3 for dolomite, to 19.3 g/cm3 for gold. For a two-component system made up of one matrix mineral, porosity (φ, expressed as a fraction), and a fluid, we have:

ρb = ρm (1 - φ) + ρf φ     (2)

where ρm is the matrix density and ρf is the fluid density.

A sedimentary rock's porosity depends on such characteristics as grain size and shape, depth of burial, geologic age and geologic history (e.g., metamorphism or diagenesis, sorting of grain sizes, maximum depth of burial, etc.). The porosity of intrusive or metamorphic rocks is frequently near zero except when fracture systems are present. The porosity of extrusive rocks can vary greatly depending on the environment of deposition and geologic history since deposition. Because rock porosity can greatly affect bulk density, we must exercise care when computing bulk densities of sedimentary, metamorphic and extrusive rocks.

Density Contrasts

Local gravity anomalies are caused by lateral density contrasts. If horizontal density layering is uniform, then we will observe no local gravity anomalies. Once we have selected a base density, we can compute density contrasts as follows:

∆ρ = ρanomalous - ρbackground

where

∆ρ = density contrast
ρanomalous = anomalous density
ρbackground = density of the "background"
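The two-component relation and the density-contrast definition can be sketched in a few lines (the sandstone values are illustrative assumptions):

```python
# Sketch of the two-component bulk density relation and the density contrast.
def bulk_density(rho_matrix, rho_fluid, phi):
    """Bulk density, g/cm3, for a single matrix mineral plus pore fluid."""
    return rho_matrix * (1.0 - phi) + rho_fluid * phi

def density_contrast(rho_anomalous, rho_background):
    return rho_anomalous - rho_background

# Quartz sandstone (matrix 2.65 g/cm3), 20% porosity, fresh water (1.00 g/cm3),
# surrounded by tight rock at the matrix density:
rho_b = bulk_density(2.65, 1.00, 0.20)
print(round(rho_b, 2), round(density_contrast(rho_b, 2.65), 2))   # 2.32 -0.33
```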


For example, if a massive sulfide ore body has a bulk density of 4.0 g/cm3, and is contained in a limestone having a density of 2.7 g/cm3, the density contrast of the ore body is +1.3 g/cm3. The amplitude of the gravity anomaly observed from such an ore body is proportional to the density contrast. We can determine density contrasts directly by calculating the gravity effect of a known feature and comparing this calculated effect with observed gravity, or indirectly by determining bulk densities and computing density differences.

Figure 1 ( Salt dome density model, U.S. Gulf Coast ) illustrates the concept of density contrast for the case of a salt dome in the United States Gulf Coast area.

Figure 1

With normal impurities, the average density of salt is assumed nearly constant as a function of depth, although it can vary between 2.15 and 2.20 g/cm3. Here, the density is a constant 2.20 g/cm3. The densities of the Gulf Coast clastic rocks surrounding the salt dome generally increase with depth at a gradually decreasing rate. Figure 1 shows clastic rock density increasing from 2.0 g/cm3 to over 2.55 g/cm3. The density contrast (∆ρ) is the salt density minus the density of the laterally adjacent clastics. Note that at a certain crossover depth, the clastic and salt densities are equal; there is no density contrast. Above the crossover depth, the salt is more dense than the surrounding clastic rocks; below the crossover depth, it is less dense. The local gravity anomaly observed over the salt dome is directly related to lateral density contrasts in the subsurface. If the salt dome top is at or below the depth of density crossover, we will observe a gravity minimum. If the salt dome extends above the density crossover, we will observe the same long-wavelength gravity minimum, but with a superimposed, shorter-wavelength positive gravity anomaly due to the salt above the crossover. Interpretation of salt located near the density crossover can be very ambiguous. Magnetic and seismic data can sometimes assist in such interpretations.
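The crossover idea can be illustrated with a short sketch. The clastic compaction curve below is hypothetical (chosen only to rise from about 2.0 toward 2.55 g/cm3, as in the figure); a simple bisection then finds the depth of zero contrast:

```python
# Hedged sketch: salt density is nearly constant with depth while clastic
# density increases, so the contrast changes sign at a crossover depth.
import math

RHO_SALT = 2.20   # g/cm3, taken as constant with depth (as in Figure 1)

def rho_clastic(depth_ft):
    """Hypothetical compaction curve: rises from 2.0 toward 2.55 g/cm3."""
    return 2.55 - 0.55 * math.exp(-depth_ft / 8000.0)

def crossover_depth_ft(lo=0.0, hi=30000.0, tol=1.0):
    """Bisect for the depth at which salt and clastic densities are equal."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rho_clastic(mid) < RHO_SALT:
            lo = mid   # clastics still lighter than salt: crossover is deeper
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = crossover_depth_ft()
print(round(d))   # with this assumed curve, roughly 3600 ft
```

Salt shallower than this depth contributes a positive contrast; salt deeper than it contributes a negative contrast, which is why the anomaly shape depends on where the dome top sits relative to the crossover.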

Figure 2 ( Density vs. formation type, Rocky Mountain area, USA ) illustrates a different density distribution. Here, density varies primarily with lithology and formation type rather than only with depth, as was the case for the clastic rocks surrounding the salt dome in Figure 1 . Where density varies with formation type, we must determine or estimate the density of each formation as shown in Figure 2 . Then we can subtract a reference (background) density from all density layers to produce a model of density contrasts. For example, in Figure 2 , we could choose a reference density of 2.67 g/cm3 to correspond to the density of the Precambrian basement. The density contrasts of each of the overlying four layers would then be (from top to bottom, respectively) -0.37, -0.22, -0.12, and 0.0 g/cm3. Notice that the layers with densities of 2.30 g/cm3 (contrast -0.37 g/cm3) and 2.45 g/cm3 (contrast -0.22 g/cm3) are absent over the faulted anticline. The absence of these negative density contrasts will enhance the relative positive gravity anomaly observed over the anticline. Notice also that the "Top of High Density Rocks" occurs at the top of the carbonate section, which is above the basement.

Sources of Density Data

There are a number of practical methods for obtaining density information, some of the more common of which are mineral composition analysis and the use of data from surface samples, cores, wireline logs, seismic data and gravity profiling.

Calculation Based on Composition

We must correct all mineral proportions to volume fraction, and then multiply each volume fraction by the density of the mineral occupying that volume fraction. The sum of the volume fractions should equal 1.

Surface Samples

Many times, particularly in mining applications, surface samples are our only source of density information. Bailey (1945) patented a method for determining rock density which applies Archimedes' principle, using a device known as a Jolly Balance. We must be careful to (1) properly identify samples, (2) collect fresh samples which have minimum porosity alteration due to recent surface weathering, and (3) correct for the presence of pore fluids, depending on whether the rocks in the exploration problem are above or below the water table.

Cores

Core analyses often contain information relating to grain density, fluid saturations, or matrix densities. We can use these densities provided that we adjust grain densities for the expected actual porosities and fluid saturations. Core analyses can also provide information that helps us calibrate density measurements from wireline logging tools (e.g., sonic or neutron).

Wireline Logs

Wireline logs, when available, are usually the best source of density information. Wireline logging tools respond differently and independently to different matrix compositions and to the presence of fluids and/or gas. A combination or suite of logs thus provides more information about a formation than does any one individual log. Exploratory wells generally have more complete log suites than development wells. As a rule, the ideal log for determining density is the Gamma-Gamma Density Log (assuming that it is properly calibrated, measured in a hole in good condition, and run from "grass roots" to basement).

Note: This discussion on wireline logs is designed to point the reader in the right direction. Specific response characteristics of logging tools or questions regarding logging tools should be discussed with your company's log analyst and/or a qualified service company representative.

Gamma-Gamma Density Log

"Density logs are primarily used as porosity logs. Other uses include identification of minerals in evaporite deposits, detection of gas, determination of hydrocarbon density, evaluation of shaly sands and complex lithologies, determinations of oil-shale yield, and calculation of overburden pressure and rock mechanical properties." - Schlumberger, Log Interpretation Principles, 1989

In gravity exploration, we use density logs to determine subsurface bulk densities of rocks. From these bulk densities, we determine density contrasts, which we use to help interpret gravity data. The tool's principle of measurement, as summarized in Schlumberger's Log Interpretation Principles (1989), is as follows:

"A radioactive source, applied to the borehole wall in a shielded sidewall skid, emits medium-energy gamma rays into the formations. These gamma rays may be thought of as high-velocity particles that collide with the electrons in the formation. At each collision a gamma ray loses some, but not all, of its energy to the electron, and then continues with diminished energy. This type of interaction is known as Compton scattering. The scattered gamma rays reaching the detector, at a fixed distance from the source, are counted as an indication of formation density. The number of Compton-scattering collisions is related directly to the number of electrons in the formation. Consequently, the response of the density tool is determined essentially by the electron density (number of electrons per cubic centimeter) of the formation. Electron density is related to the true bulk density, ρb, which, in turn, depends on the density of the rock matrix material, the formation porosity, and the density of the fluids filling the pores."

The gamma-gamma tool's depth of investigation is only a few inches away from the borehole in a radial direction. Schlumberger (1989) gives several equations relating electron density, bulk density, the sum of the atomic numbers (Z) making up a molecule, and molecular weight, which further explain how Table 1 is derived. Table 1 lists common rocks and minerals with their corresponding actual bulk densities and apparent bulk densities (derived from electron density). Note that a considerable correction is needed to obtain the true bulk density in halite and sylvite. Braced values { } indicate ranges.

Compound           Formula       Actual Density, ρb   2 Σz / Mol. Wt   Electron Density, ρe   ρa (as seen by tool)
Quartz             SiO2          2.654                0.9985           2.650                  2.648
Calcite            CaCO3         2.710                0.9991           2.708                  2.710
Dolomite           CaCO3·MgCO3   2.870                0.9977           2.863                  2.876
Anhydrite          CaSO4         2.960                0.9990           2.957                  2.977
Sylvite            KCl           1.984                0.9657           1.916                  1.863
Halite             NaCl          2.165                0.9581           2.074                  2.032
Gypsum             CaSO4·2H2O    2.320                1.0222           2.372                  2.351
Anthracite Coal                  {1.400-1.800}        1.0300           {1.442-1.852}          {1.355-1.796}
Bituminous Coal                  {1.200-1.500}        1.0600           {1.272-1.590}          {1.173-1.514}
Fresh Water        H2O           1.000                1.1101           1.110                  1.000
Salt Water         200,000 ppm   1.146                1.0797           1.237                  1.135
Oil                n(CH2)        0.850                1.1407           0.970                  0.850
Methane            CH4           ρmeth                1.2470           1.247 ρmeth            1.335 ρmeth - 0.188
Gas                C1.1H4.2      ρg                   1.2380           1.238 ρg               1.325 ρg - 0.188

Table 1: Gamma-gamma log measured and true bulk densities (g/cm3). (Courtesy Schlumberger Oilfield Services, 1989)

Figure 6 ( Schlumberger FDC tool ) is a schematic representation of Schlumberger's FDC (Formation Density Compensated) gamma-gamma density logging tool.

Figure 6

This tool employs two gamma ray detectors and a single gamma ray source. It automatically compensates for borehole effects (Schlumberger's borehole correction) to produce the data display of Figure 5 ( FDC Log Display ), which shows

• Bulk density (g/cm3)
• Density correction, made from the short and long detector output comparison (g/cm3)
• Gamma ray correlation log
• Caliper curve

Figure 5

In hole sections where washouts (indicated by the caliper log) or fractures (indicated by the caliper, density correction, or other wireline logs) exist, the densities indicated by the log are lower than their true values.

Schlumberger's latest density logging tool is known as the Litho-Density Log. This tool is similar in appearance and in operation to the FDC tool except that, in addition to a bulk density measurement, the tool also measures the photoelectric absorption index. This index can sometimes be related to lithology.

To determine density distributions from a density log, we usually block the density log into segments of constant density or density gradients. Blocking smoothes out the thin-bed density chatter.

Figure 5 provides an example of density blocking. For example, possible density log blocks on Figure 5 might be:

Depth Interval    Density
7005-7050 ft      2.39 g/cm3
7050-7065 ft      2.50 g/cm3
7065-7102 ft      2.70 g/cm3
7102-7212 ft      2.65 g/cm3

When we have completed a table of densities vs. depth blocks, we can plot the data points with a compressed depth scale (such as 1000 ft per inch) and an expanded density scale (such as 0.2 g/cm3 per inch). The resulting graph should show the density-depth or density-formation relationship (example, Figure 4 , Example of density blocking ).
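Blocking can be sketched as a simple averaging of log samples between hand-picked interval boundaries (the interval and the synthetic thin-bed "chatter" below are invented for illustration):

```python
# Hedged sketch of density-log blocking: average the samples within each
# hand-picked depth interval to get segments of constant density.
def block_density_log(depths_ft, densities, boundaries):
    """Average density between consecutive boundary depths."""
    blocks = []
    for top, base in zip(boundaries[:-1], boundaries[1:]):
        vals = [d for z, d in zip(depths_ft, densities) if top <= z < base]
        blocks.append(((top, base), sum(vals) / len(vals) if vals else None))
    return blocks

# One sample per foot from 7005 to 7050 ft, with alternating thin-bed chatter
# of ±0.02 g/cm3 about 2.39 g/cm3:
depths = list(range(7005, 7050))
dens = [2.39 + (0.02 if i % 2 else -0.02) for i in range(len(depths))]
blocks = block_density_log(depths, dens, [7005, 7050])
print(blocks[0][0], round(blocks[0][1], 2))
```

The averaging suppresses the chatter while preserving the block's mean density, which is what the gravity model needs.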

Resolution of thin beds depends on the density contrast and the accuracy of depth measurements.

Figure 4

Borehole Gravity Meter

The Borehole Gravity Meter (BHGM) can provide accurate bulk density information if we know enough about the subsurface geology to make certain corrections for nearby, large-scale structures. This tool works well in cased holes. We discuss this tool in greater detail in Section 8.

Sonic Log

The sonic or acoustic log measures the interval transit time (∆T) of P waves, which is the inverse of P wave interval velocity:

Vint = 1,000,000 / ∆T     (1)

where ∆T is in µs/ft and Vint is in ft/s.

Gardner et al. (1974) made an extensive study of the relationship between interval velocity and bulk density. Gardner's empirical relationship is:

ρb = 0.23 (Vint)^0.25     (2)

where Vint is in ft/s and ρb is in g/cm3. This relationship generally works well for most clastic and carbonate rocks, at least as a first approximation, if no density log data are available. Figure 3 ( Velocity-Density relationship in rocks of different lithology ) shows Gardner's empirical relationship with lithologic effects.

Figure 3
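Gardner's relationship (Equation 2) is easy to apply directly. A minimal sketch, with the constants exposed so they can be re-fit to local geology:

```python
# Sketch of Equations 1 and 2: sonic transit time -> velocity -> Gardner density.
def interval_velocity_ft_s(dt_us_per_ft):
    """Vint = 1,000,000 / delta-T, with delta-T in microseconds per foot."""
    return 1.0e6 / dt_us_per_ft

def gardner_density(vint_ft_s, a=0.23, b=0.25):
    """Equation 2: rho_b = a * Vint**b, g/cm3; a and b may be tuned locally."""
    return a * vint_ft_s ** b

v = interval_velocity_ft_s(50.0)
print(round(v), round(gardner_density(v), 2))   # 20,000 ft/s gives about 2.74 g/cm3
```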

As Figure 3 indicates, rocks of differing densities may have the same interval velocity. For example, salt has a bulk density much lower, and anhydrite a bulk density much higher, than Gardner's empirical relationship would compute. To distinguish among different rock types, we need additional knowledge of the local geology, or log data in addition to that provided by the sonic tool. To eliminate this problem, we can adjust the constants of Equation 2 to fit local geology.

We should block the sonic log as we would the density log. Then, we can convert the blocked ∆T's to bulk densities and determine a density relationship. If sonic and density logs are both available, we can estimate lithology from the sonic-density crossplot. Figure 2 ( ∆t vs. ρb crossplot for density determination ) shows a crossplot of sonic ∆T vs. density log bulk density.

Figure 2

In geopressured ("overpressured") zones, where fluid pressures are greater than normal hydrostatic pressure, velocities and bulk densities are generally lower than normal. Additionally, geopressured masses of rock (e.g., shale diapirs) can produce gravity minima. Density analysis for a region should include thorough research to see if any geopressure zones are known to exist.

Neutron Log

The neutron logging tool is sensitive to the hydrogen concentration in the formation. It gives fairly reliable liquid-filled porosities in non-shaly rocks. In shales, neutron porosities are higher than their actual values due to the presence of bound water (a hydrogen source). When gas is present, neutron-measured porosities are lower than their actual values, because gases have a relatively low concentration of hydrogen when compared with either water or liquid hydrocarbons. Older neutron logs read in "counts" and need to be calibrated to porosity; most modern neutron logs include a compensation for borehole effects and display porosity directly. Density calculations from neutron porosity require a knowledge of (or assumptions regarding) the matrix lithology. Your company log analyst or wireline service company representative would be familiar with this process.

Seismic Data

The relationship between P-wave interval velocity and bulk density (Gardner et al., 1974) makes it possible to obtain bulk density estimates from seismic data. We can use reflection seismic interval velocities or refraction layer velocities in conjunction with Gardner's relationship to estimate interval bulk densities. Additionally, by comparing the calculated gravity effect of a seismically identified structure with the observed gravity anomaly, we can often obtain information on density contrasts. If we know some background densities, then we can estimate absolute densities. This illustrates the potential of using gravity (or magnetic) data to help identify the lithology of unknown intrusions (e.g., igneous, salt, reef, etc.) and to produce a complete integrated geophysical interpretation.

Gravity Profiling Method

In areas where topography is not structurally or stratigraphically related, the gravity profiling method can yield reliable densities for rocks within topography. Remember, though, that topographic features are often geologically controlled, and that in such cases, the method may be unreliable. This method has been published several times by Nettleton. Figure 1 ( Nettleton density profiling method ) illustrates the use of the gravity profiling method.
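A hedged sketch of the profiling idea: try a range of Bouguer densities and keep the one whose Bouguer-corrected profile correlates least with topography. The elevations and gravity values below are synthetic, and the simple Bouguer factor of 0.01277 mGal per foot per g/cm3 is the conventional slab constant:

```python
# Sketch of Nettleton-style density selection over synthetic data.
def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va * vb else 0.0   # flat profile correlates with nothing

def best_bouguer_density(free_air_mgal, elev_ft, trial_densities):
    """Pick the trial density minimizing |correlation| of Bouguer gravity
    with topography; Bouguer correction = 0.01277 * rho * h mGal (h in ft)."""
    best = None
    for rho in trial_densities:
        bouguer = [g - 0.01277 * rho * h for g, h in zip(free_air_mgal, elev_ft)]
        r = abs(pearson_r(bouguer, elev_ft))
        if best is None or r < best[1]:
            best = (rho, r)
    return best[0]

# Synthetic hill whose near-surface rocks truly have density 2.2 g/cm3:
elev = [0, 100, 250, 400, 250, 100, 0]
faa = [0.01277 * 2.2 * h for h in elev]   # gravity due only to the topographic slab
trials = [round(1.8 + 0.1 * i, 1) for i in range(10)]   # 1.8 ... 2.7
print(best_bouguer_density(faa, elev, trials))   # recovers 2.2
```

With the correct density, the topographic signal cancels exactly; with any other density, the residual profile mirrors (or inverts) the hill.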

Figure 1

The method involves applying "elevation factors" that are based on free-air gravity corrections (to compensate for the effects of elevation) and simple Bouguer corrections (to compensate for the effects of topography or geologic structure). We select a Bouguer density that minimizes the correlation between topography and the Bouguer gravity. In Figure 1 , this corresponds to a Bouguer density of 2.2 g/cm3. This Bouguer density represents the average density of the near-surface rocks. What are the pitfalls of this method ( Refer to Figure 1 )?

a. A sonic log reads ∆T of 50 µs/ft. Calculate the interval velocity and use Gardner's Relationship to compute the interval density.

Figure 1

a. What is the interval velocity and interval density? Comment on the accuracy of this calculation ( Refer to Figure 1 ).

∆T = 50 µs/ft
Vint = 1,000,000/∆T = 20,000 ft/s
ρb = 0.23 (Vint)^0.25 = 0.23 × 11.89 = 2.74 g/cm3

This density would be approximately correct for a carbonate rock. However, anhydrite also has a velocity of 20,000 ft/s but a much higher density, 2.96 g/cm3. We would need local geologic knowledge or additional log data (e.g., in the form of a crossplot such as the one shown in Figure 1 ) to distinguish between the two cases.

b. A sonic log reads ∆T of 100 µs/ft.

Figure 1

b. ∆T = 100 µs/ft
Vint = 1,000,000/∆T = 10,000 ft/s
ρb = 0.23 (Vint)^0.25 = 0.23 × 10 = 2.30 g/cm3

Refer to Figure 2 .

Figure 2 shows that, at 10,000 ft/s, there is no large ambiguity between rock types, such as there was between carbonate rocks and anhydrite in part a of this problem. However, shaly rocks generally are slightly more dense than 2.30 g/cm3 (perhaps by about 0.04 g/cm3 on average), and sandy rocks are generally less dense than 2.30 g/cm3 (perhaps by about 0.08 g/cm3). Thus, our answer of 2.30 g/cm3 is within about 0.08 g/cm3 unless something is known about the shaliness of the rock. If it is known to be sand or shale, then we can make a more accurate density estimate.

Determination of Gravity Effects on Geologic Bodies

Infinite Slab Model

We can approximate a body of relatively constant thickness and very large areal extent by representing it as an infinite slab; that is, as a horizontal slab of material of thickness t that extends to infinity in all horizontal directions. Bodies that can be modeled in this manner include lava flows or sedimentary sections with a uniform thickness and flat dip over a large area. The infinite slab model assumes that the gravity effect is independent of depth, and does not take into account the Earth's curvature or its spherical coordinate system.

Thin 2-D Prism

We can approximate many geologic features using the thin 2-D prism model. As an approximation, we may consider a body as "thin" if its thickness does not exceed its depth of burial. This 2-D representation might be appropriate for an elongate geologic feature: the thin 2-D prism model is useful for anticlines, grabens, horst blocks, channel sands, or any other body which is linear in map view. We can easily model faults by assuming that the prism extends to infinity on one side of the profile. For more detailed modeling of elongate bodies, we can use "2 1/2-D" computer modeling (with end corrections to adjust for the non-infinite extent) or 3-D modeling. The mathematics of this simple graphical method is based on concentrating the mass of the prism as a horizontal "sheet of mass" at

the prism's average depth ( Figure 1 , Thin 2-D prism model ). To make the calculation graphically, we draw a cross section of the body with the depth, thickness, and width shown in natural (vertical = horizontal) scale. We then measure the plane angles with a protractor.

Figure 1

Thin Disk Model

Many geologic bodies are finite in areal and vertical extent, and roughly circular in map view. Examples might include igneous intrusives, pinnacle reefs, salt domes or structural domes. We can approximate the gravity effect of these bodies using the thin disk model (where thickness is less than depth), or by a series of thin disks stacked vertically. Such approximations can be useful for designing complex 2 1/2-D (with end corrections) or 3-D computer models. The attraction ∆g from an observation point is:

∆g = 2.03 ∆ρ t ω     (1)

where

ω = solid angle in radians
t = thickness of disk in thousands of feet
∆ρ = density contrast in disk, g/cm3

Figure 1 ( Solid Angles for Horizontal Circular Disks ) illustrates the thin disk model method and contains a solid angle table for use in doing the calculations.

Figure 1

This method assumes that the disk's mass is concentrated in a sheet at the average depth of the disk. We can use the following relationship to calculate the solid angle subtended by a disk for a point directly above the center of the disk:

ω = 2π(1 - cos φ)     (2)

where φ = plane angle between the line perpendicular to the disk from the center of the disk to the observation point and any line between the observation point and the edge of the disk, as illustrated by the dashed line around the midpoint of the disk.
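For a station on the disk axis, Equations 1 and 2 combine in a few lines (the Z, R, ∆ρ, and t values below are illustrative):

```python
# Sketch of the thin disk model for a station directly over the disk center.
import math

def on_axis_solid_angle(z_ft, r_ft):
    """Equation 2 on axis: omega = 2*pi*(1 - cos(phi)),
    with cos(phi) = Z / sqrt(Z^2 + R^2), Z = depth to the disk's average depth."""
    return 2.0 * math.pi * (1.0 - z_ft / math.sqrt(z_ft**2 + r_ft**2))

def thin_disk_dg_mgal(drho_g_cc, t_kft, omega):
    """Equation 1: delta-g = 2.03 * delta-rho * t * omega (t in kilofeet)."""
    return 2.03 * drho_g_cc * t_kft * omega

# Example geometry: average depth 6000 ft, disk radius 7500 ft,
# contrast 0.2 g/cm3, thickness 2000 ft:
w = on_axis_solid_angle(6000.0, 7500.0)
print(round(w, 2), round(thin_disk_dg_mgal(0.2, 2.0, w), 2))
```

Off-axis stations have no simple closed form, which is why the solid angle chart of Figure 1 is used for the rest of a profile.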

We would use the edge of the disk at the average depth of the structure. Equation 2 is useful for computing a structure's maximum gravity response without referring to a solid angle table.

2-D Computer Models

Talwani (1960 and 1965) published a method for calculating gravity and magnetic effects of two-dimensional bodies defined by polygon vertices entered in a cross-sectional view. EDCON has modified this method for use on personal computers, in the form of a program named GMOD. By way of illustration, we can refer to the following 2-D models:

• Figure 4 ( Thrust fault example, 2-D modeling ) shows a 2-D thrust fault model. Note the sharp gravity decrease at X = 12,000 ft, which is related to the thrust sheet edge in the subsurface.
• Figure 3 ( Salt anticline, 2-D vs. 3-D ) shows the gravity effect of a salt anticline.

Figure 3

In Figure 3 , the strike length varies from infinity (labeled "2-D") to 8, 4, 2, and 1 times the body width (labeled 8W, 4W, 2W, and W, respectively). Figure 3 is prepared with the cross section shown at the midway point of the axis of the body perpendicular to the profile. Note that at strike lengths of less than 8W, the two-dimensional assumption does not work very well. EDCON's rule of thumb is that the strike length should be equal to or greater than 10W before the 2-D assumption is really valid. For bodies with lengths less than 10 times width, it is best to use a 2 1/2-D model (with end corrections for the body geometry outside of the line of section) or 3-D modeling.

Figure 2 ( Warm Springs Valley interactive model, intermediate product ) shows an interim result of interactive modeling across Warm Springs Valley.

Figure 2

The residual gravity is shown by square "dots," and the calculated gravity effect of the model is shown by the smooth curve. The gravity values are calculated at the topographic elevations shown by the square dots in the model portion of the profile. Note the discrepancy between the computed and the residual anomalies over point A. Figure 1 ( Warm Springs Valley interactive model, a final solution ) shows a final solution with point A moved into its proper position such that the calculated effect of the gravity model fits the residual gravity.

Figure 1

Ex-1: Calculate the approximate gravity attraction of a 10,000 ft thick section of sedimentary rock having an interval velocity of 15,000 ft/s. This section rests on granite, which has an average density of 2.67 g/cm3. The sedimentary section lies in the middle of a large, gently-dipping basin. Assuming that the densities are properly computed, will treating the section as an infinite slab result in an over-calculation or under-calculation of the effect?

First, we need to determine the density contrast. From Gardner's empirical relationship, a 15,000 ft/s interval velocity corresponds to a bulk density of 2.54 g/cm3. Since granite has an average density of 2.67 g/cm3, the density contrast of the infinite slab of sedimentary section is -0.13 g/cm3.

Next, we determine the gravity effect of the infinite slab:

∆g = a (i.e., "acceleration of gravity," "gravity effect," "computed gravity") = 12.77 × ∆ρ × t

where t is in kilofeet, ∆ρ is in g/cm3 and ∆g is in mGal. Therefore,

∆g = 12.77 × -0.13 g/cm3 × 10 kilofeet = -16.6 mGal

Thus, an anomaly of about -16.6 mGal could be expected over this sedimentary basin. Since the calculation is made in the middle, where the thickness of the slab is the greatest, the calculation over-calculates the gravity effect somewhat (refer to Figure 1 ).
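A quick check of the Ex-1 arithmetic (sketch):

```python
# Infinite slab check for Ex-1: delta-g = 12.77 * delta-rho * t (t in kilofeet).
def gardner_density(vint_ft_s):
    return 0.23 * vint_ft_s ** 0.25

def infinite_slab_dg_mgal(drho_g_cc, t_kft):
    return 12.77 * drho_g_cc * t_kft

drho = gardner_density(15000.0) - 2.67   # roughly -0.12 to -0.13 g/cm3
print(round(infinite_slab_dg_mgal(-0.13, 10.0), 1))   # -16.6 mGal, as in Ex-1
```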

Figure 1

Ex-2: A fault having a displacement of 1,000 ft occurs at 2,000 ft depth below the surface; a low-density layer is present on the downthrown side of the fault which is absent on the upthrown side. Assuming a density contrast of 0.25 g/cm3 across the fault, graphically compute the effect of the fault. Then compute the effect if the fault were buried at 5,000 ft. What gravity criteria do you see which indicate the location and the depth of the fault? Refer to Figure 1 .

Figure 1

The procedure is as follows: First, draw the problem at a convenient and natural (i.e., vertical = horizontal) scale with the observation level in the same natural scale as the body. Next, dash in the average depth of the high-density mass on the upthrown side of the fault. Then, strike the angles between the horizontal (the average depth point on the upthrown side of the fault at infinity) and the average depth point on the fault plane itself. Next, from the thin 2-D prism formula, compute the "constant" in the formula:

∆g = a = 0.071 ∆ρ t (θ) = 0.071 (0.25 g/cm3)(1 kilofoot)(θ) = 0.0178 θ mGal, with θ in degrees

Multiply each of these angles by 0.0178, and then plot the result at the same horizontal scale as the geologic cross section. Repeat this process for the fault buried at 5000 ft.

Analysis: It is apparent that if the observation point were far enough west, a gravity value would be computed as follows:

∆g = 0.0178 (0°) = 0.0 mGal

If the observation point were far enough east:

∆g = 0.0178 (180°) = 3.20 mGal

The gravity directly over the fault is given by:

∆g = 0.0178 (90°) = 1.60 mGal

This is exactly half of the infinite slab amplitude. Note also that if we use the infinite slab formula, we compute the identical value as the observation point east at infinity:

∆g = 12.77 ∆ρ t = 12.77 (0.25 g/cm3)(1 kilofoot) = 3.2 mGal

We can observe from the gravity model that, for both the deep and the shallow fault models, the steepest gravity gradient and inflection of the gravity curve occur directly over the fault plane. We can also see that the shallower fault causes a steeper gravity gradient than the deeper fault. It can be shown mathematically that, for both the shallower and the deeper fault, the average depth of the fault is equal to the horizontal distance between the "half amplitude" point (over the fault plane) and either the "3/4 amplitude" or "1/4 amplitude" points. We can determine both the anomaly amplitude and shape, as well as the location of the gravity anomaly in relationship to the geologic body. This exercise illustrates the usefulness of models in defining the gravity response to a postulated or known geologic situation.

Ex-3: Compute the gravity effect of a basement dome, buried 5,000 ft and having structural relief of 2,000 ft above the surrounding basement rock. The diameter of the dome is 15,000 ft. The density contrast between basement and sedimentary rocks is 0.2 g/cm3. Compute the gravity effect along a profile through the axis of the dome. Refer to the solid angle chart shown in Figure 1 .

Figure 1

a. Define parameters for the solid angle chart (Figure 1). Depth = 5000 ft and Thickness = 2000 ft; therefore, Z = 6,000 ft (depth to the average depth). Since the diameter is 15,000 ft, R = 7500 ft (radius of disk). Therefore, Z/R = 6000/7500 = 0.80.

b. In Figure 1, read solid angles from the solid angle chart along the vertical line having the value Z/R = 0.80. Read the appropriate values of X/Z to determine the solid angles by interpolating between contours. Set up a table as follows:

X, ft:       0        6000    12000   18000   24000
X/Z:         0        1       2       3       4
ω, radians:  2.70     1.15    0.55    0.18    0.08
∆g, mGal:    2.18     0.94    0.45    0.15    0.06
Comments:    On Axis
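For the on-axis point, the solid angle of a disk has a simple closed form, which gives an independent check on values interpolated from the chart (chart readings are necessarily approximate, so the two will differ somewhat). The Python sketch below is illustrative; the function name is mine.

```python
import math

def disk_solid_angle_on_axis(z_ft, r_ft):
    """Solid angle (steradians) subtended by a horizontal disk of
    radius r_ft at a point on its axis, a height z_ft above it."""
    return 2.0 * math.pi * (1.0 - z_ft / math.hypot(z_ft, r_ft))

omega = disk_solid_angle_on_axis(6000.0, 7500.0)  # about 2.36 steradians
dg_on_axis = 0.81 * omega                         # about 1.9 mGal
```

A point directly on the disk (z = 0) subtends the full half-space, 2π steradians, which is a convenient sanity check on the function.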

ii. Determine the value of the constant for the disk as follows:

∆g = 2.03 ∆ρ(t)ω = 2.03 (0.2 g/cm3)(2 kilofeet) ω = 0.81ω mGal

iii. Then multiply each solid angle by 0.81 to complete the table.

c. Plot the result over the body (see Figure 1). Note that the body is symmetrical, so we only need to calculate one side of the body. Also note that this plot gives both the amplitude and shape of the gravity anomaly.

Gravity Data: Reduction and Processing

Introduction

A gravity anomaly is the difference between what one observes and what one expects from applying a simple Earth model. The purpose of gravity data reduction and processing is to produce gravity anomaly values that relate to subsurface geology more readily than the observed data. To accomplish this purpose, we correct for known or expected effects so that maps and profiles show only anomalous effects. The two most common products of data reduction are free-air anomalies and Bouguer anomalies.

A common misconception about the gravity data reduction process is that the gravity field is somehow "reduced to sea level" or some other datum. The concept that the reduction to Bouguer gravity produces a corrected field that is the same as the anomaly one would have observed at sea level is wrong. Several textbooks incorrectly describe gravity reduction as reduction to a datum, even though they correctly outline the mechanics of computing anomalies. Ervin (1977) points out the problem, while LaFehr (1991) reviews the question in detail and includes a list of correct and incorrect texts on the subject. The role of the reduction datum (usually sea level) is only to provide a starting point for computing effects of an assumed Earth model. Any depth estimation or modeling analysis of the normal free-air and Bouguer gravity anomalies must be referenced to the observation surface.

Many contend that the Bouguer anomaly is an interpretive product, because its purpose is to remove the effect of topography. To do this, we must correctly judge the density of topography. In rugged or even moderate terrain, particularly in areas of high elevation, the Bouguer anomaly map's appearance depends greatly on the choice of the Bouguer density. The choice of Bouguer density can therefore be a very important first step in interpretation.

Consider Figure 1 ( Comparison of Bouguer anomaly observed on the ground and at 2500 m elevation ), where Bouguer anomaly values have been computed for two sets of gravity observations over a hypothetical model.

Figure 1

One set of observations is on the terrain surface, and the other is at a constant elevation to simulate an airborne gravity profile. The sharper gradients of the Bouguer anomaly observed at the surface reflect the proximity of the truncated high-density layer compared to the airborne observations.

Station Gravity

Station gravity is the absolute value of gravitational acceleration (g), or gravity at some point or station on the Earth's surface. During the early part of the twentieth century, pendulums were used to establish networks of absolute gravity reference stations. Use of the Potsdam Gravity System was nearly universal from 1909 to 1971. Since about 1970, it has become increasingly practical to obtain more accurate absolute gravity measurements using weight-drop instruments. The old Potsdam gravity values are too high by about 14 mGal, and this has resulted in the adoption of a new reference system. The new system, known as IGSN 71 (International Gravity Standardization Net, 1971), corrects this error.

In exploration practice, gravity surveys are carried out using gravity meters. These spring-balance type instruments are capable of efficiently measuring small changes in gravity (∆g), although they cannot directly measure absolute station gravity. Station gravity has been established at several points over the Rocky Mountain Gravity Calibration Line near the Colorado School of Mines, using a weight-dropping apparatus that can measure absolute gravity to an accuracy of less than about 5 µGal. The value of the absolute gravity reference at the Colorado School of Mines is 979 571.073 mGal; the highest station, at Echo Lake on Mt. Evans, has a value of 979 256.131 mGal.

Gravity meters indicate readings in instrument counter units, but they require calibration. Readings are converted to milligals either by multiplying by a scale factor or by interpolating from a calibration table; calibration factors are typically close to 1 mGal/counter unit. Once the gravity meter reading is calibrated in milligals, a base constant is added to the calibrated reading, and we may obtain an absolute gravity value. So,

station gravity = (calibrated gravity reading) + (tide correction) + (base constant)

We can determine the base constant for a particular meter by reading the meter at a base where absolute gravity has been determined. The base constant varies with time and ambient instrument temperature. This variation is referred to as instrument drift. LaCoste and Romberg instruments (which are considered low-drift) drift on the order of one mGal per month at constant ambient temperature. A low-drift instrument has a nearly unvarying base constant. Successive occupation of a base station with an established gravity value is the common means of monitoring instrument drift and establishing the time variation of the base constant.

Keep in mind that established gravity values are values for gravity when the tidal effects of the sun and the moon are zero. Thus, even with a very accurate instrument capable of measuring the absolute value of gravity, it is necessary to correct for the tidal variation of gravity to obtain a tide-free station gravity value. There are two tidal peaks and two lows over the course of a day, and the tidal variation is as much as 0.3 mGal (peak to trough). Computer algorithms (e.g., Longman, 1959) are available to compute tidal variations in gravity as a function of time and station location.

It used to be common practice to correct for instrument drift and tidal variation all at once by interpolating linearly between base station readings. This practice was used to avoid the labor and cost of computing tide corrections from tables. It may be a satisfactory practice where survey accuracy tolerance is greater than 0.1 mGal, as long as the interval between base occupations is less than 6 hours (the average time between peak and low tidal effects). With the wide availability of computers, the most common and the best practice is to compute the tidal correction and correct for instrument drift separately.

Table 1 shows an example of data taken with a computer-nulled meter over a calibration range. For each reading, the table lists the station, time, meter reading (in calibrated units), calibrated reading (mGal), tide correction (mGal), tide-corrected gravity (mGal), drift (mGal), base constant (mGal), and station gravity (mGal); stations BASE, 95, 100, 95, 98, 95, and BASE were occupied between 08:16 and 12:06.

Table 1: Example of data taken with a computer-nulled meter over a calibration range

Drift was computed by assuming zero at the first base reading and linearly interpolating the increase in tide-corrected gravity reading from the first to the next, or last, base reading. Note that the repeated drift values at station 95 agree to within 2 µGal. This is unusually precise data; the example data were taken on a day when there was little ambient temperature variation. The example illustrates the correction sequence from a gravity meter measurement to the establishment of a station gravity value. For very precise surveys (e.g., precision tolerance of less than 0.1 mGal) it is important to protect the meter from large temperature changes, for example by keeping the meter shaded and not leaving it inside a hot vehicle. It is also good practice to record temperature at each station.

Corrections for Expected Variations

The correction for latitude reflects the expected increase in gravity with latitude, for two reasons: (1) increasing centrifugal force toward the equator, because the radius of Earth rotation decreases as the observation point nears the poles; and (2) decreased distance from the center of the Earth at the pole compared to the equator (polar flattening). The correction for elevation reflects the expected decrease in gravity with increase in elevation; the observation point is farther from the center of the Earth's mass at higher elevations.

The Bouguer Anomaly is the most common presentation of gravity measurements used in exploration. The intent of the Bouguer reduction is to use a simple, single-density model to separate topographically-related elements in the field from the geologic anomalies of interest. In rugged topography, the choice of Bouguer correction density becomes a critical decision affecting the interpretation of the Bouguer Anomaly map or profile.

Latitude Correction

Gravity decreases from about 983,000 mGal at the pole to about 978,000 mGal at the equator. About 3400 mGal of this decrease results from the difference in centrifugal acceleration; the rest is due to polar flattening. The International Gravity Formula (GRS67, Geodetic Reference System, 1967) for computing the expected value of gravity on the reference ellipsoid (a near-sea-level mathematical surface that closely approximates the Earth's shape) is given by

g(φ) = 978 031.846 (1 + 0.005 278 895 sin²φ + 0.000 023 462 sin⁴φ) (1)

where φ = latitude. Equation 1 was developed for use with IGSN 71, and is now used to reduce most exploration data. Prior to 1971, a substantial fraction of exploration data had been referenced to the Potsdam datum and the 1930 International Gravity Formula.
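Equation 1 is simple to evaluate numerically. The Python sketch below (function name mine) computes the GRS67 expected gravity; the value at 30 degrees latitude, g(30°) = 979 324.012 mGal, is used in a worked example later in this section.

```python
import math

def g_grs67(lat_deg):
    """Expected gravity (mGal) on the GRS67 reference ellipsoid (Equation 1)."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return 978031.846 * (1.0 + 0.005278895 * s2 + 0.000023462 * s2 * s2)

# g_grs67(0.0)  -> 978 031.846 mGal at the equator
# g_grs67(30.0) -> 979 324.012 mGal
```

Evaluating at the pole gives about 983,218 mGal, consistent with the roughly 5000 mGal equator-to-pole increase described above.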

The choice of a reference system for the latitude correction is of no practical importance for interpreting gravity data at the exploration scale (i.e., areas with dimensions of hundreds of miles, or less), except that surveys reduced using the two different gravity formulas will not tie together properly. For this reason, it is not unusual to see new data reduced using the old formula, just to avoid recomputation of the earlier data set. The 1930 International Gravity Formula is

g(φ) = 978 049 (1 + 0.005 2884 sin²φ − 0.000 0059 sin²2φ) (2)

Equation 1 was derived as an improvement of Equation 2.

Another common practice with old data sets is to compute a north-south gravity gradient for a latitude near the center of the area. We can do this by differentiating the latitude correction formula to give ω, the rate of change of gravity with latitude:

ω = 0.812 sin(2φ) mGal/km (3)

The maximum rate of change with latitude occurs at φ = 45 degrees: 0.812 mGal/km, or 0.008 mGal in 10 meters. For a survey to be accurate within 0.01 mGal, we must know the north-south coordinates to within 10 m. For practical purposes, we are interested in the relative changes of gravity, so the required coordinate accuracy is relative to a base rather than absolute. In the past, using the approximation rather than evaluating the International Gravity Formula resulted in time and cost savings. Computers are now so widely available that this is no longer the case. It is poor practice to use this approximation in reducing data, because it complicates integration with independently reduced data sets.

The Geoid, Elevation, the Ellipsoid, and GPS

The geoid is the sea-level equipotential surface. It is the average level of sea water after removing the effects of currents and tides; or, to put it another way, it is the level that sea water would have in imaginary canals dug underneath the continents. Conventional surveying methods (spirit leveling) and inertial surveying determine elevations relative to the geoid. All density variations within the Earth, including the crust both above and below sea level, have an effect on geoid shape.

The reference ellipsoid is a mathematical surface constructed to approximate the geoid. Scores of such surfaces have been derived for use in mapping. Sometimes ellipsoids are derived as best fits for regions (e.g., North America or Europe), while ellipsoids such as that developed for GRS67 and WGS 84 are meant to fit the geoid for the entire world. By convention and for convenience, we reference the values computed from a gravity formula, such as the GRS67 International Gravity Formula, to sea level rather than to the height of the ellipsoid.

GPS (Global Positioning System) is the satellite-based navigation system maintained by the United States Department of Defense, and is increasingly used to position geophysical surveys. GPS positions are relative to the World Geodetic System 1984 Ellipsoid. GPS surveying therefore poses a new hazard for this type of mix of survey reference datums: mixing elevations measured from a sea-level datum and those measured from an ellipsoidal datum in the same survey area would result in unacceptable errors between stations. Apart from this, the distinction has no practical impact on the geologic interpretation of gravity maps for exploration purposes.

The WGS 84 Earth Gravitational Model (EGM) is based on a very large number of gravity measurements in a worldwide gravity database, together with observations of satellite orbital characteristics resulting from Earth's shape and distribution of mass. The WGS 84 Geoid is calculated from the spherical harmonic expansion of the gravitational potential. Deviations of the WGS 84 Geoid
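The north-south gradient and its implication for positioning accuracy can be checked in a few lines of Python (function name mine):

```python
import math

def ns_gradient_mgal_per_km(lat_deg):
    """Equation 3: north-south rate of change of expected gravity, mGal/km."""
    return 0.812 * math.sin(math.radians(2.0 * lat_deg))

# At 45 degrees latitude the gradient peaks at 0.812 mGal/km, i.e. about
# 0.008 mGal per 10 m of north-south position error, which is why a
# 0.01 mGal survey needs north-south coordinates good to roughly 10 m.
```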

from the WGS 84 Ellipsoid have been calculated up to order 180. Figure 1 ( Departures of WGS 84 geoid from WGS 84 ellipsoid, calculated to order 18 ) shows the departures of the geoid from the WGS 84 Ellipsoid calculated to order 18.

Figure 1

The figure shows the relief of the longer wavelength lumps in the geoid; the lumps at shorter wavelengths are smaller in amplitude. The DMA Technical Report (DMA, 1991) describes the reduction procedures that must be followed for reducing gravity with respect to the WGS 84 reference for geodetic purposes. Because of the proliferation of ellipsoidal models and elevation datums, explorationists must be careful to document their survey base references. The choice of elevation datum will not affect the geologic interpretation of an anomaly map as long as the datum is consistent within the prospect.

Even though the geoidal surface is bumpy and reflects the gravity anomalies that are the "targets" of an exploration survey, the relative amplitude of geoidal relief at wavelengths of detailed exploration interest is almost unmeasurable. Sandwell (1992) describes a method to compute gravity anomalies from a network of geoid height measurements at sea using satellite altimetry. A simple relationship we can see from his work is that, for a geoid height anomaly with a wavelength λ and amplitude ∆h, the associated gravity anomaly is given by

∆g = 2πg (∆h/λ) (4)

where g = 980 000 mGal, ∆h is the geoid height above the ellipsoid, and λ is the anomaly wavelength; ∆h and λ must be in the same units. If we solve the relationship for ∆h, we find that the bump in the geoid associated with a 10 mGal gravity anomaly with a wavelength of 10 km is just 16 mm. Because the bump is even smaller for lower-amplitude anomalies with shorter wavelengths, the conventional practice of using a sea level reference, rather than the theoretically correct ellipsoidal reference for elevation, is justified and has no practical effect on results at exploration or engineering scales of investigation. The effect of referencing to the geoid rather than the spheroid is a very small warp in the regional gradient of the field. These small adjustments have no practical impact on interpreting anomalies of exploration interest.

Free-Air Correction

The inverse square law (Equation 1) predicts that above the Earth's surface, gravity will decrease with separation distance from its center. Because gravity survey operations span a very small fraction of the average radius of the Earth, we can treat the local rate of change of gravity with elevation as a constant. The adopted constant is F = 0.3086 mGal/m in free air. That is, gravity is expected to decrease by 0.3086 mGal for every meter of increase in elevation. This constant is the first term in a Taylor-series expansion of the rate of decrease in gravity with increasing distance from the center of the Earth. TV towers are practically massless, so we would expect that gravity measured at the base and the top of a number of television towers would closely reflect this normal free-air gradient.

Robbins (1981) has published very small adjustments in F, varying with altitude and latitude, primarily for use in borehole gravity data reduction. Some researchers have recommended larger, local adjustments of F, based on measured values of the vertical gravity gradient, but this intended refinement to the reduction process is likely to cause distortion of anomalies and should not be adopted as a standard reduction procedure.

The free-air anomaly reflects any anomalous mass, including the mass of all the rock underlying the topographic surface, taking no account of any mass between the computation datum (usually sea level) and the observation point. As a result, free-air anomaly maps show a lot of similarity to topographic maps where relief is significant.

The most critical source of error in land gravity surveys is the measurement of elevation at the survey site. It is the surveying of elevations between gravity stations that is the most time-consuming and expensive part of any land gravity survey. To accurately predict gravity at the elevation of the gravity observation, we must know the elevation rather precisely. To achieve a free-air correction accuracy of 0.01 mGal, we must measure elevation to an accuracy of 0.01 mGal / (0.3086 mGal/m), which is 3.2 cm, or just over an inch. Table 1 lists vertical and horizontal survey requirements for a range of specified free-air anomaly accuracy. The horizontal accuracy requirement reflects the rate of change in the latitude correction, which is a maximum at a latitude of 45 degrees.

Specified Anomaly Accuracy:  0.01 mGal      0.10 mGal      0.50 mGal
Vertical Accuracy:           3.2 cm/1 inch  32 cm/1 ft     160 cm/5 ft
Horizontal Accuracy:         10 m/30 ft     100 m/300 ft   500 m/1600 ft
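Both rules of thumb above follow from one-line calculations; the sketch below (Python, function names mine) reproduces the 3.2 cm elevation tolerance and the 16 mm geoid bump from Equation 4.

```python
import math

F = 0.3086  # free-air gradient, mGal/m

def elevation_tolerance_m(anomaly_mgal):
    """Elevation accuracy needed for a given free-air correction accuracy."""
    return anomaly_mgal / F

def geoid_bump_m(dg_mgal, wavelength_m, g_mgal=980000.0):
    """Equation 4 solved for dh: geoid bump for a given gravity anomaly."""
    return dg_mgal * wavelength_m / (2.0 * math.pi * g_mgal)

# elevation_tolerance_m(0.01)  -> about 0.032 m (3.2 cm)
# geoid_bump_m(10.0, 10000.0)  -> about 0.016 m (16 mm)
```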

Table 1: Required Vertical and Horizontal Survey Accuracies

Given a desired level of gravity anomaly accuracy, the values in Table 1 should be taken as minimum tolerance specifications, even though we could view the vertical accuracy requirements as being mitigated by the Bouguer slab correction. As we will see in the next section, the Bouguer slab correction results in a combined elevation factor which is about 30 percent less than the free-air elevation factor.

Bouguer Correction

The objective of the Bouguer correction is to produce an anomaly map that indicates subsurface density variations caused by geologic structure. The Bouguer correction is designed to compensate for the attraction of rock between sea level and the observation point. Remember that the starting point in predicting gravity is a latitude formula such as Equation 1, which predicts gravity for a smooth, sea-level Earth. Adding a hill or plateau of rock under our TV tower will result in an increase in gravity. In essence, by using the Bouguer correction, we are attempting to further refine the estimate of expected gravity at the observation point by modeling the gravitational attraction of topography.

The Bouguer correction is the most complicated and interpretive of the corrections to observed gravity. The density used in the correction is called the Bouguer density. In progressing from the free-air anomaly map to the Bouguer anomaly map, our hope is that the resultant Bouguer map will be free of obvious correlation with topography, unless the topography directly correlates with subsurface geology. The legend of a Bouguer anomaly map should always specify the assumed Bouguer density.

Bullard (1936) conceived of the Bouguer correction as a series of three steps, where the last two steps apply only in rugged terrain:

1. Bouguer slab correction or "Simple Bouguer correction"
2. Bullard "B" correction
3. Bouguer terrain correction or "Complete Bouguer correction"

Bouguer Slab Correction and Simple Bouguer Gravity

The common, simple Bouguer anomaly is based on using an infinite slab of some density to approximate the gravitational effect of the rock between the datum (sea level or the ellipsoid) and the station. Where the slab approximation is good enough, the data reduction process becomes simple. Areas of flat terrain in Texas, Louisiana, and much of the Middle East are such that further refinement of the Bouguer correction would be inconsequential. Hundreds of thousands (even millions) of gravity stations have been reduced no further than the simple Bouguer correction, with no practical loss in utility of the data. In areas with more topography, the refinements become more important, but the computational work load is greatly increased.

The attraction of the Bouguer slab is:

B(ρ) = 2πG ρBouguer h (5)
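Evaluating Equation 5 in convenient units gives the familiar 0.04191 mGal per g/cm3 per meter used below; the Python sketch uses the value of G quoted earlier in this document, and the function names are mine.

```python
import math

G = 6.670e-11  # universal gravitational constant (value used earlier), SI units

def bouguer_slab_mgal(rho_g_cm3, h_m):
    """Equation 5: attraction of an infinite slab, in mGal (1 mGal = 1e-5 m/s^2)."""
    rho_si = rho_g_cm3 * 1000.0  # g/cm3 -> kg/m3
    return 2.0 * math.pi * G * rho_si * h_m / 1.0e-5

def elevation_factor_mgal_per_m(rho_g_cm3, f=0.3086):
    """Combined free-air and Bouguer slab elevation factor."""
    return f - bouguer_slab_mgal(rho_g_cm3, 1.0)

# bouguer_slab_mgal(1.0, 1.0)       -> about 0.04191 mGal
# elevation_factor_mgal_per_m(2.67) -> about 0.1967 mGal/m
```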

In mGal,

B(ρ) = 0.04191 ρBouguer h

where ρBouguer = Bouguer density (in g/cm3) and h = elevation above sea level in meters. For h in feet, B(ρ) = 0.01277 ρBouguer h.

Given a Bouguer density, we can combine the free-air correction and the Bouguer correction into a single elevation factor:

Elevation factor = F − B(ρ) = 0.3086h − 0.04191 ρBouguer h (h in meters)

or

Elevation factor = F − B(ρ) = 0.09406h − 0.01277 ρBouguer h (h in feet)

When working with elevations in feet, a Bouguer density of 2.67 g/cm3 results in an elevation factor of 0.06 mGal/ft. In the 1940s and 1950s, when "computers" were essentially mechanical adding machines, data reduction was simplified by a single-digit elevation factor, which probably explains why 2.67 g/cm3 has been such a popular choice for Bouguer density. 2.67 g/cm3 is also a good average density for continental crust.

Simple Bouguer Gravity for Stations Underwater or Underground

To compute a Bouguer Anomaly for a station underwater, we use the same approach: compute expected gravity and subtract it from what is observed. The latitude formulas each predict a value for gravity at the surface of a sea-level Earth. The formulas assume that rock rather than water underlies the observation station. At sea, because the density (and gravitational attraction) of water is less than rock, we would expect gravity to be less than the value computed from the latitude formula.

Consider the following example: Suppose we want to predict the value of gravity at the shore of the Dead Sea. The elevation of the Dead Sea is -400 m. Just to make it easy, we'll move the Dead Sea about a hundred miles south to a latitude of 30 degrees. We already know that at that latitude gravity should be g(30°) = 979 324.012 mGal at sea level. A 400 m TV tower would be handy for checking our results.

a. What should the gravity be at the top of the TV tower just above the Dead Sea shore? (Hint: you would notice, if you were at sea level on the top of the tower, that there is a lot of rock missing between you and the bottom of the tower; assume the missing rock has a density of 2.67 g/cm3.)

b. What will the gravity be at the base of the TV tower?

c. What would the gravity be at the bottom and the top of the tower if the tower were inundated with sea water having a density of 1.03 g/cm3?

d. What if we were somewhere else at a latitude of 30 degrees, the ground elevation was at sea level, and our gravity meter was 400 m down in a borehole where the rock above the meter has a density of 2.67 g/cm3? What is gravity at the top of the borehole?

Solutions: The principle for computing expected gravity is well illustrated by the underwater station. Starting with the predicted value from the latitude formula, g(30°), we expect an increase in gravity with depth due to F, the free-air gradient; the upward attraction of the water leads to a further decrease in predicted gravity, and the absence

of rock between sea level and the water bottom leads to an expected decrease in predicted gravity. Where depth d is positive downward, the expected change in observed gravity due to water depth is:

g(d) = +0.3086d − 2(0.04191 ρwater)d

Notice that the solution for the borehole case is identical, except that there is rock instead of water, so ρwater is replaced by ρrock.

For the TV tower in air, a gravity value would be computed as follows:

Top of Tower: 979324.012 − (0.04191 × 2.67 g/cm3 × 400 m) = 979279.252
Base of Tower: 979279.252 + (0.3086 mGal/m × 400 m) = 979402.692

For the other cases, the solutions are as follows:

Sea surface: 979324.012 − (0.04191 × (2.67 − 1.03) g/cm3 × 400 m) = 979296.519
Sea bottom: 979296.519 + (0.3086 mGal/m × 400 m) − 2 × (0.04191 × 1.03 g/cm3) × 400 m = 979385.425

On the surface at the top of the borehole (Elevation = 0, Depth = 0): 979324.012
In the borehole (Elevation of top = 0, Depth = 400 m): 979324.012 + (0.3086 mGal/m × 400 m) − 2 × (0.04191 × 2.67 g/cm3) × 400 m = 979357.932

The solutions are summarized below:

Medium of Gravity Measurement | Elevation (h) | Depth (d) | Gravity Value | Elevation/Depth Factor
Air (TV tower, top)           |    0          | 400       | 979 279.252   | −0.1119d
Air (TV tower, base)          | −400          | 400       | 979 402.692   | −0.1967h
Sea Water (surface)           |    0          | 400       | 979 296.519   | −0.06873d
Sea Water (bottom)            | −400          | 400       | 979 385.425   | +0.1535d
Borehole (surface)            |    0          | n/a       | 979 324.012   | n/a
Borehole (ρ = 2.67 g/cm3)     |  n/a          | 400       | 979 357.932   | +0.08480d
Borehole (ρ = 2.489 g/cm3)    |  n/a          | 400       | 979 364.012   | +0.100d

Note that the factors give the predicted change in gravity with elevation or depth, which can then be added to the value computed from the latitude formula for each situation. The last entry above suggests a rule of thumb: gravity will increase with depth in a borehole by about 1 mGal per 10 m. The simple Bouguer anomaly for each of the above observation points is observed station gravity minus the expected gravity value we just computed.

Bullard B Correction

The Bullard B correction is the difference between the effect of an infinite slab and a spherical cap on the Earth's surface ( Figure 2 , Bouguer slab and Bullard cap with the neglected "triangle"; R0 = Earth's average radius; R = actual local Earth radius; α = angle subtended by the truncating Bullard radius ).

Figure 2
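The worked values above can be verified directly; the short Python script below recomputes each expected gravity from g(30°), F, and the slab factor (variable names are mine).

```python
G30 = 979324.012   # g(30 deg) from the latitude formula, mGal
F   = 0.3086       # free-air gradient, mGal/m
B   = 0.04191      # slab factor, mGal per (g/cm3 * m)

top_air  = G30 - B * 2.67 * 400.0                   # (a) 979 279.252
base_air = top_air + F * 400.0                      # (b) 979 402.692
sea_surf = G30 - B * (2.67 - 1.03) * 400.0          # (c) 979 296.519
sea_bot  = sea_surf + (F - 2.0 * B * 1.03) * 400.0  # (c) 979 385.425
borehole = G30 + (F - 2.0 * B * 2.67) * 400.0       # (d) 979 357.932
```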

The Bullard B correction has been the subject of much recent discussion. LaFehr (1991a) advocates the routine inclusion of the Bullard B correction in gravity data reduction. Whitman (1991) developed an approximation for the Bullard B correction and a revised Bouguer slab formula that includes the correction:

(6)

where H = h/(R0 + h), R0 = 6371 km (the average radius of the Earth), and α = 0.026 radians (the angle subtended by the truncating Bullard radius of 167 km). By setting b = 1 in Equation 6, we can use this relationship to replace the simple slab formula (Equation 5) and obtain a Bouguer correction that includes the Bullard B correction. We can obtain the Bullard B correction by evaluating Equation 6 with b = 0.

The Bullard B correction is solely a function of elevation; it ranges in value from about -1.5 to +1.5 mGal over an elevation range of 0 to 5000 m. The gradient of the correction ranges from zero to less than 0.0015 mGal/m; we can think of this maximum gradient as equivalent to a Bouguer correction density uncertainty with a maximum value of 0.036 g/cm3. In a practical sense, we rarely know the Bouguer density this precisely, although we know the rock densities making up topography are often not even approximately constant. Including the correction would make no practical difference in the outcome of any interpretation. This correction to the Bouguer slab correction has been neglected in commercial work; whether it is theoretically more correct to include it is another matter. In this writer's experience, the correction appears to be inappropriate in most topographic situations unless terrain corrections beyond a radius of 167 km are carried out.

Terrain Corrections

In flat terrain, the Bouguer slab model is usually satisfactory for removing unwanted topographic effects. For complicated topography, we need to model the topographic volume from surface to sea level (or below) and compute the gravity effect of topography at our station. This computation almost always assumes a constant Bouguer density. The terrain correction is the change we have to make to a slab to make it look like the actual terrain ( Figure 3 , Bouguer slab for the station X, shown with the terrain. Not to scale ). Because terrain corrections are corrections to a slab model, they tend to be more difficult to intuitively understand than the directly computed effects of hills or valleys.

Figure 3

Terrain corrections are computed for the hachured section shown in Figure 3. Terrain corrections for land gravity stations are always positive (i.e., terrain effects are negative). We can see this by considering a station on the flank of a mountain, as shown in Figure 3. On the valley side, the slab correction overestimates the contribution to gravity at the station; that is, the value of expected gravity at the station is less than that predicted by the Bouguer slab. On the mountain side, the effect of mass above the station exerts an upward attraction that also reduces the expected value of gravity at the station.

Before the widespread use of computers, determining terrain corrections involved using templates of segmented concentric rings that would be laid over a map of topography and centered on a gravity station location to permit estimation of the average elevation for each segmented compartment. Two such templates were widely used for decades: the Hammer chart and the Hayford-Bowie chart. Nettleton (1940, 1976) and Dobrin (1988) describe the parameters for constructing Hammer charts, while Swick (1942) describes parameters for the Hayford-Bowie chart. Figure 4 ( Use of terrain chart with topographic map: (a) terrain chart overlying topographic map, (b) enlarged view of a single zone ) gives an example of the use of such a chart. An important feature of both charts is that the ring segments become smaller as they get closer to the station, reflecting the need for more accurate knowledge of the topography close to the station. Clearly, there is very good reason for making terrain corrections in this way.

Historically, terrain corrections were laborious, and were done only when absolutely demanded by topography. For example, for surveys over valleys or basins, terrain corrections might be neglected for stations on the valley floor, but stations in the foothills and in the mountains would require them. It was also common practice to economize by neglecting the effect of terrain far from the station. Computer studies of terrain effects have revealed that this practice can cause unanticipated anomaly distortion in some rugged survey areas.
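The attraction of one chart compartment (a 1/n sector of a flat-topped annular ring whose mean elevation differs from the station by Δz) has a simple closed form, which underlies both charts. The Python sketch below uses the slab constant from Equation 5; the function name and parameter choices are mine.

```python
import math

TWO_PI_G_RHO = 0.04191  # 2*pi*G per unit density, mGal per (g/cm3 * m)

def compartment_correction_mgal(r1_m, r2_m, dz_m, n_sectors, rho=2.67):
    """Terrain correction (mGal) for one ring compartment between radii
    r1_m and r2_m, with station-to-terrain elevation difference dz_m."""
    ring = (r2_m - r1_m
            + math.hypot(r1_m, dz_m)
            - math.hypot(r2_m, dz_m))
    return TWO_PI_G_RHO * rho * ring / n_sectors

# The correction is zero for flat terrain (dz = 0) and positive otherwise,
# consistent with terrain corrections always being positive.
```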

Digital models of topography greatly simplify computing terrain corrections, and digital topography is becoming available for a wider range of countries at increasingly detailed grid intervals. There are a number of computer methods for computing terrain corrections; Plouff's (1966) and Krohn's (1976) are two of the most widely used. Cogbill (1991) discusses refining earlier computer techniques and applying the digital elevation models that can be purchased in the United States from the National Cartographic Information Center. Computer methods of computing terrain corrections have led to more accurate corrections and have enabled careful study of the sources of error. Sources of terrain correction error stem from the following:

1. Inaccurate sources of terrain data. Terrain inaccuracies close to the station have relatively large effects compared to poor terrain data farther away. We must take extreme care to avoid errors of up to 0.5 mGal in rugged terrain. This is especially true of the "inner zone" terrain corrections within 2 km of the station, where digital terrain models and computer terrain correction programs are highly beneficial.

2. Inadequate terrain models. Krohn (1976) showed that the flat-topped model elements used in the segmented compartments of terrain correction charts lead to systematic overestimates of the terrain correction. Smaller model elements and terrain elements with sloped tops have both proven to produce more accurate terrain corrections. For example, for terrain within 200 m of a station, estimating the slope to be 20 degrees rather than its actual 30 degrees will lead to a 0.5 mGal error. Neglecting near-zone terrain measurements can lead to even larger errors.

3. Poor knowledge of terrain density. Another obvious source of error is significant variation in terrain density from the assumed Bouguer density.

4. Far-zone terrain corrections. Terrain corrections beyond a distance that is a few times the wavelength of anomalies of interest commonly have been neglected in order to save effort, often without appreciable distortion of the anomalies of interest. If the terrain corrections from the far zones are relatively unvarying over an area comparable to the size of the anomalies of interest, the expense of terrain correction to the outer zones may not be justified. We can usually evaluate this effectively using profile modeling to gain an idea of the amplitude and rate of variation of terrain effects in comparison to target anomalies. However, a terrain effect that is smooth and unvarying when evaluated at a single elevation may vary significantly at varying station elevations. In rugged terrain where there is substantial vertical relief between adjacent stations, neglecting far-zone terrain can lead to significant errors from station to station. LaFehr (1991) points out that this is a general phenomenon of distant regional effects. Given the wide availability of digital terrain models, the relative cost of performing terrain corrections to the Hayford-Bowie radius of 167 km is much less than it once was. LaFehr (1991) has proposed adopting a terrain-corrected radius of 167 km as a data reduction standard. The advantage of adopting such a standard is that it would help to assure adequate terrain corrections in all circumstances, and would facilitate the integration of data from diverse survey sources where the reduction standards had been followed.

An alternative and natural way to proceed, given the power of computer modeling, is to compute the effects of the geometric shapes of hills and valleys directly and make no Bouguer slab (or Bullard B) correction at all. For some problems, making the Bouguer correction this way can be more effective than dealing with corrections to the Bouguer slab. Some have objected to such complexity at the data reduction stage, because constructing such a model clearly involves making a geologic interpretation. However, it is a good idea to recognize that the choice of Bouguer density is itself interpretive, and the Bouguer anomaly map is therefore a first step in the interpretation process. Others, including Vajk (1956), have suggested Bouguer corrections involving complex density models for topography. However complex or simple the assumptions for terrain may be, we should take advantage of the ease of computing and evaluate the effect of varying density assumptions in the context of the actual survey and survey target. Lakshmanan (1991) has proceeded in this way to construct the complex density models required for analyzing microgravity surveys for engineering and archeological applications. One of his studies located a hidden chamber in the pyramid of Cheops in Egypt. His model of expected gravity included varying densities for the various blocks of differing rock types (granite and porous limestone) used in constructing the pyramid.

Corrections for Marine and Airborne Gravity
Measuring gravity from a moving vehicle in a dynamic environment poses problems and requires corrections that we do not encounter for land or underwater gravity measurements. Once we make the necessary instrumental and dynamic corrections, we can correct marine and airborne gravity for free-air and Bouguer gravity effects just as we correct station gravity values for static measurements. Unlike station gravity on a land survey, processed station gravity values from a marine or airborne gravity profile have been subjected to high-cut filtering, which places a limit on the spatial wavelengths that can be resolved.

The Eötvös Correction
The Eötvös effect is the vertical component of the Coriolis force; it is a change in vertical acceleration that will affect any moving vehicle as a result of the change in centrifugal acceleration. The effect may increase or decrease measured gravity, depending on speed and direction (Figure 1 and Figure 2).

Figure 1 .

The Eötvös correction is given by:

∆gEötvös = 2ωV cosφ sinα + V²/R

where ω is the rotation rate of the Earth, V is speed, φ is latitude, α is course (azimuth, measured clockwise from geographic north), and R is the radius of the path over the Earth (very close to the radius of the Earth). For speed in knots, the Eötvös correction in mGal is:

(1a) ∆gEötvös = 7.503 V cosφ sinα + 0.0042 V²

When traveling east (i.e., in the direction of the Earth's rotation), the effect is negative, because the effective centrifugal acceleration acting on the gravity meter is increased and the downward pull of gravity is decreased: the gravity meter feels lighter. Small variations in the vehicle's eastward velocity therefore cause variations in the instrument reading, which we must correct. On north-south lines, small variations in course result in large Eötvös variations. For east-west lines, course is less critical, and speed changes lead to the variations given by Equation (1b).
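Equation (1a) is easy to evaluate directly. The sketch below implements it as written (speed in knots, result in mGal) and checks the sensitivity quoted later in the text: a 0.1-knot change on an eastward course near the equator moves the correction by roughly 0.75 mGal (the 10-knot ship speed is an assumed example value).

```python
# Sketch of the Eötvös correction of Equation (1a).
import math

def eotvos_mgal(speed_knots, course_deg, latitude_deg):
    """delta_g = 7.503 * V * cos(phi) * sin(alpha) + 0.0042 * V**2, V in knots."""
    phi = math.radians(latitude_deg)
    alpha = math.radians(course_deg)  # azimuth clockwise from geographic north
    return 7.503 * speed_knots * math.cos(phi) * math.sin(alpha) \
        + 0.0042 * speed_knots**2

# Sensitivity to a 0.1-knot speed change, eastward course (alpha = 90 deg),
# at the equator (phi = 0):
delta = eotvos_mgal(10.1, 90.0, 0.0) - eotvos_mgal(10.0, 90.0, 0.0)
print(f"change for 0.1 knot: {delta:.3f} mGal")
```

The first (course-dependent) term dominates at ship speeds; the V² term contributes well under 1 mGal at 10 knots.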

The second term in the formula is independent of course and is nearly negligible for marine measurements. The Eötvös effect is by far the single largest source of error for shipborne gravity measurements. Typical survey ship speeds are 5 or 10 knots (150 to 300 meters per minute). A momentary change in eastward or westward speed of just 0.1 knot near the equator results in a 0.75 mGal change in the gravity reading. The wavelength and character of variations in Eötvös typically cannot be separated from geologic signal by simple filtering. Accurate measurement of the Eötvös correction therefore demands high precision navigation data, and GPS navigation has recently proven capable of delivering much more accurate velocity measurements than previous shore-based navigation systems. Herring (1985) and Hall and Herring (1991) have presented results of high-resolution field tests that demonstrate the increased spatial resolution possible using the direct velocity measurement capability of GPS. The Eötvös correction is also important for airborne measurements, although vertical acceleration corrections are more significant there.

Vertical Acceleration Correction
Commercial gravity surveys using stabilized platform gravity meters were first successfully carried out on ships. Marine acquisition is plagued by strong vertical accelerations, which result from sea swells and waves and can amount to 10,000 to 100,000 mGal. However, the time to traverse the spatial wavelength of typical geologic anomalies is very long compared to the period of ocean wave motion (i.e., 7 to 15 seconds), and it turns out, perhaps surprisingly, that the effect of the sea's vertical motion at periods of a few minutes amounts to less than 1.0 mGal, even in rough sea conditions. Airborne gravity measurements have been successfully used for regional exploration problems for several years, but suffer from long-period uncertainties in aircraft elevation. Although the short-period vertical accelerations typically experienced under survey conditions in an aircraft are not as great as on board ship, an airplane does not inherently maintain long-term stability of elevation. Successful airborne gravity surveying demands highly accurate altitude control and measurement.

The vertical sinusoidal motions that will give rise to vehicle accelerations of 1.0 mGal are plotted in Figure 3. This figure gives us a means of estimating the shortest wavelength anomaly that we could expect to resolve at a given level of vertical motion uncertainty. For example, control of the vertical acceleration correction to a tolerance of 1.0 mGal at a wavelength of one minute would require us to control or know the aircraft's relative elevation to a precision of less than 1 mm (or, alternatively, we would have to know the aircraft's vertical velocity to a precision of 0.01 cm/sec). Sensitive altimeters measure relative changes in elevation on aircraft and give results that allow resolution of a few mGal at periods of a few minutes. The vertical acceleration correction is the current limiting factor on airborne gravity resolution and accuracy.

Cross-Coupling Errors and Corrections
All types of shipborne gravity meters are subject to cross-coupling errors caused by the interactions of the effects of accelerations on the gravity meter or stabilized platform (LaCoste, 1967). Figure 4 shows an example of a cross-coupling error. Cross-coupling errors can occur only when the accelerations have the same periods and there are systematic phase relations between the accelerations; otherwise the errors will average to zero.
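The 1-mm figure follows from simple harmonic motion: for sinusoidal motion of period T, peak acceleration is (2π/T)² times peak displacement. The sketch below reproduces that arithmetic (the only assumption is the sinusoidal-motion model itself, which is what Figure 3 plots).

```python
# Sketch of the sinusoidal-motion arithmetic behind the quoted tolerances.
import math

def amplitude_for_acceleration(accel_mgal, period_s):
    """Peak displacement (m) of sinusoidal motion whose peak acceleration
    equals accel_mgal, using a = (2*pi/T)**2 * A."""
    a = accel_mgal * 1e-5              # mGal -> m/s^2
    omega = 2.0 * math.pi / period_s
    return a / omega**2

A = amplitude_for_acceleration(1.0, 60.0)  # 1.0 mGal at a one-minute period
v = A * (2.0 * math.pi / 60.0)             # corresponding peak vertical velocity
print(f"amplitude ~ {A*1000:.2f} mm, velocity ~ {v*100:.4f} cm/s")
```

The result is roughly 0.9 mm of displacement and 0.01 cm/s of velocity, consistent with the precision requirements stated above.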

In Figure 4, the platform is off level by an angle b, and the component e is the cross-coupling error. Such stabilized platform errors occur when the platform's center of gravity (including the gravity meter) is not accurately on the platform's axes of rotation. For example, if the center of gravity is off horizontally, vertical accelerations will cause errors, and if the center of gravity is off vertically, horizontal accelerations will cause errors. If the sensitive axis of the gravity meter is in the direction shown in the figure, the gravity meter will detect a component of horizontal acceleration given by

(2) e = k ax

where k is a constant and ax is the horizontal acceleration. This error will eventually average out to zero if there is no systematic correlation between ax and b, but there will usually be correlation. In the case we are considering, let us assume that b is caused by the vertical acceleration av. In this case, b will be approximately proportional to av, and Equation 2 becomes

(3) e = k ax av

where k is still a constant, but probably a different value than that of Equation 2. Experience has shown that there is often a strong correlation between the horizontal and vertical accelerations of ships; in fact, the water particle motion in waves is roughly circular. Therefore we can expect Equation 3 to give a systematic cross-coupling error whose magnitude depends on the phase difference between the accelerations.

We can correct for cross-coupling when we adjust the gravity meter system and also when we process the data. In the manufacturing of gravity meters and their associated stabilized platforms, cross-coupling errors are corrected for as closely as possible. However, rough treatment during use or transport, inadequate maintenance, and other causes sometimes degrade gravity meter performance. LaCoste (1973) has devised a method to improve the cross-coupling corrections to match meter performance in the field, and routine processing to improve the cross-coupling correction has been common since the late 1970s. The principle is simple. It is based on the premise that observed gravity should not systematically correlate with any combination of ship accelerations.

The following example shows how the method works. Let us consider a gravity meter that works perfectly except for having an error in the form of Equation 3. Since we can measure the instantaneous accelerations ax and av, we can multiply them together to obtain a "monitor" which we can use to check and to correct the observed gravity. We know that if there were no errors in observed gravity, the gravity profile should not correlate systematically with the monitor profile. In other words, they should not have the same shapes. If they do, there is an error in observed gravity. To determine the size of the error, we must determine what fraction of the monitor needs to be subtracted from the observed gravity so that the resulting corrected gravity does not correlate with the monitor. This procedure gives us the value of the constant k in Equation 3, as well as giving a corrected gravity profile. We can compute and correlate up to seven monitors with observed gravity. We can compute more using higher order terms, but LaCoste (1973) found no practical benefit in computing more than seven; often, five is sufficient. Proper use of LaCoste's method almost always results in some improvement in gravity data quality.

Filtering and Spatial Resolution
Various types of correlation filtering (like that described for the cross-coupling correction), as well as simple high-cut filtering, are applied to shipborne and airborne gravity data. The objective of correlation filtering is to minimize the correlation of the correction with the final corrected gravity profile, similar to the philosophy for refining corrections for Eötvös and vertical acceleration. The argument for this approach is that any correlation between the correction and geologically caused anomalies would have to be fortuitous and is probably incorrect.

After we have made all corrections, noise will remain in the corrected gravity trace, and it will appear to be periodic or random. We must exercise our judgment and select a high-cut filter that is designed to suppress what we judge to be residual noise on each individual traverse in a survey. The selected high-cut wavelength is one index of the shortest wavelength resolvable on the survey line. Shorter filter wavelengths reflect better survey conditions: calmer weather and more accurate navigation data. The high-cut wavelength is usually expressed in seconds of traverse; the shortest resolvable spatial wavelength is the time wavelength multiplied by the vehicle speed. Typical high-cut filters used in shipborne and airborne gravity processing have high-cut limits from 200 to 1000 seconds. A boat speed of 5 knots is about 2.5 m/sec, so the shortest resolvable spatial wavelength for a high-quality survey (e.g., one using GPS velocity measurement for the Eötvös correction with data acquired in good weather conditions) might be 500 m (or 200 s × 2.5 m/s).
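The monitor-correlation idea above amounts to a one-parameter least-squares fit: find the fraction k of the monitor that, subtracted from observed gravity, leaves zero correlation. The sketch below demonstrates this on synthetic data (the trace, accelerations, phase shift, and noise level are all invented for the illustration; it is not LaCoste's production algorithm, which correlates several monitors at once).

```python
# Sketch of monitor-based cross-coupling correction on synthetic data.
import math
import random

random.seed(7)
n = 2000
t = range(n)
# Invented ship accelerations: phase-shifted sinusoids, as for circular wave motion.
ax = [0.5 * math.sin(2 * math.pi * ti / 10.0) for ti in t]        # horizontal
av = [0.5 * math.sin(2 * math.pi * ti / 10.0 + 0.6) for ti in t]  # vertical

true_k = 0.8  # assumed cross-coupling constant of Equation 3
g_obs = [true_k * a * b + random.gauss(0.0, 0.01) for a, b in zip(ax, av)]

# The monitor is the product ax*av; remove its correlation with observed gravity.
monitor = [a * b for a, b in zip(ax, av)]
mbar = sum(monitor) / n
gbar = sum(g_obs) / n
k_est = sum((m - mbar) * (g - gbar) for m, g in zip(monitor, g_obs)) \
    / sum((m - mbar) ** 2 for m in monitor)
g_corr = [g - k_est * m for g, m in zip(g_obs, monitor)]
print(f"estimated k = {k_est:.3f}")  # close to the assumed value 0.8
```

After subtraction, the corrected trace no longer correlates with the monitor, which is exactly the stopping condition described in the text.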

Airborne surveys suffer in this respect from the much greater speed of airplanes. For commercial survey work, aircraft speeds will be about 180-200 kph, or 50 m/s: 20 times the speed of a ship. Under similarly excellent conditions, we would expect the shortest spatial wavelength resolved by an airborne survey to be about 10 km. For land work, spatial resolution is limited by sampling rather than filtering. Figure 5 shows the range of amplitude and wavelength resolution that we can expect from a range of gravity survey methods. The upper and lower curves indicate a typical range from average to good operating conditions.

Adjustment of Survey Line Crossing Differences
Network adjustment of marine and airborne gravity data is designed to recognize and remove systematic bias and random errors in the data. We expect bias errors to arise from errors involving the gravity meter itself, such as cross-coupling or meter drift, and possibly from errors in the Eötvös correction, which would otherwise result in survey line misties. Survey line intersection differences are evaluated for each survey line crossing in a survey network. In one common method for removing bias errors, we shift each survey line profile up or down by a constant level to minimize the sum of the squares of the mistie errors at each intersection. The DC level shift for each line has no effect on the shape of relative anomalies on the individual lines. The systematic corrections for a network are further constrained such that the sum of the systematic corrections is zero, effectively eliminating DC shifts to the network as a whole.

The remaining random errors in the network are typically removed by proration of error between intersections. One common approach is to assign each line a reliability weight that depends on the average absolute mistie for the line. The final choice for the value at each intersection is weighted toward the statistically better line at the intersection. Figure 6 (The Eötvös Correction) shows a graphical representation of intersection adjustment statistics typically shown on marine gravity survey line profiles. This type of display allows rapid visual evaluation of survey line quality, because good survey lines typically show smaller random error mistie bars.
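The DC-shift step of network adjustment is a small least-squares problem: choose one constant per line so that crossing misties are minimized, with the shifts constrained to sum to zero. The sketch below solves a tiny invented network (two lines, two tie lines, made-up misties) by sweeping the normal equations; a production implementation would also carry the reliability weights described above.

```python
# Hedged sketch of DC-shift network adjustment on an invented crossing network.
# Each crossing records (line_a, line_b, mistie = value_on_a - value_on_b).
crossings = [
    ("L1", "T1", +0.30), ("L1", "T2", +0.25),
    ("L2", "T1", -0.15), ("L2", "T2", -0.20),
]
lines = sorted({a for a, _, _ in crossings} | {b for _, b, _ in crossings})
shift = {name: 0.0 for name in lines}

# Gauss-Seidel sweeps on the least-squares normal equations:
# minimize sum over crossings of (mistie + s_a - s_b)**2.
for _ in range(200):
    for name in lines:
        terms = [shift[b] - m for a, b, m in crossings if a == name]
        terms += [shift[a] + m for a, b, m in crossings if b == name]
        if terms:
            shift[name] = sum(terms) / len(terms)

mean = sum(shift.values()) / len(shift)
shift = {k: v - mean for k, v in shift.items()}  # constrain shifts to sum to zero

adjusted = [m + shift[a] - shift[b] for a, b, m in crossings]
rms = (sum(x * x for x in adjusted) / len(adjusted)) ** 0.5
print({k: round(v, 3) for k, v in shift.items()}, "rms mistie after:", round(rms, 4))
```

Because the shifts are constants per line, the relative anomaly shapes on each line are untouched, only their DC levels move.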

Figure 7 (Amplitude of vertical motion that will result in 1 mGal vertical acceleration) shows a segment of a typical marine gravity profile. Notice the difference between the observed and filtered data. Also notice the inverse correlation between the Eötvös correction and observed gravity: the final Eötvös-corrected free-air gravity does not reflect the Eötvös event.

Regional and Isostatic Effects
Observed free-air gravity values, when averaged worldwide on a large scale, are near zero (Figure 1, Free-air gravity vs. elevation).

Free-air gravity is corrected for station elevation and latitude, but not for the density of the material which constitutes the topography. When corrections for the density of topography (the Bouguer corrections) are made to the observed free-air gravity values, regions of higher elevation generally have negative values of Bouguer gravity. The Bouguer corrections, applied to the free-air gravity, are negative for station elevations above sea level. Generally, mountain ranges such as the Rockies in Colorado, USA, have large negative Bouguer anomalies, whereas oceans typically have positive Bouguer anomalies. Bouguer corrections in ocean basins are positive, because seawater is mathematically replaced by rock, which is more dense than seawater. Thus, Bouguer gravity generally increases in a seaward direction along continental margins.

Researchers have believed for some time that above a certain depth (presently thought to be at least 60 km), the weight of each column of rock resting on that depth would be approximately equal if considered on a large scale, such as on 250 x 250 km blocks. Thus, some "compensation" mechanism would be present to keep high topographic regions from "sinking." This compensation mechanism is known as isostasy. Over the last one hundred years, various theories have been put forth to explain the above phenomena. Historically, two different hypotheses were used to explain isostasy: Airy's hypothesis and Pratt's hypothesis. Both hypotheses postulated lower density shallow "crust" floating on a higher density uniform liquid. Airy's hypothesis stated that compensation for high topographic features is provided by a thickening in the base of the lower density crust: mountains had thick low density crustal "roots" holding them up. Pratt's hypothesis stated that compensation is provided by a variable crustal density, without variation in crustal thickness, with lower density crustal rocks underlying mountains and higher density rocks underlying ocean basins. The evidence indicates that the crust is thicker under mountain ranges and thinner under ocean basins, and also that the continental crust is less dense than the oceanic crust, so that elements of both mechanisms are present.
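The Airy picture reduces to a simple buoyancy balance: a topographic column of height h is supported by a crustal root of thickness r such that h·ρ_crust = r·(ρ_mantle − ρ_crust). The sketch below uses common textbook densities, which are assumed values for illustration only.

```python
# Hedged sketch of Airy-style isostatic compensation.
RHO_CRUST = 2670.0   # kg/m^3, assumed crustal density
RHO_MANTLE = 3270.0  # kg/m^3, assumed substratum ("liquid") density

def airy_root_thickness(h_m):
    """Root thickness (m) balancing topography of height h_m:
    h * rho_crust = r * (rho_mantle - rho_crust)."""
    return h_m * RHO_CRUST / (RHO_MANTLE - RHO_CRUST)

root = airy_root_thickness(3000.0)
print(f"a 3 km mountain needs a root of about {root/1000:.1f} km")
```

With these densities the root is about 4.5 times the topographic height, which is why mountain belts show such large negative Bouguer anomalies: the low-density root is far larger than the visible mountain.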

Wessel (1986) provides a depiction of continental margins and their approximate densities. It is apparent that Bouguer gravity values contain some effects related to the Earth's crustal structure and crustal density variations. We must consider these crustal effects in the Bouguer gravity when interpreting gravity data for local sedimentary structure.

Ex-1: What is the expected value of gravity at the top of a 400 m TV tower with its base at sea level at a latitude of 30 degrees?

g(φ) = 978 031.846 (1 + 0.005 278 895 sin²φ + 0.000 023 462 sin⁴φ)
g(30°) = 978 031.846 (1 + 0.005 278 895 sin²30° + 0.000 023 462 sin⁴30°) = 979 324.012 mGal

979,324.012 mGal is the expected value for gravity at the base of the tower. Using the free-air gradient (0.3086 mGal/m), gravity is (400 m)(0.3086 mGal/m) = 123.440 mGal less at the top of the tower. The expected value of gravity at the top of the tower is therefore 979,324.012 − 123.440 = 979,200.572 mGal.

Ex-2: Put a gravity meter on board a stationary boat in the middle of a large harbor. Ignore the acceleration of the meter as it goes up and down; also assume that corrections for the tidal attraction of the sun and the moon have been made. Let the tide go up and down by one meter. How much does gravity change?

As the tide level increases by 1 m, gravity will decrease by the amount of the free-air gradient, but the mass of water under the meter has increased, so gravity will increase by the attraction of the water: a Bouguer slab with a density of 1.00 (or 1.03 for most sea water).

Positive 1 m tide: ∆g = −0.3086 + 0.04191 × 1.03 = −0.2654 mGal
Negative 1 m tide: ∆g = +0.3086 − 0.04191 × 1.03 = +0.2654 mGal

Ex-3: At a latitude of 30 degrees, what is the expected value of gravity on the top of a 400 m plateau where the plateau is composed of rock with a density of 2.67 g/cm³?

As in Ex-1, g(30°) = 979,324.012 mGal at sea level. Gravity will be less at the top of the plateau than at sea level. The elevation factor (for h in meters) combines the free-air and Bouguer terms:

F − B(2.67) = 0.3086 − (0.04191)(2.67) = 0.1967 mGal/m

Using the elevation factor, ∆g = (400 m)(0.1967 mGal/m) = 78.680 mGal, so the expected gravity at the top of the plateau is 979,324.012 − 78.680 = 979,245.332 mGal.
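The three examples above can be checked in a few lines of code using the same 1967 normal gravity formula and the free-air (0.3086 mGal/m) and Bouguer (0.04191 mGal/m per g/cm³) factors quoted in the text.

```python
# The three worked examples, recomputed.
import math

def normal_gravity_mgal(lat_deg):
    """1967 normal gravity formula quoted in the text, result in mGal."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return 978031.846 * (1 + 0.005278895 * s2 + 0.000023462 * s2 * s2)

FREE_AIR = 0.3086   # mGal per meter
BOUGUER = 0.04191   # mGal per meter per (g/cm^3)

g0 = normal_gravity_mgal(30.0)                        # sea level, latitude 30 deg
g_tower = g0 - FREE_AIR * 400.0                       # Ex-1: free-air only
dg_tide = -FREE_AIR + BOUGUER * 1.03                  # Ex-2: +1 m tide
g_plateau = g0 - (FREE_AIR - BOUGUER * 2.67) * 400.0  # Ex-3: elevation factor

print(round(g0, 3), round(g_tower, 3), round(dg_tide, 4), round(g_plateau, 3))
# 979324.012  979200.572  -0.2654  979245.332
```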

Gravity Survey Design and Gravity Meters

Introduction
Gravity survey objectives span a variety of operating scales and targets, ranging from studies of entire oceans and continents to searches for man-made underground chambers. Survey design considerations for both gravity and magnetic surveys have a lot in common, and much of what is said in this section applies equally well to magnetic survey design.

Survey Design
The survey target's observed or hypothetical response will define the survey's minimum wavelength and amplitude resolution requirements. The survey objective might be simply to locate anomalies of a certain minimum size, but interference from other anomaly sources, as well as aliased effects from sources much shallower than the target, are also important concerns. In exploration, surveys are often designed to search for a certain anomaly that we believe to be associated with a type of geologic structure. In other instances, we seek a more precise definition of the anomaly field so that we can use detailed modeling to better define a structure such as a salt flank or the trace of a fault. Known responses of similar structures and hypothetical model calculations are important design criteria, and survey design plans should always include all available geologic information. From these requirements, we can judge the minimum required gravity measurement accuracy and the maximum station or line spacing. The most effective choice of gravity instrument, geographic-coordinate-survey method, and survey network design is based on the expected target response.

Survey design is partly a matter of constructing a net so that anomalies of interest can't slip through. An old rule of thumb for an "ideal" survey design is that the station or line spacing should be about one-half the depth of the target. Reid (1980) goes into this issue in detail, as does Naudy (1971), who advocates using one-fourth the depth of the anomaly source as an ideal. The survey design problem is that shallow-sourced anomalies may be filtered (in the case of shipborne and airborne surveys) or aliased to look like deeper anomalies. A sometimes missed consideration in survey design is that target anomaly responses are always superimposed on effects of structure or density variation that may be of no interest, but must be distinguished from the target response. Low-resolution surveys often fail to provide enough information to distinguish between anomalies of interest and background geology. The "ideal" survey, where cost is not a consideration, would adequately sample all anomaly wavelengths whose amplitudes reach 1/5 or 1/10 the amplitude of the target anomaly. The station or survey-line spacing would be 1/4 or 1/2 of the shortest such wavelength.

Usually, the so-defined ideal survey is not economically feasible or justified. An effective approach is to define the theoretically ideal survey, evaluate its cost, and look at various designs until we find one that we can expect to do the job within a reasonable budget. A not-so-obvious pitfall in this approach is an over-compromised survey that fails to meet the survey objective. A little simulation (imagined or computed) of the expected consequences of dropping part of the ideal station grid should lead to a survey design that is nearly as effective as the ideal and much more practical. For example, orienting survey lines perpendicular to structural strike results in a higher sample rate in the direction where it is more effective. We can interpret and resolve target anomaly effects from the background and interpolate them between lines provided that interference from non-target anomalies is manageable. High sample rates along survey lines or roads and trails give useful information (with easy access) about the relative importance of structure shallower than the target with little added cost.
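The rules of thumb above can be checked against the simplest target model, the buried sphere from Figure 1 of the introduction: its anomaly falls to half its peak at a horizontal distance of about 0.77 times the depth, so a full width at half maximum of roughly 1.5 times the depth. The sketch below works this out (the sphere radius and density contrast are invented example values).

```python
# Hedged sketch: buried-sphere anomaly width versus rule-of-thumb spacing.
import math

G = 6.670e-11  # N-m^2/kg^2

def sphere_anomaly_mgal(x_m, depth_m, radius_m=500.0, drho=300.0):
    """Vertical gravity (mGal) of a buried sphere (assumed radius in m and
    density contrast in kg/m^3) at horizontal distance x from its center."""
    mass = (4.0 / 3.0) * math.pi * radius_m**3 * drho
    return G * mass * depth_m / (x_m**2 + depth_m**2) ** 1.5 * 1e5

depth = 2000.0
peak = sphere_anomaly_mgal(0.0, depth)
half_x = depth * math.sqrt(2 ** (2.0 / 3.0) - 1)  # where the anomaly is half its peak
print(f"peak {peak:.3f} mGal, half maximum at x = {half_x:.0f} m")
for spacing, name in [(depth / 2, "half-depth"), (depth / 4, "quarter-depth")]:
    n = 2 * half_x / spacing
    print(f"{name} spacing ({spacing:.0f} m): about {n:.1f} stations "
          "across the half-maximum width")
```

Half-depth spacing puts roughly three stations across the half-maximum width of the anomaly, and quarter-depth spacing about six, which is the intuition behind the Reid (1980) and Naudy (1971) recommendations.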

Geographic Surveying
Elevation survey accuracy practically defines ultimate survey accuracy for land, borehole and airborne gravity. For most land gravity surveys, the elevation survey accuracy requirement determines the largest cost component for the entire survey, and survey cost and accuracy are directly related. Gravity values reduced with an incorrect elevation will not fit coherently with adjacent stations, so gravity surveys are often incidentally helpful in finding elevation survey errors on seismic surveys. It is often a matter of contention whether vertical coordinates surveyed along seismic lines meet the vertical accuracy specification for the gravity survey. Loop closure requirements for typical land gravity surveys for oil exploration range from 20 to 100 cm (about 0.04 to 0.2 mGal) where stations are spaced at about 500 m. Over distances of a few hundred meters, it is reasonably easy to achieve a relative vertical accuracy of less than 1 cm, which is required for the highest precision microgravity surveys.

On the other hand, shipborne surveys benefit from the vertical reference provided by average sea level, so that elevation is not a factor in survey accuracy or resolution. This is because variation in ship elevation related to wave motion has a predominant period of 7 to 15 seconds which, for normal boat speeds, is much shorter than the wavelengths of exploration interest. Sea level variations due to the tides result in very long wavelength variations in observed gravity amounting to only a few tenths of a mGal, which can be corrected directly by calculating sea level variations from position and time. More usually, these roughly 12-hour-period variations in gravity are removed at the stage of adjustment of survey line crossing differences.

Conventional Surveying for Land Gravity Surveys
Conventional geographic land surveying employs optical triangulation and spirit leveling. The advantages of conventional surveying stem mainly from the low cost and wide availability of the survey instrumentation. Disadvantages accrue when distances between stations are long, or in terrain or vegetation where line of sight is short. Under these circumstances, conventional methods are relatively slow and expensive. These survey methods are being overtaken and replaced by satellite-based systems and inertial surveying. Keep in mind, however, that the "conventional" surveying methods of the past may not be the conventional or the most popular methods of the near future.

Barometric Altimetry
Barometers have been used to establish vertical control for gravity surveys with varying success. Repeatable measurements to within 1 m or less have been achieved over short distances in calm weather. Because of wind and weather changes, isobaric surfaces are reliable only as a vertical reference in calm weather and gentle terrain. In rugged terrain, repeat differences of more than 10 m (3 mGal) are common, and multiple repeats are required in these cases to reduce the uncertainty of the measurement. Networks of recording barometers installed at known elevations have been used to correct for variations in barometric pressure over a survey area. The practical resolution of barometric altimetry is on the order of 0.1 to 0.5 m (about 0.02 to 0.1 mGal). The practical result is that transport costs often offset the lower cost of the altimetry equipment.

Inertial Surveying
The inertial survey system is based on the double integration of acceleration outputs from three orthogonal accelerometers mounted on a gyro-stabilized platform. The integration constants at the beginning of each traverse correspond to each of the survey coordinates plus the vehicle velocity in the direction of each coordinate. At each new survey point, we correct the system drift by resetting the system velocities to zero (known as a zero velocity update). Inertial survey accuracy is about 1 part in 50,000 of the distance from the nearest control point. New survey points must lie nearly in a straight line between the initial and final control points on a given line. The method is particularly attractive in rugged terrain, where its operational advantages are of the greatest potential value. Typical survey design consists of establishing control points on the perimeter of a survey area so that survey traverse lengths will be on the order of 50 km and expected

coordinate accuracies will be 0. A constellation of about 24 satellites circle the Earth in three orbit planes such that we can use measurement of signal transit time from any four satellites to compute our location and precise time relative to the GPS system. Using helicopters for transport. A unique characteristic of inertial systems is that they require no outside signal during a traverse.5 m. Idaho and Montana. Production rates typically average 50 stations per day with a helicopter and over 100 stations per day with a vehicle along roads. stations were established on nearly regular 1-mile and 0.5-mile grids. The methods for dealing with the degraded signals is a burgeoning and rapidly changing field. Additionally.1 to 0.2 m. This means that the sky must be visible. the constellation was incomplete. GPS promises some of the same advantages of inertial surveying. Department of Defense. the data will be declassified. Although expensive on a per-station basis. A further review of the present state of technology at this writing seems futile.3 to 0. The anomaly field is complex in this area. since it is certain to continue changing rapidly. so that survey work could only be carried out at certain times of the day. Another limitation is the completeness of the satellite constellation.S. government has recently reaffirmed its commitment to the GPS program. GPS Surveying GPS is the Global Positioning System satellite navigation system established by the U. although each survey traverse must start and end at a pre-defined coordinate. Rarely were the conventional survey lines spaced more closely than about 10 km. S. Differential GPS makes use of common-mode error extraction by observing the signals at a fixed survey base and using base information to correct the data from the survey receiver. In some very remote areas in Africa. 
the combination of helicopter and inertial surveying enabled rapid survey coverage over difficult areas that would have been much more expensive and impractical using conventional surveying. and methods are being developed to achieve survey accuracies of less than 1 cm with short station occupation times. GPS survey receivers and survey methods have proliferated since 1990. and GPS shows promise of dominating an increasing number of land survey applications. Equipment costs have come down. and it is very costly on a daily or hourly basis compared to other survey equipment.S. most of the areas had been surveyed conventionally to some extent. The main disadvantage of inertial surveying is cost. The U. The inertial system installed in a utility vehicle facilitates establishing survey coordinates along roads where short line-of-sight or other considerations make conventional surveying impractical or slow.-military users. control points have been established prior to the survey using satellite survey methods. . and maintenance into the next century is assured. The system's accuracy for determining the position of a single receiver is deliberately degraded to about 100 m for non-U. The usable vertical and horizontal resolution of the inertial system is 0. Obviously. Future utility of GPS depends on continued maintenance of the satellites. The first commercial use of inertial surveying in support of exploration gravity surveys for oil was during the exploration of the Overthrust Belt of Utah. Wyoming. The pre-defined control coordinates needed to begin inertial surveying must come from another source. and will no longer be degraded by the U. but rugged terrain confined most of the survey traverses to roads and trails in the valleys and along streams. Another successful application of the inertial surveying system to gravity survey work uses ground transport rather than a helicopter. 
so detail on structural trends and closures between lines was needed both in evaluating prospects and in planning additional seismic work. Positions are normally referenced relative to a point on the survey vehicle such as the helicopter landing skid or a point on a car bumper. Department of Defense. The equipment is expensive. a fundamental limitation of GPS is that the satellite signals must be received at the survey point. Until recently. Speed is its main advantage. but at lower cost. By 1977. Survey accuracy and speed are independent of intervisibility of survey points (or visibility of satellites). because the 1575 MHz signal is blocked by vegetation and buildings. such as bench marks and triangulation points.S.
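The common-mode idea behind differential GPS can be shown with a toy sketch: because the fixed base and the survey receiver share nearly the same satellite and atmospheric errors, the apparent position error measured at the base can simply be subtracted from the rover's raw fix. All coordinates and error sizes below are invented for illustration; no real receiver interface is implied.

```python
# Toy illustration of differential GPS common-mode error removal. A base
# receiver at known surveyed coordinates sees nearly the same satellite and
# atmospheric errors as a nearby rover, so the base's apparent position
# error can be subtracted from the rover's raw fix. All values hypothetical.

BASE_KNOWN = (1000.00, 2000.00)        # surveyed base coordinates (m)

def differential_fix(base_raw, rover_raw, base_known=BASE_KNOWN):
    """Remove the common-mode error observed at the base station."""
    ex = base_raw[0] - base_known[0]   # apparent easting error at the base
    ey = base_raw[1] - base_known[1]   # apparent northing error at the base
    return (rover_raw[0] - ex, rover_raw[1] - ey)

# Both receivers share a degradation-scale error of a few tens of metres:
print(differential_fix((1031.2, 1978.5), (1531.4, 2478.9)))
```

Real differential processing works on the raw range observables rather than on computed positions, but the cancellation principle is the same.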

Navigation Systems for Marine and Airborne Gravity

The navigation accuracy requirement for shipborne and airborne gravity surveys is a consequence of the need to compute an accurate Eötvös correction, which is proportional to the eastward velocity component of the ship or aircraft. Horizontal positioning for most of the marine surveys of the recent past has been obtained from land-based radio-navigation systems operating at a range of frequencies. The lower frequency systems (about 1 MHz) have greater range and less accuracy, while the higher frequency systems (hundreds of MHz) have greater accuracy but more limited range. Airborne gravity surveys have used the higher frequency systems.

For all of these systems, the velocity needed to compute the Eötvös correction must be obtained from successive point positions--that is, we compute velocity from the measured time and distance between fixes. This obviously demands accurate timing of fixes. Recording accurate relative times between fixes, as well as accurate positions, has not always been accomplished, particularly on seismic operations. Without accurate measurement of the time interval between fixes, the Eötvös correction is degraded below the level dictated by position (distance) accuracy, and data quality suffers unnecessarily. Relative position accuracy is therefore more critical than absolute accuracy.

Although a number of shore-based radio navigation systems are still used for some applications, GPS has become the most common navigation system for marine geophysical surveys. GPS delivers much better resolution of the Eötvös correction (Herring, 1985). One unique aspect of GPS navigation that is critical to accurate measurement of the Eötvös correction is the capability of some GPS receivers to measure velocity directly from the Doppler shift of the GPS carrier signal. Relative velocity accuracy of 2-3 cm/s can be achieved at one-second sampling intervals; this is equivalent to a maximum Eötvös effect of about 0.5 mGal (51 cm/s is 1 knot). The Eötvös error is further reduced by averaging samples: for periods of over 2 minutes, we can attain Eötvös correction accuracy of 0.1 mGal. Further improvement in the accuracy of GPS velocity measurement seems likely.

GPS also shows promise of improving the vertical acceleration correction for airborne gravity. Until now, we have obtained the vertical acceleration correction measurement for airborne gravity by differentiating the output of sensitive altimeters. For longer periods, we expect vertical acceleration accuracies of about 1 mGal at periods of 7-12 minutes from GPS, making vertical acceleration corrections more effective.

Land Gravity Instruments

The term gravity meter or gravimeter has come to mean some kind of sensitive spring balance capable of measuring changes in gravity with an accuracy of at least 0.1 mGal. Most gravity meters can repeat measurements to within 0.01 mGal, and some instruments are capable of nearly 0.001 mGal, or 1 µGal. Gravity meters came into wide use as practical field instruments in about 1940. Before that, gravity measurements for exploration employed pendulums and torsion balances. Nettleton (1976), Dobrin (1988) and Torge (1991) provide interesting, detailed summaries of the development history of gravity measurement.

When the first meters were being developed, the straightforward problem of measuring the displacement of a mass hanging on a spring caused some to conclude that a spring and mass gravity meter capable of measuring a gravity change of 0.1 mGal would have to be about 10 m tall. Displacement of the mass would be proportional to the total change in gravity--about 1 part in 10^7 for 0.1 mGal. As a practical matter, displacement could be measured to an accuracy of about 10^-4 cm, so some concluded that the total length of a measuring spring would have to be 10^7 times 10^-4 cm--about 30 feet. Several inventions were made to enhance the displacement of a spring balance so that accurate gravity measurements could be made using a reasonably sized, field-portable instrument. The most widely used instruments still in common use are the LaCoste and Romberg gravity meter and the Worden gravity meter and its clones.
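The Eötvös correction itself is straightforward to compute. The sketch below uses the first-order formula commonly quoted in gravity texts, E = 2ΩV·cosφ·sinα + V²/R (numerically about 7.503·V·cosφ·sinα + 0.004154·V² mGal with V in knots); the ship speed, heading, and latitude in the example are hypothetical.

```python
import math

OMEGA = 7.2921e-5      # Earth rotation rate (rad/s)
R_EARTH = 6.371e6      # mean Earth radius (m)
KNOT = 0.51444         # m/s per knot
MGAL = 1e-5            # m/s^2 per mGal

def eotvos_mgal(speed_knots, heading_deg, lat_deg):
    """First-order Eotvos correction in mGal.

    E = 2*Omega*V*cos(lat)*sin(heading) + V^2/R, with V the horizontal
    speed and the heading measured clockwise from true north.
    """
    v = speed_knots * KNOT
    lat = math.radians(lat_deg)
    hdg = math.radians(heading_deg)
    e = 2.0 * OMEGA * v * math.cos(lat) * math.sin(hdg) + v * v / R_EARTH
    return e / MGAL

# A hypothetical 5-knot ship steaming due east at 30 degrees latitude:
print(round(eotvos_mgal(5.0, 90.0, 30.0), 2))   # -> 32.59
```

The size of the result, tens of mGal for a slow ship, shows why a few cm/s of velocity error matters at the 0.1 mGal level.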

A significant and relatively recent addition to land gravity measurement is the development of falling-body instruments capable of measuring the absolute value of the Earth's gravity field to very high precision--on the order of 1 µGal. These instruments have seen little use in exploration, but they can be expected to facilitate the establishment of accurate calibration ranges for gravity meters. They may also prove valuable in the monitoring of geothermal and gas reservoirs.

LaCoste and Romberg Land Gravity Meter

LaCoste (1934) invented a spring suspension that can be used to make gravity meters with very high displacement sensitivities. Most of the gravity meters now in use are of this type. The suspension consists of a beam supported by a diagonal spring whose upper point of attachment is vertically above the hinge and can be adjusted vertically to balance the pull of gravity on the beam ( Figure 1 , Schematic diagram of LaCoste and Romberg gravity meter ). The high sensitivity results from the spring characteristics and the geometry of the suspension.

Referring to Figure 1 , the sum of the torques exerted by gravity and the spring is

T = k(Acosb + Bcosa - L0)h - mgCsinθ     (1)

where
k = spring constant
L0 = length of spring when exerting no force
m = mass of beam
g = gravity
C = distance from hinge to center of gravity of beam
and other symbols are as shown in Figure 1 .

This high displacement sensitivity depends on a spring whose unstretched length is zero (L0 = 0), or would be zero if the turns of the spring did not bump into each other; such springs are generally called zero-length springs. They can be made readily out of either metal or quartz.

If we let L0 = 0, and note that θ = a + b and that h can be written either as Asinb or Bsina, then Equation 1 becomes:

T = k(ABsina cosb + ABsinb cosa) - mgCsinθ = kABsin(a + b) - mgCsinθ = (kAB - mgC)sinθ     (2)

Now, we can adjust the spring attachment distance A so that kAB = mgC and

T = (kAB - mgC)sinθ = 0

Equation 2 then states that there is zero torque on the beam regardless of the angle θ. In other words, the beam will stay wherever it is put, or if gravity changes at all, the beam will move theoretically to the end of its travel. Practically, this means that we can achieve a very high displacement sensitivity.

Gravity meters with high displacement sensitivities are sometimes referred to as unstable gravity meters, but this is a misnomer. They are also sometimes referred to as labilized or astatized because they are sometimes made by combining unstable elements with stable elements. Before the advent of zero-length springs, experimenters tried to achieve high sensitivity by using spring suspensions which were stable over part of the range of motion and unstable over the rest of the range. They achieved fair results by operating in the stable range but near instability. This practice probably led to incorrectly calling all high-sensitivity instruments unstable.

Figure 2 ( Diagrammatic cross section of LaCoste and Romberg gravity meter ) and Figure 4 ( Diagram of gear train assembly and measuring screws ) are diagrammatic cross sections of the LaCoste and Romberg gravity meter, which uses the high-displacement-sensitivity suspension.
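The zero-torque condition of Equation 2 is easy to check numerically. The sketch below uses arbitrary illustrative suspension dimensions (not real instrument values), chooses mgC = kAB, and confirms that the torque vanishes at any beam angle, while a 0.1 mGal change in g (1 part in 10^7) unbalances the beam at every angle.

```python
import math

# Arbitrary illustrative suspension dimensions for the Equation 2 balance:
k = 50.0          # spring constant
A = 0.04          # vertical distance from hinge to spring attachment (m)
B = 0.05          # distance along the beam to the spring connection (m)
g0 = 9.80         # reference gravity (m/s^2)
mC = k * A * B / g0   # choose the product m*C so that kAB = m*g0*C

def torque(theta_deg, g=g0):
    """Net beam torque T = (kAB - mgC) sin(theta) from Equation 2."""
    return (k * A * B - mC * g) * math.sin(math.radians(theta_deg))

# With the spring adjusted so kAB = mgC, the torque vanishes at ANY angle:
print(max(abs(torque(t)) for t in (5, 30, 60)) < 1e-12)   # True

# A 0.1 mGal change in g (1 part in 10^7) unbalances the beam at every
# angle, so the beam runs toward a stop -- very high displacement sensitivity:
print(abs(torque(30, g0 * (1 + 1e-7))) > 0)               # True
```

This is exactly the "beam stays wherever it is put" behavior described above: the restoring torque is zero at balance, so any gravity change drives the beam.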

Figure 1 and the analysis that leads to Equation 2 are based on the spring attachment point being vertically above the hinge point, which results in theoretically infinite sensitivity. An interesting feature of this instrument is that the displacement sensitivity varies with the orientation of the suspension in relation to the direction of the gravity vector. Tilting the instrument toward the mass reduces the displacement sensitivity, and tilting it the other way causes the instrument to become unstable. (Think of the instrument tilted 90 degrees toward the beam: the beam would be oriented vertically and hang straight down, and in this position it would be very stable and insensitive to any change in gravity.) We can thus adjust sensitivity by adjusting the level reference along the axis parallel to the beam--that is, by adjusting the long level. Normal land instruments are operated so that the instrument is stable but not quite infinitely sensitive. Shipborne meters, borehole meters and electrostatically nulled meters are normally operated with the long-level reference adjusted as close to infinite sensitivity as possible.

In practice, the displacement sensitivity of the LaCoste and Romberg meter is about 1000 times that of a simple weight on a spring. The beam is nulled by vertically adjusting the upper end of the spring. The beam's position is read by observing a cross-hair on the beam with a microscope or by using an electronic readout device built into the instrument. All LaCoste and Romberg meters are constructed of metal and use metal springs. The instrument must be kept thermostated during use; it weighs about 6 pounds and requires a storage battery of about equal weight for thermostating. The proof mass used in the LaCoste and Romberg land meter is about 15 g; shipborne meters use a proof mass of 29 g, and borehole meters use a mass of 8 g--about half that of a land meter. During manufacture, each individual meter is calibrated over its entire range in the laboratory and on known gravity base stations.

The commonly used instrument in exploration is the LaCoste and Romberg Model G meter, which routinely delivers repeatability of 0.02 to 0.05 mGal. Figure 3 ( Model G meter and its carrying case ) is a photograph of a LaCoste and Romberg meter with its carrying case. The LaCoste and Romberg Model D meter is designed to achieve microgal sensitivity with increased resolution of the measuring screw dial. Various modifications of the LaCoste and Romberg instrument have been used to achieve precision on the order of 2 to 5 µGal, and laboratory repeats of less than 1 µGal have been demonstrated. More recently, electrostatically nulled systems that employ computer-based correction and recording systems have been used to achieve consistent repeatability of 2 to 5 µGal.

Worden Gravity Meter

The Worden gravity meter sensor is constructed almost entirely of fused quartz. Although the geometry of the suspension is quite different from the LaCoste and Romberg meter, the instruments have the zero-length-spring, high-displacement-sensitivity design in common. The proof mass of a Worden meter is only about 5 mg, and the sensing element is encased in a sealed vacuum flask to isolate it from outside temperature changes. The main advantage of the quartz meter is its light weight, due to the relative thermal stability of quartz and the small sensor: the Worden meter weighs only about 5 pounds, and can be operated without thermostatic control. Most models use a bimetallic temperature compensation spring. Given adequate base ties, Worden-type meters can be used to acquire data accurate to a few tenths of a mGal without the use of thermostatic control. Some versions of the Worden meter provide thermostatic temperature control that improves the drift stability of the instrument, but this requires the additional weight of a battery and some loss in the ease of field portability. Thermostatically controlled Worden meters achieve repeatabilities of about 0.01 mGal. A number of manufacturers have produced quartz spring gravity meters that are basically identical to the Worden meter (for example, Sodin, Worldwide Instruments, Scintrex and Sharpe). Clones of the Worden instrument were also made in China and the old USSR.

Scintrex CG-3 Autograv

The Scintrex CG-3 meter, which has been in commercial use since 1987, uses a fused quartz sensor and electrostatic nulling. The CG-3 sensor consists of a 300 mg proof mass hanging on a spring about 2 cm long. The sensor does not employ a high-displacement-sensitivity suspension: the proof mass is mechanically constrained to move only very slightly, and the measurement relies on an electronic design capable of resolving small changes in the electrostatic force needed to maintain the proof mass at precisely the null reference point. The vacuum-sealed sensor is maintained at a nearly constant temperature (within about 0.001°C) using a two-stage, or double-oven, temperature-stabilized environment. The CG-3 is an automated system with a microprocessor-based control and data acquisition system. Instrument drift is computed and corrected internally; actual drift is claimed to be linear and adequately handled by the microprocessor. The instrument, including its battery, weighs 12 kg. Standard resolution of the CG-3 is 0.01 mGal. Recently, Scintrex has offered a CG-3M, which has a reading resolution of 1 µGal. Among the advantages of the CG-3 are world-wide range without resetting and automated operation.

Axis Instruments FG5 Absolute Gravimeter

Absolute measurement of the acceleration of gravity is based on the fundamental quantities of distance and time. Laser interferometry is employed in the Axis Instruments FG5 Gravimeter. The manufacturer's stated accuracy for the instrument is 2 µGal, with a repeatability of 1 µGal; the measurement time required to obtain 1 µGal precision is less than two hours for a quiet site. The instrument has a shipping weight of 349 kg. Because of its relatively large size, weight and cost, the FG5 is not a field exploration instrument, but it has applications in establishing absolute base stations and monitoring temporal changes in gravity. Other weight-drop instruments have been developed and are described in Torge (1989) and Nettleton (1976).

Borehole Gravity Meter

The LaCoste and Romberg borehole gravity meter is the only commercially successful borehole gravity instrument. Only about ten borehole instruments are operational, in contrast to several thousand land instruments. The borehole gravity meter's main application is deep-investigation density logging. In addition, it is useful in identifying porosity and fluid saturation values that are undisturbed by the near-wellbore environment and thus representative of reservoir conditions. The borehole meter's ability to achieve high precision in noisy borehole conditions has been enhanced by the addition of electrostatic nulling and computer-based correction and recording systems. The nullers are similar in concept to those used on some LaCoste and Romberg Model G meters and on the Scintrex CG-3.
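Since the borehole meter's main application is density logging, it is worth sketching the underlying relation. Between two stations in a well, the measured vertical gravity gradient departs from the free-air gradient by the infinite-slab term 4πGρ, so the interval bulk density follows directly from Δg and Δz. The station values in the example are invented for illustration.

```python
# Borehole gravity "apparent density" sketch. The standard relation between
# the gravity difference dg (mGal) across a vertical station interval dz (m)
# and the bulk density rho (g/cm^3) of the intervening rock is
#     dg/dz = F - 4*pi*G*rho
# with F the free-air gradient. The measurement values below are made up.

PI = 3.141592653589793
G_SI = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
FREE_AIR = 0.3086       # free-air gradient (mGal/m)
FOUR_PI_G = 4.0 * PI * G_SI * 1000.0 / 1.0e-5   # mGal/m per g/cm^3 (~0.0839)

def apparent_density(dg_mgal, dz_m):
    """Interval bulk density (g/cm^3) from two borehole gravity stations."""
    return (FREE_AIR - dg_mgal / dz_m) / FOUR_PI_G

print(round(apparent_density(1.06, 10.0), 2))   # hypothetical 10 m interval -> 2.42
```

Because the slab term senses rock tens of metres from the wellbore, this density is largely unaffected by invasion and washouts, which is the "deep-investigation" advantage mentioned above.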

Operating limitations of the LaCoste and Romberg borehole gravity meters are imposed by the size of the meter, the need to level the instrument in the borehole, and the need to maintain the instrument at a constant thermostating temperature of about 123°C. The leveling mechanism of the meter limits operations to wells with a deviation from vertical of 14 degrees or less. The length of the logging sonde is about 3 m. Typical logging sonde diameters are 10.7 to 13.5 cm for logging temperatures below about 115°C, with larger diameters for higher logging temperatures up to about 250°C; the larger diameter sondes accommodate a Dewar flask to protect the instrument from well temperatures higher than the meter's thermostating temperature. The relative accuracy of the borehole gravity meter is 2 to 15 µGal, depending on operating conditions.

Underwater Gravity Meters

Until the advent of stabilized-platform shipborne gravity meters in 1965, underwater gravity meters were widely used offshore. However, because of the relatively high cost of underwater gravity operations, very few underwater surveys have been carried out over the past 20 years, and only two or three operational underwater gravity systems exist. Underwater gravity instruments now in use are essentially identical to the LaCoste and Romberg Model G meters. The meters are housed in a small, 50-cm diameter diving bell which is weighted with lead to make it sink; the total weight of the meter and its housing is about 160 kg. Current underwater gravity meter systems use electrostatic nulling and remote servo-system control for leveling and spring-tension adjustment. The accuracy of underwater gravimetry is about 0.1 mGal. Nettleton (1976) provides an historical review of the development of underwater gravimetry; Torge (1989) and Nettleton (1976) describe these and other systems in detail.

Marine and Airborne Gravity Meters

The instruments used in commercial exploration for marine and airborne survey work are the LaCoste and Romberg Models S and SL (for straight line), the Bell BGM-3 and the Bodenseewerk Kss31. Published (Valliant, 1983) and unpublished performance comparisons of these instruments reveal little difference in their basic abilities to measure changes in gravity. Until recently, marine and airborne acquisition accuracy has been limited by the dynamic corrections: the Eötvös correction and the vertical acceleration correction. Typical relative survey accuracy for marine gravity has been 0.5 to 2.0 mGal at wavelengths of about 2 km ( Figure 2 , Gravity measurements--estimated resolution and accuracy ). Relative accuracy of 0.2 mGal at wavelengths as short as 500 m has been demonstrated by using GPS combined with improved low-noise electronics and a computer-controlled LaCoste and Romberg Model S gravity meter (Hall and Herring, 1991).

LaCoste and Romberg Model S

About 100 of the LaCoste and Romberg Model S instruments have been built and are being operated by exploration contractors and various government and educational institutions; about 50 are currently used in exploration. The Model S stabilized-platform air-sea gravity meter was first used commercially in 1965 by GAI-GMX (LaFehr and Nettleton, 1967). The gravity meter is mounted on a gyro-stabilized platform to keep it level. It differs from the LaCoste and Romberg land gravity meter in two significant aspects: first, the zero-length-spring suspension on the Model S is stiffer, in order to better withstand horizontal accelerations; second, the damping in the vertical direction is greatly increased to prevent the movable beam of the gravity meter from hitting its stops, even when vertical accelerations are several hundred thousand mGal. Although we might expect the high damping to give a slow gravity meter response, the following analysis (LaCoste, 1967) shows that this is not the case. The basic equation of motion for the system is the simple harmonic motion equation:

m d²B/dt² + F dB/dt + kB = m(g + d²z/dt²) - cS     (1)

where
g = acceleration of gravity
m = mass of beam
z = vertical position of a point on the meter case
B = displacement of the beam with respect to the meter case
F = damping coefficient
k = restoring force constant on mass
c = spring tension constant
S = vertical displacement of the spring's upper end relative to the meter case

Since the Model S gravity meter uses a zero-length spring suspension, we can assume that the restoring force constant k is zero. With this approximation, let us find the solution for a step function change in gravity on the right side of Equation 1. We find that ∂B/∂t, the beam velocity, exponentially approaches a limit, and that the limit is the value of the step function divided by F. We also see that the time constant decreases as the damping coefficient increases. In the gravity meter, the time constant is entirely negligible because of the high damping: the time constant for the damping actually used in the gravity meter is only about 1 millisecond.

A qualitative analogy helps illustrate this point. A hard ball is resting on a level, flat, glass table; the ball will remain at rest anywhere it is placed on the glass table until the table is tilted. If the table and ball are immersed in a viscous fluid so that the motion of the ball is highly damped, very little time will be required for the ball to accelerate to its very low terminal velocity in the viscous fluid. In air, it will take a long time for the ball to accelerate to a steady-state, constant velocity. The remark that the ball will remain anywhere it is placed on a level table points to another analogous point of behavior: with the spring balance nulled, the beam will remain wherever it is placed in its range of travel. In other words, the displacement sensitivity is practically infinite.

The Model S also has inherent cross-coupling as a result of the meter's cantilevered beam design. Consider a beam-type meter on a perfectly level platform. The sensitive axis in that type of gravity meter is the direction normal to the beam and the axis of the hinge; this is the direction in which a pull on the beam has the maximum effect. The form of the inherent cross-coupling error is similar to the example of Figure 1 ( Effect of horizontal accelerations on gravity: cross coupling effect ), developed for the relationship

e = k axav     (2)

where e is the cross-coupling error, k is a constant, and ax and av are the horizontal and vertical components of acceleration, respectively.

Figure 1

We therefore see that the sensitive axis shifts as the beam moves. Even though the gravity meter is kept level by the stabilized platform, Equation 2 still applies, but now b in Figure 1 refers to the angle the beam makes with the horizontal. This is the same condition we had in the case of the platform being driven off level by varying vertical accelerations. Meters designed so that the proof mass travels in a vertical straight line do not have inherent cross-coupling error; the LaCoste and Romberg Model S gravity meter, like all gravity meters, has some degree of imperfection cross-coupling and/or platform leveling error.

There are three ways to deal with inherent cross coupling:
• Measure the beam's deflection and the horizontal acceleration ax and correct for it. This is the method used in the LaCoste and Romberg Model S meter.
• Accurately null the beam to keep b = 0. This method was used in the Askania Gss20.
• Make the proof mass move in a straight vertical line. This is the method employed in the LaCoste Straight Line meter and in the Bell and Bodenseewerk meters described below.

LaCoste and Romberg Model SL

The LaCoste and Romberg Straight Line meter design is similar to that of the standard Model S meter in that it uses the zero-length spring suspension and is highly damped. One main difference is that the Straight Line meter uses a suspension in which the center of gravity of the proof mass moves in a straight vertical line rather than in the arc of a circle. This suspension eliminates inherent cross-coupling effects.
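Why the cross-coupling relationship e = k·ax·av matters can be shown numerically: when the horizontal and vertical accelerations are periodic and in phase, as they tend to be in sea motion, their product rectifies into a steady gravity error even though each acceleration averages to zero. The coupling constant and acceleration amplitudes below are invented for illustration, not real instrument values.

```python
import math

K = 1.0e-9   # illustrative coupling constant, NOT a real instrument value

def mean_cross_coupling(ax_amp, av_amp, phase_rad, n=100000):
    """Average of e = K*ax*av over one full period of sinusoidal motion."""
    total = 0.0
    for i in range(n):
        t = 2.0 * math.pi * i / n
        total += K * (ax_amp * math.sin(t)) * (av_amp * math.sin(t + phase_rad))
    return total / n

# Zero-mean accelerations with amplitudes of 50,000 and 100,000 mGal:
print(round(mean_cross_coupling(5.0e4, 1.0e5, 0.0), 3))            # in phase -> 2.5 (steady error)
print(abs(mean_cross_coupling(5.0e4, 1.0e5, math.pi / 2)) < 1e-6)  # quadrature -> True (averages out)
```

This rectification is why cross-coupling must be measured and corrected (or designed out, as in the straight-line suspensions): it does not average away with the wave motion.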

There are other differences between the Straight Line and standard models. The Straight Line meter uses silicone fluid rather than air for damping; the use of silicone fluid and an increase in the rigidity of the suspension make imperfection cross coupling negligible, and make it possible to use much larger damper clearances. Another difference is that most analog electronics are replaced by a microprocessor, which simplifies manufacture and adjustment, and which increases ruggedness.

Valliant (1983) made extensive tests of the Straight Line gravity meter against two randomly chosen standard LaCoste and Romberg Model S meters. Without using cross-coupling corrections on the standard meters, Valliant found that they were substantially inferior to the Straight Line meter. However, with cross-coupling corrections applied to the standard meters, their performance was nearly as good as that of the Straight Line meter. To date, only three of the Straight Line meters have been manufactured, compared to about 100 of the Model S meters.

Bell BGM-3

The Bell BGM-3 uses an accelerometer design that was originally used in a military inertial navigation system. The sensor does not employ a high-displacement-sensitivity suspension, and it is only about 2.3 cm in diameter and 3.4 cm high. It is housed in a temperature-controlled oven mounted on a gyro-stabilized platform. The measurement is based on a servo loop, and the nulling force is supplied by varying the current in an electromagnet. The sensor has no inherent cross-coupling and is very rugged in terms of tolerating high accelerations. Bell meters have been in operation since 1967. Most of them are operated by the U.S. Navy, and a few are in use in exploration. The BGM-3 performed well in extensive tests reported by Bell and Watts (1986), and outperformed an Askania Gss2 (an early version of the Gss20).

Bodenseewerk Kss30/31

The first shipborne meter to go into service, in 1958, was the Gss2 manufactured by Askania. Askania introduced the Gss3 in 1971, which was later manufactured by Bodenseewerk as the Kss30; the Kss31 is a later version of the Kss30. The Kss30 sensor uses a capacitance transducer to keep the vertical spring-mass system near a nulled position. The spring-mass system is a relatively long tube hanging on a coiled spring, with displacement sensed by capacitance pick-off. A permanent magnet provides damping, and a coil attached to the mass senses a current which is a measure of the varying damping force. Like the Bell meter, the sensor does not employ a high-displacement-sensitivity suspension.

Measurement of Gravity from Satellites

Satellites obtain gravity measurements in two ways: by orbit analysis, or observation of perturbations of the satellite paths from ideal elliptical orbits, and by sea surface altimetry, which can only be carried out over oceans. NASA has computed gravitational models for the Earth using spherical harmonic analysis up to order 20. The shortest wavelength resolvable from such a model would be about 2000 kilometers, so the gravity field based on orbit analysis has very limited resolution and is of no practical value for exploration.

Satellite Altimetry

Satellite altimetry over the Earth's oceans has provided resolution of much shorter wavelengths. In the introduction to their paper comparing satellite and shipborne gravity, Small and Sandwell (1992) offer a concise introduction and summary, including important references:

"Satellites such as Geos-3, Seasat, and Geosat use microwave radar to make high precision (±2 cm vertical) measurements of the sea surface height relative to the reference ellipsoid. In the absence of disturbing forces such as tides, currents, and waves, the sea surface conforms to the geoid or gravitational equipotential surface. The short wavelength components of these geoid height profiles have been used to map fracture zones, seamounts, hotspot chains, mid-ocean ridges and a multitude of previously undiscovered features in the world's oceans (e.g., Haxby et al., 1983)."

[See Sandwell (1991) for a review of applications.] Satellite altimeter data have also been used to map continental margin structure, particularly in remote areas where little shipboard data are available (Bostrom, 1989).

For many of these applications it is desirable to compute gravity anomalies from geoid heights so the satellite data can be compared and combined with shipboard gravity measurements. From theory (e.g., Heiskanen and Moritz, 1967) it is clear that geoid height and gravity anomaly are equivalent measurements of the Earth's external gravity field. The two-dimensional (2-D) Stokes' integration formula is commonly used to compute geoid height from the gravity anomaly, and it is straightforward to invert the Stokes formula to compute the gravity anomaly directly from the geoid height. An alternate approach is to expand the geoid height in spherical harmonics, multiply each of the coefficients by a known factor, and sum the new series to construct the gravity anomaly (Rapp and Pavlis, 1990).

Small and Sandwell (1992) compared detailed shipborne gravity measurements to the free-air gravity field inferred from satellite altimetry. They found that the satellite gravity profiles in their study were accurate to 6.51 mGal for wavelengths greater than 25 km, and they concluded that anomalies with wavelengths as short as 25 km can be resolved using the satellite altimetry available at that time. For longer wavelengths, the average accuracy of gravity from satellite altimetry is better than at the shortest wavelengths, and so it can be useful as a leveling reference for sparse or unconnected shipborne gravity data sets. Gravity derived from satellite altimetry provides low-cost regional information that is often useful in exploration. World-wide coverage has improved in quality and density, and the data are widely available from government and academic sources at low cost.

Survey Costs and Accuracies

The following tables give an idea of costs and accuracies that should be helpful in the early stages of survey planning. Survey costs and accuracies vary substantially depending on conditions; for example, high startup costs will greatly increase the average cost per unit for a small survey. The accuracies achieved on marine and airborne surveys depend heavily on survey conditions: calm weather and large vessels are needed to obtain the best possible marine results, and airborne gravity similarly benefits from smooth flying conditions.

LAND SURVEYS

Application            Station Spacing   Required Vertical Accuracy   Gravity Accuracy   Cost per Station
Microgravity           3 to 30 m         1 cm                         2 to 10 µGal       $30 to $50
Detailed Exploration   0.2 to 1.0 km     ± < 0.5 m                    0.1 to 0.2 mGal    $30 to $100
Regional Exploration   2 to 5 km         1 to 2 m                     0.2 to 0.5 mGal    $150 to $500
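The Required Vertical Accuracy column is tied to the Gravity Accuracy column through the elevation correction: an elevation error leaks into the reduced gravity at the free-air rate (0.3086 mGal/m) less the Bouguer slab rate (about 0.0419·ρ mGal/m). A sketch, assuming the conventional 2.67 g/cm³ reduction density:

```python
FREE_AIR = 0.3086      # free-air gradient (mGal per metre of elevation)
BOUGUER_K = 0.04193    # Bouguer slab term, mGal per metre per (g/cm^3)

def gravity_error_mgal(elev_err_m, density=2.67):
    """Gravity error caused by an elevation error, using the combined
    free-air plus Bouguer-slab elevation factor (~0.197 mGal/m at 2.67)."""
    return (FREE_AIR - BOUGUER_K * density) * elev_err_m

# Elevation tolerances from the LAND SURVEYS table:
for dz in (0.01, 0.5, 2.0):
    print(f"{dz:5.2f} m  ->  {gravity_error_mgal(dz):.3f} mGal")
# 0.01 m -> 0.002 mGal; 0.50 m -> 0.098 mGal; 2.00 m -> 0.393 mGal
```

These values line up with the table rows: centimetre leveling supports microgal work, half-metre leveling supports ~0.1 mGal detailed surveys, and 1-2 m leveling supports regional accuracy.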

. this might include: salt mapping. Aiding in discovery of additional oil & gas fields in a region.5 km/2 mGal Cost/km $30 to $60/km 1. including 1. Identifying "unknown intrusives" observed on seismic record sections (i. Extending subsurface interpretations based on seismic data into areas where no seismic data are available We should adhere to the following general procedure whenever we need to acquire or interpret gravity/magnetic data: Step 1: Analyze geology of the area.2 to 1. once some fields are discovered. igneous intrusive. etc.0 to 10 km 2 to 5 km/0. etc.5 to 2 mGal same as above 1. basement mapping.. definition of volcanic-covered areas and interrelated volcanic and clastic sequences 2.0 to 10 km 10 km/2 to 5 mGal $100 to $200/km Surface Gravity Interpretation Geologic Applications and General Knowledge Gravity interpretation is the determination of subsurface information from gravity maps or profiles.) 3. There are a number of geologic applications for gravity interpretation (particularly for integrated gravity/magnetic interpretation). mapping top of high density rocks. salt dome vs.0 k m Shortest Wavelength/ Amplitude Resolution 0. reef vs. Mapping subsurface geology reconnaissance basement mapping in frontier areas definition of areas to acquire mineral rights or to conduct more extensive exploration structural mapping in mature exploration areas.AIRBORNE AND MARINE SURVEYS Application High-resolution. marine with GPS (part of seismic operation) $50 to $100/km (stand alone) Conventional Marine (most surveys before 1991) Airborne Gravity Typical Primary Line Spacing 0. lava flow.e. to help define the gravity/magnetic signature 4.

Step 2: Determine gravity/magnetic response to known or expected geological features (a) by modeling and (b) empirically.

Step 3: Design gravity and/or magnetic survey, or analyze the quality of the available data to solve the geological problem at hand.

Step 4: Interpret the gravity and magnetic data from known to unknown areas: a. Determine residual gravity (or magnetics) b. Interpret residual gravity (or magnetics)

Perhaps we could add a fifth step to the four listed above, which would be a reflective appraisal of the interpretation relating to the possibility of alternate solutions, and which would include some sensitivity analysis of the conclusions.

It is important to keep in mind that all interpretations and modeling results for gravity and magnetic data are non-unique. There are literally an infinite number of possible geometries and densities (or susceptibilities, in the case of magnetics) which can be used to fit an observed anomaly. It is therefore essential to integrate all available information into the gravity or magnetic model in order to best constrain the final result.

Using gravity and magnetic data together can provide much more information than we could obtain by using either tool separately. In many geologic provinces, we can use both gravity and magnetic data to solve a certain part of the geologic problem, or they can independently provide different pieces of information to help solve the overall geological "puzzle." Ideally, incorporating seismic data into gravity and magnetic analysis can yield even more information.

Applying Local Geological Knowledge

As is true for any kind of geophysical interpretation, the reliability of gravity/magnetic interpretation depends to a great extent on the reliability of local geological knowledge. To produce reliable gravity/magnetic interpretations, we must learn as much about the local geology as possible. When making a gravity/magnetic interpretation, we should answer the following questions:

• Is there a lateral density/magnetic susceptibility contrast present in the area, either in the sedimentary section or in the basement?
• What type of basement rock is likely to be present?
• Are volcanic rocks likely to be present?
• Is the sedimentary section clastic, carbonate or both?
• If both clastic and carbonate rocks are present, is there a distinct geologic contact between the two?
• Are minerals such as salt, gypsum, or anhydrite present?
• Are reefs, salt structures, shale diapirs, igneous intrusives, or other such features expected?

• What subsurface control and surface geology control is available?
• What density and magnetic susceptibility control is available?
• What is the dominant structural style of the region, and is there a predominant structural "strike" expected?
• What is the expected depth of burial and areal extent of the features of greatest geological interest?
• What is the local magnetic inclination, declination, and average total field?
• What is the near-surface geology?

Determining Gravity/Magnetic Response

It is critically important to determine the gravity and magnetic response to the local geology. Generally, we should estimate this response in two ways:

• theoretically, by modeling known or expected geologic features
• empirically, by examining gravity and/or magnetic data within the study area and comparing actual responses to known geologic features

Model Response

The concepts behind determining density and calculating gravity effects of geologic models apply to calculating the theoretical gravity response to the local geologic features of interest. For example, do we expect to encounter salt domes of a certain geometry, reefs of a certain size and depth, or faults of a certain depth and throw? If so, we can construct simple gravity models to determine the probable amplitude and shape of the expected gravity anomaly. Determination of the amplitude and shape of the anomaly can be useful both for interpretation and for survey design.

Empirical Response

If gravity data exist over some or all of our area of interest, we should compare the observed gravity anomalies with the calculated gravity response. Generally, we will need to adjust the geologic and/or density model the first time we make a comparison. One reason that our initial gravity model may not fit the observed gravity is that our initial estimate of the subsurface density distribution is incorrect. In some cases, for example, we see a greater gravity response than we expected. This discrepancy could be due to the anomaly-enhancing effect of differential compaction over a reef. Another potential anomaly enhancer might consist of structurally high basement blocks occurring over high density basement blocks; the combined structural and density change effects produce a larger anomaly than would be caused by the structural effect alone ( Figure 1 , Differential compaction enhancement of basement horst gravity anomaly ). Adjusting the geologic/density model gives us valuable information that we can apply throughout the area we are interpreting. In unsurveyed areas, the empirical analysis of gravity and/or magnetic response will not be possible.

Figure 1

In some cases, there appears to be no relationship between the observed gravity response and the original model. This can happen if local geologic features of interest have little or no lateral density contrast, or if the gravity effects from local structure are obscured by geologic features that cause much larger anomalies. The basement density or "normal" sedimentary section densities could be considered "background geology." Large changes in background geology could obscure the gravity anomaly from the target. Such obscuring features could include shallow basalt or anhydrite thickness variations, abrupt changes in basement lithology, or changes in overall sedimentary section density.

Analyzing Data Suitability

We can use the gravity response model studies to design a survey to detect the geologic features of interest (if this is possible), and to determine the suitability of existing gravity data to solve a given geologic problem. In any case, we should perform all relevant empirical and model response work to design the interpretation procedures, preferably before the survey is acquired. Questions to address in designing a survey or determining the suitability of existing data include the following:

1. Have we selected a reasonable Bouguer density or densities?

2. If terrain corrections are needed, have we performed such corrections in a reasonable manner?
3. Is the contouring of the data reasonable?
4. Is the vertical accuracy of the surveying adequate to provide sufficient data accuracy to solve the geologic problem?
5. In the case of shipborne or airborne gravity surveys, have adequate eotvos and line leveling corrections been made? (Line leveling problems often cause an obvious "herringbone" pattern in the map contours.)
6. Is the line spacing and station spacing sufficient to define the anomalies related to our target and to prevent aliasing of near-surface noise? Is the geologic problem 3-D, but the data distribution suited for only 2-D analysis?
7. In the case of shipborne or airborne gravity surveys, have the data been over-filtered such that target anomalies might be oversmoothed?

Determining Residual Gravity

We should design our interpretations (or new field surveys, for that matter) to incorporate all known relevant geological constraints, including

• known basement or other "mapped formation" outcrops
• subsurface depth control (such as "top of salt," "minimum depth to basement," "projected depth to basement," "depth to basement," etc.)
• other relevant surface geologic controls, such as areas of volcanic outcrop, sedimentary rock outcrop patterns, and known faults and fault patterns
• subsurface and/or seismic control on bulk densities and magnetic susceptibilities from well logs, samples, or modeling of "known" seismic structures

We can use all of the above geologic constraints to help interpret residual gravity.

Bouguer gravity data contain the superimposed effects of basement, sedimentary, and crustal structure and lithologic changes (e.g., variations in basement composition, density, and magnetic susceptibility). Determination of residual gravity is also known as anomaly separation. The residual gravity field is the portion of the Bouguer gravity field that is related to the geologic problem at hand, and represents the difference between the regional gravity field and the Bouguer gravity field.

There are a number of different anomaly separation methods, which we can group into two general categories: purely mathematical, and "eyeball"/profile/geologic. Purely mathematical methods include ring residuals, polynomial residuals, bandpass filters, upward or downward continuation, and derivatives. Essentially, these methods require selecting a particular residual operator (such as the second-derivative) and then performing the operator calculation on a computer. Generally, mathematical residual methods provide qualitative local anomaly enhancement, but do not produce a residual gravity field that is suitable for gravity modeling.

Eyeball/profile/geologic residual determination methods allow the interpreter to put a geologic bias into the residual gravity. Eyeball/profile/geologic anomaly separation includes graphical profile analysis and other

Example Problem 1 illustrates the use of geologic and magnetic depth constraints to assist in designing a gravity regional. where the Bouguer gravity often increases in a seaward direction due to crustal thinning and closer proximity to the higher density oceanic crust. the interpretation shows that the residual is caused by the valley fill. In this case. it is possible to construct residual gravity anomalies from eyeball/profile work without any geologic input. Figure 1 ( Regional gravity determination from Warm Springs Valley.Regional Gravity Determination . incorporating any known geologic input into eyeball/profile residual determination results in a much more reliable interpretation. Example Problem 1 . however. Remember that the residual gravity anomaly is simply the difference between the Regional and Bouguer.methods that require qualitative and quantitative input of geologic constraints. Strictly speaking. A common problem with Residual Gravity determination occurs at the continental margin. Figure 1 Note that the Regional is designed to remove the effects from the gravity which are not related to the valley fill material.Continental Margin . Nevada ) shows an example of removing a graphical Regional gradient by eyeball analysis.

A Bouguer gravity profile is observed as shown on Figure 2 ( Bouguer gravity profile ).

Figure 2

Surface geology data indicate granite outcrop along the left end of the figure to X = -60,000 ft. A magnetic basement depth of 5000 ft occurs at X = +60,000 ft. Assume that the density contrast between sedimentary rocks and basement rocks is -0.5 g/cm3. Design a Regional Gravity Profile such that the resulting Residual Gravity Profile will correlate to changes in the thickness of the sedimentary section.

Solution

Figure 3 ( Geologic constraints on "regional gravity" ) illustrates the use of the two geologic constraints to design the "Regional Gravity" curve.

Figure 3

The regional gravity curve is drawn close to the Bouguer gravity over the known granite outcrop.

Constraint A) The "Residual Gravity" (the difference between the Regional and the Bouguer gravity) should be nearly zero at the left end of the cross-section, over the granite outcrop.

Constraint B) Using the "slab formula", the amount of residual gravity anomaly at X = +60,000 ft is estimated as follows:

∆g = 12.77 ∆ρt = 12.77 (-0.5 g/cm3)(5 kilofeet) = -32 mGal

A smooth Regional can then be drawn to give near-zero residual gravity over the granite outcrop and -32 mGal of residual gravity at X = +60,000 ft. The Regional Gravity is smooth (contains no short wavelength anomalies) because the crustal structure that causes the Regional is deeply buried; thus the regional gravity has a broad curve, instead of merely being a straight line.

A second example of residual gravity determination at a continental margin, in this case with the actual geologic model, is shown on Figure 4 ( Continental margin Bouguer gravity and geologic model ). Note the actual geologic model with the sedimentary basin and a petroleum prospect, a basement high within the basin.

Figure 4
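The Constraint B estimate is easy to reproduce; a minimal sketch, where 12.77 is the text's slab constant for density contrast in g/cm3 and thickness in kilofeet:

```python
# Slab formula from Constraint B: delta_g (mGal) = 12.77 * delta_rho * t,
# with density contrast in g/cm3 and thickness in kilofeet.
def slab_anomaly_mgal(delta_rho_gcc, thickness_kft):
    return 12.77 * delta_rho_gcc * thickness_kft

# Residual expected at X = +60,000 ft: -0.5 g/cm3 contrast over 5 kft of section
print(round(slab_anomaly_mgal(-0.5, 5.0)))  # about -32 mGal
```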

Qualitative Analysis

Figure 1 ( Graphical residual vs. grid residual for fault anomaly ) illustrates the difference between a graphical and a grid residual for defining a fault anomaly.

Figure 1

Given the Bouguer gravity anomaly shown in this figure, we can recognize a fault anomaly signature superimposed on a regional gravity increase toward the right side. Using some simple model calculations, we can construct the linear graphical "Regional" and subtract it from the Bouguer gravity to produce the Residual. The resultant Residual is suitable for input to a 2-D inverse modeling program, or for further analysis for fault geometry and depth using simple graphical models and a trial and error solution method.

Figure 1 also shows a simple approximation to a second-vertical derivative, constructed from a three-point operator having coefficients of -1, 2, and -1. For example, for the second-vertical derivative point at the Bouguer value of 3.80 mGal shown, the calculation would be -1(3.62) + 2(3.80) - 1(3.60) = +0.38 mGal/(data interval)2.

Note that the second-vertical derivative is positive over the upthrown part of the fault anomaly, negative on the downthrown side, and about zero over the steepest gradient of the graphical residual (which is located approximately on the projection of the fault plane to the surface). The second-vertical derivative does not provide a residual referenced to a datum for modeling, but it does produce an anomaly (which approximately corresponds to the curvature of the Bouguer gravity field) that correlates with the fault geometry in a qualitative sense. For an example of the usefulness of qualitative analysis, refer to Figure 2 ( Detail of Bouguer gravity map, Los Angeles Basin ) and Figure 3 ( Detail of second vertical derivative of gravity, Los Angeles Basin ). We can also see that the units of the second derivative are not mGal, but mGal/(data interval)2.
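The three-point operator just described can be sketched as follows, using the worked values from the text:

```python
# Second-vertical-derivative approximation with the three-point operator
# (-1, 2, -1); output units are mGal per (data interval)^2.
def second_derivative(profile):
    return [-profile[i - 1] + 2 * profile[i] - profile[i + 1]
            for i in range(1, len(profile) - 1)]

# Worked point from the text: Bouguer values 3.62, 3.80, 3.60 mGal
print([round(v, 2) for v in second_derivative([3.62, 3.80, 3.60])])  # [0.38]
```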

Figure 3

Figure 2 shows a detailed Bouguer gravity map of a portion of the Los Angeles Basin (Elkins, 1951). This figure shows the steep northeasterly decrease in Bouguer gravity from the Torrance Field to an area several miles northeast of the Dominguez Field. No local gravity closures are present over any of the fields shown in Figure 2 , due to the northeast regional gravity decrease related to the increase in sedimentary section thickness in that direction.

Figure 3 shows a more detailed second vertical derivative of the Bouguer Gravity map. Note that the Rosecrans, Dominguez, Long Beach, and Wilmington Fields are all more or less defined by second vertical derivative positives (maxima). In the Los Angeles Basin example, the second-vertical derivative seems to separate the local gravity anomalies (related to local structural highs over oil fields) from the northeast regional gravity decrease (related to the increase in sedimentary section thickness). This example illustrates the use of the second-vertical derivative to qualitatively outline oil and gas prospects. The second-vertical derivative provides a data enhancement, but not a residual gravity field that we can model.

It is often necessary to experiment with various grid residual operators to find an operator which defines our geologic features of interest. We can use grid residuals to help find more oil and gas fields in an area, once a few fields have been found, by learning and applying the knowledge of the gravity and grid residual signature.

Depth Estimation

Once we construct residual gravity anomalies by using an "eyeball"/profile/geologic technique to remove regional effects, we can further analyze these anomalies using various depth estimation techniques. Most

depth estimation techniques are "rules of thumb" based on simple geologic models, and depths determined from rule-of-thumb methods are useful only as first approximations. Once we determine the approximate depth, we can use it as a "starting point" in designing and constructing computer models. We can check estimated depths for given local gravity anomalies by computing the gravity effect of the model when located at the depth estimated. Three examples are as follows (refer to Figure 1 ):

Figure 1

• For an elongated body (in map view), the maximum depth to the center of a horizontal cylinder is Zc = X1/2. This model can be useful for anticlines or horst blocks which are two-dimensional in map view. So, for example, a 2-D anomaly (elongated) with a half-width of 5000 ft would imply a 5000 ft depth to the center of a horizontal cylinder. In actuality, the horizontal cylinder depth is the deepest possible depth to the center of a 2-D body.

• For a spherical body, the maximum depth to the center of the sphere is Zc = 1.3X1/2.

• For a thin vertical fault, the depth to the center of throw is Zc = X3/4 - X1/2, or Zc = X1/2 - X1/4.

Note that real-world geologic structures are much more complex than the simple cylindrical, spherical or straight vertical-throw fault models used to derive these simple guidelines. Other depth estimation techniques can be found in Grant and West (1965). LaFehr (1987) provides more detailed thin plate estimation techniques. Using potential field theory, we can show that any gravity anomaly can be fit exactly by a variable density layer located at the observation surface; the gravity highs would thus be underlain by higher density portions of the surface layer. LaFehr (1987) described

a unit half-width circle method that provides somewhat more definition of maximum depth for thin plate models. However, no depth estimation method can overcome the fundamental ambiguity that any given gravity anomaly that is fit by a subsurface structure and density model can also be fit by a shallower model ( Figure 2 , Example of ambiguity ).

Figure 2

We can overcome this ambiguity problem, to a large extent, by applying independent geologic or geophysical constraints (e.g., subsurface data points, density data, knowledge of local structural style). Even one geologic data point such as Well A can greatly increase the reliability of a gravity interpretation. For example, if we know a depth to the high density layer at Well A in Figure 2 , and have a good idea of the density distribution, then the likely geologic solution is very much constrained. We essentially use our knowledge of the depth to the upthrown side of the fault, along with our knowledge of the density contrast, to determine a geologically reasonable model for the amount of throw on the fault. We then check our calculations by calculating the gravity effect of the model. If the final model fits the residual gravity and all of the available geologic constraints, then it represents a valid solution to the local gravity problem.

Quantitative Gravity Modeling

Gravity modeling is divided into two broad categories:

• Direct, or forward modeling
• Inverse modeling

Direct gravity modeling involves calculating the gravity effect caused by a given geologic situation. Direct gravity modeling is unambiguous; that is, for a given geologic model, there is a unique calculated gravity field. A computed gravity effect that matches the observed gravity indicates that the geologic model is at least a possible correct solution. If the modeled result violates the observed field, then there is something wrong.

Inverse modeling determines 3-D or 2-D geologic structures and/or densities such that both the residual gravity and geologic constraints are precisely honored. This approach is ambiguous, in that more than one solution can fit the residual gravity field. Because the modeling process is inherently ambiguous, geologic constraints are very useful for assisting in the determination of reasonable solutions. It is important to use all available information for designing the residual gravity, determining the density distribution, and identifying a reasonable starting point for the model. Only quantitative modeling, however, can fully verify density and depth assumptions, fully account for body geometry and interference between adjacent bodies, interpret subtle anomalies, and incorporate other geophysical/geological data sets. This process is the best method for analyzing residual gravity.

Inverse gravity modeling usually employs the residual gravity field (with a geologically derived "regional" removed) and certain known parts of the geologic model as input. Typically, the following data serve as input for an inverse gravity modeling program:

• residual gravity
• density distribution as a function of X, Y and Z
• an initial geologic model, as defined by depths and densities, usually defined by either an upper or lower surface

In inversion programs, we generally use an iterative technique to calculate the unknown parts of the geologic model. Inverse modeling programs use iteration to determine the "free" model parameters (usually the unspecified upper or lower surface), such that the calculated effect of the geologic model on the last iteration closely matches the input residual gravity. Figure 1 ( Regional gravity determination, Warm Springs Valley, Nevada, USA ) shows an example of inverse 2-D gravity modeling, which produces a close fit between residual gravity and the computed effect of the model. It is possible, however, to assume parameters for an inverse gravity modeling program that do not allow convergence to a solution. For example, if we assume a density contrast that is too low, we may not obtain convergence to a solution.
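A minimal sketch of the iterative idea, using a Bott-style update in which each station's layer thickness is corrected by the slab-formula equivalent of the current misfit. The zero-thickness starting model, the 0.04193 mGal per (g/cm3 · m) slab constant, and the iteration count are illustrative assumptions, not a particular program's algorithm:

```python
# Bott-style iterative inversion sketch: adjust fill thickness at each station
# until the computed slab effect matches the residual gravity there.
TWO_PI_G = 0.04193  # slab constant, mGal per (g/cm3 * m)

def invert_thickness(residual_mgal, delta_rho_gcc, n_iter=10):
    """Estimate layer thickness (m) at each station from residual gravity."""
    t = [0.0] * len(residual_mgal)          # starting model: zero thickness
    for _ in range(n_iter):
        for i, g_obs in enumerate(residual_mgal):
            g_calc = TWO_PI_G * delta_rho_gcc * t[i]
            t[i] += (g_obs - g_calc) / (TWO_PI_G * delta_rho_gcc)
    return t

# A -32 mGal residual over fill with a -0.5 g/cm3 contrast implies roughly
# 1.5 km of low-density material.
print([round(x) for x in invert_thickness([-32.0], -0.5)])
```

A real inversion would use the exact forward calculation of the body geometry in place of the slab approximation, which is why several iterations are generally needed.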

Figure 1

The calculated gravity effect of the Valley fill material (having a density contrast of -0.5 g/cm3 with the underlying higher-density rocks) shown on the cross-section fits the residual gravity.

Figure 2 ( Residual gravity, Gulf of Mexico, primarily due to salt effects; note salt interpretation at A, B and C; contour interval = 0.2 mGal ) and Figure 3 ( Structure map, top of salt, Gulf of Mexico, derived from 3-D inverse gravity modeling; salt domes at A and B, and salt nosing; contour interval = 1,000 ft ) show a second example, this time a 3-D inverse gravity model of the top of salt from the Gulf of Mexico that fits the residual gravity and geologic constraints. Note how subtle features on the salt dome are defined by the 3-D inverse modeling.

Figure 2

Figure 3

A third example, illustrating the use of 2-D gravity modeling, is taken from the "Overthrust Belt" of the western United States (Gray & Guion, 1978).

Figure 4

Figure 4 ( Index map, Pineview field, Summit County, Utah, USA ) is a local index map showing the cross section location of Figure 5 ( Structural cross-section, Pineview field, Summit County, Utah, USA ).

Figure 6 ( Density vs. on the upthrown fault block of the Tunp Fault.Figure 5 This is a structural cross section through the Pineview Field. . The overridden downthrown block is Cretaceous rock. along the fault plane.

formation curve, Pineview field ) shows a density vs. formation curve.

Figure 6

The significant density contrast pieces to the thrust fault puzzle were each modeled separately. The density contrast pieces include the lower density Jurassic salt, which flowed into the core of the structure; the lower density Tertiary wedges, which generally thicken away from the crest of the anticline; and the higher density rocks in the core of the anticline, which are on the upthrown side of the Tunp Fault. Figure 7 ( Comparison of combined gravity effect of models with observed gravity, Pineview Field ) shows the superimposed gravity effects of all significant density contrast components compared to the residual gravity.
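The superimposed effect is simply the station-by-station sum of the component effects, per the principle of superposition (equation 4). The numbers below are hypothetical, not taken from the Pineview models:

```python
# Principle of superposition: the combined model effect is the sum of the
# individual component effects at each station (all values hypothetical).
def combined_effect(*component_profiles):
    return [sum(values) for values in zip(*component_profiles)]

salt_effect  = [-0.8, -1.2, -0.9]   # lower density Jurassic salt, mGal
wedge_effect = [-0.3, -0.5, -0.4]   # lower density Tertiary wedges, mGal
core_effect  = [1.0, 2.1, 1.5]      # higher density anticline core, mGal

print([round(g, 2) for g in combined_effect(salt_effect, wedge_effect, core_effect)])
```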

Figure 8 ( Correlation between Bouguer gravity and significant natural gas discovery. Wyoming. Overthrust Belt. .Figure 7 This previous example illustrates we can use 2-D modeling to understand why particular gravity anomalies are present in structurally complex areas. This understanding of the anomaly signature from known production helps operators in the thrust belt locate additional oil and gas fields. USA.

contours are in units of mGal ) shows the gravity anomaly over another significant Overthrust oil and gas field (Guion et al., 1982).

Figure 8

Figure 9 ( Reconnaissance gravity profile ) illustrates the Eagle Springs Field on the east side of Railroad Valley, Nevada, together with the USGS Bouguer gravity profile (Guion & Pearson, 1978).

Figure 9

The principal basin-bounding fault is located near the Paleozoic outcrop on the east side of Railroad Valley and forms part of the oil trapping mechanism. Gravity analysis indicated that the principal basin-bounding fault on the western side of Railroad Valley is several miles east of the Paleozoic outcrop. This western basin-bounding fault formed part of the oil trapping mechanism for the Trap Springs Field, which was discovered with the assistance of the gravity data.

Gravity Stripping

Gravity stripping, the process of removing the gravity effect of a known geologic layer, involves direct gravity modeling. For the layer of interest, we first define the depths of its upper and lower surfaces, along with its density contrast distribution. We then compute the gravity effect of the layer and subtract it from the observed gravity to produce a residual gravity with the effect of the "stripped" layer removed. Gravity stripping might prove useful, for example, in a province which has salt structures and in which pre-salt structure is of interest. If gravity stripping can be used to mathematically replace the gravity effect of the "post-salt" sediment with salt, then the resulting residual may correlate, to some extent, with the pre-salt structures.

Anomalous Mass Estimates

A unique property of gravity exploration is that a given gravity anomaly is caused by a certain amount of anomalous mass or tonnage.

This is shown by Gauss's Theorem:

(1) ∮s g · ds = -4πGM

where

g = gravity attraction vector
ds = an infinitesimally small unit of surface area vector (direction is outward normal to surface s)
∮s = surface integral over closed surface s
G = universal gravity constant
M = the anomalous mass inside of the closed surface

The anomalous mass below an observation surface may be calculated from the residual anomaly, ∆gz, attributed to the mass:

(2) M = (1/2πG) ∫∫ ∆gz dx dy

We can thus determine the mass of an ore body or salt dome by integrating or summing the residual gravity over the area covered by the anomaly, and we can then convert this mass into an ore reserve estimate or an estimate of salt mass. In the case of an ore body (such as a massive sulfide), the anomalous mass directly relates to ore tonnage and to economic evaluations.

Integration of Gravity

The quality of any gravity interpretation depends on the quantity and quality of available geologic and geophysical constraints. Such non-gravity geophysical constraints to gravity interpretations may include the following:

• magnetic basement depths, to help constrain a gravity-derived basement depth model
• local magnetic anomalies, to provide some independent confirmation of gravity-derived local basement and/or intrasedimentary structures (this constraint applies if the anomaly source is generally magnetic and is generally denser than the surrounding rocks)
• seismic data, to provide density information from interval velocities and to compare calculated gravity effects of seismic structures with observed gravity data (seismic data can also provide another source of basement depth estimates, although acoustic basement may not correlate with high density basement)
• incorporation of seismic and gravity data, to provide some "top of high density" depth constraint and assist in producing a more detailed "top of high density rocks" map from gravity modeling (often, gravity data will cover an entire area on a more or less uniform grid, whereas seismic control may be more sporadically located, possibly in the form of a few scattered seismic lines)

Rock Typing of Unknown Seismic Events

3. we may use the following procedure: 1. using various different density and magnetic susceptibility assumptions for the unknown intrusive. For example. such assumed cases might include the following: Unknown Intrusive Density Susceptibility . where little is known about local geologic features. Compute gravity and magnetic models of the unknown intrusives. Figure 1 We can observe such events when acquiring seismic data in frontier areas. Construct seismic time-depth curve and construct a structure map or cross-section of the anomalous body.Figure 1 ( "Unknown" seismic intrusive ) depicts a typical unknown seismic event. 2. To analyze unknown seismic events using gravity and/or magnetic data. utilize seismic interval velocity data in areas between unknown intrusives to estimate background densities of the various principal layers of the sedimentary section. If necessary.

Salt                   2.16 g/cm3     -0.5 x 10-6 cgs
Gabbro intrusion       2.95 g/cm3     3000 x 10-6 cgs
High pressure shale    2.40 g/cm3     0 cgs
Pinnacle reef          2.60 g/cm3     0 cgs

Figure 2 , Figure 3 , and Figure 4 show examples of salt, shale, and igneous gravity effects.

Figure 3
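As a sketch of step 3, the peak anomaly of a spherical intrusive can be compared across the assumed cases. The 2.55 g/cm3 host-section density, 500 m radius, and 1500 m depth below are illustrative assumptions, not values from the text:

```python
import math

G = 6.67e-11  # universal gravitational constant, N m^2/kg^2

def sphere_peak_mgal(delta_rho_gcc, radius_m, depth_m):
    """Peak surface anomaly of a buried sphere: (4/3)*pi*G*drho*R^3 / z^2."""
    delta_rho = delta_rho_gcc * 1000.0               # g/cm3 -> kg/m3
    g_si = (4.0 / 3.0) * math.pi * G * delta_rho * radius_m**3 / depth_m**2
    return g_si * 1e5                                 # m/s^2 -> mGal

# Salt (2.16) vs. gabbro (2.95) in an assumed 2.55 g/cm3 sedimentary section
for name, rho in [("salt", 2.16), ("gabbro", 2.95)]:
    print(name, round(sphere_peak_mgal(rho - 2.55, 500.0, 1500.0), 2))
```

The opposite signs of the salt and gabbro responses illustrate why the gravity signature helps discriminate between the assumed cases.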

Figure 2

Figure 4

4. Compare the calculated and observed gravity and magnetic anomalies among the various assumed cases, and select the most reasonable fit. It may be necessary to adjust the density or magnetic susceptibility of the initial model guess to get a good fit between calculated and observed (residual) gravity or magnetic fields.

Borehole Gravity Applications

When Smith (1950) wrote "The Case for Gravity Data from Boreholes," he correctly anticipated that the well logging applications of borehole gravity would prove more important than the use of borehole gravity data to interpret surface gravity. The range of BHGM applications is defined on one extreme by density logging and on the other by remote sensing of structure. The first sometimes focuses strictly on formation and reservoir evaluation questions; the other extends to basic exploration.

As a logging tool, the borehole gravity meter (BHGM) is unique among porosity tools for its deep radius of investigation and its ability to log inside of casing. None of the porosity tools (BHGM included) actually measure porosity; rather, they measure quantities from which we can interpret porosity. Other porosity measurements are derived from gamma-gamma density, neutron and velocity logs. Figure 1 ( Example BHGM log ) is an example of both applications.

Figure 1

In this figure, the purpose of the survey was to detect carbonate porosity in a reef environment that was missed by the other logs; for this purpose, the tool's useful radius of investigation is approximately 50 ft. The sharp negative density anomaly observed between 6330 ft and 6370 ft suggests porosity obscured by near-borehole effects or poor volume sampling (the zone was perforated and produced commercial quantities of oil and gas). On the other hand, the broad discrepancy between the BHGM and gamma-gamma logs over the depth range of the logged section is typical of a structural effect, in this case the edge of the reef complex, which is within a few hundred feet.

Density from Borehole Gravity

The underlying assumption in computing apparent density from a BHGM survey is that of an Earth model consisting of a layer cake of horizontal infinite slabs. For such a model, the exact density of any slab is given by the gravity gradient through that slab: the gradient measured at any point within the slab is constant, and the slabs above and below it have no effect on the gradient within it. Figure 1 ( Density of an infinite slab from borehole gravity ) shows the measurements that lead to a computed density for an infinite slab.

Figure 1

This simple assumption serves effectively in a majority of cases. Modeling of more complex geometry is not difficult and is routinely used in computing structural corrections to apparent density. In Figure 1 , the gravitational attraction at the top of the slab is 2πρG∆z (from the infinite-slab formula). The attraction at the bottom of the slab is exactly the opposite, so the change in gravity from the bottom to the top of the slab is

∆g = -4πρG∆z     (1)

where

∆g = gravity difference between the bottom and top of the slab
ρ = density of the slab
G = universal gravitational constant
∆z = thickness of the slab

The sign in Equation 1 is negative because the sign conventions for g and z are both positive downwards. For measurements on the real Earth, the density computation must also take the free-air gradient F and the latitude effect into account:

∆g = (F - 4πρG)∆z     (2)

This is the equation we use to derive densities from borehole gravity measurements. When the appropriate constant values are inserted in Equation 2 (from Robbins, 1981), we obtain:

ρ = 3.68270 - 0.005248 sin2φ + 0.00000172z - 0.01192708(∆g/∆z)

where ∆z and z are in meters, φ is latitude, and ∆g is in µGal.
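As a numerical sketch, the working equation above can be evaluated directly. The station pair below (gravity difference, interval, latitude, depth) is hypothetical, and ∆g is taken in microgals, the unit that makes the quoted constants dimensionally consistent.

```python
import math

def apparent_density(dg_ugal, dz_m, latitude_deg, depth_m):
    """Apparent interval density (g/cm3) using the Robbins (1981) constants
    quoted above: dg in microgals, dz and depth in meters, latitude in degrees."""
    phi = math.radians(latitude_deg)
    return (3.68270
            - 0.005248 * math.sin(phi) ** 2   # latitude correction
            + 0.00000172 * depth_m            # depth correction term
            - 0.01192708 * (dg_ugal / dz_m))  # measured vertical gradient term

# Hypothetical station pair: gravity increases 1160 microgals over a
# 10 m interval at 30 degrees latitude and 2000 m depth.
rho = apparent_density(1160.0, 10.0, 30.0, 2000.0)
print(round(rho, 3))  # 2.301 (g/cm3)
```

A zero measured gradient at the equator and zero depth returns the leading constant 3.68270 g/cm3, which is the density at which the formation gradient exactly cancels the free-air gradient.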

Advantages and Features of Borehole Gravity

One of the BHGM's great advantages as a density logging tool is that, unlike other porosity tools, it is practically unaffected by near-hole influences: casing, poor cement bonding, rugosity, washouts, and fluid invasion have negligible influence on the measurement. Another advantage is the fundamental simplicity of the relationships between gravity, mass, rock volume, and density. Complex geology can be easily modeled, so that the response of a range of hypothetical models can be studied and understood before undertaking a survey.

The normal calculated result of a BHGM survey is apparent density, which is a simple function of the measured vertical gradient of gravity. To obtain an apparent density measurement, we measure gravity at two depths. The accuracy of the computed density depends on the accuracy of both measured differences: gravity and depth.

Operationally, BHGM surveys resemble VSP (vertical seismic profiling) surveys. The BHGM is stopped at each planned survey level for a five-to-ten-minute reading. The blocky appearance of the log reflects the station interval; the log is not continuous. BHGM measurements are taken at discrete depths, usually at intervals of 10 to 50 feet, depending on the vertical and density resolution required. While the BHGM has remarkable density resolution over intervals of 10 feet or more (better than 0.01 g/cm3), surveys requiring closer vertical resolution must sacrifice density resolution. Figure 1 ( Density accuracy for various levels of ∆g measurement uncertainty )

Figure 1

and Figure 2 ( Density accuracy for various levels of ∆z measurement uncertainty ) illustrate the BHGM measurement's sensitivity to errors in measuring the gravity difference ∆g and the depth interval ∆z.

Figure 2

Figure 1 shows that over a 20-foot depth interval, measurement of ∆g to an accuracy of 5 µGal will give a density accuracy of 0.01 g/cm3, provided there is no error in the depth interval. Similarly, Figure 2 shows that a 2-inch error in ∆z over a 20-foot depth interval will result in a density error of 0.01 g/cm3.

BHGM Density Logging

Borehole gravity density measurements are unhindered by casing, poor hole conditions, and all but the deepest fluid invasion. The BHGM measurement samples a large volume of rock, which provides a density-porosity value that is more representative of the formation. This is especially beneficial in carbonate and fractured reservoirs. For the BHGM, the depth of investigation depends on the physical dimensions of the zone of density change. Radius of investigation is normally defined as that radius within which 80 percent of the response is generated; it is about 2.45 times the zone thickness, so for a 20-foot zone the radius of investigation is about 50 feet. The thicker the zone, the greater the radius of investigation.

BHGM surveys have been used to find hydrocarbon-filled porosity missed by other logs in both open and cased holes. BHGM density measurements have been used to calculate hydrocarbon saturations: the larger the fluid density contrast, the larger the measured effect. Gas-saturated sands are a particularly easy target because gas is low in density; gas saturations are therefore the easiest to measure. The density differences measured by the gamma-gamma log and the BHGM can be used to calculate the difference in oil saturation between the invaded and undisturbed zones, which can in turn give an estimate of moveable hydrocarbons. The BHGM's wide radius of investigation has also been successfully used to determine gas-oil and oil-water contacts in reservoirs where other measurements have been ineffective.
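The error relations and the radius-of-investigation rule quoted above can be sketched numerically. The gradient coefficient is taken from the Robbins formula earlier in the text, and the nominal formation density of 2.3 g/cm3 in the depth-error estimate is a hypothetical value, not from the source.

```python
FT_TO_M = 0.3048
K = 0.01192708  # (g/cm3) per (microgal/m), from the Robbins formula above

def density_error_from_dg(dg_error_ugal, interval_ft):
    """Density uncertainty caused by a gravity-difference error over the interval."""
    return K * dg_error_ugal / (interval_ft * FT_TO_M)

def density_error_from_dz(dz_error_in, interval_ft, rho=2.3):
    """Small-error estimate of the density uncertainty from a depth-interval
    error; assumes a nominal formation density (hypothetical 2.3 g/cm3)."""
    return (3.68270 - rho) * dz_error_in / (interval_ft * 12.0)

def radius_of_investigation_ft(zone_thickness_ft):
    """80%-response radius using the 2.45 x thickness rule quoted above."""
    return 2.45 * zone_thickness_ft

print(round(density_error_from_dg(5.0, 20.0), 4))      # 0.0098 g/cm3 (Figure 1 case)
print(round(density_error_from_dz(2.0, 20.0), 4))      # 0.0115 g/cm3 (Figure 2 case)
print(round(radius_of_investigation_ft(20.0), 1))      # 49.0 ft, i.e. about 50 ft
```

Both error estimates land near 0.01 g/cm3, reproducing the two figure examples in the text.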

The BHGM tool is especially suited to finding bypassed hydrocarbon production (especially gas) behind casing in old wells. Wells cased before the advent of modern porosity logs will often have a suite of old resistivity and SP logs available, and perhaps an older form of neutron porosity log. If gas is present in the formation, the neutron log will read less than the true porosity because gas lacks the hydrogen ions found in water or oil. A valid calculation of water saturation in a gas zone in this case requires additional porosity information, which can be provided by the BHGM tool, because the neutron log alone cannot distinguish between tight and gas-bearing sands. Modern neutron porosity logs can also be run through casing to help plan a BHGM log. Figure 1 ( Cased hole BHGM with open hole Gamma Ray-Neutron log, tight sand ) and Figure 2 ( Cased hole BHGM with open hole Gamma Ray-Neutron log, gas sand ) show old neutron and natural gamma logs combined with BHGM densities measured through the well casing.

Figure 2

In Figure 1 , the neutron log shows a pattern that could be interpreted as the result of an upwards increase in gas saturation in the sand from 4390 to 4514 feet. The BHGM density log, however, shows densities that increase towards the top of the sand to 2.55 g/cm3, indicating that the top of this zone is not gas-bearing but is, to the contrary, in fact tight. Figure 2 shows the opposite situation, where the BHGM log confirms that gas exists in the zone from 4230 to 4246 feet.

Remote Sensing

A practical rule of thumb for BHGM remote sensing applications is that a remote body with sufficient density contrast can be detected by the BHGM no farther from the well bore than one or two times the height of the body, unless the density contrast is very high and little other noise is present. A channel sand 20 feet thick would be detectable no more than 40 feet away; a salt dome with 15,000 feet of vertical relief would have a definitive signature a few miles away. Local geology, and in particular the thickness of local density units, defines the effective radius of investigation of the BHGM. Figure 1 ( Apparent density anomaly of a truncated slab ) illustrates the basis for this rule of thumb.

Figure 1

The linear relation between apparent density and the angle α subtended by the vertical face of a truncated semi-infinite slab is analytically exact for the example. For example, if the true density contrast in the remote zone were 0.08 g/cm3 and α were 90°, the apparent density anomaly would be 0.02 g/cm3.

Computer modeling of BHGM measurements can help to develop relatively detailed salt-dome-flank or reef-flank model interpretations. Figure 2 shows the computer-modeled effect of a salt dome for different well positions. Modeling is particularly effective where seismic data can be integrated into the modeling process; a model is sought that is consistent with both data sets. In one case, the presence of an imbricate thrust sheet was confirmed by the BHGM, and the BHGM interpretation led to a sidetracked hole and an economic discovery.
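The truncated-slab relation and the detection rule of thumb above can be sketched together. The α/360° proportionality below is inferred from the quoted numbers (a 0.08 g/cm3 contrast at α = 90° producing a 0.02 g/cm3 anomaly) and should be treated as an assumption about the figure, not a general formula.

```python
def apparent_anomaly(true_contrast_gcc, alpha_deg):
    """Apparent-density anomaly opposite the vertical face of a truncated
    semi-infinite slab, assuming the linear angle relation implied above."""
    return true_contrast_gcc * alpha_deg / 360.0

def max_detection_distance_ft(body_height_ft, factor=2.0):
    """Rule of thumb: detectable no farther than one to two body heights away."""
    return factor * body_height_ft

print(round(apparent_anomaly(0.08, 90.0), 3))  # 0.02, reproducing the text's example
print(max_detection_distance_ft(20.0))         # 40.0 ft for a 20-ft channel sand
```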

Figure 2

The sharpness of the density anomaly curve will be diagnostic of lateral offset from the well to the structure, provided there is a vertical change in density.
