
What is an Oil and Natural Gas Reservoir?

Oil and natural gas were formed from the remains of prehistoric plants and animals. Hundreds of
millions of years ago, prehistoric plant and animal remains settled into the seas along with sand,
silt, and rocks. As the rocks and silt settled, layer upon layer piled up in rivers, along coastlines,
and on the sea bottom. Geologic shifts resulted in some of these layers being buried deep in the
earth. Over time, the layers of organic material were compressed under the weight of the
sediments above them, and the increasing pressure and temperature changed the mud, sand,
and silt into rock and the organic matter into petroleum. This rock containing the organic matter
that turned into petroleum is referred to as source rock. The oil and natural gas are contained in
the tiny pore spaces of these source rocks, similar to water in a sponge.

Over millions of years, the oil and gas that formed in the source rock deep within the earth moved
upward through tiny, connected pore spaces in the rocks. Some seeped out at the surface of the
earth. But most of the petroleum hydrocarbons were trapped by nonporous rocks or other barriers
that would not allow them to migrate any further. These underground traps of oil and gas are called
reservoirs. Reservoirs are not underground "lakes" of oil; reservoirs are made up of porous and
permeable rocks that can hold significant amounts of oil and gas within their pore spaces. The
properties of these rocks allow the oil and natural gas within them to flow through the pore spaces
to a producing well.

Some reservoirs may be only hundreds of feet below the surface. Others are thousands, and
sometimes tens of thousands of feet underground. In the U.S., a few reservoirs have been
discovered at depths greater than 30,000 feet (9.15 km). Many offshore wells are drilled in
thousands of feet of water and penetrate tens of thousands of feet into the sediments below the
sea floor.

Most reservoirs contain oil, gas, and water. Gravity acts on the fluids to try to separate them in the
reservoir according to their density, with gas being on top, then oil, then water. However, other
parameters, such as fluid/rock properties and solubility will restrict complete gravitational
separation. When a well produces fluids from a subsurface reservoir, it typically recovers oil and
water, and often some gas.

The larger subsurface traps are the easiest deposits of oil and gas to locate. In mature production
areas of the world, most of these large deposits of oil and gas have already been found, and
many have been producing since the 1960s and 1970s. The oil and gas industry has developed
new technology to identify and access smaller, thinner bands of reservoir rock that may contain
oil and gas. Improved seismic techniques (such as 3D seismic) have raised the odds of
correctly identifying the location of these smaller, more difficult-to-find reservoirs. There is still
a lot of oil and gas left to be discovered and produced. Future discoveries will be in deeper
basins, and in more remote areas of the earth. There will also be a lot of small reservoirs found in
existing oil and gas areas using advanced technologies.

Technological innovation not only makes it easier to find new deposits of oil and gas, but also to
get more oil or gas from each reservoir that is discovered. For example, new drilling techniques have
made it feasible to intersect a long, thin reservoir horizontally instead of vertically, enabling the oil
or gas from the reservoir to be recovered with fewer wells. Technology advances have greatly
enhanced the oil and gas industry's ability to find and recover more of the finite amount of oil and
gas created millions of years ago.

How Does the Industry Find Oil and Natural Gas?

Through the early 1900s, finding oil and gas was largely a matter of luck. Early explorers looked
for oil seeps to the surface, certain types of rock outcrops, and other surface signs that oil might
exist below ground. This was a "hit or miss" process.

But science and technology quickly developed to improve the industry's ability to "see" what lies
below ground. Seismic technology uses the reflection of sound waves to identify subsurface
formations. A crew working on the surface sets geophones at intervals along a straight line. Then
a loud noise is created at the surface. The noise moves through the ground and reflects off
underground formations. How quickly and loudly that sound is reflected back to the geophones
indicates what lies below ground. This process is repeated many times. Different types of
formations reflect sound differently, providing a picture of the types of rocks that lie below. If the
geophones are laid out in straight lines, the results are called 2-dimensional (2D) seismic. If they
are in a grid pattern, the result is called 3-dimensional (3D) seismic. Reading 2D seismic images
to find possible traps and reservoir rocks was as much art as science. Today, sophisticated
technology and high-speed computers help geophysicists process massive amounts of seismic
data. From these data, they can develop three-dimensional underground maps that significantly
improve the industry's ability to locate possible oil or gas deposits. But until a well is drilled, it is
impossible to know for certain whether the resource is there, whether it is oil or gas, and whether
it can be recovered in commercial quantities.
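As a rough illustration of the travel-time principle described above (not taken from the source), the depth of a reflecting formation can be estimated from the recorded two-way travel time and an assumed average sound velocity through the overlying rock:

```python
def reflector_depth_m(two_way_time_s, avg_velocity_m_s):
    """Estimate reflector depth from two-way travel time.

    The sound travels down to the formation and back, so one-way
    time is half the recorded time; depth = velocity * one-way time.
    """
    return avg_velocity_m_s * two_way_time_s / 2.0

# Example: a reflection arriving 2.0 seconds after the shot, assuming
# an average velocity of 2500 m/s through the overlying layers.
print(reflector_depth_m(2.0, 2500.0))  # 2500.0 m
```

Real interpretation is far harder than this sketch suggests, because velocity varies from layer to layer and is itself one of the unknowns.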

Once a company identifies where the oil or gas may be located, it then begins planning to drill an
exploratory well. Drilling a well is expensive; shallow offshore wells or deep onshore wells can
cost more than U.S.$10 million each to drill. In deep water offshore, or in remote areas such as
the Arctic, wells can cost substantially more. Companies must analyze all of the available
information in determining whether, and where, to drill an exploration well.

Before the technology advances of the past few decades, the best place to put a well was directly
above the anticipated location of the oil or gas reservoir. The well would then be drilled vertically
to the targeted oil or gas formation. Technology now allows the industry to drill directionally from a
site up to 5 miles (8 km) away from the target area. Computers on the drilling rig and steering
equipment in the drill-bit assembly enable guiding the wellbore with such accuracy that it can
target an area the size of a small room more than a mile underground. This directional drilling
technology means that the industry can avoid placing wells in environmentally sensitive areas or
other inaccessible locations, yet still access the oil or gas that lies under those areas. Advanced
drilling technologies, linked with satellite communications, mean that an engineer can monitor and
guide drilling operations in Peru, in real time, from an office in Houston.

In simplified terms, the drilling process uses a motor, either at the surface or downhole, to turn a
string of pipe with a drill bit connected to the end. The drill bit has special "teeth" to help it crush
or break up the rock it encounters to make a hole in the ground. These holes can vary in diameter
from a few inches to approximately two feet (0.6 m), but are usually in the range of 8 to 12 inches
(20-30 cm). While the well is being drilled, a fluid, called "drilling mud," is circulated down the
inside of the drillpipe, passes through holes in the drill bit, and travels back up the wellbore to the
surface. The drilling mud has two purposes: 1) it carries the small bits of rock (cuttings) from the
drilling process to the surface so they can be removed, and 2) it fills the wellbore with fluid to
equalize pressure and prevent water or other fluids in underground formations from flowing into
the wellbore during drilling. Water-based drilling mud is composed primarily of clay, water, and
small amounts of chemical additives to address particular subsurface conditions that may be
encountered. In deep wells, oil-based drilling mud is used because water-based muds cannot
stand up to the higher temperatures and conditions encountered. The petroleum industry has
developed technologies to minimize the environmental effects of the drilling fluids it uses, recycling
as much as possible. The development of "green" fluids and additives is an important area of
research of the oil and gas industry.

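The pressure-balancing role of drilling mud can be sketched with the standard oilfield rule of thumb that a mud column exerts 0.052 psi per foot of depth per pound-per-gallon of mud weight (the specific numbers below are illustrative, not from the source):

```python
def hydrostatic_pressure_psi(mud_weight_ppg, depth_ft):
    """Hydrostatic pressure exerted by a column of drilling mud.

    Uses the standard oilfield conversion factor of 0.052 psi per
    foot of depth per pound-per-gallon (ppg) of mud weight.
    """
    return 0.052 * mud_weight_ppg * depth_ft

# A 10 ppg mud at 10,000 ft exerts 0.052 * 10 * 10000 = 5200 psi.
# To keep formation fluids out of the wellbore, this must exceed
# the pressure of the fluids in the surrounding rock.
print(hydrostatic_pressure_psi(10.0, 10_000))  # 5200.0
```

In practice, the mud engineer adjusts mud weight so the wellbore pressure stays above formation pressure without fracturing the rock.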
Even with the best technology, drilling a well does not always mean that oil or gas will be found. If
oil or gas is not found in commercial quantities, the well is called a dry hole; it will be plugged with
cement. Sometimes, the well encounters oil or gas, but the reservoir is determined to be unlikely
to produce in commercial quantities.

Technology has increased the success rate of finding commercial oil or gas deposits. In the U.S.,
for example, dry holes still accounted for 13% of all wells drilled in 2003. But this compares with
37% in 1973, 32% in 1983 and 26% in 1993. For wells seeking new deposits of oil or gas
(exploratory wells), technology has decreased the dry hole rate from 78% (only 22% of wells
finding commercial quantities of oil or natural gas) in 1973 to 56% in 2003.[1] The use of better
seismic and drilling technologies means fewer wells are required to add to the world's oil and
gas supplies.

New and better technology has made it possible for the industry to continue finding oil and gas
with fewer wells, less waste, less surface disturbance, and greater efficiency.

How are Oil and Natural Gas Produced?

Once an oil or gas reservoir is discovered and assessed, production engineers begin the task of
maximizing the amount of oil or gas that can ultimately be recovered from it. Oil and gas are
contained in the pore spaces of reservoir rock. Some rocks may allow the oil and gas to move
freely, making it easier to recover. Other reservoirs do not part with the oil and gas easily and
require special techniques to move the oil or gas from the pore spaces to a producing well. Even
with today's advanced technology, in some reservoirs more than two-thirds of the oil in the
reservoir rocks may not be recoverable.

Before a well can produce oil or gas, the borehole must be stabilized with casing, which is
lengths of pipe cemented in place. The casing also serves to protect any fresh water intervals
that the well passes through, so that oil cannot contaminate the water. A small-diameter tubing string is
centered in the wellbore and held in place with packers. This tubing will carry the hydrocarbons
from the reservoir to the surface.

Reservoirs are typically at elevated pressure because of underground forces. To equalize the
pressure and avoid the "gushers" of the early 1900s, a series of valves and equipment is installed
on top of the well. This wellhead, or "Christmas tree," as it is sometimes called, regulates the flow
of hydrocarbons out of the well.

Early in its production life, the underground pressure will often push the hydrocarbons all the way
up the wellbore to the surface, much like a carbonated soft drink that has been shaken.
Depending on reservoir conditions, this "natural flow" may continue for many years. When the
pressure differential is insufficient for the oil to flow naturally, mechanical pumps must be used to
bring the oil to the surface. This process is referred to as artificial lift. In the U.S., above-ground
pumping units are often called "horsehead" pumps because of their unique shape and movement.

Most wells produce in a predictable pattern called a decline curve. Production will increase for a
short period, then peak and follow a long, slow decline. The shape of this decline curve, how high
the production peaks, and the length of the decline are all driven by reservoir conditions. Some
wells may stop producing in economic quantities in only a few years. In the U.S., 8 oil and gas
fields have been producing for more than 100 years.
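The long, slow decline described above is often approximated with a simple exponential model, in which production falls by a constant fraction per unit time. The model and the rates below are illustrative assumptions, not figures from the source:

```python
import math

def exponential_decline(q_initial, decline_rate_per_year, years):
    """Production rate after a given time under exponential decline.

    q(t) = q_i * exp(-D * t): a common first approximation of the
    decline that follows a well's early production peak.
    """
    return q_initial * math.exp(-decline_rate_per_year * years)

# A hypothetical well peaking at 1,000 B/D and declining at 15% per
# year (D = 0.15) falls to roughly 472 B/D after 5 years and about
# 223 B/D after 10 years.
for t in (0, 5, 10):
    print(t, round(exponential_decline(1000.0, 0.15, t), 1))
```

Actual decline curves vary widely with reservoir conditions, which is why engineers fit them to measured production data rather than assuming a rate.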

Engineers can do a variety of things to affect a well's decline curve. They may periodically
perform an operation called a "workover," which cleans out the wellbore to help oil or gas move
more easily to the surface. They may fracture or treat the reservoir rock with acid around the
bottom of the wellbore to create better pathways for the oil and gas to move through the
subsurface to the producing well.

As a field ages, the company may choose to use a technique called waterflooding. In this case,
some of the wells in the field are converted from production wells to injection wells. These wells
are used to inject water (often produced water from the field) into the reservoir. This water tends
to push the oil out of the pores in the rock toward the producing well. Waterflooding will often
increase production from a field.

In more advanced cases, the company may use more sophisticated techniques, collectively
referred to as enhanced oil recovery (EOR). Depending on reservoir conditions, various
substances [from steam to nitrogen, carbon dioxide to a surfactant (soap)] may be injected into
the reservoir to remove more oil from the pore spaces and increase production.

Throughout their productive life, most oil wells produce oil, gas, and water. This mixture is
separated at the surface. Initially, the mixture coming from the reservoir may be mostly oil with a
small amount of water. Over time, the percentage of water increases. On average in the United
States, oil wells produce 8 barrels of water for each barrel of oil. Some older wells may produce
as much as 100 barrels of water for each barrel of oil. This produced water varies in quality from
very briny to relatively fresh. In arid areas of the western U.S., produced water may be used for
agricultural purposes, such as livestock watering or irrigation. Where it cannot be used for other
purposes, this produced water may be reinjected into the reservoir either as part of a
waterflooding project or for disposal (returning it to the subsurface).

Natural gas wells usually do not produce oil, per se, but do produce some amount of liquid
hydrocarbons. These natural gas liquids are removed in the field or at a gas processing plant
(which may remove other impurities as well). Natural gas liquids often have significant value as
petrochemical feedstocks. Natural gas wells also often produce water, but the volumes are much
lower than is typical for oil wells.

Once it is produced, oil may be stored in a tank and later moved by means of truck, barge, or ship
to where it will be sold or enter the transportation system. Most often, however, it goes from the
separation facilities at the wellhead directly into a small pipeline, which then feeds into a larger
pipeline. Often, pipelines are used to bring the production from offshore wells to shore. Pipelines
may transfer oil from a producing field to a tanker loading area for shipping. Pipelines also may
be used to move oil from a port area to a refinery to be processed into gasoline, diesel fuel, jet
fuel, and many other products.

Natural gas is almost always transported through a pipeline. Because of the difficulty in moving it
from where it exists to where potential consumers are, some known gas deposits are not
currently being produced. Many years ago, the gas may have been wasted as an unwanted by-
product of oil production. The industry recognizes the value of clean-burning natural gas and is
working on improved technologies for getting gas from the reservoir to the consumer. Gas-to-
liquids (GTL) is an area of technology development that will allow natural gas to be converted to a
liquid and transported by tanker. Some countries have installed facilities to export gas as liquefied
natural gas (LNG), but the number of countries with facilities to use LNG is still limited.

Where are Oil and Natural Gas Produced?

The oil and natural gas that power our homes, transportation,
and businesses are produced in more than 100 countries
around the world. Most of those countries produce both oil
and natural gas; a few produce only natural gas.

The ten largest oil producing countries in 2002 were: Saudi Arabia, Russia, United States, Iran,
China, Mexico, Norway, Venezuela, United Kingdom, and Canada.[1] Many factors can affect the
level of production, such as civil unrest, national or international politics, adherence to quotas, oil
prices, oil demand, new discoveries, and technology development or application.

In 2002, the ten largest natural gas producing countries were: Russia, United States, Canada,
United Kingdom, Algeria, Netherlands, Iran, Indonesia, Norway, and Uzbekistan.[1]
Natural gas is difficult to transport over long distances. Thus,
in most countries, natural gas is consumed within the country
or exported to a neighboring country by pipeline. Technology
for liquefying natural gas so that it can be transported in
tankers (like oil) is improving, but the volume of natural gas
exported in this manner is still limited. As technology expands
the options for gas transportation, demand for natural gas is
expected to grow.

The Energy Information Administration (EIA) [U.S.
Department of Energy] estimates that world oil production in
2002 was 66.8 million barrels per day (B/D), or 24.4 billion barrels. EIA estimates that dry natural
gas production was 92.3 trillion cubic feet in 2002.

World oil production comes from more than 830,000 oil wells.[2] More than 520,000 of these wells
are in the United States, which has some of the most mature producing basins in the world. On
average, an oil well in the United States produces only 11 B/D, compared with 199 B/D in Russia,
3,674 B/D in Norway, and 5,404 B/D for a well in Saudi Arabia. Comparable data for natural gas
wells are not readily available.
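The per-well figures above can be cross-checked with simple arithmetic using only the numbers quoted in the text (a rough check: the world production estimate includes sources beyond these wells, so exact agreement is not expected):

```python
# Figures quoted above (2002 estimates).
world_production_bpd = 66.8e6   # barrels per day
world_oil_wells = 830_000
us_oil_wells = 520_000
us_avg_bpd_per_well = 11

# Average output of an oil well worldwide: about 80 B/D, far below
# the Saudi average but well above the mature U.S. average.
print(round(world_production_bpd / world_oil_wells))  # 80

# Implied total U.S. oil-well output: roughly 5.7 million B/D from
# more than 60% of the world's wells.
print(round(us_oil_wells * us_avg_bpd_per_well / 1e6, 1))  # 5.7
```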

A growing percentage of the world's production is from offshore areas, such as the Gulf of
Mexico, the North Sea, western Africa (Angola, Nigeria), and Asia (China, Vietnam, and
Australia). Offshore production represents significant technical challenges, yet technology
advances have enabled the industry to increase offshore production dramatically in the past
decade. Remotely operated vehicles (ROVs) that can maintain wellheads and equipment on the
ocean floor are just one example of the technology that has expanded the world's producing areas.

Seismic Technology

Evolution of a Vital Tool for Reservoir Engineers

In the Duri field on the island of Sumatra, Caltex Pacific Indonesia is using four-dimensional (4D),
or time-lapse three-dimensional (3D), seismic technology on a large scale to improve oil
recovery and optimize energy use in the world's largest steamflood. In the Ekofisk field in the
North Sea, Phillips Petroleum has integrated borehole seismic images (also known as vertical
seismic profiles) with 3D and 4D surface seismic data to monitor the reservoir and to position new
wells more confidently to revitalize production. In the Kinsler field of southwestern Kansas,
Amoco, Texaco, and Conoco have used crosswell seismic imaging to aid in drilling decisions and
reservoir characterization. These accomplishments mark significant steps forward in the evolution
of seismic technology beyond exploration to becoming a vital tool for field development and
production. Similarly, these achievements exemplify the potential for turning untapped resources
into recoverable reserves that can be realized when engineers and geoscientists work together in
true multidisciplinary asset-management teams.

Success in the oil field always has depended on
minimizing risk and uncertainty. That, in turn, has driven the quest to acquire information about
the subsurface and also to store, process, and manage that information to optimize production
and minimize risk and cost. Petroleum seismic technology was used initially to generate structural
images of subsurface targets. With the advent of 3D seismic and steady advances in acquisition,
processing, and interpretation, seismic data now deliver not only more structural detail but also
stratigraphic information and direct hydrocarbon indicators. When seismic data are integrated
with well logs, core data, and other subsurface information, reservoir description and monitoring,
and thus economics, are significantly enhanced.[1]

In the Beginning
In 1924, the discovery of an oil field beneath the Nash salt dome in Brazoria County, Texas, was
the first to be based on single-fold seismic data.[2] Before that, oilfield exploration was very much a
guessing game based on surface signs. Stakes were high, and rewards could be tremendous, but
losses from dry holes could be devastating. Then, engineers and geoscientists discovered that
they could use low-frequency sound waves to map subsurface geologic structures and locate
possible hydrocarbon traps.

Seismic instruments to record and measure movements of the ground during earthquakes were
first developed during the middle of the 19th Century. John C. Karcher developed the formula
that is the basis of reflection seismology.[3] This formula heralded a revolutionary change from
refraction to reflection as the basis of oilfield seismology. In reflection seismology, subsurface
formations are mapped by measuring the time it takes for acoustic pulses generated in the Earth
to return to the surface after reflection from interfaces between geological formations with
different physical properties.

In the early 1900s, Reginald Fessenden, chief physicist for the Submarine Signaling Co. of
Boston, used sound waves to measure water depths and to detect icebergs. In 1913, seismic
instruments he invented were used to record both refractions and reflections through Earth
formations near Framingham, Massachusetts. In September 1917, the U.S. Patent Office issued
a patent for "Method and Apparatus for Locating Ore Bodies."[4]

During World War I, Ludger Mintrop invented a portable seismograph for the German army to use
to locate Allied artillery. By recording Earth vibrations from positions opposite Allied
bombardments, Mintrop could calculate gun locations so accurately that the first shot from a
German gun often would make a direct hit.[5] The Germans discovered that varying velocities
among the geological formations through which their vibrations passed introduced errors into their
distance calculations and that certain assumptions about geology had to be made to compute the
distances. After the war, Mintrop reversed the process by measuring the distances and computing
the geology from the Earth's vibrations recorded on his portable seismograph and, in April 1923,
was awarded a U.S. patent for the new process.

This was shortly after the 1921 Oklahoma City tests conducted by Karcher, William Haseman,
Irving Perrine, and William Kite that proved the validity of the reflection seismograph as a useful
tool in the search for oil.[6] In 1925, Karcher and DeGolyer persuaded Fessenden to sell his ore-
bodies patent to Geophysical Research Corp. [On 16 May 1930, Karcher and Eugene
McDermott, with the financial backing of DeGolyer, founded Geophysical Service Inc. (GSI).]

On 25 March 1925, Dabney Petty, Associate State Geologist for the Texas Bureau of Economic
Geology, wrote to his brother, a structural engineer in Dallas, about application of Mintrop's
method by his company, Seismos, on the Texas gulf coast. In his return letter of 1 April, O. Scott
Petty wrote of his idea to use the then-new vacuum tube to develop a seismograph that could
operate without dynamite. "It occurs to me that if we had a seismograph that we could operate
without using great quantities of dynamite (no dynamite at all, I mean) we would be able to put it
all over these big companies," Petty wrote. "Let's try to invent a seismograph using a vacuum
tube to detect the Earth vibrations so that it will be sensitive enough to register the vibrations
made by simply dropping a heavy chunk of lead on the ground..." This correspondence led to
the invention and development of the first displacement-sensitive seismograph and also gave
birth to the third of the pioneering geophysical firms, the Petty Cos.

Applying the seismic principles developed by these pioneers revolutionized the search for
hydrocarbons and brought remarkable discoveries. Cecil Green, who was also a founder of Texas
Instruments, once reminisced that geophysics was a perfect combination of technology and
people. "The high demands of science breed integrity, and modesty as well," he said. "Show me
a geologist, a geophysicist who's brimming with ego, and I'll show you a probable newcomer to
the business. Mother Earth has a way of quickly showing you you're always the upstart."[7]

Transistors, Tape, and CDP

Following World War II, GSI acquired a license to build transistors that ultimately resulted in the
birth of Texas Instruments, of which GSI then became a subsidiary. The move to transistorized
equipment dramatically lightened the load for field crews.

Another advancement during the mid-1950s was the recording of seismic signals in variably
magnetized tracks along the length of a magnetic tape. Changing from paper to taped records
pointed the way to machine processing, development of the analog processor, and a total change
in the way seismic data were collected and processed.

A third advancement in the 1950s was W. Harry Mayne's invention of common-depth-point (CDP)
data stacking. Mayne's invention, also referred to as common midpoint or common reflection
point, proved to be the main signal-to-noise-enhancing technique in seismic exploration and is
still the basis from which novel techniques of economic continuous subsurface coverage depart.[8]

A fourth advancement, Conoco's development of Vibroseis, made it possible to substitute
manmade vibrations or waves for those caused by dynamite-generated explosions. Vibroseis
relies on specially designed vibrating or weight-dropping equipment to create waves that
penetrate the surface, strike underground formations, and reflect back to the seismograph in
exactly the same manner as explosion-generated waves. The introduction of Vibroseis meant that
the multiplicity of source points necessitated by CDP would be feasible without the associated
increase in cost that was inevitable when dynamite was the only energy source.
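Why stacking is such a powerful signal-to-noise enhancer can be shown with a toy experiment (a simplified sketch of the averaging principle, not actual CDP processing): averaging N traces that share the same reflection but carry independent noise reduces the noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "reflection": the same underlying signal recorded on
# 32 traces, each contaminated with independent random noise.
n_traces, n_samples = 32, 500
signal = np.sin(np.linspace(0, 8 * np.pi, n_samples))
traces = signal + rng.normal(scale=1.0, size=(n_traces, n_samples))

# Stacking = averaging the traces that share a common depth point.
stacked = traces.mean(axis=0)

noise_single = np.std(traces[0] - signal)
noise_stacked = np.std(stacked - signal)
print(noise_single / noise_stacked)  # roughly sqrt(32), about 5.7
```

This is why the multiplicity of source points that CDP requires pays for itself: each extra recording of the same subsurface point suppresses more of the random noise.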

Digital Technology: The Second Revolution

According to Graebner, the second revolution in petroleum seismology occurred in the early
1960s with the arrival of digital technology. In a joint effort with Texas Instruments and several oil
companies in 1961, GSI introduced the first digital field system and computer for seismic-data
processing. Three years later, IBM introduced its 360 series of digital computers, and computers
suddenly moved from novelty to commercial popularity. Geoscientists began moving data from
bookshelves and file cabinets into computers, and processors had a heyday generating
processing algorithms.

The evolution of modern petroleum seismic technology and the evolution of information
technology (IT) are closely related. In fact, the two have developed in tandem, and the petroleum
geophysical industry continually is one of the largest, and best, users of IT outside the high-tech
industry itself.

With digital technology, signals detected by sensitive geophones could be read at millisecond
intervals and recorded as binary digits across the width of a 1-in. format tape. Early well files
mimicked the paper files from which they had descended. They contained primarily raw data.
Seismic sections were correlated by hand to sonic logs to evaluate prospects. Computer filing
quickly grew more sophisticated, however, and databases evolved. Complete digital gathering
and processing systems were developed, systems approaches were adopted, and the amount of
real subsurface information available improved dramatically.

Computing added a third dimension to reservoir modeling and increased the number of grids,
which improved resolution. It also made it possible, for the first time, to model Earth properties
with nonlinear characteristics.

Three-Dimensional Seismic: The Third Revolution

Graebner characterizes the move from two-dimensional (2D) to 3D seismic as the third major
revolution in seismic technology. The concept of 3D-seismic surveying has existed since the
earliest days of geophysics. However, the ability to implement that concept was restricted by the
efficiency and accuracy of data acquisition and the cost and computing power necessary to
condense, process, display, and help interpret data. All that changed in just over a decade and
made 3D seismic a reliable and cost-effective method of optimizing field development and
production.

By the early 1970s, the industry had developed a data-processing arsenal that contained, among
other things, programs for single and multichannel processing, deconvolution, velocity filtering,
automated statics, velocity analysis, migration, inversion, and noise reduction. These processing
accomplishments and the accompanying improvements in data collection advanced seismic
prospecting by levels of magnitude, but imaging methods were still 2D.[9]

The first 3D seismic survey was shot by Exxon over the Friendswood field near Houston in 1967.
In 1972, GSI enlisted the support of six oil companies (Chevron, Amoco, Texaco, Mobil, Phillips,
and Unocal) for a major research project to evaluate 3D seismic. The site selected for the
experiment was the Bell Lake field in southeastern New Mexico.

The Bell Lake field was a structural play with nine producers and several dry holes. It also had
sufficient borehole data to ensure that 3D seismic could be correlated to subsurface geology. The
acquisition phase took only about 1 month, but processing the half million input traces required
another 2 years, and producing migrated time maps without workstations or any other form of 3D
interpretation aid was also a lengthy process. Nonetheless, the project was a defining event in
seismic history because the resulting maps confirmed the field's nine producers, condemned its
three dry holes, and revealed several new drilling locations in a mature field. The development of
3D seismic was one of the most important technological breakthroughs in an industry in which
profitability is closely tied to innovation and technology. Finally, the subsurface could be depicted
on a rectangular grid that provided the interpreter with detailed information about the full 3D
subsurface volume. The images produced from 3D data provided clearer and more accurate
information than those from 2D data. Any desired cross section could be extracted from the
volume for display and analysis, including vertical sections along any desired zigzag path.

Lateral detail also was enhanced by the dense spatial coverage in 3D surveys. Slicing the data
volume horizontally at fixed reflection times yielded comprehensive overviews of subsurface
structural features, particularly faulting. Attributes could be mapped and displayed along curved
reflector surfaces. The accurate positioning of events made possible through 3D migration also
improved subsurface imaging of flatter-lying stratigraphic targets. The result was an extension of
the value of seismic data for exploration and production functions.
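The idea of extracting arbitrary sections from a 3D volume can be sketched with a toy array (illustrative only; real seismic volumes are stored in specialized formats such as SEG-Y and indexed by survey geometry):

```python
import numpy as np

# A toy 3D seismic volume: amplitude indexed by
# (inline, crossline, two-way-time sample).
rng = np.random.default_rng(1)
volume = rng.normal(size=(100, 80, 400))

# A vertical section along one inline: the traditional 2D-style view.
inline_section = volume[42, :, :]        # shape (80, 400)

# A horizontal "time slice" at a fixed reflection time, the view
# that gives comprehensive overviews of features such as faulting.
time_slice = volume[:, :, 250]           # shape (100, 80)

# An arbitrary zigzag path through the survey: pick (inline,
# crossline) pairs and pull the full trace at each point.
path = [(0, 0), (10, 5), (20, 15), (30, 40)]
zigzag = np.stack([volume[i, x, :] for i, x in path])  # shape (4, 400)

print(inline_section.shape, time_slice.shape, zigzag.shape)
```

The point is that once the data exist as a dense rectangular grid, any cross section is just an indexing operation rather than a new field survey.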

Modern 3D Seismic Technology

Today, 3D-seismic technology is applied to solve problems and reduce uncertainties across the
entire range of exploration, development, and production operations. Surveys are used to
characterize and model reservoirs, to plan and execute enhanced-oil-recovery strategies, and to
monitor fluid movement in reservoirs as they are developed and produced. These capabilities
have been made possible by advancements in data acquisition, processing, and interpretation
that have both improved accuracy and reduced turnaround time.

Acquisition. Reduction in 3D data-acquisition time has reduced the price of 3D data and
dramatically increased the amount of data available. Better and more reliable instrumentation,
better and more streamers per swath, improved and faster navigation processing, and onboard
quality control and data processing have dramatically reduced downtime. Today, marine seismic
vessels used for 3D acquisition have, on average, four or five streamers, although some
supersized vessels can tow up to 16 streamers simultaneously.10 Another dramatic improvement
in acquisition technology has come about through ocean-bottom-cable (OBC) surveying methods.
Once considered a specialized technique, OBC acquisition has become competitive with
streamer operations in water depths of up to 650 ft. With the OBC method, cables connected to
stationary receiver stations are deployed on the ocean bottom, and a marine vessel towing an
array of air guns serves as the energy source. This makes it possible to survey congested areas
safely and uniformly.11 Additionally, resolution is higher because the quality of measurements is
less affected by noise and other disruptions and because control of actual positioning makes
repeated surveys more reliable.

Processing. Commercialization of 3D depth migration, the
process by which geophysical-time measurements are processed into depth readings, owes itself
to parallel computing. In places where lateral changes in the Earth take place quickly, time
images of the subsurface are distorted by those changes. When these data are processed into
depth, a substantially more accurate picture of the Earth's subsurface results if the velocities of the rocks are known. Today, 3D depth migration is emerging as a truly interpretive data-processing method that is closing the communications gap between geologists, geophysicists, and reservoir engineers.
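The time-to-depth conversion at the heart of depth migration can be illustrated with a toy calculation. This is an illustrative sketch only, not a migration algorithm, and all layer times and velocities are hypothetical examples:

```python
# Illustrative sketch only (not a migration algorithm): converting two-way
# reflection times to depths from assumed interval velocities.

def times_to_depths(two_way_times_s, interval_velocities_ft_s):
    """Convert cumulative two-way traveltimes (s) at each layer base to
    depths (ft), given the velocity (ft/s) inside each layer."""
    depths, depth, prev_t = [], 0.0, 0.0
    for t, v in zip(two_way_times_s, interval_velocities_ft_s):
        depth += 0.5 * (t - prev_t) * v  # one-way time in layer times velocity
        depths.append(depth)
        prev_t = t
    return depths

# Three hypothetical layers: cumulative times to each base, layer velocities.
print(times_to_depths([0.5, 1.2, 1.8], [6000.0, 8000.0, 10000.0]))
```

Real depth migration must handle lateral velocity variation, which is exactly what this constant-layer toy ignores; it shows only why accurate rock velocities are the key input.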

Interpretation. Maturation of 3D interactive workstations has played a key role in the widespread
acceptance of 3D-seismic data. The amount of raw data to be interpreted per survey has
increased by a factor of more than 5,000 over the past 15 years, placing a premium on efficiency
in the interpretive process.12

One of the most exciting advancements in 3D interpretation is 3D visualization. Humans perceive the 3D world through a variety of visual cues that include perspective, lighting and shading, depth
of focus, depth cueing, transparency and obscuration, stereopsis, and peripheral vision. With the
addition of each visual cue, 3D-seismic interpretation has become more efficient, accurate, and
complete. Large amounts of data have been integrated into easily understood displays, and
communication between the various members of asset and management teams has improved.

Methods for 3D visualization have evolved from a lighted 3D horizon surface to desktop
visualization to the current immersive environments that engage peripheral vision. In 1997 Arco,
Texaco, and Norsk Hydro each installed large immersive visualization environments. The Texaco
facilities are visionariums, that is, 8- to 10-ft-tall screens that curve horizontally through approximately 160°, with data projected by three projectors, each covering one-third of
the screen. Arco and Norsk Hydro use immersive visualization rooms based on the virtual reality
interface CAVE, invented at the U. of Illinois at Chicago. In a CAVE, three walls and the floor are
used as projection surfaces, and the images on the walls are backprojected, while the image on
the floor is projected from the top down. In these environments, the data not only surround the
interpreters but actually appear to fill the room. Members of the asset team literally can walk
through the data and discuss the reservoir with one another. With a 3D pointer, a new production
well can be planned from inside the reservoir and the effects of any changes experienced immediately.

Four-Dimensional Seismic
Time-lapse, or 4D, seismic consists of a series of 3D-seismic surveys repeated over time to
monitor how reservoir properties (such as fluids, temperature, and pressure) change throughout
the productive life. Consequently, fluid movements can be anticipated before they affect
production. Similarly, placement of producer and injector wells can be fine-tuned, bypassed oil
and gas can be recovered, and production rates can be accelerated.14

For example, in the Duri steamflood project in Sumatra, which produces approximately 300,000
B/D of high-viscosity oil, placing injector wells with the aid of 4D seismic is expected to help
operator Caltex (a Chevron and Texaco affiliate) raise recovery efficiency in a complex reservoir
from 8% primary recovery to nearly 60%. A 4D-seismic pilot was conducted in the field with a
baseline survey and six monitor surveys recorded at various intervals over 31 months. The pilot,
which consisted of a central steam-injection well surrounded by six production wells,
demonstrated that the horizontal and vertical distribution of steam could be tracked over time. On
the basis of the quality and detail of reservoir information from the pilot study, a multidisciplinary
asset team assessed the economic feasibility of large-scale 4D monitoring of the Duri field. The
assessment took into account the benefits of time-lapse seismic as well as the cost of seismic
data and the risk probability of various outcomes. Benefits included shutting off injection in swept
zones; putting steam into cold zones; and locating observation wells in the right places, possibly
eventually reducing the need for them. When these benefits were weighed vs. seismic-data cost
and risk factors and compared with other operating scenarios for the field, the conclusion was
that the largest net present value could be obtained by aggressively managing the steamflood
with 4D seismic.15 Four-dimensional reservoir-monitoring projects are also being conducted in
numerous other parts of the world, including the North Sea, Southeast Asia, and the Gulf of Mexico.

Crosswell Seismic Technologies

Detailed understanding of reservoir flow and barrier architecture is crucial to optimizing
hydrocarbon recovery. Crosswell seismology (that is, using seismic sources in one wellbore and recording the wave propagation in another) is the only spatially continuous, very-high-
resolution method that can image such features as faults, stratigraphic boundaries,
unconformities, sequence porosity, fracturing, and additional untapped reservoir bodies away
from the well. Crosswell data currently are expensive to acquire, and processing the data through tomographic inversion and migration requires considerable expertise. However, the fact that
answers to many of the most challenging geophysical problems reside within this high-resolution,
wide-azimuth illumination of rocks on a macroscale is driving the technology to become a viable
and regularly used tool.16

Looking Ahead
Integration, miniaturization, and production are likely to be operative words in describing seismic
technology in the 21st century. With most, if not all, of the world's more obvious reservoirs
already discovered and in production, and given the likelihood that unstable oil prices are here to
stay, emphasis in the oil field will focus on integrating technologies and disciplines to optimize
recovery in existing fields and develop new fields quickly. All this means that seismic technology
will become more and more a tool for production rather than exploration work. It also means that
geoscientists and reservoir engineers will work together more closely and cooperatively.

Full-Vector Wavefield Imaging: The Fourth Revolution. According to Graebner, a fourth revolution
in seismic technology, full-vector wavefield (or multicomponent) imaging, which includes both
shear and compressional waves (S- and P-waves, respectively) to capture rock properties
between wells, will add further value to seismic as a production tool.

P-waves, the traditional waves of seismic exploration, are influenced not only by rock frame
properties but also by the nature of the fluid in the rock pores. S-waves, on the other hand, are
insensitive to the type of fluid in sediments. Full-vector wavefield imaging makes it possible,
among other things, to see through gas chimneys that plague economically important areas,
such as the North Sea. These chimneys, which are caused by free gas in the sediments, destroy
P-wave continuity but hardly affect S-wave reflections. Combining P- and S-waves also helps
asset-team members discriminate among sands and shales and is valuable in helping detect
fractures.17 Using multicomponent imaging to detect fractures and stratigraphic traps will bring
engineers and geoscientists closer together. They will learn one another's jargon and paradigms
both on the job and in university education. They will monitor reservoirs routinely with time-lapse
seismic and process data both in the field and in their offices essentially in real time, thanks to
cost-effective, high-bandwidth satellite communication. Finally, of course, they will continue to
search for even better and faster ways of improving success ratios and reducing risk in the oil field.


The Reservoir/Wellbore Connection

Well completions are as old as the petroleum industry itself. In fact, on 27 August 1859, the oil
from Colonel Drake's 69½-ft well in Titusville, Pennsylvania, had to be pumped to the surface. Pumping that oil was an example of the outflow phase of well completions, that is, methods of transmitting fluids to the surface. This article traces the evolution of inflow, the phase of completion operations that deals with opening the wellbore to the producing formation. (Outflow will be discussed in a later installment of this series.)

In 1859, the petroleum industry consisted of two oil wells in the U.S. producing a total of 2,000 bbl
and having a combined value of U.S. $40,000. Suman,1 in his 1921 book Petroleum Production Methods, discussed the vastness of an industry that had added 35,000 new wells the previous year in the U.S. alone, at a cost of approximately U.S. $575 million. In the preface, he wrote, "It is quite probable that, as time goes on, the production of petroleum per well in the U.S. will gradually decline to the point where operators will become very much interested in doing things in a more efficient manner." That the time would come when operators would give more than a little attention to economy and efficiency was unquestionable. But Suman and those who worked alongside him probably never could have imagined that the need would be driven by a global, low-price market for the world's chief energy source.

There is no question that economy and efficiency drive today's completion operations. Neither is
there any question that creativity, perseverance, and some risk-taking by completions engineers
(both in developing new technologies and in continually finding new ways to apply existing
techniques) have enabled --- and will continue to enable --- the petroleum industry to produce oil
and gas efficiently while lowering the cost per unit and raising net present value. Recently,
completion operations have become more specialized, and perception has evolved from the idea of "completion equals plumbing" (i.e., seals, tubulars, valves, and packers) to "completion equals well optimization."2 This change certainly is evident in inflow technologies.

Completion Basics
The economic success of a well depends in large part on making the optimum connection
between the wellbore and the reservoir system. That optimum connection must perform three functions:
1. Let oil into the well, where it can then flow or be pumped to the surface.
2. Keep over- or underlying water out of the well.
3. Keep the formation out of the well.
Although "completion" has never been universally defined, this concept is its basis. Neither is there universal agreement on the point at which completion begins. Probably the most widely held view is that completion begins when the bit first makes contact with a productive formation. Because formation damage that affects later productivity begins at this point, completions engineers stress the importance of planning wells: the steps that lead to a successful well are complex and interconnected. A multidisciplinary team working cooperatively and interactively can
avoid expensive misunderstandings and environmental problems that could result from
improperly executed operations. Completion design is a function of numerous reservoir
characteristics, such as permeability, porosity, saturation, pressure, stability, and
compartmentalization. According to King,3 a noted authority on completion, the key to a good initial completion is to collect and assess, at the earliest possible time, as much data as possible relevant to these interrelated characteristics.

Porosity and Permeability

Porosity and permeability are, respectively, a reservoir's fluid-storage capacity and its pathway for flowing fluids. Porosity is the void space between the grains where fluids can be stored. Permeability is a measure of the ability of fluids to flow through the formation: the higher the permeability, the more easily a fluid can flow through the rock matrix. Most productive formations have permeabilities between 0.001 and 1,000 md.
Porosity does not always relate directly to permeability. Materials, such as shales and some
chalks, for example, may have very high porosities but low permeability because they lack
effective connection of the pores. When evaluating a reservoir's economic potential, a porosity or permeability cutoff level often is used to establish minimum pay requirements. This level can be
determined from porosity logs and flow tests.
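The permeability concept is quantified by Darcy's law, which relates flow rate to permeability, cross-sectional area, pressure drop, fluid viscosity, and length. A minimal sketch in the classic darcy unit system; the core values are hypothetical:

```python
# Darcy's law for linear flow through a core, in the classic "darcy" units:
# permeability in darcies, area in cm^2, pressure drop in atm, viscosity in
# cp, length in cm, giving flow rate in cm^3/s. Example values are hypothetical.

def darcy_flow_rate(k_darcy, area_cm2, dp_atm, viscosity_cp, length_cm):
    """Linear Darcy flow rate q = k * A * dp / (mu * L)."""
    return k_darcy * area_cm2 * dp_atm / (viscosity_cp * length_cm)

# A 1-darcy core, 10-cm^2 face, 2-atm drop, 1-cp fluid, 5 cm long:
print(darcy_flow_rate(1.0, 10.0, 2.0, 1.0, 5.0))  # 4.0 cm^3/s
```

The same proportionality explains the cutoff idea in the text: halving permeability halves deliverability, all else equal.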

In almost every porous formation, there is at least a small amount of water saturation. The
remaining fraction of the pore space that contains oil or gas is the hydrocarbon saturation. In
general, the most productive parts of a reservoir usually are those with the higher hydrocarbon-
saturation values. Water saturation also may be a key determinant of pay because extremely high
water saturation could indicate hydrocarbon depletion or movement of an aquifer into the pay.
Closely related to porosity and saturation are recoverable hydrocarbon volumes. Not all oil in
place can be recovered. The amount of oil that will flow from a rock depends on the size of the
pore spaces, the oil saturation and type, and the amount of energy that is available to push the oil
toward the wellbore.
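Porosity, saturation, and reservoir size combine in the standard volumetric estimate of original oil in place. A minimal sketch; the input values are hypothetical, and 7,758 bbl is the standard volume of one acre-foot:

```python
def oil_in_place_stb(area_acres, thickness_ft, porosity, water_sat, bo_rb_per_stb):
    """Volumetric original oil in place, in stock-tank barrels.
    7,758 bbl per acre-ft; Bo converts reservoir barrels to stock-tank barrels."""
    return (7758.0 * area_acres * thickness_ft * porosity
            * (1.0 - water_sat) / bo_rb_per_stb)

# Hypothetical 640-acre reservoir, 20 ft thick, 20% porosity, 30% water
# saturation, formation volume factor 1.2 RB/STB:
ooip = oil_in_place_stb(640.0, 20.0, 0.20, 0.30, 1.2)
print(ooip)
```

Note that this is oil in place, not recoverable oil; as the text says, a recovery factor governed by pore geometry and drive energy must still be applied.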

Reservoir pressure --- the pressure that the reservoir fluids exert on the well at the pay zone ---
dictates how much fluid ultimately is recovered. Reservoir pressure varies throughout the
productive life of a reservoir. Initial reservoir pressure is the pressure at the time of discovery, but
there are other forces involved. These forces, or drives, include solution-gas drive, gas-cap drive, and waterdrive. While many pressure regimes are present and important during the life of a well, pressure differential toward the wellbore is essential for fluid flow during completion and production.

Reservoir stability can affect the initial completion as well as repairs or recompletions throughout
a reservoirs life. Many geologically young formations lack sufficient strength for formation
coherency during all phases of production. These younger rocks often require stabilizing, or sand-
control, types of completions to support the formation while allowing it to flow fluids.

Compartmentalization is the division of a reservoir into compartments that are partially or fully
pressure isolated by faults, permeability or porosity pinchouts, folding, shale streaks, barriers, or
other factors. The more that is known about these reservoir characteristics and their interactions
with one another, the better the chances of selecting the optimal pay, deciding where to place the wellbore, and establishing the critical link between the wellbore and the formation.3

Types of Completions
There are three primary inflow completion types: natural, stimulated, and sand control. Natural
completions are those in which little or no stimulation is required for production. Sandstone and
carbonate systems with good permeability and mechanical stability are prime candidates for
natural completions. Stimulated completions generally are applied to improve the natural
drainage patterns of hard, low-permeability formations or to remove barriers within the formation
that prevent easy passage of fluids into the wellbore. Acidizing and hydraulic fracturing are
examples of stimulated completions. Sand-control completions are performed in young,
unconsolidated or less mechanically competent sandstones to support the formation while
allowing it to flow fluids.

Letting Oil In: The First Priority

Originally, well completion was thought to mean nothing more than drilling into the pay and letting it flow. However, it quickly became apparent that oil does not have any inherent ability to expel itself from a reservoir but rather must be displaced from a porous formation to a wellbore.4 Thus,
the concept of creating and stimulating paths of least resistance to the wellbore evolved.

Nitroglycerin Shooting
As early as the 1860s, hard, tight oil sands in Pennsylvania were being shot with gunpowder, then nitroglycerin, to "rubblize," or shatter, the rock at the bottom of the wellbore. The practice of
shooting explosives increased flow, but the increase was often temporary, and the wellbore was
often destroyed. The process was also dangerous. Nonetheless, explosive fracturing continued to
be the basic method of stimulating wells until the 1930s.

Early Acidizing
Acid was first used for well stimulation in 1895 by the Ohio Oil Co.6 Hydrochloric acid (HCl) was pumped into the microscopic flow channels of limestone formations to dissolve the rock and
enlarge the passages.7 The treatment was effective, but the well casing was severely corroded.
Acidizing declined in popularity until the 1930s, when inhibitors were added to the acid to protect
tubulars and treating equipment.

Perforating
Perforating creates a direct link between the wellbore and the producing formation by placing holes through the casing and the cement sheath that surrounds it.
In the early 1900s, mechanical puncturing methods were tried. These included the single-knife
casing ripper, which involved a mechanical blade that rotated to puncture a hole in the casing.2

The first perforating mechanism used on a large scale was the bullet gun in 1932.3 In bullet perforating, a hardened-steel bullet is fired from a very short barrel. The resulting perforations cause little damage to the cement sheath and casing; however, the perforation depth is generally shallow.

Today, shaped-charge, or jet, perforating is the accepted industry standard. In this method, a
pencil-like jet of gas formed by detonating explosives in a cone-shaped charge penetrates the
casing and cement at high velocity and provides clear access to the producing formation.

Modern Perforating
Today, shaped-charge-perforating programs are tailored to completion types and evaluated based
on how effectively they accommodate well geometry and reservoir properties. Determining factors
for success include the proper differential between reservoir and wellbore pressure and gun
selection, which determines shot geometry. Shot geometry is characterized by perforation length
and diameter, density (i.e., shots per foot), and phasing (angular separation).

Natural, stimulated, and sand-control completions each have their own perforating requirements.
Custom-built guns are often designed for special completion objectives.9

Underbalance and Extreme Overbalance

Perforating produces a zone of reduced permeability, referred to as a crushed zone, around the
perforation. In the late 1950s, Kruger et al. proved the effectiveness of underbalance perforating (i.e., with the pressure in the wellbore lower than that in the formation) for removing the
crushed zone and improving flow channels.3 Investigation of underbalanced perforating continued
for 20 years, then boomed in popularity in the 1970s, when it was tied to innovative designs for
tubing-conveyed perforating.
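Whether a perforating job is underbalanced can be checked with basic hydrostatics, using the standard 0.052 psi/ft-per-lbm/gal conversion. A sketch with hypothetical well values:

```python
def wellbore_pressure_psi(fluid_density_ppg, depth_ft):
    """Hydrostatic pressure of the wellbore fluid column.
    0.052 psi per foot per lbm/gal is the standard oilfield conversion."""
    return 0.052 * fluid_density_ppg * depth_ft

def underbalance_psi(reservoir_pressure_psi, fluid_density_ppg, depth_ft):
    """Positive result means the well is underbalanced at the perforations,
    so formation fluid will surge inward and help clean the crushed zone."""
    return reservoir_pressure_psi - wellbore_pressure_psi(fluid_density_ppg, depth_ft)

# Hypothetical example: 8.6-ppg brine at 8,000 ft against a 4,000-psi reservoir.
print(underbalance_psi(4000.0, 8.6, 8000.0))
```

The sketch ignores surface pressure and gas-cut fluid; actual underbalance design also limits the differential to avoid formation or sand-control damage.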

Subsurface Equipment/Artificial Lift

Maximizing Production from the Well

In October 1859, Colonel Edwin Drake rigged up a pump to produce an oil and water mixture a distance of ten feet to the surface. It was the world's first commercial oil well and the first use of
artificial lift to commercially produce oil. Today, 140 years later, pumps are employed in more
than 80% of all artificial-lift wells.

As oil or gas is produced, the reservoir pressure declines. At some point, the pressure in the reservoir may become too low to sustain production, and artificial lift may be required. Artificial-lift methods fall into two groups: those that use pumps and those that use gas.

Beam Pumping
The walking-beam pump was an idea borrowed from the water-well industry. One end of a heavy wooden walking beam set on a pivot was attached by a stiff rod to a steam engine. Attached to the other end of the beam was a string of long, slender sucker rods, which were connected to a pump at the bottom of the well. The engine cranked the rod up and down and actuated the pump to lift oil to the surface. Since their introduction in 1925, Trout-designed units have been the dominant artificial-lift beam-pumping units. Over the years, development focused on improving the reliability of pump parts and on better design methods. For example, sucker-rod material changed from wood to steel and, later, to fiberglass and reinforced plastic. Design methods were significantly improved by the Gibbs sucker-rod diagnostic technique. "The technique uses mathematical equations to model the elastic behavior of long sucker-rod strings," says Gibbs.

Electrical Submergible Pumps (ESPs)

The first electrical submergible pumping unit was developed in Russia in 1917 by Armais Arutunoff, who later emigrated to California. Although initially not very successful, the use of ESPs in the oil
industry was assured by the help of Frank Phillips of Phillips Petroleum Co. in Bartlesville,
Oklahoma. Since that time, the concept has proved to be an effective and economical means of
lifting large volumes of fluid from great depths under a variety of well conditions. Today's ESPs are essentially multistage centrifugal pumps that employ blades, or impellers, attached to a long
shaft. The shaft is connected to an electrical motor that is submerged in the well. The pump
usually is installed in the tubing just below the fluid level, and electricity is supplied through a
special heavy-duty armored cable.
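Because each impeller stage of a multistage centrifugal pump develops a roughly fixed increment of head, basic ESP sizing reduces to dividing the required total dynamic head by the head per stage. A minimal sketch; the 25-ft-per-stage figure and well values are hypothetical examples:

```python
import math

def stages_required(total_dynamic_head_ft, head_per_stage_ft):
    """Number of centrifugal stages needed to develop the required head,
    rounded up to the next whole stage."""
    return math.ceil(total_dynamic_head_ft / head_per_stage_ft)

# Hypothetical well needing 5,500 ft of head with 25-ft-per-stage impellers:
print(stages_required(5500.0, 25.0))  # 220
```

Real ESP design also matches the stage to the target flow rate and derates for gas and viscosity, but the head-stacking idea is the core of why these pumps are built as long multistage assemblies.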

Subsurface Hydraulic Pumps

There are two types of hydraulic pumps for artificial lift. One is fixed-pump design; the other is
free-pump design. In fixed installations, the downhole pump is attached to the end of the tubing
string and run into the well. Power fluid is directed down an inner tubing string, and the produced
fluid and the return power fluid flow to the surface inside the annulus between the two tubing
strings. Free-pump installations allow the downhole pump to be circulated into and out of the well
inside the power-fluid tubing string, or they can be installed and retrieved by wireline operations.
Jet pumps are a special class of hydraulic subsurface pumps and are sometimes used in place of
reciprocating pumps. Unlike reciprocating pumps, jet pumps have no moving parts and achieve
their pumping action by means of momentum transfer between the power fluid and the produced fluids.

Gas Lift
This method injects gas into the well through gas-lift valves. The gas reduces the weight of the liquid column, which results in a reduction of bottomhole pressure. The effect is an increase in liquid production. Since the method's development in the 1930s, several important advances have taken place. Modern gas-lift installations have easily retrievable valves installed in side-pocket mandrels.
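The pressure reduction that gas lift achieves can be sketched with simple column gradients. All values here are hypothetical, and wellhead pressure and friction are ignored:

```python
def bottomhole_pressure_psi(gradient_psi_ft, depth_ft, wellhead_pressure_psi=0.0):
    """Bottomhole pressure from an average fluid-column gradient."""
    return wellhead_pressure_psi + gradient_psi_ft * depth_ft

# Hypothetical 6,000-ft well: liquid column at 0.40 psi/ft vs. a gas-lightened
# column averaging 0.15 psi/ft above an injection point at 4,000 ft.
p_no_lift = bottomhole_pressure_psi(0.40, 6000.0)
p_with_lift = (bottomhole_pressure_psi(0.15, 4000.0)   # aerated column above valve
               + 0.40 * (6000.0 - 4000.0))             # liquid below injection point
print(p_no_lift, p_with_lift)  # 2400.0 1400.0
```

Lowering bottomhole pressure from 2,400 to 1,400 psi increases drawdown, which is exactly the production increase the text describes.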

The Future
Artificial-lift technologies of the future will involve software, electronics, sensor technologies, and data transfer and management. Realizing them will require developers to explore the limits of technology.

Surface Production Facilities

Separating and Treating Produced Oil and Gas

The evolution of today's surface production facilities actually began several thousand years ago. Seepages of asphaltic bitumen in Mesopotamia around 3000 B.C. were the raw material for a petroleum industry that flourished for about 3,000 years, primarily producing a mastic and caulk used in construction. The production system was crude, involving merely the recovery of hydrocarbons in jars and casks. In time, surface facilities for this production involved a crude type of distillation. This evolution is illustrated in the next few sections.

Storing the Oil

Early oil pioneers concentrated on finding likely drilling sites for exploration and on completing oil
wells on those locations. Sumps (pits) were dug on these leases to serve as surface reservoirs
for production. Any gas produced with the oil was vented into the atmosphere. Sump separation
systems had difficulties, as surface water, dirt, and other debris ran into these sumps.

By 1861, larger wooden tanks replaced many of the on-site sumps, since the loss of oil from a flowing well could be reduced to a minimum by providing a flow tank connected to the casing head. Gas and oil could be separated in the flow tank, the oil being drawn off into receiving tanks and the gas escaping into the air.

These were replaced with bolted-iron tanks by 1867. Most of these tanks were open at the top, allowing rain and debris to enter, so it wasn't long until wooden roofs were placed over them.

The Need for Separation

Early on, it was recognized that the gas associated with the oil could be captured to serve as fuel
for the drilling engines and, by removing the gas from the vicinity of the wells, safety could be
increased. As a result, the first separator was invented around 1863. Separators usually were
mounted on top of storage tanks and were, in essence, a barrel rigged so the liquid ran out the
bottom into the storage tank below through a trap that kept the gas out. A gas line was connected
to a bung on the upper part to move the gas where it was needed. Without realizing it, these early oil producers had, through necessity, reinitiated the centuries-old evolution of surface production facilities.

Improvements Come Quickly

As pressures increased, bolted-iron tanks replaced barrels as separators. As they grew in size, it
was necessary to move the separator from the top of the storage tank to the ground and to apply
some form of level control to keep the gas from flowing out the liquid outlet.

By 1904, separators were available with level controls and working pressures as high as 150 psi.
Experience indicated that oil recovery was higher when a separator preceded the tank than when
the oil was allowed to flow directly into the tank. Therefore, separators became standard surface
equipment for gas recovery and for increasing oil-recovery efficiency.

From 1904 until the early 1950s, more sophisticated controls, designs, and improved construction
materials highlighted the evolution of separators. Horizontal, dual-barrel separators were
developed and tested in the late 1940s to handle a growing need for high-gas-flow/low-liquid-flow applications.

The size of separators changed dramatically when large gas transmission lines were constructed;
offshore leases were opened for development; and discoveries of larger, higher-pressure gas
reserves occurred. The typical single-stage vertical separator suitable for separating casinghead
gas from oil was no longer viable. Horizontal, single-barrel separators were developed because
they were more efficient at high flow rates. Up to three stages of separation became common to
stabilize the hydrocarbon liquids produced from high-pressure wells.

Knocking Out the Water

In many early production systems, the separation of water was accomplished in the oil tank. The
separated water was disposed of by merely opening a valve on the bottom periodically to drain off
the water, which was allowed to run into the nearest ditch.

Three-phase separators came into use as higher-pressure wells were drilled. A three-phase separator is a pressure vessel with a liquid-retention time long enough to allow the water and oil to separate. An internal weir or baffle arrangement is positioned so that water can be drawn from the bottom, oil from the side, and gas from the top. This free-water separation method functioned very well as long as the oil and water did not emulsify so much that gravity separation alone was ineffective.
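Gravity separation of small water droplets from oil is commonly estimated with Stokes' law: emulsified droplets settle too slowly for practical retention times, which is why coalescence helps. A sketch in SI units with hypothetical fluid properties:

```python
def stokes_settling_velocity_m_s(droplet_diameter_m, rho_droplet_kg_m3,
                                 rho_oil_kg_m3, oil_viscosity_pa_s):
    """Terminal settling velocity of a small droplet in a continuous phase
    (Stokes' law, valid only for slow, creeping flow)."""
    g = 9.81  # m/s^2
    return (g * droplet_diameter_m**2
            * (rho_droplet_kg_m3 - rho_oil_kg_m3)
            / (18.0 * oil_viscosity_pa_s))

# Hypothetical 150-micron water droplet (1,000 kg/m^3) settling through
# 850-kg/m^3 oil with a viscosity of 0.01 Pa*s:
v = stokes_settling_velocity_m_s(150e-6, 1000.0, 850.0, 0.01)
print(v)  # roughly 0.18 mm/s
```

Because velocity scales with diameter squared, coalescing 150-micron droplets into 300-micron ones quadruples the settling rate, which is the principle behind the coalescing internals described next.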

Around 1900, a "hay tank" containing excelsior was placed between the three-phase separator and the oil tank. This provided surface area on which small droplets of emulsified water in the oil could coalesce into drops large enough to separate by gravity and then be removed. It was found that heat, certain
chemicals, and/or turbulence at the proper point also helped break down these emulsions. This
led to the development of a single compact unit called a vertical heater-treater.

As oil fields became unitized in the 1920s, a move to centralize treating systems that handled
several wells gained in popularity, resulting in higher production rates per treater. This led to the
development of horizontal heater-treaters. In offshore environments, many operators have
elected to use heat exchangers upstream of the treater to eliminate safety hazards associated
with standard heater-treater fire tubes.

Managing Produced Water and Salt Water

State laws regulating discharges began to appear as early as 1909 in Oklahoma, but they weren't
aggressively enforced until more stringent regulations were passed in the 1950s. Even so, most
states continued to allow surface discharges to freshwater streams until well into the 1960s.

Until the early 1950s, the most common means of saltwater disposal in areas such as West Texas
and Oklahoma were evaporation ponds. Water was typically routed through a string of earthen
pits with baffles and skimmers to the pond. However, it is probable that much of the water leaving
these ponds seeped through the bottom rather than evaporating into the atmosphere. Many
cases of ground water contamination occurred around the retention and evaporation ponds.

Sometime during the 1920s, producers hit upon the idea of injecting the unwanted saltwater into
older, abandoned wells or into dry holes for disposal. Experience quickly indicated that it was
necessary to further treat the water to remove solids and free oil.

With the advent of production from the Middle East, where high levels of dissolved salts were
found to exist in the produced water, crude-oil purchasers insisted that desalting be carried out
in the field. To accomplish this, the produced water is diluted with fresh, or low-salinity, water prior
to treating the emulsion in a process commonly called "washing." This enables a lower outlet salt
content in the crude, since the salinity in the residual water is lowered. Often, a two-stage system
of dilution followed by treating is required to meet the lower-salt-content mandates.

Handling Produced Water Offshore

As late as the mid-1960s, produced water from offshore three-phase separators was routed
directly overboard. In other installations, water was first routed to a precipitator or skim tank
before flowing overboard for disposal. In the Gulf of Mexico during the late 1960s, the corrugated-
plate interceptor (CPI) using plate-coalescence technology first came into general use for
separating small water droplets.

The CPI was quickly followed by the development of gas flotation units. The first units used
offshore were adaptations of dissolved-gas flotation units used in refineries and chemical plants.
However, they did not perform satisfactorily.

During the next decade, the 1970s, dispersed-gas flotation units, first mechanical and then
hydraulic designs, were introduced offshore in the Gulf of Mexico and in the thermal-flood regions
of California. They were adaptations of units used in the mining industry for ore beneficiation.

In the 1980s, hydrocyclones were introduced to separate oil from produced water. Hydrocyclones
had been used a decade earlier for separating and cleaning solids from produced water, but it
had been thought that they could not work effectively to separate two liquid phases with very little
difference in density. The first oilfield use of hydrocyclones was in the Bass Strait off Australia,
followed by the North Sea and the Gulf of Mexico.

Natural Gas Processing

The commercialization of natural gas wells with higher pressures made the development of
processing facilities for them a necessity. These wells have a large pressure drop between the
wellhead and the high-pressure separator, which led to the design and manufacture of low-
temperature separation units (LTSs).

In an LTS unit, the gas expands across a valve, and the resulting pressure drop cools it to a specific temperature. The hydrates formed by this expansion fall into a liquid bath maintained at a temperature that melts them and keeps them from plugging the unit. The LTS unit's exit temperature is kept below the minimum pipeline temperature so that no liquid will form in the line. Because separation occurs at low temperature, additional natural gas liquids (NGLs) are recovered, and the gas heating value is reduced to meet marketable standards.
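The cooling that makes an LTS unit work can be sketched with a simple rule-of-thumb calculation. The coefficient below (~7 °F of cooling per 100 psi of pressure drop) is a commonly quoted approximation, not a property of any particular gas stream; real Joule-Thomson behavior varies with composition, pressure, and temperature.

```python
# Rough estimate of Joule-Thomson cooling across an LTS choke.
# The default coefficient (~7 degF per 100 psi of pressure drop) is a
# rule-of-thumb value, not a property of any specific gas.

def jt_outlet_temp_degF(inlet_temp_degF, inlet_psig, outlet_psig,
                        jt_coeff_degF_per_100psi=7.0):
    """Approximate gas temperature after an isenthalpic expansion."""
    pressure_drop = inlet_psig - outlet_psig
    return inlet_temp_degF - jt_coeff_degF_per_100psi * pressure_drop / 100.0

# A 2,500-psig wellhead stream choked down to a 1,000-psig separator:
print(jt_outlet_temp_degF(120.0, 2500.0, 1000.0))  # -> 15.0 degF
```

An outlet near 15 °F is well below typical hydrate-formation temperatures for high-pressure gas, which is why the LTS liquid bath (or, in the alternative hookup, a line heater upstream of the choke) is needed.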

Since LTS units are difficult to operate and only work effectively when the surface wellhead
pressures and temperatures are within specific ranges, a more desirable high-pressure gas-well
hookup evolved during the 1950s. This hookup uses a line heater to ensure that the
temperature downstream of the choke is always above the hydrate point, and it uses one or two
stages of separation to stabilize the condensate before it flows to a tank.

Dehydrating The Gas

Gas dehydration at, or near, the production site usually is considered mandatory. In the early
days of the petroleum business when the value of gas was less, it was common to use gas
heaters to keep the flowing temperature above the hydrate formation point in a gathering system.
Initially, gas dehydration was accomplished by the use of solid desiccants, which became
available in the 1920s. These were large, expensive units and were used only where large
quantities of gas were involved, particularly at the inlet end of a transmission line.
In 1949, the wellhead triethylene-glycol dehydrator developed by Laurance S. Reid provided a unit that was both suitable and economical for dehydrating gas to pipeline specifications. Its use became standard and largely replaced the heater. Ultimately,
larger and more efficient glycol units replaced solid-desiccant ones as the primary dehydration
method. Today glycol absorption is the routine choice for gas dehydration.

Compressing The Gas

Gas compressors were developed for installation downstream of the separators as line pressures
increased above wellhead pressures. As the gas cooled in the line below its dewpoint
temperature, liquids formed that hindered the gas flow rate, and separation was required again
before the gas was burned.

Once again, necessity triggered advances in technology. "After compression, the gas needed to be cooled with water prior to separation so its dewpoint temperature would be below the temperature in the line," states facilities designer Ken Arnold. Thus, the first cooler appeared in 1903 and consisted of an old boiler filled with water, with cooling coils running through it.

Despite some advancements during the next five decades, natural gas transmission lines were
hampered by slow-speed (i.e., 200 to 400 rev/min) integral or steam-driven compressors.
However, the design of high-speed-compressor valves during the early 1950s enabled the direct
coupling of standard engines to high-speed-compressor frames operating at 900 to 1,200
rev/min. This development greatly advanced gas-compression technology. Later advances in
reciprocating-compressor technology enabled compressor speeds to increase to as much as
1,800 rev/min.

In the late 1960s, turbine-driven centrifugal compressors became available for oilfield service.
Although less fuel efficient than engine-driven compressors, turbine-driven centrifugals weigh less
and occupy less space per unit of power than their engine-driven counterparts. They have seen
increasing use offshore, where large horsepower requirements are necessary, and at remote
locations where transportation and installation costs are important.

The Future
In recent years, there has been a considerable amount of research and field testing driven by the
need to lower topsides weight and size for deepwater developments and to allow for the
possibility of downhole and subsea separation equipment. This research has led to development
of compact separators that employ centrifugal force to accomplish a gas/liquid separation in a
shell of much smaller diameter and length than a standard gravity separator. The number of
proprietary compact-separator designs on the market has begun to grow and is expected to
expand tremendously during the next decade.

Likewise, multiphase pumping equipment capable of pumping a mixture of gas and liquid was not
even considered practical until the mid-1990s. Now the equipment is available, and the key to
success for this technology will be in handling high gas-volume fractions and inlets that see a
great deal of slugging and surging flow.

Cross-flow-membrane technology for offshore produced-water-treating applications has been the subject of a great deal of research, and centrifuges have been installed offshore on difficult-to-treat streams. While both technologies have been highly effective, neither has been used widely thus far because of their high purchase and maintenance costs. Both are expected to be developed further in the next decade, although the future of these evolving technologies depends more on their application than on their development.

In the past, the slow evolution of advancements in surface-facility equipment allowed design
engineers to specify process needs and be certain that the offers of various suppliers would be
similar. It was considered beneficial, but not essential, for facilities designers to know how to size
and specify equipment.

Today, because of dramatic changes in the range of new technologies available, it is essential
that design engineers understand both the benefits and detriments of any newly developed
technology. Now, and even more often in the future, process and equipment choices will have to
be made before costs are definitely known. Balances between costs and technologies will have to
be struck. Depending on the specific application, it may make more sense to use an older, proven
technology instead of a newer one because the application does not require a cutting-edge
solution. In either case, the costs associated with the facilities may determine whether the field is
commercially viable. Whatever the future holds, it is certain that surface production facilities will
continue to play a large role in the upstream business of oil and gas.

Horizontal and Multilateral Wells

Increasing Production and Reducing Overall Drilling and Completion Costs

Cost experts agree that horizontal wells have become a preferred method of recovering oil and
gas from reservoirs in which these fluids occupy strata that are horizontal, or nearly so, because
they offer greater contact area with the productive layer than vertical wells.1 While the cost factor
for a horizontal well may be as much as two or three times that of a vertical well, the production
factor can be enhanced as much as 15 or 20 times, making the technique very attractive to producers.
Despite these facts, it took several decades for the industry to embrace the technique.
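The multiples cited above lend themselves to a quick screening calculation. The sketch below is illustrative arithmetic only: the 2- to 3-fold cost and 15- to 20-fold production figures come from the text, while the normalization itself is an assumption for the sake of the example.

```python
# Screening arithmetic using the multiples cited above: a horizontal well
# costing 2-3x a vertical well but producing 15-20x as much. These are
# illustrative ratios from the text, not economic data for any field.

def production_per_cost(cost_multiple, production_multiple):
    """Production relative to a vertical well, per unit of drilling cost
    (a vertical well scores 1.0 by definition)."""
    return production_multiple / cost_multiple

worst = production_per_cost(3.0, 15.0)  # least favorable combination
best = production_per_cost(2.0, 20.0)   # most favorable combination
print(f"{worst:.0f}x to {best:.0f}x the output per drilling dollar")
```

Even in the least favorable combination, the horizontal well returns several times the production per dollar of a vertical well, which is the economic argument the industry eventually accepted.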

Some of the earliest development toward horizontal drilling took place during the early 1940s
when John Eastman and John Zublin developed short-radius drilling tools designed to increase
the productivity of oil wells in California, explains Frank Schuh, a horizontal-drilling consultant.

The tools were designed to drill 20- to 30-ft (6.096 to 9.144 m) radii and horizontal distances of
100 to 500 ft (30.48 to 152.4 m), and they permitted the drilling of numerous laterals in the same
formation in various directions around the wellbore. Typical designs used between four and eight laterals.

The equipment predated downhole survey tools and included extraordinary knuckle-jointed
flexible drill collars that could be rotated around the extremely high curvatures. Also, it allowed for
the employment of a drilling technique that was the perfect completion companion to standard,
vertical open-hole completions being used at the time. "Basically, Eastman and Zublin were instrumental in drilling the first multilaterals," Schuh states. "Today's multilateral wells are simply modern versions of these earlier efforts."

Unlike a directional well that is drilled to position a reservoir entry point, a horizontal well is
commonly defined as any well in which the lower part of the wellbore parallels the pay zone. And
the angle of inclination used to drill the well does not have to reach 90° for the well to be
considered a horizontal well. Applications for horizontal wells include the exploitation of thin oil-
rim reservoirs, avoidance of drawdown-related problems such as water/gas coning, and
extension of wells by means of multiple drainholes.2

Early Experimentation
True development and employment of horizontal-well techniques began in the U.S. during the
mid-1970s. However, horizontal-drilling experimentation began much earlier.

The U.S. Dept. of Energy (DOE) marks the starting date as 1929 in Texon, Texas. Here, says the
DOE, the first true horizontal well was drilled. Additionally, the DOE cites a well drilled in Yarega,
U.S.S.R., in 1937 and a 500-ft (152.4 m) well drilled in 1944 in the Franklin Heavy Oil field in
Venango County, Pennsylvania, as being some of the first wells to be drilled horizontally.

During the 1950s, the Soviet Union drilled 43 horizontal wells, a considerable effort with respect
to the equipment available then. Following their foray into horizontal drilling, the Soviets
concluded that while horizontal wells were technically feasible, they were economically
disappointing. As a result, they abandoned the method.

In the mid-1960s, 10 years after the Soviet experience, the Chinese drilled two horizontal wells.
The first, 500 m (1,640.4 ft) in length and not cased, collapsed after a week of production. The
second was interrupted by the Cultural Revolution. Like the Russians, the Chinese concluded
that horizontal drilling was uneconomical and abandoned the method for more than 20 years.3

True Development Begins

North American Horizontal Wells
From 1979 to 1982, a renaissance of true horizontal-well development work occurred in North
America. It was during this period that Alan Barnes, an engineer for a major oil company, used a
complex reservoir-simulation model to promote the benefits of the Eastman/Zublin short-radius
technique to his superiors.

Following his modeling studies, the company drilled approximately 12 horizontal wells in the
Empire Abo reef in New Mexico. They targeted a thinning oil column in a massive limestone
reservoir with a significant gas cap and active water drive. The first hole produced more than 20 times as much oil as a comparable vertical well before gas-cap breakthrough. The success of the Empire Abo project led the company to look for means of broader application, and it appointed Schuh to lead the search.

"We developed what is generally referred to now as medium-radius (20°/100 ft) horizontal drilling," Schuh says as he recalls the project. "The development determined the maximum hole curvatures possible in drilling horizontal wells without damaging conventional drillstrings and drilling tools. We found that the unique application of horizontal drilling allows hole curvatures that are five to 10 times greater than can be used in conventional directional drilling. We utilized the latest advancements in downhole motors and measurement-while-drilling (MWD) equipment to develop methods for establishing long, low-cost horizontal boreholes." Using their technique,
Schuh and his colleagues drilled their first medium-radius well in January 1985. During the 1980s,
more than 300 horizontal wells were drilled in North America including the first one in Prudhoe
Bay, Alaska, in 1985. During this period, Texas' Austin Chalk trend also received a great deal of
attention from horizontal-well operators who, at the time, drilled some of the highest-producing-
rate wells in the U.S.

But the decade of the 1990s most certainly will become known as the decade of the horizontal
well. Through 1998, the number of horizontal wells drilled in the U.S. has totaled more than
3,000, an increase of 1,000% over the previous 10-year period. By the late 1990s, a dramatic
shift in corporate philosophy regarding horizontal drilling occurred when one major operator set a
requirement that prior management approval was necessary for all vertical wells.4

European Horizontal Wells

The renaissance of horizontal-well drilling techniques in Europe began about the same time as in
North America. In 1977, Elf Aquitaine and L'Institut Français du Pétrole (IFP) began work on the
FORHOR project, which eventually led to the success of the Rospo Mare field, the only oil field in
the world at that time that produced systematically through horizontal wells. Drilled in the Adriatic
Sea in water depths ranging from 200 to 300 ft (60.96 to 91.44 m), the technical and economic
success of this field is credited with triggering the world's interest in horizontal drilling.
Jacques Bosio, a former R&D deputy director and Vice President of Elf Aquitaine, was one of the
pioneers in the field of horizontal drilling as a project manager of the Elf/IFP FORHOR horizontal-
drilling research study.

"What I remember about that period, when nobody in the world would believe that horizontal wells could become a new tool for the industry, is that it was more difficult to change, by 90°, the way people were thinking than it was to do it with the wells," says Bosio, recalling those early days in Italy. "We had been raised with the idea that the maximum possible inclination for a well could not exceed 70°. I don't know why; that's just the way we were taught. But one of the main reasons the FORHOR project succeeded was because we had the perseverance to go one step further with a rotary drilling rig. Remember, we didn't have downhole motors then."

"When we talked to our drillers [about going beyond 70° inclination] . . . they first laughed and then turned real mad at those crazy R&D people," Bosio muses. "Even supposing that you could drill it, a horizontal well made no economic sense, they said. It will cost at least 10 times as much as a nearby vertical well but will never produce 10 times more. Besides, no coring, logging, or testing will be possible, and it will collapse on you before a liner can be run."

In spite of the ridicule and disbelief of others, Bosio and his colleagues pressed on in May 1980 to
drill the Lacq 90 (a total coincidence that this was the name of the well) in southern France, the
first well drilled at 90° inclination.

"We had to swear that we would plug the well if it happened to disturb the drainage of the reservoir so production could go back to normal," Bosio says as he stifles a laugh. "Lacq 90 went 275 m (902.2 ft) within the reservoir, with 100 m (328 ft) purely horizontal, at a cost of 3.2 times that of a vertical well," he continues. "It did produce . . . much more water than its neighbors since the reservoir was 90% watered out." This led to claims that horizontal wells were only good for producing water, an unfair statement that did nothing to advance the technology. Shrugging off such comments, Bosio had much better luck later on with the well's successor, the Lacq 91.

With their data in hand, Bosio's group set out to apply it in the Rospo Mare field, a perfect
laboratory for the development of horizontal-drilling techniques. The field is unique because the
nature of its reservoir and the characteristics of its oil prevent it from being produced through
conventional vertical wells. By early 1981, five wells, all vertical, had been drilled from a platform
at the center of the field to appraise it, set the field's limits, and begin exploitation.5

"Our attention now turned to the Rospo Mare field," states Bosio enthusiastically. "We drilled the Rospo Mare 6 in January 1982, 370 m (1,213.9 ft) of which was horizontal, at a cost factor of 2.1 times that of a vertical well. More importantly, it was an immediate success, producing 20 times more oil than a neighboring vertical well and boosting the field's recoverable reserves from near zero to 70 million barrels," says Bosio proudly.

Bosio believed the Rospo Mare 6 well's success would jolt the industry into jumping aboard the
horizontal-well bandwagon. Unfortunately, the success was greeted with a big industry yawn.

Bosio recalls his experience in giving a paper on the well at the 1983 World Petroleum Congress (WPC) meeting in London. "When I went to the chair to present the first paper ever presented on horizontal wells, more than half the room, which was full from the preceding paper, got up and left! They simply weren't interested," Bosio explains. "At the next WPC in 1987 in Houston, the paper I presented attracted a small crowd. Then, at the 1991 WPC in Buenos Aires, we had a full session on horizontal wells."
Finally, producers had begun to realize that horizontal wells can increase production rates and
ultimate recovery, reduce the number of platforms or wells required to develop the reservoir,
reduce stimulation costs, and bypass environmentally sensitive areas.6

Multilateral Wells
The acknowledged father of multilateral technology is Alexander Grigoryan. In 1949, Grigoryan
became involved in the theoretical work of American scientist L. Yuren, who maintained that
increased production could be achieved by increasing borehole diameter in the productive zone.
Grigoryan took the theory a step further and proposed branching the borehole in the productive
zone to increase surface exposure.

Grigoryan put his theory into practice in the former U.S.S.R.'s Bashkiria field (today's
Bashkortostan). There, in 1953, he used downhole turbodrills without rotating drillstrings to drill
Well 66/45 in the Bashkiria Ishimabainefti field. His target was the Akavassky horizon, an interval
that ranged from 10 to 60 m (32.8 to 196.8 ft) in thickness. He drilled the main bore to a total
depth of 575 m (1,886.4 ft), just above the pay zone, and then drilled nine branches from the
open borehole without cement bridges or whipstocks. When completed, the well had nine
producing laterals with a maximum horizontal reach from kickoff point of 136 m (446.1 ft). It was
the world's first truly multilateral well, although rudimentary attempts at multilaterals had been
made since the 1930s.

Compared to other wells in the same field, 66/45 was 1.5 times more expensive, but it penetrated
5.5 times the pay thickness and produced 17 times more oil each day. Grigoryans success with
the 66/45 well inspired the Soviets to drill an additional 110 multilateral wells in their oil fields
during the next 27 years, with Grigoryan drilling 30 of them himself.
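The Well 66/45 results above can be normalized by the well's cost to show why the Soviets kept drilling multilaterals. The 1.5x cost, 5.5x pay-thickness, and 17x daily-oil multiples come from the text; the normalization itself is illustrative arithmetic, not field data.

```python
# Well 66/45 results versus offset vertical wells, normalized by its
# 1.5x cost multiple. Figures are those cited in the text.

cost_multiple = 1.5
results = {
    "pay thickness penetrated": 5.5,
    "daily oil production": 17.0,
}

for name, multiple in results.items():
    per_cost = multiple / cost_multiple
    print(f"{name}: {multiple}x overall, {per_cost:.1f}x per unit of cost")
```

On a per-cost basis, the well delivered roughly 11 times the daily oil of a vertical well, a ratio striking enough to justify the 110 multilaterals that followed.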

Like horizontal wells, multilateral wells justify their existence through their economics. Defined as a single well with one or more wellbore branches radiating from the main borehole, a multilateral can be an exploration well, an infill development well, or a re-entry into an existing well. But all share the common goal of improving production while saving time and money.

Multilateral-well technology has not yet evolved to the level of horizontal-well technology. Multilateral wells range from the simple to the extremely complex. They may be as simple
as a vertical wellbore with one sidetrack or as complex as a horizontal extended-reach well with
multiple lateral and sublateral branches.7 While existing techniques are being applied and fresh
approaches are being developed, complications remain, and the risks and chances of failure are
still high.

The Future
As indicated earlier, it took several decades for the industry to endorse the concept of drilling
horizontal and high-angle wells. Producers had to be convinced that the two- or three-fold cost
increase of horizontally drilled wells would be justified. Once producers got a taste of the 15- to
20-fold production increases, they wholeheartedly jumped on the bandwagon.

"This initial growth of horizontal drilling has been quite rapid, and it now represents about 10 to 15% of all drilling activity. The future growth of horizontal wells depends on how the industry handles the next rounds of technological advancement," Schuh says.

"The present state of the art is economically attractive in easily drilled formations where the reservoir can be efficiently produced without the need of mechanical intervention. The greatest growth potential is in harder-to-drill formations and reservoirs that require selective completions, selective isolations, and stimulation operations. Success in these areas will require new drilling equipment, a great expansion of completion options, and development of new completion equipment and well-repair techniques," Schuh concludes.
It seems that the future of multilateral technology will follow that same course. According to Jim
Longbottom,8 a service/supply company engineer in multilateral technology and a highly
published author, multilateral completions have a bright future, but it will be some time before that future is realized. "Drilling and completion of multilateral wells is at the same development state as horizontal drilling and completion was 10 years ago," he says. "Acceptance and expansion of multilateral drilling indicate that within a decade, multilaterally completed wells will be as commonplace throughout the industry as horizontal wells are now."

"Asset managers have at their disposal the tools and technology to extract more value than ever before from their holdings," he continues. Horizontal and re-entry multilateral drilling has increased 50% during the past 5 years and will likely grow at more than 15% a year through

However, if Longbottom's predictions are to come true, multilateral technology will have to win
over the Gulf of Mexico (GOM) operators, who seem to possess a mysterious lack of enthusiasm.
Apparently these producers, who by nature are conservative, differ with their more risk-oriented
counterparts operating in other parts of the world. GOM operators have a long tradition of
resisting innovation, opting instead for systems that are dominated by near-term profit. They tend
to shun new, exotic solutions to their daily problems.9

Some believe the future of multilateral-well development is tied to advances in the methods for drilling these wells: directional and horizontal drilling techniques, advanced drilling equipment, and coiled-tubing drilling. This may be true. However, it is also important to note that the industry's ability to analyze the production and reservoir performance of multilaterals, particularly in a cost-effective manner, has fallen behind. Drilling technology has temporarily outstripped the industry's capabilities in production and reservoir-engineering analysis. It will
catch up, but these factors are also a major impediment to more widespread application of
multilaterals, particularly where improved-recovery methods are expected to be used.

Perhaps the biggest push on operators to install multilaterals in the future will come from the
technologys economics. Historically, when operators have found themselves in extended periods
of depressed oil prices about which they could do nothing, they have reduced operating and
capital expenditures to help the bottom line. Then, to help squeeze more oil from every drilling
and completion dollar spent, they have turned to new technologies, even if they hadn't endorsed
them before. Most recently that technology has included geosteering, improved seismic data, and
horizontal wells.

Also, multilateral technology offers an attractive package of economic incentives to producers looking for bottom-line help. Multilaterals allow multiple wells to be drilled from a single main
wellbore, eliminating costly rig days for drilling an upper hole section for each well. And the ability
to tap several zones from branches off a single wellbore, rather than a number of vertical ones
drilled through the same section, holds the added attraction of risk reduction.

But the biggest economic driver will be deepwater offshore wells, where risks are high and the
huge cost of deepwater installations can be reduced by multilaterals that shrink the number of
wells and the amount of ancillary drilling and completion work needed to access high-production-
rate fields.

As for horizontal wells, their future is assured. For multilateral wells, the pendulum is beginning to
swing in their favor as operators steadily realize that the advantages of these systems are
increasingly outweighing their risks. This is making their future look a lot more secure.

Subsea Completions

Enabling Early Production From Deepwater, Remote, and Marginal Fields

While subsea-well completions occupy a small niche in the offshore petroleum industry, their
evolution has attracted a lot of attention because they offer a means of producing field extremities
not reachable by directional drilling from existing platforms. Also, they offer production options
where field economics do not justify the installation of one or more additional platforms.1

During the past four decades, subsea-well-completions technology has grown from untested
engineering theory to viable, field-proven equipment and techniques that are accepted by the
petroleum industry and the governments of producing countries. In the 37 years since the first
systems were installed, approximately 1,100 subsea wells have been completed. Two-thirds of
those wells are still in service. Among these completions is a variety of configurations that
includes single-satellite wells, which employ subsea trees on an individual guide base; subsea
trees on steel-template structures with production manifolds; and clustered well systems, which
are essentially single-satellite wells connected to a nearby subsea-production manifold. All of
these configurations typically are tied back to platforms, floating production and storage vessels,
or even to shore.

Subsea-Completions Technology Evolves

Historically, water depth and cost have consistently challenged operators engaged in exploration
and production in the worlds offshore areas. In an effort to handle both of these challenges,
producers deemed subsea completions their most economical choice.

For years, water depth alone was the driver of the development of subsea equipment due to the
physical limitations that become increasingly difficult as depth increases. However, in more recent
years, cost has become an additional driver as lower oil prices have mandated that companies
receive more value for their deeper-water investments. A third historical driver has been the
speed with which subsea completions can be installed to establish a stream of revenue for the operator.

"The greater emphasis on using subsea completions has been to produce marginal fields to existing platforms," stated industry expert Harvey Mohr in 1989 and 1991 trade-journal articles on subsea-well completions.2 "With emphasis on producing in deeper and deeper water, the need to bring fields on stream economically greatly enhances the attraction of subsea completions."

During the 1960s, wellhead components that enabled operators to get their newly drilled subsea
wells on stream were among the first pieces of subsea-completion hardware to emerge from
supplier drawing boards. During this period, the first subsea trees were placed on the floor of the
Gulf of Mexico (GOM). Of the 68 subsea completions installed in that decade, virtually all were in
U.S. waters. Wells were tied back to fixed platforms in maximum water depths of 400 ft (122 m).
Subsea trees took on a strange appearance as through-flowline (TFL) technology was developed
to provide a means of sending downhole tools into the completion.

The 1970s and early 1980s saw subsea production activity increase in all parts of the world.
Rising crude prices led to a frenzy of offshore-development projects. Investments in production
facilities reached huge proportions. During this period, the first subsea-tree system was installed
totally below the seabed. It was part of a caisson completion system that involved the installation
of a master-valve block within a caisson below the sea floor to protect the well and its
components from icebergs. TFL technology improved, and completions were extended to 650-ft
(198-m) water depths during the period. Also, the pull-in flowline-connection technique was
developed to allow completions to produce back to remote facilities. During the late 1980s and early 1990s, the technology necessary to develop deepwater oil and gas fields economically using floating production systems and subsea satellite installations emerged. Also, the first horizontal tree was installed, and a modular approach to design
emerged, as operators pushed suppliers to develop interchangeable modules that used field-
proven components to bring more cost-effectiveness to early-production projects.

Diver/Diverless Installation Techniques Evolve

Even though deepwater exploration successes were yet unknown, much of the early
development of subsea-completion technology focused on diverless techniques, as operators
anticipated future deepwater requirements. Meanwhile, operators wasted little time in using
industry-proven diver-assist technology for installing their subsea-completion equipment in
shallow-water fields. By using surface hardware adapted to diver-assist underwater use, subsea-
field completions progressed off North America from 1961 to 1970. Gradually, subsea technology
evolved as refinements and improvements were made on the basis of field experience.4

In the 1970s, an offshore pilot test of a deepwater (170 ft, 52 m) subsea system was conducted
on West Delta Block 73 in the GOM. While it was still accessible to divers, this test project
demonstrated the capabilities of diverless technology to install, operate and maintain a remote,
deepwater production system from field development through abandonment. In 1971, four
subsea-well completions producing to a jackup rig were installed using diver-assist technology in
250 ft (76 m) of water in the Ekofisk field. This marked the beginning of subsea-well completions
in the North Sea.

Wet vs. Dry Environments

During the 1970s and 1980s, both wet- and dry-environment technologies were developed. Wet
technology was developed and installed first, since it was easy to take off-the-shelf equipment
and install it in a subsea environment. However, the wet environment exacted a maintenance toll
that led to the development of dry-environment technology. This technique employs steel
chambers to provide a dry, 1-atmosphere environment for standard oilfield equipment.
Maintenance is performed by transporting men from a surface support vessel to the seafloor
chamber in a service capsule with a lift line and a life-support umbilical. Of the two technologies,
operators eventually opted for the wet environment. Since that time, it has been the only method

Completions in U.S. Waters

In the mid-1950s, initial work began on the first remote underwater drilling- and completion-
system project for the GOM. This marked the start of what was to become the petroleum
industrys first subsea-wellhead completion. Installed in 1961 in the GOMs West Cameron Block
192 in 55 ft (16 m) of water, the completion set the stage for future production in deeper offshore waters.

At about the same time, the first full-field subsea development occurred when 20 subsea satellite
wells with multiple-zone completions were installed and connected to a platform at California's
Conception field. Over the years, numerous projects have produced technical milestones in the
evolution of subsea-completion technology. Many of these projects are well known because of the
publicity that has surrounded them from their inception. U.S. achievements include the following.

During late 1997, work began on the Mensa project. Extreme water depth, high flow rates, and
erosion-resistance requirements made this project a pioneer in subsea-tree design and in
installation equipment and techniques.

Located on Mississippi Canyon Block 687 some 147 miles southeast of New Orleans, the Mensa
gas-well-development plan initially used three satellite wells with 10,000-psi working pressure and
guidelineless, diverless subsea trees, which produce to a subsea manifold 5 miles away. A single
63-mile flowline (the world's longest offset from a host platform) carries the commingled production
from the manifold to a shallow-water platform.5

Clustered subsea-completion developments arrange wells around, but keep them separate from,
a central manifold structure. Such systems employ the drilling rig to install the inherently smaller
system components.

The Troika system is a subsea cluster-type development that was installed in 1997 in 2,700 ft
(823 m) of water in the GOM. The manifold is tied back to, and controlled from, Shell Oil's
Bullwinkle platform (approximately 14 miles distant) by means of two 10 3/4-in. flowlines. Among
other things, this project accomplished the cost-effective installation of a subsea cluster-system
module (combined template/manifold) using the rig's drillstring. By maneuvering the carrier under
the rig's moonpool, lifting the module off the boat using slings attached to the drillstring, and then
lowering it onto preinstalled piles on the sea floor, the module was set in less than 12 hours.

In March 1997, a gas well was abandoned in the mudline beneath 883 ft (269 m) of water on the
GOM Green Canyon Block 20. The well tapped marginally economical reserves, but completing it
was deemed uneconomical because its shut-in surface pressure exceeded 12,000 psi. Such
pressures dictated a structure for which the capital cost would exceed the value of the expected
reserves. Also, as of that time, no high-pressure subsea completions had ever been attempted.

Two years later, in 1999, another operator is attempting to breathe new economic life into the
well. The resulting completion will be the world's first 15,000-psi subsea-well completion.
Expected to come on stream in mid-2000, the achievement will be known as much for
overcoming the previous operator's economics-killing costs as for its high pressures.

North Sea Development

Historically, the most active area for subsea completions has been the North Sea. Some 40% of
all subsea-tree installations worldwide have been done there.6 Both the U.K. and Norwegian
sectors have seen numerous subsea completions, but the most ambitious projects traditionally
have been in Norwegian waters.

In 1971, North Sea subsea completions originated with the Ekofisk field early-production system.
It consisted of four subsea satellite wells producing to a jackup drilling rig modified for production
processing with offloading to shuttle tankers.7 This was the first North Sea field development
using subsea trees and the first use of subsea-well completions. Situated in 230 ft (70 m) of
water, the wells were completed with diver-assist technology that was well established by this time.

The development of the Argyll field in 250 ft (76 m) of water marked the world's first application of
a floating production system (FPS) and the first production of oil from the U.K. sector of the North
Sea. The field began producing in 1975 with four satellite subsea wells flowing to a subsea riser base
beneath an FPS vessel. Soon, more wells were added. Eventually, two additional fields were
produced over the project's life. The project was abandoned in 1992.

Buchan and Balmoral

In 1981, the Buchan field was developed in 390 ft (119 m) of water using an FPS and an
arrangement of satellite and template subsea wells tied to a subsea manifold. The Balmoral field
came on stream in 1986 using an FPS and subsea-well system that included satellite and
template wells producing to multiple subsea manifolds.

Snorre and Åsgard

In 1992, the Snorre field was developed in 1,100 ft (335 m) of water using subsea completions to
produce to a tension-leg platform approximately 4 miles away. The Åsgard field, developed in the
late 1990s, featured a total of 59 subsea completions grouped together in 17 standardized four-
well templates connected and tied back by pipeline bundles to floating production and processing
vessels in 984 ft (300 m) of water. Åsgard's SO3 pipeline bundle includes closed-circuit, hot-water
heating lines to ensure that hydrates and paraffins do not form. This technology was first
applied in the early 1990s in the Britannia gas field in the North Sea.

Numerous other subsea-well systems have been completed in the North Sea since these fields
were developed, and a variety of designs and configurations have emerged. Traditionally, U.K.
water depths have favored diver-assist technology, but some developments in the deeper waters
of the Norwegian sector required diverless technology.

Offshore Brazil
In addition to North Sea and U.S. offshore fields, much of the historical development of subsea-
completion systems has been offshore Brazil, mostly in the Campos basin. Petrobrás, the state
oil company, is the most active operator worldwide in terms of the total number of subsea
completions, with 329 installations so far and another 250 planned for the 1999-2004 period.

In 1974, Brazil found itself in an ironic situation. The nations daily production was decreasing in
spite of an increase in reserves from new discoveries in the Campos basin. To correct this, early-
production systems were planned to reduce the time to initial production, to better define the
reservoir conditions, and to improve cash flow.8

"Subsea completions provided a means of achieving these needs," said Ricardo Juiniti, Senior
Staff Petroleum Engineer for Petrobrás. The first completion was installed in 1977 in the Campos
basin's Enchova field. This completion was located in 384 ft (117 m) of water and produced from
a single satellite well through a subsea test tree to the semi-submersible Sedco 135D, the first
drilling vessel in Brazil to be converted to a floating production facility. Two years later, the first
subsea tree was installed at 620 ft (189 m). From 1979 to 1981, seven early-production systems
with wet trees were installed to accelerate Campos basin production, while seven fixed platforms
were being built.

With discoveries in water depths deeper than 656 ft (200 m) came the routine use of floating
production vessels as economical and feasible alternatives to fixed platforms. Initially built to
accelerate production, many of these temporary-use vessels became permanent installations.

"Dry-environment technology was used in the Garoupa and Namorado fields beginning in 1979,"
Juiniti states. "Initially, dry chambers were installed on eight wells in 394 to 525 ft (120 to 160 m)
of water, but the technology was deactivated in 1986 due to the high risk associated with
performing well interventions through the chambers and excessive operational costs associated
with using a dedicated vessel. Those wells are still producing with wet trees." The development of
all other Brazilian fields used wet-environment technology.

By the end of 1982, 32 non-TFL, 4-in. × 2-in., 5,000-psi wet trees had either been installed, were
being installed, or were on order. Four different manufacturers were used to allow a performance
comparison of the different tree designs. The first subsea manifold also was installed during 1982,
and it introduced a variety of new options for subsea layouts. The manifolds were diver-assist
installations and could accommodate up to eight wells.
"By 1984, movement toward deeper waters necessitated the installation of subsea trees in depths
that exceeded the limits of divers. This forced us to go to diverless installation methods," Juiniti
recalls. "But problems with the pull-in of flowlines prompted a decision by management to pull in
tree flowlines in waters up to 985 ft (300 m) deep using diver-assist only, when feasible. Remote
flowline pull-in would be used only when diving was not possible or was too expensive."

Discoveries in the 1,300-ft (400-m) Marimba, 1,900-ft (600-m) Albacora, and 3,600-ft (1,100-m)
Marlim fields caused Petrobrás to develop the Lay Away System for pulling in tree flowlines and,
later on, the guidelineless (GLL) subsea tree.

"We used this technology to install the first GLL tree in 1991 in 2,366 ft (721 m) of water and
subsequently to develop the Marlim field," Juiniti continues. "The Marlim field development, which,
when completed, will comprise 148 subsea wells producing to six floating production units,
contributed to the development of a standardization program for GLL subsea trees as well as
more effective and less costly diverless flowline pull-in. In 1994, the first installation of a GLL tree
in 3,370 ft (1,027 m) of water was made, a world record at the time."9

Petrobrás also set the world record for a subsea-tree installation when it installed a subsea tree in
early 1999 on a well in the Roncador oil field in the Campos basin. The subsea tree was set in
6,080 ft (1,853 m) of water and produces via a rigid riser to a dynamically positioned floating
production, storage, and offloading vessel.10 All these achievements were obtained after massive
investments in research through ProCAP 2000, Petrobrás' Technological Innovation Program on
Deepwater Exploitation Systems, which aims at steep reductions in production costs and
increased productivity in deepwater fields while enabling oil production at water depths greater
than 3,281 ft (1,000 m).

According to Ronaldo Dias, head of the Campos Basin Drilling and Completion Div., the
investment made by Petrobrás in subsea completions allowed the company to develop offshore
fields in a very profitable way by reducing the time to initial production.

"The standardization of Christmas trees played an important role in terms of reduced cost and
project optimization," says Dias. "Nature didn't give us much choice. We had to go for the oil,
which was much deeper than we would have liked. Subsea completions seemed to be the best
solution, although we had a hard time making them work properly sometimes. However, I think it
was worth the effort."

The Future
The DeepStar project, an R&D consortium operated by Texaco that seeks the development of
low-risk methods of producing oil and gas in the deepwater GOM, is espousing what could become
the deepwater-completions philosophy of the future. "If development of deepwater fields is to
proceed, operators have to be convinced that they have commercially viable development
options," said Texaco's Steve Wheeler, who has been involved with DeepStar since its inception.
"These options must maximize the operator's ability to avoid large capital commitments prior to
his verification of acceptable reservoir performance."

DeepStar's consortium of 21 operating companies and 40 supplier organizations is cooperating
to find solutions to the challenges that face them in developing the deepwater Gulf of Mexico.11
"The project seeks to utilize partnering to jointly explore and research deepwater production
technology, hardware and software, innovative tools, centralized processing facilities,
production-sharing operations, and other innovative concepts," stated Wheeler.

"We believe that subsea completions will play a key role in helping manage risk in future
deepwater-field development. Since about 60% of the cost of a subsea development is built into
the well cost, they can be scaled up or down quickly. Therefore, they offer operators a way of
managing their costs in new fields where reservoir performance, production rates, and size are
unknown. They can't do that with other field-development concepts, such as high-cost deepwater
FPSs," Wheeler said. "We've named this the 'inchworm' philosophy. Unlike a lot of deepwater
field-development philosophies, our research indicates that operators should progress slowly by
drilling and completing only a few wells initially in a new deepwater field. Next, they should place
them on production using subsea completions and tiebacks to existing GOM infrastructure in
order to establish an early-production revenue stream. Once the size of the reservoir and other
important performance parameters have been determined, the operator can then expand to the
level of development that is deemed appropriate. This philosophy protects capital by lowering
overall risk until the field's parameters are fully known." Also, Wheeler and his DeepStar
participants are pushing for the standardization and modularization of subsea-completion
components because they speed up field development and often reduce costs.

Most operators agree that subsea-completion standardization is an irreversible trend. Like
Wheeler, they believe standardization of interfaces will, for example, allow the replacement of a
damaged subsea tree with a new, off-the-shelf one without lengthy interruptions to the well's
production, thereby saving revenue that would be lost while waiting on a tailor-made tree.

"We want standard interfaces between vendor components that will allow us to prebuild subsea
trees," Wheeler said. "Then we will stock them so they are ready when we need them. That's
cost-effective for us, and it allows our vendors to better manage their manufacturing efforts."

Traditionally, the decision to develop an economically marginal oil or gas field and the choice of a
production system have been governed largely by the presence of available infrastructure, existing
technology, and the cost-effectiveness that can be obtained by marrying both. However, emerging
technology is now playing a role equal to or greater than existing infrastructure and cost in the
development of 5- to 20-million-bbl fields. Looking to the future of subsea completions means
looking back at the technology that has been proven during the past four decades and then
finding better, more cost-effective ways of applying this technology to solve new, more complex
problems.
The greatest emphasis in the use of subsea completions has been on producing marginal fields to
existing platforms.

Drilling Technology

The Key to Successful Exploration and Production

Using steam-powered cable-tool rigs that pounded the Earth with fishtail bits until it gave up its
oil, turn-of-the-century drillers managed to find oil with surprising frequency. The drilling
technology wasn't much by today's standards, but it worked on the shallow prospects of
Pennsylvania and Ohio during the late 1800s and early 1900s.

However, greater drilling challenges occurred when oilmen expanded their efforts to Texas, where
the oil lay encased in deeper pay zones. Here, drilling targets could not be reached using
standard cable-tool methods. Instead, rotary rigs were brought in to drill the deeper plays. These
rigs represented the latest technology, and they quickly became the preferred method for drilling.

The Rotary Rig

Between 1915 and 1928, rotary rigs slowly began to replace existing cable-tool rigs. The
invention of the rotary-drilling rig made cable-tool rigs obsolete, for all intents and purposes. The
rigs, named for the rotary table through which drillpipe is inserted and rotated, could make deeper
holes because they used a bit that drilled rather than pulverized the rock formation.

Also, rotary rigs eliminated the laborious, time-consuming bailing process used by cable-tool rigs
to remove rock cuttings from the hole. Instead, drilling fluid was circulated down the drillpipe,
through the bit, and up to the surface. As a result, rock cuttings created by the bit were lifted by
the fluid and carried to the surface for disposal.

The new rigs created a need for experienced drillers who knew how to use them. Experienced
cable-tool drillers had to learn quickly how to operate the new rotary rigs so they could make the
transition to them. Some did, and some didn't. Those who made the transition stayed employed.
Those who didn't became unemployed. One of those drillers who successfully made the transition
was John Goddard, a cable-tool driller who eventually became one of the original stockholders of
Humble Oil and Refining Co.

Because of his reputation for drilling successes in the oil fields of Ohio, Goddard was brought to
Texas to drill with the rotary rig. When he got to Texas, Goddard was careful not to mention to
anyone that he had never even seen one of the new rotary rigs. Instead, he quickly and quietly
learned how to run the new equipment and, over the years, contributed greatly to Humble Oil's
growth into an oil giant. From 1928 to 1934, some of the largest oil fields of all time were
discovered, and drilling was highly competitive. There were insistent demands for equipment that
could drill and complete wells in minimum time periods. During this period, heavy and more
powerful rigs were developed.

After 1934, rates of penetration with rotary rigs increased more rapidly than in any period, before
or since. And the use of steam-powered engines gave way to the internal-combustion engine as
the most important prime mover. During World War II, further development and refinement of
rotary rigs was put on hold since most of the nation's resources were largely diverted to the
manufacture of war implements. However, with the war's end in the mid-1940s, an increase in
demand for petroleum products led to a rise in oil- and gas-well drilling. Practically all of the
drilling equipment that was available was either obsolete or worn out, which led to the
development of new and better drilling equipment, especially rigs. Today's modern rotary-drilling
rigs are powered by diesel and diesel-electric prime movers.1

Rolling Bits
Another milestone was the development of bits for use on rotary-drilling rigs. In 1908, Howard
Hughes Sr., a wildcatter and speculator in Texas oil leases, turned his ingenuity and hobby of
tinkering with mechanical devices into good fortune when he invented the rolling bit (later called
the roller-cone or rock bit).

In the period following the turn of the century, existing drilling technology was unable to penetrate
the thick rock of southwest Texas. Until 1910 or 1911, the only drilling bits available for rotary rigs
were the fishtail, the diamond point (mainly for sidetracking), and the circular-toothed bit, all of
which limited the rigs to soft formations. Oilmen could extract only the oil that lay just beneath the
surface. Frustrated, they were forced to ignore the vast resources they knew were locked in the
deeper formations.

Fortunately for them, Hughes had an idea for a bit that used 166 cutting edges arrayed on the
surface of each of two metal cones mounted opposite each other to tear away the hard-rock
formations. He also solved the problem of how to cool and lubricate the bit in the high
temperature produced by the friction of the metal and rock contact.

Hughes, along with his partner Walter B. Sharp, formed the Sharp-Hughes Tool Co. and produced
a model of his new bit. Rather than sell his bits to oil drillers, Hughes and Sharp opted to lease
the bits on a job basis, charging U.S. $30,000 per well. With no competitors to duplicate their
drilling technology, they soon garnered the lion's share of the market. Flush with their success,
the partners built a factory on 70 acres east of downtown Houston, where they turned out the
roller-cone bits that quickly revolutionized the drilling process.2

About the same time Hughes developed his bit, Granville A. Humason of Shreveport, Louisiana,
patented the first cross-roller rock bit, the forerunner of the Reed cross-roller bit.3 That bit, built in
1913, used two rolling cutters that were placed in the bit face in a "+" shape. This bit, screwed
onto the end of the rotating drillpipe, cut the rock formation as it turned, enabling the rig to
penetrate the formation without destroying the bit's cutting surfaces.

The rotary rig, rolling bit, and cross-roller rock bit were pioneering inventions that paved the way
for the development of a great many other devices that improved drilling processes and techniques.

Drill Collars
One of the earliest problems drillers encountered in rotary drilling was that of keeping their
boreholes straight. The deeper drillers went, the more the boreholes deviated from vertical. It was
common practice at that time to use only large drill pipe and all available weight (weight indicators
were not yet available). Often, deviation didn't matter because the targeted formation was
eventually reached and the well declared a success. In fact, most drillers were never aware of
their deviation from vertical.

For the most part, the entire petroleum industry was unaware of the problem of hole deviation
until the Seminole, Oklahoma, boom of the mid-1920s. Town-lot well spacing was the primary factor
contributing to the industry's experiences there. There are actual recorded incidents of two rigs
drilling the same hole, offset wells drilling into each other, drillers wandering into producing wells,
and wells in the geometric center of the structure coming in low or missing the field completely.

These experiences led to the development of the drill collar for weight and rigidity and the use of
stabilizers at various points in the string to control deviation and to provide rigidity. This helped
control unintentional deviation, but a total understanding of the forces associated with borehole
deviation didn't occur until Arthur Lubinski performed his study of the problem in the 1950s. His
successful studies led to the development of directional drilling, a method used extensively by the
industry to cost-effectively drill and complete multiple wells from a single location.4

Well Control
With continued increases in drilling depth came increasingly higher formation pressures that had
to be controlled during the drilling process. If released by the penetration of the formation by the
drill bit, these enormous pressures could spit drillpipe out of the hole, unleashing raging inferno-
like fires that would instantly destroy the drilling rig. Such a catastrophic event could delay drilling
operations for days or even months, until the fires could be extinguished. This made the
development of some sort of blowout-prevention device a priority among those engaged in oilwell drilling.

In the 1920s, driller James Abercrombie sought out Harry Cameron, a machine-shop operator, to
design and build a device that would prevent catastrophic well blowouts during drilling operations.
Following a period of experimentation, Abercrombie and Cameron designed and manufactured
the first successful blowout preventer (BOP). The preventer was capable of containing formation
pressures of 2,000 to 3,000 psi in 8-in. boreholes. It did not take long for the revolutionary device
to dominate the industry.

The need for control greatly expanded when drilling began on the U.S. gulf coast and in the Gulf
of Mexico. Containing normally pressured formations is relatively simple when compared with
containing and controlling highly pressurized geopressured zones. Today, offshore BOP stacks
that can hold pressures of 15,000 psi in 18 3/4-in. boreholes are available, if needed.

Louis Records also made great contributions in well-control equipment. His company, Drilling Well
Control, offered well-control expertise and a monitoring service. Records was one of the first to
truly understand the mechanics required to circulate oil and gas from a well without allowing
additional gas or oil to enter the wellbore. C.C. Brown, another industry drilling pioneer, invented
the wellhead packoff, a device that allowed a well to be completed without letting it flow freely
during the operation.

Drilling Fluids
In the earliest years, the drilling fluid used by the cable-tool rigs was probably water. It was used
to soften the earth and make it more pliable for drill-bit penetration. With the advent of rotary rigs
and roller-cone bits, more elaborate drilling fluids, called "muds," were introduced into the
borehole to cool and lubricate the bit, circulate the rock cuttings from the bottom of the hole to the
surface, and hydrostatically balance formation pressures with the drilling-fluid column.

These drilling muds were originally natural and were formed from the formation drilled or from
native material. Later, they were weighted up using barite or similar products to counter the
higher pressures that were experienced as formation depths increased. This helped prevent the
dreaded blowouts. Initially, mud materials were low-cost waste products from other industries, but
as drilling goals required deeper, hotter holes and higher fluid densities, the industry
began developing specialty chemical products designed for specific purposes.
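The weighting-up described above is simple hydrostatics. As a minimal sketch (using the standard oilfield rule of thumb that each lbm/gal of mud density exerts 0.052 psi of hydrostatic pressure per foot of vertical depth; the function names are illustrative, not from any particular software):

```python
# Hydrostatic pressure of a mud column (rule of thumb):
#   P (psi) = 0.052 * mud weight (lbm/gal) * true vertical depth (ft)

def hydrostatic_pressure_psi(mud_weight_ppg: float, tvd_ft: float) -> float:
    """Pressure exerted at depth by a column of mud of the given density."""
    return 0.052 * mud_weight_ppg * tvd_ft

def required_mud_weight_ppg(formation_pressure_psi: float, tvd_ft: float) -> float:
    """Minimum mud density needed to balance a formation pressure at depth."""
    return formation_pressure_psi / (0.052 * tvd_ft)

# Example: balancing a 6,000-psi formation at 10,000 ft requires
# roughly 11.5-lbm/gal mud.
mw = required_mud_weight_ppg(6000.0, 10000.0)
print(round(mw, 1))  # prints 11.5
```

In practice, drillers carry the mud weight slightly above this balance point for a safety margin, while staying below the density that would fracture weaker formations uphole.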

Over the years, drilling-fluid challenges and the resulting solutions have moved the drilling-fluids
industry from the use of fluids costing very little to complex oil- and water-based materials that
typically cost U.S. $300 to $400/bbl.

Well Cementing
With drilling operations routinely penetrating multiple rock and sediment layers that contained
water, oil, and gas, it became necessary to install steel casing to isolate these layers from one
another and from the wellbore. The casing was installed in a variety of sizes. Large-diameter pipe
was installed near the surface, and as the depth increased, the pipe diameter became
progressively smaller.

Originally, the formation was "mudded off," and extra-thick drilling mud was left behind each string
of casing in an effort to minimize fluid communication. Mechanical devices also were used on
occasion. However, these methods were ineffective and soon led to the use of cement. Casing had been
cemented in cable-tool-drilled wells prior to 1900.

Early cementing jobs were very rudimentary. Cement was mixed on location by hand and
installed with a dump bailer. After depositing the cement, the casing, which had been held several
feet off bottom, was lowered into the cement so it would remain behind the pipe after hardening to
shut off the water from above. Later, tubing was used to convey the cement with pumps. But, in
1921, Erle P. Halliburton put an end to these laborious cementing methods and, in the process,
revolutionized well cementing.

Halliburton started the Halliburton Oil Well Cementing Company in 1919 in his one-room wood-
frame home in Wilson, Oklahoma. Two years later, Halliburton perfected his Cement Jet Mixer, an
on-the-fly mixing machine that eliminated the hand mixing of cement at the wellsite. This
invention set the stage for Halliburton's domination of the business of well-casing cementing.
Then, in 1924, Halliburton convinced seven of his customers, all large oil companies, to
invest in Howco. The company went public, and Halliburton became the company's first President
and Chief Executive Officer.

"We intend to build up and maintain a complete organization," he stated when asked about his
intentions in those first days after going public. "We will cover all phases of oil well cementing
service. We will maintain an aggressive and sustained program of research. We shall give
uniform quality and service. We'll get there somehow, regardless of location," he continued.5 This
focus on service and technology research has kept the company in the forefront of the oil and gas
services industry, where it remains a leader today.

Formation-Evaluation Logging
The earliest drillers were severely limited in their opportunities to evaluate potential reservoir
rocks without testing to see what they produced. Wells were drilled without any possible means of
measuring the inclination angle of the borehole, which, as the industry later learned, was often quite
high. Developments in this area first produced crude inclination indicators, made by lowering a
glass tube containing an etching fluid into the hole, and later mechanical instruments that measured
the inclination angle. These eventually led to tools that could determine the azimuth of the
hole and, finally, the ability to calculate the position of the bottom of the hole with fair accuracy.

Formation-evaluation efforts began with the art of mud logging, in which samples of the cuttings
were analyzed to determine the formations that were being drilled. Drillers also began recording
the penetration rate vs. depth to define more precisely where formation changes occurred.

In 1911, electrical well logging originated with two brothers, Conrad and Marcel Schlumberger.
The science of geophysics was new, and the use of magnetic or gravimetric methods of exploring
the internal structure of the Earth was just beginning. Their electric logs extended the electrical
prospecting technique from the surface into the oil well.

The technique was actually invented by Conrad Schlumberger. As a physics teacher at the École des
Mines, Conrad had an interest in the Earth sciences, particularly those involving prospecting for
metal-ore deposits. He believed that among the physical properties of metal ores, their electrical
conductivity could be used to distinguish them from their less-conductive surroundings.

In 1912, he used very basic equipment to record his first map of equipotential curves in a field
near Caen, France. The plot of curves derived from his surveys not only confirmed the method's
ability to detect metal ores but also revealed features of the subsurface structure. This information
led to an ability to locate subsurface structures that could form traps for minerals.

To better understand the measurements made at the surface, Conrad and Marcel knew they had
to incorporate resistivity information from deeper formations. In 1927, in a 1,640-ft (500-m) well in
France's Pechelbronn field, Conrad's son-in-law, Henri Doll, an experimental physicist,
successfully produced the world's first electric log using successive resistivity readings to create
a resistivity curve.

Four years later, in 1931, the discovery of a spontaneous potential produced naturally between
the borehole mud and formation water in permeable beds introduced a new basic measurement.
When recorded simultaneously with the resistivity curve, permeable oil-bearing beds could be
differentiated from impermeable nonproducing beds. Thus, electrical well logging was born.6

Blowout Control
Most drillers believed that, if you drilled hard enough and long enough, a blowout was inevitable.
And, for the most part, they were right. Blowouts did occur. Equipment failures, improper
technique, and bad luck created blowouts that needed to be handled quickly, safely, and properly.
This meant a call to the wild-well-control experts. A special breed of firefighter, the well-control
experts took on the raging inferno and brought it under control using a series of steps that
deprived the fire of oxygen, allowing it to be safely capped.

The first heroes of this trade were Myron Kinley and Red Adair. Whenever anyone had a
blowout, the solution was to call Kinley or Adair to control, then cap, the blowout. While their
skills, capabilities, and performance allowed them to handle most any blowout in a workmanlike
manner, the nature of their work and the public's perception of it made them legendary. This, in
turn, allowed them to become very effective public relations men for their clients.

Their experiences also led to the development of a number of active and passive firefighting
devices for offshore rigs. Since firefighting equipment was not readily available at offshore drilling
sites, technology was developed that could be permanently mounted on the offshore drilling rig.
In the event of a blowout, it helped cool the tremendous heat generated by the blowout, allowing
workers to safely evacuate the rig. It also enabled the firefighting team to place cooling water
where they needed it when they arrived on the scene to cap the well.

Wells Get Deeper

The development of technology for controlling blowouts enabled the drilling of deeper wells, but it
was not until 1938 that the drill passed 15,000 ft, a record that stood until 1947. The 20,000-ft
barrier was penetrated two years later in 1949, a record that held until Phillips Petroleum drilled
the University EE-1 well to 25,340 ft in 1958 in west Texas. 7

The 1960s and 1970s saw wells attain ultradeep status. Improved metallurgy and techniques for
handling higher temperatures and pressures and corrosive atmospheres made ultradeep wells
attainable and less formidable than before. The 30,000-ft barrier was broken in 1974 by the
31,441-ft Bertha Rogers No. 1 in Oklahoma's Anadarko basin. But British Petroleum's Wytch
Farm M11 well garnered the depth record at 34,967 ft when it was drilled and completed in 1998.

Offshore Drilling
Some of the most significant technical achievements in the evolution of drilling technology have
occurred in the offshore drilling arena.

When Kerr-McGee Corp. drilled the first offshore well out of sight of land in 1947 to officially begin
today's offshore industry, it wasn't the first well drilled offshore. According to oil historian J.E.
Brantly in his book History of Oil Well Drilling, operators actually took their first steps in submarine
drilling as early as 1897, when a well was drilled from a wharf in California's Summerland field.

Eight years later, in 1905, Unocal Corp. drilled an offshore well near Houston, and others followed
by drilling in swamps and transition zones during the next 20 years. In the 1940s, operators took
more definitive steps by mounting land rigs on piers jutting several hundred feet into lakes, bays
and coastal waters. However, Kerr-McGee's Kermac 16 well, drilled in 20 ft of water from a
platform 43 miles southwest of Morgan City, Louisiana, severed the industry's umbilical cord with
land.8 The early 1950s saw a major expansion beyond the Gulf of Mexico to the California coast
and to the bountiful offshore basins of Brazil and Venezuelas Lake Maracaibo. Another big
advance occurred in the Gulf of Mexico in 1954 with the introduction of the moveable,
submersible offshore drilling barge. The portability of the submersible drilling barge produced a
major increase in the attractiveness of drilling offshore, but brought a new set of challenges to
offshore operators.

The biggest challenge involved finding a solution to drilling from a floating barge while using
conventional rigs, casing heads, and BOP equipment. Needless to say, the motions of the barge
and rig derrick made it a daunting experience. One of the earliest solutions to rig movement was
the submersible drilling barge, the Mr. Charlie. The rig was the brainchild of A.J. LaBorde, a
marine superintendent for Kerr-McGee Oil Industries in Morgan City, Louisiana. In this capacity
he had a front row seat for observing the problems of offshore oil drillers in varying conditions of
water depth, wind, and wave action. Knowing their problems, he designed a submersible drilling
barge and suggested to his employer, Kerr-McGee, that they build the barge. After considering
the proposal, they declined.

Having been rejected by Kerr-McGee, LaBorde promptly resigned. Shortly thereafter, he formed
the Ocean Drilling & Exploration Co., or Odeco, with John Hayward and Charles Murphy Jr. of
Murphy Oil Co. Hayward possessed a patent on submersible-barge methods, and Murphy was
looking for innovative technology that would let his small company compete with bigger
companies drilling offshore. Together they decided to name their new rig the Mr. Charlie in honor
of Murphy's father.

On June 15, 1954, its builder, J. Ray McDermott Co., turned over the completed drilling barge to
Odeco. It wasn't long before it set sail for its first job. Since no one had ever built a submersible
drilling barge before, skeptics anxiously crowded the site of its first job to see if it would work. For
LaBorde, there was no privacy if a mishap or problem occurred. However, to his relief, and to the
surprise of the skeptics, the barge worked perfectly. During the next 32 years, the rig went on to
drill hundreds of wells in the Gulf of Mexico. Mr. Charlie retired from service in 1986 when drilling
activity pushed into deeper waters beyond its capabilities.9

The Future
Without a doubt, drilling technology will continue to progress toward more cost efficiency and
speed. That is what it has always done because operators demand it. Therefore, manufacturers
and service suppliers will continue to hone their technology to provide more efficient equipment
throughout every aspect of the drilling process. According to George Boyadjieff of Varco Intl.,
much of the technological gain in the future will be in the information area. "The Internet will come
to the drilling industry. Rigs will be connected just as offices are connected now. Real-time well
site data will be provided routinely to offices of drilling engineers, operations managers,
geologists, asset managers, reservoir engineers, and the like," says Boyadjieff in an article
commemorating the 50th anniversary of the offshore industry. Also, I believe we'll see
underbalanced drilling become as common as horizontal and multilateral drilling. It's not just for

Reservoir Engineering: Primary Recovery

In 1904, Anthony Lucas, the discoverer of Spindletop, returned to Beaumont, Texas, from a job in
Mexico and was asked by a reporter to comment on Spindletop's rapid decline in production. He
answered that the field had been punched too full of holes. "The cow was milked too hard," he
said, "and moreover she was not milked intelligently."1

Lucas' comments were lost on early oil operators, who gave little thought to reservoir depletion
and behavior as they drilled well after well in their newly discovered fields. When natural flow
played out, they simply placed their wells on pumps. When the pumps could no longer bring up
economical amounts of oil or when water production became excessive, a reservoir was
considered depleted. In the late 1920s, methods for estimating oil reserves and the quantities that
might be recoverable hadn't been worked out. Of course, many of the pioneer oilmen knew that
the gas represented energy which, if it could be controlled, could be put to work lifting oil to the
surface. But control involved numerous problems, and everyone was more interested in
producing the oil and selling it. Regulation of drilling and production was still nonexistent, so
waste and overproduction were widespread.2 Gas associated with oil was flared or simply
released into the atmosphere.

Several years later, the U.S. federal government referred to the billions of cubic feet of gas that
had been lost and publicly deplored the practice. Remedial measures were proposed that
included cooperative production by field operators and legislation to control producing rates and
to prohibit gas waste.1 Once operators discovered the results of their wasteful ways, they quickly
initiated a series of technical studies of reservoir behavior and the physical properties that
controlled this behavior. Thus, the profession of reservoir engineering was officially born.

The Early Years

According to most authorities, reservoir engineering officially began in the late 1920s. At this time,
engineers engaged in the recovery of petroleum began giving serious consideration to gas-
energy relationships. They recognized their need for more precise information about hydrocarbon
activity in reservoirs that they were producing.

Actually, reservoir study can be traced to an earlier beginning when, in 1856, Frenchman H.
Darcy became interested in the flow characteristics of sand filters for water purification. This
interest led him to conduct experiments which, in turn, laid the real foundation of the
quantitative theory of the flow of homogeneous fluids through porous media. These classic
experiments resulted in Darcy's law.3 Since 1928, the art of forecasting the future performance of
an oil and/or gas reservoir based on probable or presumed conditions has evolved steadily. In the
early 1920s, reservoir engineering was concerned largely with empirical performance, with the
exception of the laboratory work done on fluid and rock properties. Ultimately, this experimental
work provided a foundation for the mathematical equations that were derived in later decades.
From the beginning, engineers recognized that oil-recovery methods based on wellhead or
surface data were generally misleading.4 They knew they must obtain a more thorough
understanding of the functions of the reservoir in order to maximize the recovery of its
hydrocarbons. This fact set in motion the evolution that has resulted in today's engineered
reservoir. Along the evolutionary trail leading to the present, developments in applied
mathematics, numerical analysis, computer hardware and software, geology, geophysics, and
geostatistics became part of reservoir engineering.

Fluid Flow
Hydrocarbons are complex fluids that generally exist in an untapped reservoir in liquid and
gaseous states and are considered to be at equilibrium. Likewise, they are expected to behave in
accordance with predictable functional pressure/volume/temperature (PVT) relationships. If all the
gas is dissolved in the oil, the single phase is considered to be a liquid phase, and the reservoir
is called a dissolved-gas reservoir. On the other hand, if there are hydrocarbons as vaporized
gas that are recoverable as natural gas liquids on the surface, the single phase is considered to
be a gas phase, and the reservoir is called a wet-gas reservoir. In some reservoirs, both liquid
and gaseous phases may exist. These are called gas-cap reservoirs. If an artesian water supply
is directly associated with any of these reservoirs or expanding water is the dominant producing
force, the reservoir is termed a waterdrive reservoir.

Challenges to reservoir engineers begin when the reservoir is opened to production and the flow
of hydrocarbons begins. At this point, reservoir pressures drop; fluids comprising gas, oil, and
water expand; phase equilibria are disturbed; and alterations in the physical properties of the fluid
phases occur in various degrees throughout the entire reservoir. In short, the oil has become
active. With further withdrawal of fluids, changes continue and difficult second-order partial-
differential equations are needed to describe the unsteady-state flow of expansible fluids.

From 1927 to 1930, Jan Versluys, a well-known hydrologist working for Royal Dutch Shell, wrote
numerous articles on the physics of oil-producing formations that were widely published. In 1931,
Morris Muskat and H.G. Botset wrote several papers on the flow of reservoir fluids. These papers
and articles were instrumental in advancing the knowledge of reservoir dynamics to its present
state.

"Today, most reservoir engineers consider that, of the many great reservoir-engineering pioneers,
Muskat probably had the greatest impact," relates Joe Warren, a personal friend of the late Morris
Muskat. A native of Riga, Latvia, Muskat attended Marietta College and Ohio State U. and
ultimately received a PhD degree in physics from the California Inst. of Technology in 1929.
Following his graduation from Cal Tech, Muskat joined the Gulf Research and Development Co.
where, at the age of 31, he wrote The Flow of Homogeneous Fluids Through Porous Media, a
seminal publication for reservoir engineering. Twelve years later, in 1949, he wrote a second
book, Physical Principles of Oil Production. Together, these books provided a sound analytical
foundation for reservoir engineering by combining fluid mechanics with phase behavior.

"Muskat also published technical papers in such diverse fields of interest as hydrodynamics,
lubrication theory, and the mechanics of shaped charges," Warren recalls. "As a matter of fact, he
received an original patent for his work on the use of shaped charges in oilwell perforating."

A paper written in 1933 by T.V. Moore, Ralph J. Schilthuis, and William Hurst advanced reservoir
science further. The paper presented the first equation for unsteady-state radial flow of
expansible reservoir fluids. It reported the development of a linear second-order equation similar
to the classic heat-flow equation that adequately described the flow of a single-phase
compressible (or expansible) liquid in a reservoir. A year later, in 1934, Schilthuis and Hurst
published the application of the equation to the calculation of reservoir-pressure changes in an
east Texas field and to the prediction of the effect thereon of changes in production rates.5
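In modern notation, the kind of linear second-order equation these papers introduced, for unsteady-state radial flow of a slightly compressible single-phase fluid, is the radial diffusivity equation (porosity, viscosity, total compressibility, and permeability appear in today's standard symbols, not the papers' original notation):

```latex
\frac{\partial^2 p}{\partial r^2} + \frac{1}{r}\,\frac{\partial p}{\partial r}
  = \frac{\phi \mu c}{k}\,\frac{\partial p}{\partial t}
```

The right-hand-side grouping \(\phi \mu c / k\) is the reciprocal of the hydraulic diffusivity, which is why the equation has the same mathematical form as the classic heat-flow equation mentioned above.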

Phase Relationships
In considering the drive mechanisms influencing a reservoir, a reservoir engineer must determine
the fluid phases that exist, their compositions, and the changes that normally would take place
during natural flow under the drive in order to predict the behavior of the reservoir.

Among the first to realize the importance of fundamental studies of phase relationships were B.H.
Sage and W.N. Lacey. In the 1930s, they published a series of papers reporting the results of
their continuing research in the field of phase behavior. Among their significant contributions was
the recognition and characterization of condensate reservoirs.6

Sampling and Measurement Devices

Early reservoir engineers recognized that both temperature and pressure influence the behavior
of reservoir fluids. Since the measurement of reservoir pressure and temperature was basic to
enabling reservoir-performance calculations, the development of a method or device that would
measure them became a priority. The development of continuously recording instruments such as
the pressure gauges invented by P. Comins and Geophysical Research Corp. and subsurface
temperature-measuring devices developed by C.E. Van Orstrand contributed greatly to this new
discipline.

Likewise, early pioneers realized that, in order to calculate volumes of oil and gas in place, they
would need to know the change in the physical properties of bottomhole samples of the reservoir
fluids with pressure. Accordingly, in 1935, Schilthuis described a sampler and a method of
measuring the physical properties of bottomhole samples.

Measurements included PVT relationships, saturation or bubble-point pressure, total quantity of
gas dissolved in the oil, quantities of gas liberated under various conditions of temperature and
pressure, and the shrinkage of the oil resulting from the release of its dissolved gas from solution.
These data made the development of certain useful equations feasible and provided an essential
correction to the volumetric equation for calculating oil in place.7

Material-Balance Equations
In 1935, D.L. Katz of the U. of Michigan proposed a tabular method of obtaining a material
balance for a closed reservoir. Basically, a material-balance equation is a statement that accounts
for the volumes and quantities of fluids that are initially present in, produced from, injected into,
and that remain in a reservoir at any state of its depletion.
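The simplest modern instance of such an accounting is the p/z material balance for a closed, volumetric dry-gas reservoir, where cumulative production relates initial gas in place to the decline in p/z. A minimal sketch, assuming no water influx; the function name and the example figures are illustrative:

```python
def gas_initially_in_place(p_over_z_initial, p_over_z_now, gp_produced):
    """Gas initially in place G from the p/z material balance for a closed,
    volumetric dry-gas reservoir (no water influx, no injection):
        Gp = G * (1 - (p/z) / (pi/zi))   =>   G = Gp / (1 - (p/z)/(pi/zi))
    Gp and G share one volume unit; p/z values are in psia.
    """
    return gp_produced / (1.0 - p_over_z_now / p_over_z_initial)

# Illustrative numbers: p/z fell from 5,000 to 4,000 psia after 2 Bcf were
# produced, implying about 10 Bcf initially in place (20% recovered so far).
print(gas_initially_in_place(5000.0, 4000.0, 2.0))
```

Oil material balances such as Schilthuis' carry more terms (gas caps, solution gas, water influx), but they rest on exactly this volumes-in, volumes-out bookkeeping.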

Also, that same year, Schilthuis published a material-balance equation that included the same
terms of fluid volumes and changes with time as Katz's method. The application of Katz's method
required the experimental determination of phase equilibria data; the Schilthuis method
represented a simplification in that the requisite terms were reduced to simpler expressions.

A bit later, Schilthuis proposed a method to calculate water encroachment using the material-
balance equation, but his method required accurate production-history data. Several years later,
William Hurst developed a method for determining the rate of water influx that was independent of
the material-balance equation and production history; only data on pressure history and rock and
fluid properties were required.8

Displacement-Efficiency Equation
In 1940, S. Buckley and M.C. Leverett proposed two displacement-efficiency equations
concerning the displacement of immiscible fluids. These equations provided another powerful tool
for reservoir engineers and scientists. One equation describes the fraction of immiscible
displacing fluid flowing with the oil through a unit rock volume; the other describes the rate of
advance of a particular degree of saturation of the displacing fluid that exists in that volume.

These valuable equations are used in the calculation of recovery by an immiscible displacing
fluid, natural or induced. And they played a key role in allowing later engineered waterflood
predictions. Applications include prediction of the effects of relative viscosity or permeability,
volumetric rate, formation dip, differential fluid density, and wetting and pressure gradient on
recovery under specified conditions.9
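The fractional-flow part of this framework can be sketched for water displacing oil, neglecting gravity and capillary pressure; the relative-permeability and viscosity values below are illustrative, not taken from the original paper:

```python
def water_fractional_flow(krw, kro, mu_w, mu_o):
    """Buckley-Leverett fractional flow of water (gravity and capillary
    pressure neglected):
        f_w = 1 / (1 + (kro * mu_w) / (krw * mu_o))
    krw, kro: relative permeabilities to water and oil at the current
    water saturation; mu_w, mu_o: phase viscosities (consistent units).
    """
    return 1.0 / (1.0 + (kro * mu_w) / (krw * mu_o))

# Equal phase mobilities split the flowing stream evenly.
print(water_fractional_flow(0.2, 0.2, 1.0, 1.0))  # 0.5
# A more viscous oil shifts the flowing stream toward water.
print(water_fractional_flow(0.2, 0.2, 1.0, 2.0))  # ~0.667
```

The second Buckley-Leverett equation then advances each water saturation at a speed proportional to the derivative of this fractional-flow curve, which is what makes waterflood front tracking possible.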

Maximum Efficient Rate of Production

Through the years, it has been learned that oil is recovered by three different natural mechanisms:
solution-gas drive, gas-cap drive, and waterdrive. These mechanisms may be effective
individually or in combination. They differ in recovery efficiency. Recovery can be increased by
controlling the reservoir so that the most efficient available mechanism becomes the dominant
one or by injecting gas or water to supplement or modify the natural drive.

In practice, one of the most effective means of achieving efficient recovery is through control of
the rate of production of oil, water, and gas. The knowledge gained through studies of reservoir
behavior led to the concept of maximum efficient rate of production. For each particular reservoir,
it is the rate that, if exceeded, would lead to avoidable underground waste through loss of
ultimate oil recovery. This concept has found widespread application by both industry and
regulatory bodies for the efficient recovery of petroleum.10

Reservoir Simulation
By the 1950s, most of the fundamentals of modern reservoir engineering were in place. The next
evolutionary milestone was the emergence of reservoir simulation. The earliest simulators (circa
1930) were essentially sandboxes constructed with transparent glass sides. These elementary
simulators allowed researchers to view fluid flow directly. During this era, most reservoir scientists
assumed that the reservoir was a single tank or cell in which the fluid flowed from one side to the
other.

"These early modeling attempts were used to study water coning," states Donald Peaceman, a
retired Exxon researcher and industry consultant. "The models allowed researchers to see the
activity that occurs when a well is produced. The production of the oil causes the pressure around
the well to decrease, and that causes the water to cone up and be produced with the oil."

"It wasn't until the 1930s that people in the oil industry started looking at reservoir mechanics in
any kind of a scientific way," he continues. "So this was one of the first attempts to understand
why water starts to be produced with the oil and why the produced-water/oil ratio increases with
time."

Twenty years later, with the advent of computers, reservoir modeling advanced from sandboxes
and electrical analogs to numerical simulators. In numerical simulation, the reservoir is
represented by a series of interconnected blocks, and the flow between blocks is solved
numerically. Early computers were small and had little memory, which limited the number of
blocks that could be used.
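The interconnected-block idea can be illustrated with a toy one-dimensional simulator that marches the single-phase pressure-diffusion equation forward in time with explicit finite differences. It is vastly simpler than any production simulator, and every name and number in it is illustrative:

```python
def simulate_1d_pressure(pressures, alpha, steps):
    """Toy 1-D reservoir simulator: explicit finite differences on the
    pressure-diffusion equation  dp/dt = eta * d2p/dx2.

    pressures -- initial block pressures; both end blocks are held fixed
    alpha     -- eta * dt / dx**2 (must be <= 0.5 for stability)
    steps     -- number of explicit time steps to take
    """
    p = list(pressures)
    for _ in range(steps):
        nxt = p[:]
        for i in range(1, len(p) - 1):
            nxt[i] = p[i] + alpha * (p[i + 1] - 2.0 * p[i] + p[i - 1])
        p = nxt
    return p

# Five blocks: a well holds block 0 at 1,000 psia while the far boundary
# stays at the initial 3,000 psia; the pressure drawdown diffuses outward.
print(simulate_1d_pressure([1000.0, 3000.0, 3000.0, 3000.0, 3000.0], 0.25, 50))
```

Run long enough, the interior blocks relax toward the straight-line steady-state profile between the two fixed boundary pressures, which is the behavior a single-phase simulator should reproduce.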

"When I went to work in 1951," recalls Peaceman, "we had nothing that you could call a
computer. We did have access to some accounting machines that the accounting department
would let us use, but only at night," he remembers. "Our job was to model the flow of gas through
the porous rock of a field. To accomplish this, we had to use a converted accounting machine that
had a capacious memory of 56 words of eight decimal digits each, could not store a program, and
strained to complete five floating-point operations per second," says Peaceman as though he still
finds it hard to believe.

"Our management did have the vision to see that digital computation was going to be the way to
do reservoir modeling in the future, but that vision was still pretty faint," he remembers.

"In 1955, we significantly increased our computing capacity when we acquired a Bendix G-15,"
explains Peaceman, as he recalls his past experiences involving the evolution of reservoir-
simulation computers. "This [computer] had vacuum-tube electronics, but its storage was almost
completely on a magnetic drum. Within the next few years, we obtained IBM's first widely used
scientific computer, the 704. It was a binary machine, with built-in floating-point hardware. Its
central memory was magnetic core, and its secondary storage was magnetic tape," he continues.
"Also, Fortran was not yet available. Our programs were written in assembly language, but that
didn't bother us, since we were already used to dealing with machines that were much less user
friendly."

During the following decades, computing power increased, which, in turn, allowed engineers to
create bigger, more geologically realistic models that required greater data input. This demand
was met by the creation of increasingly complex and efficient simulation programs with easy-to-
use data preparation and results-analysis packages.

Over the years, numerical simulation has continued to evolve to the point that it has become a
reservoir-management tool for all stages of the life of the reservoir. No longer is it used only for
comparing the performance of reservoirs under different production schemes or for
troubleshooting failed recovery methods. Today, simulators are used to plan field development,
design measurement campaigns, and guide investment decision-making.11

Reservoir Management
Webster defines management as "the judicious use of means to accomplish an end." Thus,
reservoir management can be interpreted as the judicious use of various means available in order
to maximize the benefits from a reservoir. According to several authors who have written on
reservoir-management practices, reservoir management involves making certain choices: either
let it happen or make it happen. Without planning, they say, the generation of benefits from a
reservoir operation is left to chance.12 With sound management practices, they conclude, the
generation of benefits is enhanced, and chances of profit are maximized.

In 1963, John C. Calhoun Jr., in an article written for the JPT, described the engineering system
of concern to the petroleum engineer as being composed of three principal subsystems.
1. Creation and operation of wells.
2. Surface processing of the fluids.
3. Fluids and their behavior within the reservoir.

"The first two depend on the third because the type of fluids (oil, gas, and water) and their
behavior in the reservoir will dictate where and how many wells to drill and how they should be
produced and processed to maximize profits," states Calhoun.13

Technically, reservoirs have been managed for more than 100 years, but true reservoir
management has been practiced only when a major expenditure is planned, such as original field
development or waterflood installation. In fact, until 1970, most people considered reservoir
management as synonymous with reservoir engineering.14 However, during the past three
decades, its integration with other sciences, such as geology, has created a truer reservoir-
management approach. During its evolution from purely reservoir engineering to the more
integrated reservoir-management function, the science of forecasting the future performance of
an oil or gas reservoir went through two distinct periods.

In the first period, the four decades before 1970, reservoir engineering was considered the
only item of technical importance in managing a hydrocarbon reservoir. In 1962, Wyllie
emphasized two key points: clear thinking using fundamental reservoir-mechanics concepts
and automation using basic computers.15

In the second period, the three decades since 1970, the concept of managing oil and gas
reservoirs has evolved more toward the integration of reservoir engineering with other scientific
disciplines, namely geology and geophysics.

Craig emphasized the value of detailed reservoir description using geological, geophysical, and
reservoir-simulation concepts.16 He challenged explorationists, with their knowledge of
geophysical tools, to provide a more accurate reservoir description that could be used in
engineering calculations.

In the last 10 years, it has become clear that reservoir management is not synonymous with
reservoir engineering and/or reservoir geology. Instead, it is a blending of these disciplines into a
team effort. Projects undertaken during the past 10 to 15 years have seen the integration of
efforts into multidisciplinary project teams that work together to ensure development and
execution of the reservoir-management plan.

The Future
The science of reservoir engineering will continue to evolve; newer and better methods of
predicting reservoir behavior will be found. However, when it comes to reservoir management,
true integration of the geosciences into reservoir engineering will take time because the
disciplines do not communicate well. Simply recognizing that integration is beneficial will not be
sufficient. True integration will require persistence.17

And, while a comprehensive program for reservoir management is desirable, every reservoir may
not warrant a detailed program because it might not be cost-effective. In these cases, reservoir
engineering alone may be sufficient.

Formation Evaluation
Logging and Testing
In ancient China, wells were drilled to depths of as much as 3,000 ft (914 m) to locate and tap
sources of salt brine. Their primitive drilling technique resembled cable-tool drilling methods used
centuries later to explore for oil. To obtain knowledge of the formation below, cuttings and fluids
brought to the surface by bailing operations were dumped on the ground and examined by
Chinese drillers. Cursory examination of the cuttings provided information on the formations
penetrated and enabled the drillers to determine how close they were to finding the needed brine.

When early oil pioneers drilled their wells 2,000 years later, they knew little about the formations
they were drilling and showed practically no interest in the stratigraphy their bits penetrated.
Instead, their interest was focused on making holes and looking for the presence of oil. Later,
realizing it was very helpful to know something about the formation, they examined and recorded
the characteristics of cuttings brought to the surface by bailing operations. Eventually
mineralogists applied a microscope to the cuttings and advanced formation-evaluation efforts
further by measuring the density, hardness, and electrical properties of the rocks and by making
chemical analyses of them.1

Core Sampling and Mud Logging

For almost 50 years, recorded descriptions of drill cuttings were the sole source of formation
knowledge. Around 1920, the first core-barrel sampling tools were put to work in California, west
Texas, and Colorado. These tools cut core samplings of the formations from the bottom of the
borehole. After collection, the cores were analyzed and underwent experimentation in the
laboratory to garner valuable reservoir data on the formations being drilled. Over the years,
diamond-coring tools became the preferred method of collecting core samples.

While mechanical coring was an improvement on simple cuttings records, it was expensive
because it had to be done continuously during the drilling of the well. In an attempt to be more
cost-effective, efforts were made to obtain as much data as possible from cuttings samples.

The advent of rotary rigs and the use of drilling mud to circulate cuttings from the bottom of the
hole to the surface produced an interest in comparisons of cuttings obtained from different drilling
depths. The cuttings were treated with acetone or ether and were exposed to ultraviolet light to
detect the presence of small amounts of oil. This test was repeated again and again during
drilling, and the results, or lack thereof, were documented.

During the late 1930s, John T. Hayward developed the continuous mud-analysis log, which shows
the combined results of mud analysis for both gas and oil and relates these results to factors such
as drilling rate and depth.

Actually, Hayward was more interested in the gases and liquids in the mud than the cuttings.
From appropriate measurements at the surface, he succeeded in determining the content of oil
and gas in the various formations traversed. He then correlated these observations with depth to
create a continuous diagram of the oil and gas content of the formations penetrated while drilling
was in progress.

As a result of his work, continuous mud-analysis logging furnishes a variety of useful formation-
evaluation data, including the amount of methane, liquid hydrocarbons, and oil in the cuttings.
"This log will eliminate frequent mechanical coring. Cores will now be taken only when a show of
oil or gas makes it advisable," proclaimed Hayward when asked about the long-term implications
of his technique.3

Electric Logging
In March 1921, Marcel Schlumberger and several associates used a 2,500-ft (820-m) borehole
and conducted downhole resistivity measurements to see if they could enhance the interpretation
of surface seismic data. They found that their measurements did indeed reflect the variation in the
nature of subsurface formations penetrated by the wellbore. Six years later, in a 1,640-ft (500-m)
well in France's Pechelbronn field, experimental physicist Henri Doll successfully produced the
world's first electric log using successive resistivity readings to create a resistivity curve.

The technique actually was invented by Conrad Schlumberger with the help of his brother,
Marcel. The Schlumberger brothers believed that, among the physical properties of metal ores,
their electrical conductivity could be used to distinguish them from their surroundings.

Very basic equipment was used to record their first map of equipotential curves in a field near
Caen, France, in 1911. Plots of curves derived from their surveys confirmed both an ability to
detect metal ores and a capability to reveal features of the subsurface structure. Subsequently,
this information led to the location of subsurface structures that could form traps for minerals.

To understand the measurements made at the surface better, the Schlumbergers knew they had
to incorporate resistivity information from deeper formations. The result was Henri Doll's 1927
Pechelbronn field log.4

The procedure used at Pechelbronn was crude and makeshift in nature.

But it didn't take long to realize that the resulting resistivity log could be a valuable formation-
evaluation tool. Clays have a low resistivity. Porous sands are conductive if saturated with salt
water, are moderately resistive if the water is fresh, and are very resistive if the impregnating fluid
is oil. Thus, important clues could be deduced from the log as to the formation's character. As for
oil sands, a relationship was determined to exist between resistivity and oil potential: the higher
the resistivity, the better the production.
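This qualitative link between resistivity and oil content was later quantified; the best-known form is Archie's equation (published by G.E. Archie in 1942, somewhat after the period described here), which converts a measured resistivity into a water saturation. A minimal sketch, with illustrative parameter values:

```python
def archie_water_saturation(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's equation:
        Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)
    rt:  true formation resistivity (ohm-m), read from the resistivity log
    rw:  formation-water resistivity (ohm-m)
    phi: porosity (fraction); a, m, n are empirical rock constants.
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Illustrative sand with 25% porosity and 0.05 ohm-m brine:
print(archie_water_saturation(0.8, 0.05, 0.25))   # ~1.0: water zone
print(archie_water_saturation(20.0, 0.05, 0.25))  # ~0.2: mostly oil
```

Low water saturation means high hydrocarbon saturation, which is the quantitative restatement of "the higher the resistivity, the better the production."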

Following its first use in France in 1927, electric logging was introduced in Venezuela, the
U.S.S.R., and the Dutch East Indies in 1929. In 1932, after a series of demonstrations, electric
logging came to the U.S. when Shell Oil issued Schlumberger contracts for work in California. By
the end of 1933, the initiation period was over, and 12 crews were applying the technique

SP Curve
Four years later, in 1931, the discovery of a spontaneous potential (SP) phenomenon produced
naturally between the borehole mud and formation water in permeable beds introduced a new
basic measurement: the SP curve. Attempts by researchers to explain how the phenomenon
works resulted in agreement that it was due to electrocapillarity (filtration of liquid from the
borehole into the permeable formations).

However, this explanation did not prove itself in subsequent logging, and another cause was
added to explain the SP curve: the electrochemical effect. Laboratory and field work confirmed
the importance of the chemical effect, but everyone involved agreed that knowledge of the
phenomenon needed further clarification. During the ensuing decade, the work of several
researchers (Mounce, Rust, and Tixier) indicated that the SP effect was mainly a chemical
potential, with only a small and sometimes negligible filtration potential, but it was M.R.J. Wyllie, a
Gulf Oil Co. researcher, who provided a comprehensive explanation of the SP curve. In a 1948
technical paper titled "A Quantitative Analysis of the Electrochemical Component of the S.P.
Curve," Wyllie suggested that SP consists of two different effects.

The explanation he offered greatly enhanced the science of SP logging. When recorded
simultaneously with the resistivity curve, permeable oil-bearing beds could be differentiated from
impermeable, nonproducing beds. The combination of the resistivity and SP curves considerably
increased the reliability of conclusions about the characteristics of the formation material.
In early uses, the SP curve was used exclusively as a tool for locating permeable beds and
defining their boundaries. Later, with the introduction of quantitative analysis methods, the SP log
was used to derive information on formation-water resistivity, an essential element for computing
water saturation from log data.
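The Rw derivation mentioned above can be sketched. The snippet below uses the standard interpretation form SSP = -K * log10(Rmf/Rw), where K is an assumed empirical temperature constant; the function name and all input values are hypothetical illustrations, not from the source.

```python
def rw_from_sp(ssp_mv, rmf_ohmm, temp_degf):
    """Estimate formation-water resistivity Rw from the static SP deflection.

    Uses the common interpretation form SSP = -K * log10(Rmf/Rw), with
    K = 61 + 0.133 * T(degF) as an assumed empirical constant. ssp_mv is
    the static SP in millivolts (negative opposite a clean, water-bearing
    sand when the mud filtrate is fresher than the formation water).
    """
    k = 61.0 + 0.133 * temp_degf
    return rmf_ohmm / 10.0 ** (-ssp_mv / k)

# hypothetical example: -80 mV deflection, Rmf = 0.5 ohm-m at 150 degF
rw = rw_from_sp(-80.0, 0.5, 150.0)  # roughly 0.05 ohm-m
```

With Rw in hand, water saturation can then be computed from resistivity-log data, which is the "essential element" role the text describes.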

In the late 1940s and early 1950s, Schlumberger introduced the microlog, laterolog, and
microlaterolog. The microlog provided a more accurate determination of permeable beds and
their boundaries in limestone, sand, and shale, where the SP log wasn't satisfactory.

Laterologging began with a device called the guarded electrode, which was invented by Conrad
Schlumberger in the 1920s. From this, the laterolog was developed for use in wells drilled with
highly conductive mud because it more sharply defined bed sequences in hard formations. The
microlaterolog soon followed. It provided a more reasonable estimate of the resistivity of an
invaded zone (Rxo) and of residual oil saturation in practically all formation types.

In the early 1970s, Schlumberger introduced its Dual Laterolog-Rxo tool. The dual laterolog
answered the need for a tool capable of producing useful resistivity measurements even when
true formation resistivity and mud resistivity are very high, as in the case of carbonates and
evaporites drilled with salty mud. It also provided greatly improved thin-bed resolution.6

Induction Logging
Resistivity logging is a valuable tool, but it does not operate well under all conditions. This is
especially true in cases in which there is no liquid filling the borehole to allow contact to be
established between the electrodes and the formation or in cases in which oil-based mud is used.
For these situations, induction logging is much more suitable.

Applied in shallow ore exploration for over 25 years, the induction process had not been used in
oil exploration before 1942. Its oil industry application is credited to Henri Doll. While seeking a
way of using the induction process to create a military vehicle that could detect enemy mines in
its path during World War II, Doll realized the possibilities that might be gained by applying the
induction process to oil-exploration logging. His colleagues at Schlumberger strongly opposed his
use of the induction process in logging, citing problems posed by very small signal strength, high
direct mutual-coupling interference, and the lack of adequate supporting technology.
Nevertheless, Doll persisted in this complicated task, leading a team that was determined to
develop an order-of-magnitude improvement in logging technology.

In induction logging, which was introduced in the mid-1940s, a sonde that employs alternating
current of constant magnitude and a coil (transmitter) is lowered into the well to create an
alternating magnetic field from which eddy currents are introduced into the formation. The eddy
currents follow circular paths centered on the axis of the sonde. The eddy currents, in turn, create
a secondary magnetic field that induces an electromotive force, or signal, in a second coil
(receiver) also located in the sonde. The signal is amplified, rectified to direct current, then
transmitted to the surface, where it registers in the form of a continuous log.

History validated Dolls vision, perseverance, and faith with the eventual success of his induction-
logging tool. Since its first commercial use in 1946, induction logging has become one of the most
widely used logging methods in the world and has overtaken electric logging because it is
regarded as superior in many applications.7

In 1963, the dual induction-laterolog tool was introduced. This tool provided the simultaneous
recording of three resistivity measurements and the SP curve. All measurements are focused to
give true formation resistivity in a variety of conditions for wells drilled with freshwater muds.

Nuclear Logging
In 1896, H. Becquerel discovered radioactivity when his photographic plate was affected by a
preparation of uranium. By the early 1900s, it became evident that all terrestrial materials contain
radioactive elements in extremely minute, yet measurable, quantities. Over time, these
radioactive elements disintegrate and transform into other elements. As they disintegrate, they
emit energy in the form of alpha, beta, and gamma rays. In rock formations, this radioactivity can
be measured and logged to determine the types and nature of rock formations being drilled.

Gamma Ray Logging

During the late 1930s, electric logging had been accepted as a viable method of determining
formation materials as the borehole was drilled. However, it could not be used if the hole was
lined with steel casing. Therefore, the development of a logging method for use in these
applications was crucial. The result was gamma ray logging, which measures and records natural
gamma ray activity in formations contacted by the borehole.

A Tulsa, Oklahoma, group made the first gamma ray log in a well near Oklahoma City. The results
clearly demonstrated that the technology could reveal the lithology of the borehole. A company
was founded soon after, and the first commercial gamma ray survey was done for the Stanolind Oil
and Gas Co. in May 1940 in Texas' Spindletop field. Subsequent use of the technology
determined that it was particularly good for defining oil beds, delineating the geology, and serving
as a substitute for the SP curve in hard formations or with salty muds.

The gamma ray spectrometry log, first used in 1970, is a refinement of the gamma ray log. Like
the gamma ray log, it detects naturally occurring gamma rays, but it also defines the energy
spectrum of the radiation. Because potassium, thorium, and uranium are responsible for the
energy spectrum observed by the tool, their respective elemental concentrations can be
calculated. Calculated-concentration curves show a correlation to depositional environment,
diagenetic processes, and clay type and volume. The log is also useful in estimating shale content.
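As an illustration of the shale-content use, a common first-pass estimate is a linear gamma ray index between assumed clean-sand and shale readings. The linear index is one standard convention; the function name and log values below are hypothetical.

```python
def gamma_ray_index(gr, gr_clean, gr_shale):
    """Linear gamma ray index IGR = (GR - GRclean) / (GRshale - GRclean),
    clipped to [0, 1]; often taken as a first-pass shale-volume estimate."""
    igr = (gr - gr_clean) / (gr_shale - gr_clean)
    return max(0.0, min(1.0, igr))

# hypothetical readings in API units: 30 for clean sand, 120 for shale
vsh = gamma_ray_index(gr=75.0, gr_clean=30.0, gr_shale=120.0)  # 0.5
```

In practice the linear index is often an upper bound, and nonlinear corrections are applied, but the linear form captures the idea.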

Neutron Logging
While experimentation and development were occurring in gamma ray logging, R.E. Fearon
advanced, in 1938, his idea for a different type of nuclear-logging service. Bruno Pontecorvo
subsequently perfected it in 1941. The process is called neutron logging, and it involves the
bombardment of the formations along the borehole with neutrons. Then, the secondary gamma
ray activity generated by the bombardment is measured.

Since the variations in gamma ray activity observed on neutron logs are a result of the hydrogen
content of the formations, they offer a measurement of porosity. Therefore, porosity determination
has become one of the most important applications for the neutron log.

Early data obtained in gamma ray and neutron logging were of a qualitative, not quantitative,
nature, and there was no zero line in the diagrams. However, improvements in the late 1940s
incorporated zero lines into the logs, and numerical scales of gamma ray and neutron intensities
were added.

The continued rapid and intensive development of both resistivity- and nuclear-logging
techniques resulted in a movement from qualitative interpretation of logs to quantitative
interpretation with the 1942 publication of a paper by G.E. Archie. The paper detailed his
discovery of a relationship between electrical resistivity and formation-water saturation. Archie's
work inspired an intensive investigation of data that were obtained from logs of subsurface
surveys and their relation to fundamental reservoir properties, such as porosity, permeability,
water salinity, and reservoir limits.8
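Archie's relationship can be sketched numerically. The form below (formation factor F = a / phi^m, and Sw^n = F * Rw / Rt) follows his 1942 result; the coefficient and exponent values, the function name, and the example inputs are typical assumptions for illustration, not values from the source.

```python
def archie_water_saturation(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's relation: Sw^n = (a / phi^m) * (Rw / Rt).

    rt: true formation resistivity (ohm-m), rw: formation-water resistivity
    (ohm-m), phi: porosity (fraction). a, m, n are assumed rock constants.
    """
    formation_factor = a / phi ** m
    sw = (formation_factor * rw / rt) ** (1.0 / n)
    return min(sw, 1.0)  # saturation cannot exceed 100%

# hypothetical clean sand: Rt = 20 ohm-m, Rw = 0.05 ohm-m, 25% porosity
sw = archie_water_saturation(rt=20.0, rw=0.05, phi=0.25)  # 0.2, i.e. 80% hydrocarbon
```

This is exactly the quantitative link the text credits with moving log interpretation from qualitative to quantitative: resistivity plus porosity yields a saturation estimate.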

Formation-Density Logging
Another nuclear-logging technique introduced in the 1960s was the formation-density log. The
device, which uses a gamma ray source and detector, is placed in contact with the borehole wall
to measure the bulk densities of formations in situ.

Field applications have demonstrated that measurement of formation density is a useful and
revealing technique for determining the porosity, lithology, and fluid content of formations in
conditions that hamper other logging methods, such as the logging of empty or gas-filled holes.
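The bulk-density measurement converts to porosity by a simple mass balance between matrix and pore fluid. The default matrix and fluid densities below (quartz sandstone, fresh water) and the example reading are assumed illustrative values, not from the source.

```python
def density_porosity(rho_bulk, rho_matrix=2.65, rho_fluid=1.0):
    """Porosity from the density log: phi = (rho_ma - rho_b) / (rho_ma - rho_f).

    Densities in g/cm3; defaults assume a quartz matrix and fresh-water-filled
    pores. A lower bulk density implies more pore space.
    """
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

# hypothetical reading: bulk density 2.40 g/cm3 -> about 15% porosity
phi = density_porosity(2.40)
```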

Sonic Logging
Unlike nuclear logging, sonic or acoustic logging operates on the principle that sound waves
(elastic waves) travel through dense rock more quickly than through lighter, more porous rock.
The technique, which resembles electric logging, uses a transmitter and receivers combined in
one downhole tool to measure, in microseconds, the time differences required for sound pulses to
traverse formation beds.

Some of the first experimental sonic logs were made in the early 1930s, but the first commercial
logs weren't made until 1954. Sonic logs provided a more accurate interpretation of porosity,
formation fractures, and the water table.
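The usual conversion from measured travel time to porosity is the time-average relation attributed to M.R.J. Wyllie and colleagues in the mid-1950s (so it postdates the early experimental logs). The default matrix and fluid travel times and the example reading below are assumed illustrative values.

```python
def sonic_porosity(dt_log, dt_matrix=55.5, dt_fluid=189.0):
    """Wyllie time-average porosity: phi = (dt - dt_ma) / (dt_f - dt_ma).

    Travel times in microseconds/ft; defaults assume a sandstone matrix and
    water-filled pores. Slower travel (larger dt) implies higher porosity.
    """
    return (dt_log - dt_matrix) / (dt_fluid - dt_matrix)

# hypothetical sandstone reading of 89 microseconds/ft -> about 25% porosity
phi = sonic_porosity(89.0)
```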

Pressure-Transient Testing
Instruments for measuring pressures in oil and gas wells were developed during the 1920s, and a
study by Pierce and Rawlins in 1929 reported a relationship between bottomhole pressure and
potential production rate. The study encouraged the development of improved instruments and
recording devices. By 1933, there were more than 10 different kinds of pressure-measuring/-
recording instruments in use.

Early measurements were static and were acquired by lowering a pressure-measuring device to
the bottom of a well that had been shut in for between 24 and 72 hours. However, engineers soon
recognized that, in most formations, the static pressure reading was a function of shut-in time and
mainly reflected the permeability of the reservoir rock around the well. What engineers wanted
was another basic measurement: the pressure-transient reading, in which the pressure variation
with time is recorded after the flow rate of the well is changed.

Morris Muskat was the first to offer an extrapolation theory. Muskat's theory related the change in
pressure with time to the parameters of the reservoir. In 1937, he presented a method for
extrapolating the measured well pressure to a true static pressure, and he stated at that time that
his method was only a qualitative application because it did not take into account the important
aspect of fluid compressibility.

The first comprehensive treatment of pressure behavior in oil wells that included the effects of
compressibility was that of C.C. Miller, A.B. Dyes, and C.A. Hutchinson in 1950. The following
year, D.R. Horner presented a somewhat different treatment. The two papers still furnish the
fundamental basis for modern theory and analysis of oilwell pressure behavior.9
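The core of Horner's treatment can be sketched: shut-in pressure plots as a straight line against log10((tp + dt)/dt), and extrapolating that line to a time ratio of 1 (infinite shut-in) yields the extrapolated pressure p*. The least-squares fit and the synthetic data below are illustrative assumptions, not field data from the source.

```python
import math

def horner_extrapolate(tp, shutin_times, pressures):
    """Fit shut-in pressure vs. log10((tp + dt) / dt) by least squares.

    tp: producing time before shut-in (hours); shutin_times: dt values
    (hours); pressures: corresponding bottomhole pressures (psi).
    Returns (p_star, slope): p_star is the pressure extrapolated to a
    Horner time ratio of 1 (infinite shut-in time).
    """
    x = [math.log10((tp + dt) / dt) for dt in shutin_times]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(pressures) / n
    slope = sum((xi - xbar) * (yi - ybar)
                for xi, yi in zip(x, pressures)) / sum((xi - xbar) ** 2 for xi in x)
    return ybar - slope * xbar, slope  # intercept at log10(ratio) = 0

# synthetic buildup: 100-hour producing time, p* = 3,000 psi, -80 psi/cycle
tp = 100.0
dts = [1.0, 2.0, 5.0, 10.0, 24.0]
pws = [3000.0 - 80.0 * math.log10((tp + dt) / dt) for dt in dts]
p_star, m = horner_extrapolate(tp, dts, pws)  # recovers 3000.0 and -80.0
```

The slope of the Horner straight line is what analysts relate to formation flow capacity, which is why the buildup test remains the standard described below.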

One of the greatest contributors to the field of pressure-transient analysis was the late Henry J.
Ramey Jr. His 1970 paper investigating wellbore storage and skin effect in transient liquid flow
ushered in the modern era of pressure-transient analysis.

And, despite the development of other tests, most engineers agree that the standard for
pressure-transient testing will always be the pressure-buildup test. "It is the most direct method of
obtaining an average pressure for reservoir analysis, it is operationally simple, and the theory is
well developed," stated a reservoir engineer when asked about the importance of the pressure-
buildup test.

Drillstem and Formation Testing
It has long been realized that the sampling of fluids and pressure in the porous strata of the
formation being drilled can provide valuable information on the formation and its ability to yield oil
and/or gas. However, early methods of obtaining these data required the setting of casing, which
made testing expensive.

Working in El Dorado, Arkansas, in the late 1920s, E.C. Johnston and his brother M.O. Johnston
developed the first drillstem tester in 1927 and subsequently refined it in the early 1930s. The test
is a measurement of pressure behavior at the drill stem and is a valuable way for an engineer to
obtain important sampling information on the formation fluid and to establish the probability of
commercial production.

In the 1950s, Schlumberger Co. introduced a more advanced method for testing formations. The
Schlumberger formation-testing tool, placed in operation in 1953, fires a shaped charge through
a rubber pad that has been expanded in the hole until it is securely fixed in the hole at the depth
required. Formation fluids flow through the perforation and connecting tubing into a container
housed inside the tool. When filled, the container is closed, sealing the fluid sample at the
formation pressure. The tool then is brought to the surface, where the sample can be examined.
The information obtained by this method is often better than results obtained from drillstem tests.

The Future
The future of logging and testing is bright indeed. Most of the basic logging and testing tools and
techniques developed during the past 70 years have been refined and, in some cases, reinvented
to perform the logging and testing function of formation evaluation. However, even newer and
more ingenious technology is being introduced thanks to advances in electronics miniaturization
and in computer hardware/software that can go downhole rather than remain at the surface.
These tools have made real-time logging while drilling possible.

This is a very fundamental change from the past, when it was necessary to halt drilling while logs
were run periodically to obtain measurements. In a sense, the continuous-logging version of the
original real-time continuous mud-analysis technique sought by John Hayward in the 1930s has
been attained.

Integration is revolutionizing logging tools as they are packaged into logging-tool strings that
weigh half as much, perform multiple functions in fewer trips, and are easier to use.
Interpretation-software advances are making the information that logging tools collect more useful
to operators who are looking for tools that provide more powerful solutions in difficult-to-interpret,
more-problematic zones. Higher resolution in the tools is making it possible to identify pay zones
that previously were overlooked due to complex lithology.

Reservoir Engineering: Augmented Recovery

In 1888, roustabout James Dinsmoor was working on several Third Venango sand wells on the
William Hill property in Venango County, Pennsylvania. On an adjoining property, a Third
Venango sand well was being deepened by its operator to tap the Speechley sand to obtain
natural gas for use on the lease. The operator found a considerable amount of gas in the
Speechley but did not have pipe available. So the well was shut in temporarily to save the gas.

Dinsmoor observed that the three Third Venango sand wells on the William Hill property
immediately experienced an improvement in oil production and that this increase was maintained
until the nearby Speechley well was completed by its operator. Following the well's completion,
Dinsmoor noticed that the Venango wells returned to their previous production levels. This
accidental repressuring represents the first known application of augmented recovery in the U.S.1

Additional instances of augmented-recovery methods occurred in the late 1870s or early 1880s
when other early well completions began to lose formation pressure and production started to
wane. Attempts by early operators to augment production included the application of gas
(vacuum) pumps to faltering wells. This practice established an increased pressure differential
between the reservoir and wellbore, increased the rate of fluid production slightly, and extended
the well's producing lifespan.2

During the ensuing period of approximately 119 years, a variety of augmented-recovery methods
have been developed and introduced. These methods include waterflooding, gas injection,
chemical flooding, and thermal recovery. Some methods, such as waterflooding and gas injection,
use materials native to the reservoir to either replace or augment natural drive forces. However,
they do not alter any of the fundamental factors that act to retain the oil within the reservoir. Other
techniques, such as chemical flooding and thermal recovery, use more dramatic methods to
overcome forces, such as surface tension and viscosity, that inhibit the flow of oil from the reservoir.

As early as 1880, observers of oil producing operations concluded that water could be an
effective method for driving oil flow within the formation. Most of these observations resulted from
the accidental intercommunication of natural forces under favorable circumstances that resulted
in increased production.

One of the first documented waterfloods was in Pennsylvanias Bradford field. The accidental
flooding of the field is thought to have begun in 1905, six years after the field began production.
Flooding continued for 15 years. During this time, production rates trended upward. Most
operators credited the production increase to the accidental flooding, and, even though it was
illegal, operators in both the U.S. and Canada instituted intentional field waterfloods. While the
unintentional flooding of fields is well-documented, data on intentional waterfloods by operators
prior to 1921 (the date waterflooding was legalized) are sketchy. Still, evidence from that time
suggests that intentional waterflooding occurred as early as 1875.

The earliest waterfloods were called circle floods because of the growth pattern of the water-
invaded zone. As nearby wells were watered out, they too became injection wells in order to
continue the extension of the area of waterflooding. The rate of advance of the water, diminishing
with time and cumulative injection, prompted one operator to convert a series of wells
simultaneously to form a line drive, a technique that increased oil production rates even more.

The first five-spot-pattern waterflood was attempted in 1924 on a tract in the southern part of the
Bradford field. Frank Haskell is credited with the idea, but Arthur Yahn receives credit for the
technique's first successful deployment. Haskell's attempt failed to produce a speedy response
because of the 500-ft (152-m) distance between like wells. Yahn's 190-ft (58-m) distance between
wells produced a much quicker response. Initially, only surface water entered the wells, but, in
late 1929, a pressure plant was installed to increase the rate of injection. Also, the five-spot
pattern required that wells be reworked to achieve replacement of prior withdrawals in reasonable
time frames, and later five-spot well spacing varied, depending on formation permeability. The
technique gained widespread acceptance by 1937.

Operators were slow to extend waterflood activities outside Pennsylvania due to the
economic conditions of 1929 and 1930. In 1931, however, the Carter Oil Co. initiated a pilot flood
in the shallow Bartlesville sand of Oklahoma. Soon, others followed; and all enjoyed favorable
results. In early 1936, waterflooding operations were extended to the shallow sands of the Fry
Pool in Brown County, Texas. However, results were marginal, and it wasn't until Magnolia
Petroleum Co. initiated the West Burkburnett flood in 1944 that outstanding results were
achieved. Operations soon followed in other states between 1944 and 1949.

During this expansion period, engineers became aware of the advantages of pressure control by
reinjecting produced water in natural-waterdrive fields. In 1936, the East Texas field was the site
of initial experiments involving reservoir-pressure control as a result of the disposal of produced
water in natural-waterdrive fields. Earlier analytical studies of the reservoir by Ralph J. Schilthuis
and William Hurst led to the conclusion that, as the reservoir pressure declined, salt water
contained in the aquifer of the Woodbine sand expanded and encroached into the oil reservoir.
The encroaching water sustained an equilibrium level of reservoir
pressure that was interdependent with the rate of production.3 After several years of observation,
the program was declared a success and was expanded to other fields. The field's natural
waterdrive resulted in a recovery factor that has, to date, exceeded 70%.

Gas Injection
Interest in gas injection continued during the days following the turn of the century. In gas
injection, compressors are used to force the air or natural gas through injection wells drilled for
that purpose or through old wells taken off production and used as key wells. Gas injection has
four objectives: to maintain or to restore formation pressure, to act as a drive mechanism, or to
place the gas in storage until it is needed.4

In August 1911, I.L. Dunn successfully demonstrated that repressuring a reservoir by injecting
gas could increase oil production. Dunn based his experiments on an idea he had gained in Ohio
in 1903 when gas, at a pressure of 45 psi, was forced into an oil well producing from a 500-ft
(152-m)-deep sand. According to Dunn, "After 10 days the gas pressure was released and the
well began to pump much oil, which continued until the gas had worked out again."

To demonstrate his technique, Dunn conducted a series of experiments in which 150,000 ft3 of
free air was compressed and forced into one well daily at a pressure of 40 psi. Within a week, the
production of surrounding wells increased, after which the compressed-air method was extended
to other parts of the property.5 As a rule, the application of air resulted in a three- to four-fold
increase in the production rate, and the use of air proved more economical than natural gas.

In 1927, Marland Oil Co.'s Seal Beach field in California was the site of the first use of higher
injection pressures. The higher pressure was necessary because of the hydrostatic pressure
exerted by high-head edge water. Pressures as high as 1,800 psi were required to force the gas
into the sand; however, after the gas was flowing into the formation, the pressure never exceeded
1,500 psi. A year later, 173 million ft3 of gas had been injected into the Bixby sand. As a result,
production increases as high as 50% were obtained in wells upstructure from the injection wells,
with little increase downstructure.

According to historians, annual U.S. production resulting from gas-injection projects reached a
peak in 1935, maintained a constant level until 1945, and then increased rapidly to 1952. Annual
production from gas-injection projects is estimated to have reached 212 million bbl in 1955.

Early Field Successes

Early gas- and water-injection projects were designed and implemented with empirical methods.
Fundamental scientific understanding of fluid flow began in the 1930s, and breakthrough
understanding of fluid displacement can be attributed to M.C. Leverett and his collaborators. This
understanding established the foundation for engineered design and prediction of waterflood
performance, including the method for layered reservoirs. It also set the stage for large-scale
applications in some of the biggest fields of the period.

Magnolia Petroleum Co.'s West Burkburnett field in Texas is an example of the success of a large
waterflood secondary-recovery operation. Using a five-spot water-injection program, the
company's flood recovered 9 million bbl of oil between 1944 and 1953, or 1.4 times that
recovered by primary methods.

In Illinois, an initial five-spot program by Adams Oil & Gas Co.-Felmont Corp. in 1943 in the
Patoka field, Marion County, resulted in a much more rapid response in the oil production rate
than expected. The field, which produced 2.8 million bbl of oil by primary production, produced an
additional 6.4 million bbl of waterflood oil by August 1960.

In the Bradford field of Pennsylvania, results from an air-injection project were quickly realized.
The daily injection of air at an average of 68,600 ft3/well at 300 psi increased per-well production
from 0.25 to 12 BOPD in less than two months. Annual production from the 22-well project
increased from 3,474 bbl in 1925 to 18,524 bbl in 1927, and total production from the field is
estimated to have increased by 25% from the air injection alone.

Enhancing Displacement Efficiency: Miscible-Gas Injection

The first use of miscible-gas injection by the petroleum industry occurred in the 1950s as a result
of a search for a miscibility process that would recover oil effectively during secondary and
tertiary production. The injection fluids used include liquefied petroleum gases (LPGs), such as
propane, high-pressure methane, methane enriched with hydrocarbons, and high-pressure
nitrogen and carbon dioxide (alone or followed by water). All of these are effective in displacing
trapped reservoir oil, but applications are dependent on the field and a variety of economic
considerations, such as the commercial marketability of these products individually.

The injection of products other than air or water into formations to encourage oil to flow to the
wellbore actually began as early as 1927. At that time, the Midwest Refining Co. injected surplus
liquefied-gas products into the secondary-gas-cap area of the Salt Creek First and Second Wall
Creek reservoirs. While they planned to improve gas-drive operations, they were not aware of the
enriched-gas-drive mechanism that was used later.

Block 31 Field
Started in 1949, the oldest active operation involving miscible-gas injection is Atlantic Richfield's
(Arco) Block 31 field in Crane County, Texas. It began as a miscible-hydrocarbon-gas injection
and is still in operation as a mixed hydrocarbon/nitrogen-injection project.

"The Block 31 field was discovered in 1947," states Ben Caudle, a former Arco employee and
now a professor of petroleum engineering at the U. of Texas, "and the light crude in the field had
a high shrinkage factor. Our forecasted recovery rate was determined to be 10% at best, so we
began looking for a method of getting the oil out in quantities that would make the economics
work."

Barney Wharton, at Arco's research center, had an idea. "Barney suggested that in order to
handle the shrinkage, we should inject readily available natural gas into the formation at high
pressures. The gas would then evaporate the light-ends liquids before we recovered the oil at the
surface," Caudle recalls.

The Texas Railroad Commission, Texas' oil and gas governing body, was contacted to obtain
approval for the new, untried recovery method for the reservoir. It approved the procedure and
also exempted the field from the mandatory shut-in period in effect at the time. "In those days, we
were on production allowables," says Caudle. "Wells couldn't be produced more than 9, 10,
maybe 11 days a month because of the abundance of available crude," Caudle says.

"Next, we conducted experiments in the laboratory to determine the most appropriate method of
applying our idea. Nobody was more surprised than we were when, halfway through our
experiment, it dawned on us that the injected gas was becoming miscible due to the high
pressure. The multiple contacts of the gas in the pore spaces drew the gas and oil closer
together until it formed a miscible slug that drove the oil ahead of it," explains Caudle. "We put our
method into operation and, as a result, Block 31 became the industry's first miscible-gas oil-
recovery operation." Later on, it was determined that nitrogen gas could be substituted for the
natural gas. As a result of the miscible-gas-injection project, the field's recovery factor has
reached approximately 70%. The field is still in operation.

During the 1970s, the U.S. industry switched largely to carbon dioxide gas that was available
near major west Texas fields. It achieves miscible displacement at low pressures, has a greater
viscosity under pressure than many other gases, and is less costly than LPGs or methane.6
Hydrocarbon gases continue to be used widely in Alaska and elsewhere in the world.

Chemical Flooding
During 1936–1937, alcohol was injected into oil wells to displace capillary-held water from the
near-wellbore region, where it offered the most severe restriction to oil flow. Later, when oil
production from the well was resumed, the operator anticipated he could remove the alcohol
surrounding the wellbore and realize an increase in oil productivity.7 His activities marked one of
the earliest uses of a chemical to displace oil in a formation.

Natural-drive fluids, like water and gas, leave a large supply of oil behind in the reservoir under
the best of conditions. Because they are immiscible with formation oil, the oil resists being
displaced from the rock pores. Also, these fluids have densities and mobilities that are
incompatible with oil. Chemical flooding adds chemicals to the water in order to overcome these
problems. Three types of chemical floods are used: polymer, micellar/polymer, and
alkaline floods.

Polymer flooding is a type of chemical flood in which long, chainlike, high-weight molecules are
used to increase the viscosity of injected water. This improves the mobility ratio of injected water
to reservoir oil, resulting in a more effective displacement process. Micellar and alkaline floods
are two methods used that result in improved microscopic oil displacement by reduction of
oil/water interfacial tensions. In alkaline floods, the injected material reacts with naturally
occurring acidic oil components to form a surfactant. Micellar/polymer flooding is a two-part
recovery technique in which a surfactant/water solution is injected to reduce oil/water interfacial
tensions, resulting in improved microscopic oil-displacement efficiency. This is followed by
polymer-thickened water to push the oil and surfactant slug toward the producing wells.
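The benefit of thickening the injected water can be quantified with the mobility ratio, M = (krw / mu_w) / (kro / mu_o), where M at or below 1 is generally considered favorable for displacement. The relative permeabilities and viscosities below are assumed illustrative values, not data from the source.

```python
def mobility_ratio(krw, mu_w, kro, mu_o):
    """End-point water/oil mobility ratio M = (krw / mu_w) / (kro / mu_o).

    krw, kro: end-point relative permeabilities to water and oil;
    mu_w, mu_o: viscosities (cp). M <= 1 is generally favorable.
    """
    return (krw / mu_w) / (kro / mu_o)

# hypothetical 10-cp oil displaced by plain vs. polymer-thickened water
m_water = mobility_ratio(krw=0.3, mu_w=0.5, kro=0.8, mu_o=10.0)     # 7.5, unfavorable
m_polymer = mobility_ratio(krw=0.3, mu_w=20.0, kro=0.8, mu_o=10.0)  # ~0.19, favorable
```

Raising the water viscosity forty-fold drops M by the same factor, which is why polymer slugs sweep the reservoir more uniformly than plain water.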

The popularity of chemical flooding reached a peak during the 1970s when research projects
abounded, and a large number of field tests were conducted in the 1980s. Chemical-flood field
tests reached a peak of 206 in 1986.8 However, the oil-price collapse that same year interrupted
the popularity of this recovery method when it presented operators with a fundamental dilemma
linked to economics. Since the cost of the materials is generally linked to the cost of petroleum, a
vicious cost spiral ensued as oil prices collapsed. Since the mid-1980s, oil prices have seen
some growth; however, this improvement has not been sufficient to reignite the interest seen in
the 1970s and 1980s. Proponents of chemical floods are seeking ways of breaking this economic
linkage by generating surfactants from nonhydrocarbon feedstocks.

Thermal Recovery: Steamflooding

Of the two main methods of thermal recovery, steam injection and in-situ combustion, the
injection of hot fluids, such as steam, into the reservoir is the oldest and most controllable
method. Flooding with heated water, steam, or superheated steam has been around almost as
long as conventional waterflooding. In fact, the idea for using heated fluid supplied from the
surface can be traced back to a proposal by B.W. Lindsly in 1928.9
To date, most thermal-recovery work has been accomplished with steam. It has found significant
application in many parts of the world, including the U.S., Venezuela, Canada, Germany, Russia,
China, and Indonesia (the site of the largest steamflood).

Hot-fluid-injection theory is simple; heated water or steam is generated on the surface and
introduced into the formation through injection wells. The heat serves to lower the oil viscosity in
the formation, allowing it to flow more easily to the producers. Steam is preferred because it is
much more efficient at delivering thermal energy to the reservoir because of the latent heat of
vaporization. Although steamflooding can be effective in both light and heavy oils, it is used
predominantly in heavy oils.
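To see why heat matters, consider how steeply oil viscosity falls with temperature. The exponential model and its coefficients below are purely illustrative assumptions for a heavy crude, not a published correlation:

```python
import math

# Toy viscosity model: viscosity falls exponentially as temperature rises.
# mu_ref, t_ref, and b are invented values for illustration only.
def oil_viscosity_cp(temp_f, mu_ref=2000.0, t_ref=100.0, b=0.015):
    """Oil viscosity (cp) at temperature temp_f (degrees Fahrenheit)."""
    return mu_ref * math.exp(-b * (temp_f - t_ref))

for t in (100, 300, 500):
    print(f"{t:>3} F -> {oil_viscosity_cp(t):8.1f} cp")
```

Under these assumed numbers, heating the reservoir by a few hundred degrees cuts viscosity by two to three orders of magnitude, which is why steam works so well on heavy oil.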

In-Situ Combustion
Purposeful underground combustion started in Russia around 1933 (unintended combustion had
occurred previously during some air-injection projects). This early project was conducted in a
pressure-depleted reservoir containing 36°API oil. Since then, in-situ combustion projects have
been implemented in many locations, including Romania, Canada, the U.S., and India.

In-situ combustion generates heat in a reservoir through the introduction of air into the reservoir,
after which a fire is ignited in the formation near an injection well. The fire and airflow move
simultaneously toward the production wells. This forward-combustion method uses the injected
air and vaporized formation water as the heat carrier (dry combustion) or may combine air and
water injection (wet combustion) to increase process efficiency. A seldom-used variant, reverse
combustion, allows the fire to move from the production well toward the injection well(s), with
the combustion front advancing counter to the flow of injected air.

The first known attempts to apply an in-situ combustion process in the U.S. occurred in 1952,
when Magnolia and Sinclair engineers, each group working independently only 300 miles apart in
Oklahoma, initiated movements of combustion fronts in pattern-type experiments after several
years of laboratory testing.10 Reports of these two experiments provided the impetus for
additional research in other laboratories. Later on in the 1950s, General Petroleum Corp. and
Magnolia Oil Co. generated a cooperative underground combustion-field-test effort supported by
10 other oil companies in the South Belridge field in California.

Simultaneously, thermal-recovery techniques were beginning to be applied in other areas of the
world. One of these areas was Venezuela.

"The use of steam (cyclic steaming and steamflooding) began in the oil fields of Venezuela in the
late 1950s and was in routine use in eastern and western Venezuelan oil fields by the mid-
1960s," says Tom Reid, a former Phillips Petroleum Co. engineer who now works for the U.S.
Dept. of Energy. "I took steam to the Morichal field in eastern Venezuela in 1964 because our
management was convinced that the economics of steaming high-sulfur, heavy oil, which had
been successfully substantiated by operators in California, could be used in Phillips' high-sulfur,
heavy-crude operations in Venezuela," he explains.

"During the 1960s, cyclic-steam, steam-drive, cyclic-hot-water and hot-water drive raised
production to over 100,000 B/D until the latter part of that decade, when the company cut
production back to 25,000 B/D to match the needs of its refinery in England that was processing
the heavy crude into asphalt for paving roads," he continues.

"During these steam operations, we learned a lot," Reid says. "We received a couple of surprises
when the first well was steamed. One of these surprises was the production of hydrogen that
occurred when the high-temperature steam reached the reservoir sands. We traced this produced
hydrogen in the offset producers, and it indicated the direction of the steam front. Also, the cyclic-
hot-water and hot-water drive we used produced excellent responses in these virgin reservoirs."

Reid also comments on the use of fireflooding in the Morichal field. "It wasn't successful," he
states candidly. "Counterflow combustion wasn't possible because of the occurrence of
spontaneous ignition, and conventional direct-drive fireflooding failed because of the low structure
(flatness) of the reservoirs. The nitrogen and carbon dioxide produced by the fireflood 'gassed
out' distant producers, thereby reducing the overall field production."

The Future
Because of the wide dissimilarity in rock and fluid properties, combined primary and secondary
reservoir recovery factors of 30 to 40% of original oil in place are considered good. This leaves 60
to 70% of the original oil in place as the target for augmented-recovery methods. Since the
injection of any fluid into a reservoir is expensive, either low-cost injectants or large increases in
recovery factor are needed to ensure good economics.
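The size of that target is easy to put in barrels. The 500-million-barrel original-oil-in-place figure below is invented for illustration; the 30 to 40% combined recovery range comes from the text above:

```python
# Back-of-the-envelope size of the augmented-recovery (EOR) target.

def remaining_oil(ooip_bbl, recovery_factor):
    """Barrels left in place after primary + secondary recovery."""
    return ooip_bbl * (1.0 - recovery_factor)

ooip = 500_000_000  # original oil in place, barrels (hypothetical field)
for rf in (0.30, 0.40):
    left = remaining_oil(ooip, rf)
    print(f"recovery {rf:.0%} -> {left:,.0f} bbl remain as the EOR target")
```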

Looking at the future of augmented recovery, it is readily apparent that water is the cheapest fluid
and that improvements in waterflood technology are still occurring as a result of better control of
fluids in the formation. Also, thermal techniques continue to offer distinct advantages in that the
elevated temperatures reduce viscosity and heat flow is advantageous to sweep efficiency. On
the other hand, it is also apparent that chemical injectants can be very expensive; therefore, their
application is more limited. Finally, there is a need for carbon dioxide sequestration to help reduce
the CO2 released to the atmosphere from the combustion of hydrocarbons. This could lead to a
double dividend from an environmental benefit and from improved oil production.

Based on these observations, it is important that reservoir engineers plan the augmented phases
of oil production at the same time they plan the primary phase. This will ensure that future
recovery factors of original oil in place are maximized during the life of the field.

Chapter I

The purpose of this book is to help potential investors understand the
fundamentals of drilling and production projects. Its scope includes the geology
of oil and gas deposits, where and how petroleum is found, the history of exploration
technology, mineral-rights leasing, the structure of the deals, and risk. In short, the
book provides the background information needed to properly evaluate risk before
directly investing money in a project.

The economies of all industrialized and emerging nations depend on petroleum
resources. Together, oil and natural gas allow us unprecedented mobility, help
generate our electricity, and help us produce fertilizer, synthetic clothing, food and
product packaging, and many other items that we use on a daily basis. Most oil is
now used to create fuel for automobiles and airplanes, accounting for almost 57%
of total world oil consumption. Over half the homes in the US use natural gas for
heating, while 16 percent of our electricity is generated by using natural gas to fire
generators. Each year more automobiles and planes that use these fuels, and more
power stations and houses that utilize natural gas, are placed into service. As a
result, demand for both fuels continues to increase. An abundant supply of oil and
natural gas remains vital in helping the industrialized countries of the world sustain
their prosperity and way of life.

Chart courtesy of Earth Science World and Exxon-Mobil


Distribution of Oil in the World

Oil has been discovered on every continent. Today petroleum is produced in more
than one hundred countries. In 2002, the ten largest oil producing countries were
in order: Saudi Arabia, Russia, United States, Iran, China, Mexico, Norway,
Venezuela, United Kingdom, and Canada. While the ten largest natural gas
producing countries were: Russia, United States, Canada, United Kingdom,
Algeria, Netherlands, Iran, Indonesia, Norway, and Uzbekistan. The United States
Department of Energy estimated that world oil production was twenty-four
billion barrels in 2002, produced by over eight hundred thirty thousand oil
wells, with more than five hundred twenty thousand of those wells located in the
United States. On average, an oil well in the United States produces only eleven
barrels a day, compared to the hundreds or thousands of barrels a day produced by
wells in some other countries such as Russia and Saudi Arabia.

Chart courtesy of Earth Science World and Exxon-Mobil

Chapter II
Oil and Natural Gas Formation

Oil and natural gas are the product of plant and animal remains or organic
material. For the most part these plants, corals, and animals lived in seas. Over
millions of years the remains settled on the ocean floor together with sediment
which washed down from exposed earth and rock. This sediment may have ranged
in size from molecules that dissolve in water to small boulders. In later periods
these layers of organic material and sediment were covered by more sediment
which as a result of time and pressure converted to layers of sedimentary rock. To
get a clearer picture of how this process occurred we need to understand some
geological history.

The earth is believed to have formed some 4.6 billion years ago from a cloud of
cosmic dust and ice. As the planet pulled itself together by gravity, compressing
matter ever more densely, it became molten. The heaviest components such as
nickel and iron sank to the center to form the planet's core, while the lighter
materials including silicon, aluminum, magnesium and other light elements
solidified into a thin rocky crust. Water vapor moved to the surface, where it
eventually condensed to form the oceans.

Over time the earth's crust has become thicker and more stable. Today the planet
can be viewed as a system of plates that fit together like a puzzle. But the pieces
of this puzzle slowly move and change shape.


As recently as the Triassic Period, 225 million years ago, the world's continents
were still joined in a single supercontinent called Pangaea. As the
sequence of maps below illustrates, Pangaea fragmented into several pieces, each
piece being part of a mobile plate of the earth's outer crust called the lithosphere.
These pieces later became earth's current continents. The time sequence shown
through the maps reveals how the continents arrived at their current positions.

Chart courtesy of the US Geological Survey

During this 225 million year period the continents and oceans experienced
additional changes. As the earth's plates moved against one another the forward
or leading edges either lifted up on top of the next plate or moved below it causing
edges to crumple and increase in height producing mountains we see today. As
plates separated, magma rose from the mantle and solidified in the rift, forming
mid-ocean ridges. The new crust, thinner than the continental crust, spreads out between
the plates. While the rate of plate movement is very slow (averaging perhaps the
speed of a growing fingernail), over the time spans we are considering, the results
are dramatic. The theory that explains these plate movements is called plate tectonics.


The two basic types of crust are oceanic and continental. While the oceanic crust
is thin, about 5 to 7 miles thick and composed of heavy igneous rock that formed
from magma flows, the continental crust is 10 to 30 miles thick. Because of these
differences, the continents tend to float, rising high above sea level, as in the
mountainous regions. These continental mountains were gradually worn down by
rain and the action of ice, and the freed particles of rock were carried to the sea
where they were deposited in thick sedimentary beds, cemented together by
minerals and the pressure of more sediment deposited above.

At various times (some spanning tens of millions of years) much of North America
was covered by seas. This occurred in warmer periods when the polar ice caps
were much smaller and consequently held less water. This phenomenon of
changing sea levels helps explain why oil and gas deposits are present far inland
from any existing ocean.

The right conditions for oil and gas

Since sediment and deceased sea organisms are heavier than water, they naturally
migrate towards lower areas, or basins, in the sea. These lower areas were created
by tectonic action between the plates and by valleys eroded in colder periods
before the rise in ocean levels submerged them.

As these ocean basins gradually filled with layers of sediment, the weight of the
newer layers increased on the layers below. This weight or pressure created
friction and heat and began the process of converting the organic material to oil
and gas. The story becomes more complicated because along with organic
material, salt water was invariably captured in the source rock. Under the weight
and pressure of subsequent sediment layers all three substances attempt to
migrate along a path. Since oil is lighter than water, and gas is lighter than both,
when a reservoir rock formation is found it is stratified with gas on top, oil
between, and water on the bottom.

Chart courtesy of Earth Science World and Exxon-Mobil

In certain places tectonic plate movement has caused the earth's crust to bunch
up, something like gathered cloth, creating folds or uplifts in rock strata. This
movement also resulted in earthquakes that caused faults or fractures in the
strata. These fractures and folds create the opportunity for oil and gas to move out
of their source rock towards the surface. If the oil and gas make it to the surface,
the gas is lost to the atmosphere while the oil ultimately evaporates. But if the
conditions are right, the hydrocarbons remain trapped under a layer of
impermeable rock in another sedimentary rock formation called a reservoir. In some
instances, oil and gas may be trapped under a layer of sediment deposited
down into a basin, later migrating up from the source rock to the reservoir rock.
Chart courtesy of Earth Science World and Exxon-Mobil


Generally, oil and gas are found in a geologic structure called a trap that prevents the
escape of the oil and gas. There are two general types of reservoir traps: structural and
stratigraphic. Structural traps are those formed by the deformation of the reservoir
formation, while stratigraphic traps are those resulting from an updip seal of porosity
and permeability.

Chart courtesy of Earth Science World and Exxon-Mobil

The anticline trap is formed by the folding of rocks into a dome. These anticlinal traps
contain petroleum that has migrated from a source below. Further upward migration of
hydrocarbons was prevented by an impenetrable layer of rock above the reservoir.
Fault traps are formed by the shearing and offsetting of rock strata. The escape of
petroleum from a fault trap is prevented by non-porous rocks that have moved into
position opposite the porous petroleum-bearing rock formation. Dome and plug traps
are porous formations on or around great plugs of salt or serpentine rock that have
pierced or lifted the overlying rock layers. Stratigraphic traps are caused either by a
nonporous formation sealing off the top edge of a reservoir or by a change in the
porosity and permeability of the bed itself (see pinchout above).
Permeability and Porosity of Rock Structures

Petroleum deposits are called reservoirs. These are trapped layers of sandstone,
limestone, or dolomite. Exploration and production companies are most interested in
reservoirs that have good permeability and porosity. As we have seen, both oil and gas
co-exist in their natural state with grains of sand, pebbles, rocks and boulders in a rock
layer. Porosity is a measure of the spaces within the rock layer compared to the total
volume of rock. Though both are porous, a sponge is much more porous than a brick.
And though both can hold water in their pores, the sponge has a much higher capacity
for holding liquids. Permeability is a measure of how well liquids and gases can move
through the rock and thus is a function of how well the pores within the rock are
connected to each other.
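Both quantities can be put into numbers. Porosity is pore volume divided by bulk volume; Darcy's law, in its original unit system (darcies, centimeters, centipoise, atmospheres), relates permeability to flow rate. The core-plug figures below are assumed for illustration:

```python
def porosity(pore_volume, bulk_volume):
    """Fraction of the rock's bulk volume that is open pore space."""
    return pore_volume / bulk_volume

def darcy_flow(k_darcy, area_cm2, dp_atm, mu_cp, length_cm):
    """Darcy's law: flow rate in cm^3/s through a rock sample."""
    return k_darcy * area_cm2 * dp_atm / (mu_cp * length_cm)

# Hypothetical core plug: 7 cm^3 of pore space in 35 cm^3 of rock
phi = porosity(7.0, 35.0)
q = darcy_flow(k_darcy=0.5, area_cm2=5.0,
               dp_atm=2.0, mu_cp=1.0, length_cm=10.0)

print(f"porosity = {phi:.0%}, flow rate = {q:.2f} cm^3/s")
```

Doubling the permeability or the pressure drop doubles the flow rate; doubling the fluid viscosity halves it, which previews why heavy oil is hard to produce.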

Porous areas are in blue

Chart courtesy of Earth Science World and Exxon-Mobil

Reservoir porosities are measured in percent, with the average reservoir ranging
from seven to forty percent. Permeability is measured in units named darcies and
varies throughout each reservoir, from millidarcies to over forty darcies.

Our accumulated knowledge about rock structure and formation processes is
helpful, if not essential, in determining where to drill for oil.

Chapter III
Technology and Petroleum Exploration

Throughout human history mankind has found petroleum seeping at the surface
and put it to use as pitch for canoes and fuel for lamps. But modern methods of
drilling for oil began at Titusville, Pennsylvania as a result of the determined
efforts of Colonel Edwin Drake.

Cable Tool Drilling

While petroleum oil was known prior to this, there was no appreciable market for
it. Yet, studies of crude oil showed it to be a good source of kerosene if enough
could be obtained. Drake's employers were seeking enough crude oil to establish
a new enterprise, providing kerosene for lamps.

With oil seeping at the surface as an indicator of more below, Drake decided to
drill for the oil. He used an old steam engine to power the drill. In 1857 and again in
1858 Drake searched for oil in and around Titusville. He had limited success,
extracting a maximum of only 10 barrels per day, which was not enough to make a
sustainable commercial yield. When attempts to dig huge shafts in the
ground failed due to water seepage, Drake decided to drill in the manner of salt
drillers using a cable-tool system.

In cable-tool drilling, the drill bit is suspended in the hole by a rope or cable. By
means of a powered walking beam operated by a steam engine, the cable and
attached bit are raised and then allowed to drop. This up and down motion is
repeated again and again. Each time the bit drops it hits the bottom of the hole and
pierces the rock. However, two features of the cable-tool method were
disadvantageous. First, the drilling had to be stopped often and the bit pulled up so
that cuttings of chipped rock could be removed. Second, the system could not drill
soft rock formations, because the splintered rock fragments tended to close back
around the bit and wedge it in the hole.

The Drake well was dug on an artificial island on the Oil Creek. It took some time
for the drillers to get through the layers of gravel. At 16 feet the sides of the hole
began to collapse. Those helping him began to despair. It was at this point that he
devised the idea of a drive pipe. This cast iron pipe consisted of ten foot long
joints. The pipe was driven down into the ground. At 32 feet they struck bedrock.
The drilling tools were then lowered through the pipe and steam was used to drill
through the bedrock. The going, however, was slow. Progress was made at the
rate of just three feet per day. On August 27th the drill bit reached a total depth
of 69.5 feet. At that point the bit hit a crevice. The men packed up for the day. The
next morning Drake's driller, a blacksmith named William Smith, looked into the
hole. He was surprised to see crude oil rising up the pipe. Drake was summoned
and the oil was brought to the surface with a hand pitcher pump. The oil was
collected in a bath tub. After Drake's breakthrough success, many flocked to
Pennsylvania to drill for oil and Drake became a fixture in history. The cable-tool
drilling method was in common use until the 1920's.


Rotary Drilling
The first rotary drilling rig was developed in France in the 1860's. However, it was
seldom used because it was erroneously believed that most petroleum lay beneath
hard-rock formations that could be easily drilled with cable-tool rigs. Then in the
1880's two brothers named Baker were using a rotary drill to locate water in the
soft rock formations of the Great Plains. They used a rotary drilling system that
employed a circulating fluid to remove the rock cuttings. Later the system was
successfully used in Corsicana, Texas where drillers searching for water
discovered oil in the unconsolidated soft rock structure.

Rotary drilling operates by pressing the teeth of the bit firmly against the rock and
turning, or rotating, it. Simultaneously, a fluid, usually a mixture of clay and
water called drilling mud, is forced out of special openings in the bit at high
velocity. This forces the mud and rock chips away from the drill bit and back up
the casing and finally out into a holding tank.
In 1899, Patillo Higgins, living near Beaumont, Texas, observed the flammability of
the gas springs on his property and concluded there must be oil under the
enormous hill on his land named Spindletop. He placed an advertisement in the
paper searching for a driller to find oil on his property. It was answered by Captain
Anthony Lucas, who made a deal with Higgins to drill. Lucas made a lease
agreement in 1899 with the Gladys City Company and a later agreement with
Higgins. Using a rotary system, Lucas drilled to 575 ft before running out of money.
After securing additional funding, Lucas continued drilling and on January 10,
1901, at a depth of 1,139 feet, what is now known as the Lucas Gusher blew oil
over 150 feet in the air at a rate of 100,000 barrels a day. It took nine days before
the well was brought under control. Spindletop was the largest gusher the world
had ever seen and catapulted Beaumont into one of the United States' largest
oil-fueled boomtowns. Beaumont's population of 10,000 tripled in three months and
eventually rose to 50,000. Speculation led land prices to increase rapidly. By the
end of 1902 over 600 companies had been formed (ExxonMobil and Texaco among
them) and 285 active wells were in operation.

In the rush to develop Spindletop, Howard Hughes, Sr. patented a two-cone rotary
rock drill bit that revolutionized drilling. It is unlikely that he actually invented
the bit, but his law training helped him understand that the patent was the most
important part of the financial life of any invention. The design has been improved
over the years but remains the most widely used system today.

Seismic surveys give petroleum explorers details on the structures and strata beneath the surface of the land.
Data is collected by creating and then recording vibrations and then depicting them on a seismogram. From
these the geologist can gain a dimensional view of the boundaries between rock layers.

While seismographs were in use as early as 1841, they then focused exclusively on measuring earthquakes.
Then, during World War I, Dr. L. Mintrop, a German scientist, invented a portable seismograph, which he set up
in three places facing the Allied lines. When an enemy artillery piece fired, he used the vibrational data to
calculate the gun's location so precisely that the Germans could wipe out the gun with one try.
Chart courtesy of Earth Science World and Exxon-Mobil

After the war, Mintrop reversed the process: by setting off an explosion at a known distance and measuring
the time of subsurface shock-wave reflections, he was able to estimate the depth of rock formations. After
proving his theories in the field, Mintrop formed Seismos, the first seismic exploration company. Seismos was
hired by the Gulf Production Company and quickly proved the effectiveness of the tool in locating likely oil
reservoir formations. Later improvements allowed 2D subsurface imaging in the 1960's and, still later in
the 1980's, 3D seismic imaging.

Well Logging
Sometime in the early 1920's, drillers began keeping well logs. These driller
logs recorded the depth, the kinds of rock and fluids encountered, and anything else
of interest that occurred while drilling the well. This information is useful when compared with
nearby wells that might be drilled later or wells that may have been drilled in the
past. More useful are core and cutting logs. This data comes from core samples
that are taken during drilling and by examination of the drilling chips that are
brought to the surface in the drilling mud. The core sample contains the most
information since it is a slender column of rock that shows the sequence of rock
layers as they appear in the earth. A special core bit is used to cut and bring up
the sample. Once the sample reaches the surface it is packaged and sent to a
laboratory for analysis. Core samples can provide a clear understanding of the
strata's porosity, permeability, lithology, fluid type and content, as well as
geological age. This information helps determine the oil-bearing potential of the
sampled beds.

As technology advanced, electric logging came into widespread use. An instrument
called a sonde is lowered into the borehole on a conductor line, or electric wireline.
The sonde measures and records electrical, radioactive, or acoustic properties of
the various drilled formations and transmits its information up the wire to recording
equipment at the surface.

A Spontaneous Potential (SP) log records the electrical currents that flow in rock
formations. Most minerals are non-conductors of electricity when dry. However,
some, like salt, are excellent conductors when dissolved in water. As drilling fluids
invade a permeable formation, spontaneous potential causes weak current to flow
from the un-invaded saltier rock into the invaded rock. The SP log can be visually
analyzed to understand formation bed boundaries and thickness, as well as the
relative permeability of formations.

Resistivity logging devices measure and record the resistance of a formation to
the flow of electricity. High saltwater saturation lowers resistivity, while oil and gas
raise resistivity, since hydrocarbons are poor conductors. Common resistivity
logs include the lateral focus log, the induction log, and the microresistivity log.
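Resistivity readings are commonly turned into a water-saturation estimate with Archie's equation, a standard petrophysical relation. The constants a = 1, m = 2, n = 2 and the log values below are typical textbook assumptions, not measurements from any well:

```python
def archie_water_saturation(r_w, r_t, phi, a=1.0, m=2.0, n=2.0):
    """Archie's equation: S_w = (a * R_w / (phi^m * R_t))^(1/n)."""
    return (a * r_w / (phi ** m * r_t)) ** (1.0 / n)

# r_w: formation-water resistivity, r_t: measured deep resistivity (ohm-m)
s_w = archie_water_saturation(r_w=0.05, r_t=20.0, phi=0.25)
print(f"water saturation = {s_w:.0%}, hydrocarbon saturation = {1 - s_w:.0%}")
```

A high measured resistivity against a low water resistivity yields a low water saturation, i.e., a pore space mostly filled with hydrocarbons.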

Radioactivity logging devices measure natural and induced radioactivity. Gamma
ray logs record the emissions of naturally radioactive elements in formation
sediments. Since these elements leach out of porous and permeable rock, a
gamma ray logging device can identify impermeable formations such as shale and
clay-filled sands. Another type of radioactive log is the neutron log, in which the
sonde bombards the rock around the wellbore with radiation. Readings can
provide useful information about water and hydrocarbon saturations, salt content,
rock types, and porosity.

Acoustic logging devices are also called sonic logs and operate on the principle
that sound travels better through dense rock than through more porous rock.
Correlating data provided by several different logging methods can
provide a clear picture of the strata of interest.
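One standard way to read porosity off a sonic log is the Wyllie time-average equation. The matrix and fluid transit times below are common textbook values for sandstone and mud filtrate; the log reading itself is invented:

```python
def wyllie_porosity(dt_log, dt_matrix=55.5, dt_fluid=189.0):
    """Porosity fraction from interval transit times (microseconds/ft):
    phi = (dt_log - dt_matrix) / (dt_fluid - dt_matrix)."""
    return (dt_log - dt_matrix) / (dt_fluid - dt_matrix)

print(f"reading of 82 us/ft -> porosity = {wyllie_porosity(82.0):.1%}")
```

Sound travels more slowly through pore fluid than through rock grains, so the slower the log reading, the higher the computed porosity.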

It's important to note that, due to the expense of logging a well and the overlapping
information provided, not all types of logs are run on every well being drilled.

Formation Test Data

As more wells are drilled and logged in a given field, it becomes easier to predict
and determine where the productive petroleum reservoir will extend and end.
However, with a new discovery, it is imperative to take some pressure readings to
help estimate the lateral extent of the reservoir. Pressure readings can be taken
through a DST (drill-stem test) or by wireline. Both involve isolating the potential
reservoir to recover a sample of fluids and take pressure readings. What is
recovered, together with the pressure data gained from the sample, helps determine
if a commercially viable reservoir has been found.

Offshore Drilling

By the 1930's petroleum exploration companies realized that oil and gas reservoirs
existed in shallow waters offshore. But the problem remained of how to drill when
the drilling rig must float and at the same time stand steady against heavy
wave action. The solution was to create a drilling platform with a system of legs or
supports that would anchor or hold the platform in place; eventually these types of
rigs became known as submersibles.

The earliest form of submersible rig was a posted barge. It consisted of a barge
with several steel posts attached. A deck was laid across the top of the posts, and
the drilling equipment was installed on the deck. Posted barges cannot be used in
waters exceeding 30 feet. Later improvements on this concept resulted in
ship-shaped barges and drill ships. While the ship-shaped barge must be towed
into place, the drill ship travels under its own power. In deep waters, drill ships and
ship-shaped barges are anchored much like an ocean-going boat, or may be held in
position by dynamic positioning, in which computer-controlled thrusters are used to
maintain the ship's position.
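Dynamic positioning can be sketched as a simple feedback loop: thrust proportional to the position error plus a damping term on velocity. The vessel mass, current force, and controller gains below are all invented for illustration, and real DP systems are far more elaborate:

```python
def hold_station(steps=200, dt=0.5):
    """One-dimensional station keeping with a proportional-derivative law."""
    pos, vel = 5.0, 0.0      # start 5 m off station, at rest
    mass = 1.0e6             # vessel mass, kg (assumed)
    drift_force = 2.0e4      # steady current push, N (assumed)
    kp, kd = 4.0e4, 4.0e5    # controller gains (assumed)
    for _ in range(steps):
        thrust = -kp * pos - kd * vel        # oppose offset and motion
        vel += (thrust + drift_force) / mass * dt
        pos += vel * dt
    return pos

print(f"offset after 100 s of control: {hold_station():.2f} m")
```

A proportional-only term leaves a small steady offset (drift_force / kp, here 0.5 m) against a constant current; practical controllers add an integral term to remove it.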

Another early drilling platform was the bottle-type submersible rig. These have
several steel cylinders, or bottles, that when flooded with water come to rest on the
ocean floor. When it comes time to move the rig, the water is pumped out and the
rig is moved by tugboats to the new location. Bottle-type rigs are usually designed
to operate in maximum water depths of 100 feet although some have been built
that can work in up to 175 feet of water.

Continuing to explore further offshore in deeper waters resulted in new
submersible designs. The jackup rig is capable of drilling in waters up to 350 feet
deep, and a few have been designed to operate in up to 600 feet. Jackups are
bottom-supported rigs that can be either column or truss supported. Columnar legs
are steel cylinders, while open truss legs resemble a derrick. Both types have
watertight hulls that can float on the surface of the water while being moved into
position.

Today the most common type of offshore rig is the steel-jacket platform. This
consists of the jacket, a tall vertical section manufactured from tubular steel. The
steel jacket is pinned to the ocean floor using driven piles. Additional sections of
tubular steel are placed on top of each other. Above the water level are quarters for
the drilling crew and the drilling rig. This system has been used to drill wells in up
to 1,000 feet of water.

There are other ocean drilling rig designs that are used in special situations
including the concrete gravity platforms used in the North Sea and the steel-
caisson platform used in the Cook Inlet of Alaska.

Directional Drilling

Directional drilling techniques began to be employed in the 1970's. Normally wells
are drilled vertically, but there are many occasions when it is helpful to be able to
drill at an angle. Directional wells are drilled straight to a predetermined level, then
are gradually curved. By changing the direction of the drill bit in small increments
of no more than 2 to 3 degrees at a time, it is possible to drill many wells into a
reservoir from a single offshore platform. Directional wells may also be deflected
from a shoreline to reach a reservoir under nearby water. And directional wells are
very useful in avoiding fault lines, which can cause hole problems, as well as in
instances where it is undesirable to set a rig in a given spot because of an
obstruction or for environmental reasons.
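The gradual curve has simple geometry: a constant build rate traces a circular arc. The 2.5-degrees-per-100-ft rate below sits in the 2-to-3-degree range mentioned above; the target inclination is an arbitrary example:

```python
import math

def curve_radius_ft(build_rate):
    """Radius (ft) of the arc drilled at build_rate degrees per 100 ft."""
    return (180.0 / math.pi) * 100.0 / build_rate

def curved_hole_ft(target_inclination, build_rate):
    """Measured depth of hole drilled while building from vertical
    to target_inclination degrees."""
    return target_inclination / build_rate * 100.0

rate = 2.5  # degrees per 100 ft (assumed)
print(f"curve radius: {curve_radius_ft(rate):,.0f} ft")
print(f"hole drilled to reach 60 degrees: {curved_hole_ft(60.0, rate):,.0f} ft")
```

The gentler the build rate, the larger the radius and the more hole that must be drilled before the well points at the target, which is why trajectory planning starts well above the reservoir.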

Directional well bits can be used to straighten a hole, deflect the hole from an
original dry well to intersect a reservoir, kill a wild well that is burning, or sidetrack
around a fish (an object that has become lodged in the hole and cannot be removed).

Several special tools are available to assist in directional drilling. The most
common involves the use of a bent sub and a downhole motor. A bent sub is a short
piece of pipe that is threaded on both ends and bent slightly in the middle. It is
installed in the drill stem between the bottommost drill collar and the downhole
motor. A downhole motor is driven by drilling mud, thus eliminating the need to
rotate the drill stem. Shaped like a piece of pipe, the downhole motor can have
turbine blades or it can have a spiral shaft that turns inside an elliptical opening in
the housing. In the case of the turbine tool, the force of the circulating mud inside
the tool turns the turbine blades and makes the tool turn.


Seismography and 3D Simulation

Beginning in the 1980's, 3D seismic data collection systems came on line. As a result, seismic readings can be gathered in three dimensions, allowing the construction of 3D simulations that paint more accurate pictures of potential reservoir formations.
Chart courtesy of Earth Science World and Exxon-Mobil

Chapter IV
Drilling Oil Wells
While the equipment used to drill and complete an oil well is usually quite simple, the engineering required to do the job properly can be highly complex. Whether the drilling rig is offshore or onshore, they all have the same basic structure and use the same equipment.

Anatomy of an Oil Rig

The purpose of a rotary oil rig is to drill a hole to a predetermined depth, hopefully passing to or through one or more oil and gas bearing reservoir rock formations in the process. Since rigs are expensive to manufacture, they need to be movable. In addition, since the rock chips created by the drilling bit must be removed, mud is pumped through the drill pipe to the bit and back up the annulus, the space between the drill pipe and the outer casing that is added as drilling proceeds. The mud is mixed, usually with water but sometimes with chemicals, in a chemical tank, then sucked from the mud pit and pumped via a standpipe and rotary hose to a swivel that is attached to a kelly, which is itself attached to the drill pipe. The returning mud and rock chips that reach the surface move by gravity down a return line to a shale shaker designed to separate the returning mud from the shale for reuse. The remaining rock chips travel down a shale slide to a reserve pit.
Chart courtesy of Earth Science World and Exxon-Mobil

The kelly is a special section of pipe that has flattened sides that are either square or hexagonal in shape. The kelly fits inside an opening called a kelly bushing. The kelly bushing in turn fits into a part of the rotary table called the master bushing. As the master bushing rotates, the kelly bushing rotates. The turning kelly rotates the drill stem and thus the bit. Since the kelly slides through the opening in the kelly bushing, the kelly can move down as the drilling progresses.

Power to rotate the drill stem comes from the rotary table, which is equipped with its master bushing and kelly bushing. When the kelly and kelly bushing are removed, the hole left in the master bushing accommodates slips that have teeth-like gripping elements called dies. These are placed around the drill pipe to keep it suspended in the hole when the kelly is disconnected and an additional section of drill pipe and/or casing is attached.

A recent innovation is the advent of the power swivel. This is a top drive system that eliminates the need for the kelly and rotary table. With a power swivel, drill pipe joints can be added three at a time instead of one at a time, vastly reducing drilling time.

Both the mud transfer pumps and the drill pipe require power to operate. Usually both are driven by two or more 500 hp to 1,000 hp diesel engines. Additional power and engines are required to supply electricity to the rig, since the rig typically drills 24 hours a day.

Well Bore Engineering

The engineering becomes more complex as the well is drilled toward its total depth. The deeper one drills, the more pressure is exerted on the lower strata. Drilling engineers maintain a density (weight) of mud in the hole in order to counter the natural pressures of fluids and gases that might otherwise release into the well hole. But at certain depths and conditions, it becomes impossible either to keep the mud from penetrating a formation or to keep fluids from releasing into the well hole. A control panel on the surface is linked to various parts of the rig to keep an eye on down-hole pressure, mud volume, weight on the drill bit and other aspects of the drilling operation. At some point it becomes necessary to pull the drill stem and bit from the hole, insert casing in the hole and fill the annulus (space) between the casing and the wall of the hole with cement.
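The mud-weight balance described above follows a standard oilfield rule of thumb: hydrostatic pressure in psi is roughly 0.052 times the mud density in pounds per gallon times the vertical depth in feet. A minimal sketch of that arithmetic (the function names are illustrative, not from any particular library):

```python
# Hydrostatic pressure of a mud column, a standard oilfield approximation:
#   pressure (psi) = 0.052 * mud weight (lb/gal) * true vertical depth (ft)
# The 0.052 factor converts pounds per gallon into psi per foot of depth.

def hydrostatic_pressure_psi(mud_weight_ppg: float, depth_ft: float) -> float:
    """Pressure exerted at depth by a column of drilling mud."""
    return 0.052 * mud_weight_ppg * depth_ft

def required_mud_weight_ppg(formation_pressure_psi: float, depth_ft: float) -> float:
    """Minimum mud density needed to balance a given formation pressure."""
    return formation_pressure_psi / (0.052 * depth_ft)

# Example: 10 lb/gal mud at 10,000 ft exerts about 5,200 psi of
# hydrostatic pressure at the bottom of the hole.
bottom_hole = hydrostatic_pressure_psi(10.0, 10_000)
```

If the formation pressure exceeds this hydrostatic pressure, formation fluids can flow into the hole; if the mud pressure is too high, mud is lost into the formation. These are the two failure modes the text describes.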

Anytime the drill stem and bit are removed, whether to case the well or retrieve a broken tool, the process is called "tripping out". The cement used to cement wells is not very different from ordinary construction cement. The cement is pumped into a special valve called a cementing head. As the cement arrives at the head, a plug called a bottom plug is released from the cementing head and precedes the cement slurry down the inside of the casing. This plug keeps the cement and the mud ahead of it from mixing. The plug travels downward until it reaches the float collar. At the collar, the plug stops, but continuing pump pressure breaks the seal in the top of the plug and allows the slurry to pass on. The slurry flows through the plug and starts up the annulus.


When the estimated amount of cement required has been pumped into the casing,
a top plug is inserted and water, usually salt water, is pumped in behind the top
plug. The top plug keeps the cement from being contaminated by the following
displacement water. When the top plug reaches the bottom, the pumps are
stopped and the cement is allowed to harden. Depending on well conditions, it may take from a few to 24 hours or more for the cement to harden. Drilling may then be resumed: the drill stem and a new, smaller drill bit that fits inside the casing must be tripped back into the hole. This process is called "tripping in."

Well Completion

Once a well has been drilled and logged, a decision must be made whether to complete the well or plug it. The vast majority of wells drilled in existing fields do produce petroleum.
However, examination of the pierced reservoir rock formation's porosity and permeability may indicate that the potential flow of oil and gas
from the well will be worth less than the cost to
complete the well. In these cases, the well is
plugged with concrete, sometimes in several
places, and the well is abandoned.

If, however, the well readings indicate that the well will be commercially productive, the well is completed: production casing is run down the hole and
cemented. Once the casing is in place, a tool
called a "perforating gun" is lowered into the
well-bore to blast holes through the casing,
cement and into the reservoir. These holes give the oil-bearing reservoir access to the production casing. Tubing
may then be lowered into the casing. A plug may
then be set below the perforations and a packer
set above the perforations as a barrier between
the production casing and the tubing. This
allows the earth's natural pressure to push hydrocarbons to the well-bore and to
the surface through the tubing unless a pumpjack is necessary to raise the fluids
to the surface.

Several steps are taken at this time to cut out excessive costs from the production
process. A large drilling rig will be replaced by a smaller, moveable completion rig.
Also, a completion team will use a swabbing method to force the reservoir to give
up fluids naturally. This natural flow rate will be measured and compared to other
wells in the area. If it is not up to par, then further measures will be taken to
increase the volume of production. These measures include chemically or
physically treating the reservoir to stimulate the flow of fluids. Acid treatment can
be used in a reservoir containing limestone to open pore spaces by dissolving
sections of limestone. Using a physical method, fluid containing small beads is
pumped into the earth under great pressure to crack open the reservoir. Then the
beads are used to keep the fractures open and allow the flow of fluids to increase.

When a satisfactory rate of production has been established, the well will be
tested to calculate the maximum production for the well over a period of twenty-
four hours. This is termed the open flow potential. This and other completion
information may be required by the state and will aid other geologists and analysts
scouting for oil and/or natural gas in the same area.

If a well contains more than one zone of interest, the operator will usually begin by
producing the lowest zone in the well bore first and then work their way up the
well bore as each zone becomes depleted. When a zone is completed and the well is near production, a multi-valve device called a "Christmas tree" will be connected at the surface. This device is placed at the top of the production casing and provides the connections through which the oil and gas flow.
process the recovered oil and gas is placed near the well to make sure that no
contaminants remain in the oil or gas. This equipment is used to make the oil or
gas ready for transportation.


Production

Production is the process of extracting petroleum from the underground reservoir and bringing it to the surface to be separated into gases and fluids that can be sold to refineries. Production begins at a high rate and decreases through time until the well is ultimately plugged and abandoned. This decrease is a natural result of the inevitable decline in the original pressure within the reservoir. The period of commercial production can span from three to fifty years, and the production rate can vary from thirty to over fifteen hundred barrels a day.

Gas expansion and/or water encroachment provide the principal natural energy for most petroleum reservoirs to produce. Both can operate as reserves
are taken from the reservoir. The reduction in pressure around the well bore as
hydrocarbons are extracted causes other hydrocarbons to move into their space.
This process continues until the energy is depleted and/or the well makes too
much water to be commercially productive.

Engineers take the past performance of a well and use it to project the future
reserves of a well. One way of predicting future production is to measure the
percentage of decline in production over a given period of time and use this rate of
decline to estimate future reserves.
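The decline-rate method described above can be sketched as a constant-percentage (exponential) decline model. This is only a first approximation, real engineering practice also uses hyperbolic and harmonic declines, and the numbers below are purely illustrative:

```python
# A minimal sketch of decline-curve estimation, assuming simple
# exponential decline (a constant percentage lost each year).

def project_production(current_rate_bpd: float,
                       annual_decline_fraction: float,
                       years: int) -> list[float]:
    """Project yearly production rates under a constant percentage decline."""
    rates = []
    rate = current_rate_bpd
    for _ in range(years):
        rates.append(rate)
        rate *= (1.0 - annual_decline_fraction)
    return rates

def remaining_reserves_bbl(current_rate_bpd: float,
                           annual_decline_fraction: float,
                           economic_limit_bpd: float) -> float:
    """Rough remaining reserves: sum yearly volumes until the rate
    falls below the economic limit at which the well is abandoned."""
    total = 0.0
    rate = current_rate_bpd
    while rate > economic_limit_bpd:
        total += rate * 365.0
        rate *= (1.0 - annual_decline_fraction)
    return total

# A well producing 1,000 bpd and declining 15 percent per year:
rates = project_production(1000.0, 0.15, 5)
```

The economic limit matters: a well is plugged not when it stops flowing but when its rate no longer covers operating costs.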

Reservoir Engineering

Reservoir engineering is the application of scientific principles to develop and maintain petroleum reservoirs to maximize economic benefit. For example,
careful spacing of wells over a reservoir can make a huge difference in the overall productivity of the reservoir. In 1904, Anthony Lucas, who had discovered
Spindletop, spoke about the decline in production. He claimed that "the field had
been poked with too many holes and that the cow was milked too hard." Oil
operators in that day gave little thought to reservoir depletion as they completed
wells. They produced a well at the highest rate they could without regard for well
spacing. As a result, in the 1920's the federal government questioned the wasteful
treatment of reservoirs and decided to initiate studies. These studies applied mathematics, geology, chemistry, fluid dynamics, and physics to the analysis of hydrocarbons within a reservoir. Reservoir engineering began as engineers implemented what the government learned.


Enhanced Oil Recovery

Once the natural flow of gas and oil ceases, the reservoir will have yielded only 10 to 25 percent of the total volume of the oil it contains. The rest is trapped in unconnected rock pockets or is thick enough to cling to the rock and refuse to migrate toward the well-bore.
Chart courtesy of Earth Science World and Exxon-Mobil

Petroleum engineers have developed a number of ways to coax this reluctant oil to
migrate. The most common approach is to drill adjacent wells and use them to inject
water into the reservoir to force the oil to move toward the production well. Another is
to inject gas into adjacent wells to slow the rate of production decline or to enhance
gravity drainage. Both approaches are referred to as secondary recovery processes.
Even after secondary recovery steps have been taken, more than 50 percent of the oil
in the reservoir will remain.

Enhanced oil recovery, also known as tertiary recovery, is a technique for increasing the amount of oil that can be extracted from an oil field after secondary recovery efforts are no longer effective. Using this method, ten to twenty percent more of the reservoir's original oil can be extracted beyond the amount retrieved using primary or secondary methods. Gas injection, thermal recovery, or chemical injection can be used to encourage additional flow, although gas injection is the most commonly used. Thermal recovery involves using heat to improve flow rates, and chemical injection, used rarely, lowers the surface tension in the reservoir. The cost of these types of recovery is usually high; however, when oil prices are at historically high levels, the economics of enhanced oil recovery methods rapidly improve.
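Putting the quoted recovery ranges together on a hypothetical reservoir (midpoints of the stated ranges, assumed figures only) shows how much oil remains in the rock even after all three stages:

```python
# Illustrative arithmetic for the recovery stages described above.
# All fractions are assumed midpoints of the quoted ranges, and the
# reservoir size is hypothetical, not data from any real field.

original_oil_in_place = 1_000_000  # barrels (hypothetical reservoir)

primary_fraction   = 0.175  # natural flow: 10 to 25 percent, midpoint
secondary_fraction = 0.25   # waterflood / gas injection (illustrative)
tertiary_fraction  = 0.15   # enhanced recovery: 10 to 20 percent, midpoint

recovered = original_oil_in_place * (primary_fraction
                                     + secondary_fraction
                                     + tertiary_fraction)
remaining = original_oil_in_place - recovered
# Even after all three stages, a large share of the oil stays in the rock.
```

With these assumed fractions, more than 40 percent of the original oil is never produced, which is why recovery technology keeps attracting investment when prices rise.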

Chapter VII
Future Trends in Energy Supplies
Oil supplies may not last forever. But oil will be available for use long beyond our
children's lifetimes. While no one can accurately predict oil's relative supply and
demand, even in the near future, several theorists have attempted to predict when and
how the world will run out of oil. The most notable theorist on this subject is M. King Hubbert.

Chart courtesy of Earth Science World and Exxon-Mobil

Hubbert's Peak

Hubbert claimed, beginning in the 1950's, that oil is a naturally occurring resource which will not last forever: production rises to a point that cannot be sustained and then declines toward total depletion. This, to most,
seems a fairly obvious assumption. Understanding his theories, one can conclude
that once Hubbert's Peak has been reached, half of the world's oil reserves will have
been depleted. This is also the case for a field with several wells. The sum of what has already been produced from a field and what can still potentially be produced from it is the field's ultimate potential; its peak in production falls at roughly the ultimate divided by two. The only way to precisely determine the ultimate for the world's oil reserves is to count them when they have been depleted.


The problem with determining the peak in world production is that one must use estimates of ultimate world production. Three main concepts are used to do this: cumulative production, or what is known; reserve production, or what is knowable; and undiscovered production, or what is predictable from past trends. From these concepts, we can estimate that the ultimate equals cumulative production plus reserve production plus undiscovered production. However, because each country analyzes its total oil and gas production differently, determining the ultimate world oil production is nearly impossible. Dr. C. J. Campbell took the time to understand how each country counts its oil and gas production and predicted that the ultimate world recovery is 1.8 trillion barrels of oil.
Chart courtesy of Earth Science World and Exxon-Mobil
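The estimate described above reduces to simple arithmetic. The figures below are illustrative placeholders in trillions of barrels, not Campbell's actual inputs:

```python
# Sketch of the ultimate-recovery arithmetic described in the text,
# using hypothetical numbers rather than real-world estimates.

def ultimate_recovery(cumulative: float, reserves: float,
                      undiscovered: float) -> float:
    """Ultimate = what is known + what is knowable + what is predictable."""
    return cumulative + reserves + undiscovered

def peak_point(ultimate: float) -> float:
    """Under Hubbert's symmetric-curve assumption, the production peak
    falls at roughly half of the ultimate recovery."""
    return ultimate / 2.0

# Hypothetical figures, in trillions of barrels:
ult = ultimate_recovery(cumulative=0.9, reserves=0.6, undiscovered=0.3)
half = peak_point(ult)
```

The weak link is the reserves and undiscovered terms: each country reports them differently, which is why the ultimate, and hence the peak, is so hard to pin down.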

Each country will have its peak. Hubbert predicted that the United States would reach its peak in the 1970's, and it appeared to do so. The country was depending on Texas for the bulk of its oil, and in 1971 the Texas Railroad Commission announced that Texas was at 100 percent production. The Organization of Petroleum Exporting Countries then took control of the supply of oil and gas for the world market and drove the Texas oil industry into the ground for a few years.

Hubbert suggested that it will take many years to completely deplete the world's
supply of oil. However, he also suggested that each country will have its peak and
then experience decreasing production from there. We are already seeing that some
Middle Eastern countries may be peaking today with their production approaching
100 percent of capacity. So, what does this mean for our industries? Cheap oil
production in the Middle East will likely be a thing of the past in the next ten to twenty years, sparking feverish interest in alternative fuels.

While there is no denying that there is a finite amount of oil and gas on this planet,
new technologies and recovery methods continue to increase the percentage of an
existing field's recoverable oil and gas. At the same time, while the rate of discovery of new fields declines, we are getting better at finding them, drilling deeper and in more isolated areas to do so. Who's to say there will not be yet more advances that once again tip the scales in favor of more supply than demand?

Alternative Energy Sources

A theorist by the name of Walter Youngquist exposed several myths surrounding alternative energy sources. The first myth states that alternative energy sources can readily replace oil. Youngquist asserts that this is simply not true. Every means of transportation, including airplanes and cars, is powered by carbon-based fuels. When carbon-based fuel is no longer used, every airplane and car will need to be replaced, at a cost of trillions of dollars. He states that 97 percent of the world's 600 million cars are powered by gasoline or diesel fuel. That means that when oil and gas are depleted, 582 million cars will have to be replaced by cars that run on alternative sources of energy.

He also felt it was a myth to assume that alternative sources of energy can simply be plugged into our present-day economic system and our lifestyles will continue as usual. If we were to make better use of our most renewable source of energy, the sun, our current lifestyles would be impeded: our modes of transportation would be limited to necessity travel rather than recreation. Things would no longer look the way they do; each building and mode of transportation would have huge solar panels stretched across it. He claims that changing forms of energy would drastically change our lifestyles and standards of living.

A third myth is that alternative energy sources are environmentally benign. This simply is not true. For a liquid version of coal to be used, huge mining endeavors would be required to supply the world's need for coal. If plants were used in the form of biomass to fuel the world, the value of soil would be compromised. Each type of alternative fuel brings its own environmental problems.

A final myth claims that using biomass types of fuel (plants) can be a lifesaver in a fuel crisis. This is not true, due to the huge costs of converting plants to liquid fuel and the detriment of displacing food crops with fuel crops. We can already see this effect with corn prices doubling in 2006, as more and more corn is converted to ethanol.

While Youngquist may be correct in calling each of these premises myths, the equation will surely change if the price of oil skyrockets. Change is usually not an instantaneous process, but rather a gradual evolution that occurs over time.

Types of Alternative Vehicular Fuel
One type of alternative fuel is biodiesel, a diesel fuel replacement made from vegetable oils or animal fats. It contains methyl esters, so most vehicles do not need engine changes to be able to use it. Claims are made that biodiesel reduces carbon dioxide emissions by up to eighty percent and also reduces the black smoke associated with fuel combustion by up to seventy-five percent. It is claimed that using biodiesel eliminates the smell of smoke emissions and instead gives off a smell of doughnuts or popcorn, depending on the type of oil used in the fuel. It is also biodegradable, assists with the lubrication of engines, and can be mixed with normal fuel. One downside of this type of fuel is the amount of vegetable oils and animal fats needed to supply enough fuel for all diesel vehicles in the country. Biodiesel can also soften rubber over time, causing problems with plastic and rubber hoses in engines.

The second type of alternative fuel is ethanol, a clear, colorless liquid formed by distilling starchy crops such as barley, wheat and corn. This is a gasoline-like biofuel. Bioethanol is a form of ethanol produced from trees and grasses and can be used in the same ways that traditional ethanol is used. Ethanol can be used without many additional costs since it can be produced nearly as cheaply as gasoline. Vehicles do not require extreme changes to be able to run on ethanol, and it is better for the environment than petroleum emissions. As with other biomass fuels, however, it is difficult to imagine how the additional quantity of crops needed to satisfy potential demand could be grown. Clearly farmers could not keep up with demand if ethanol were the sole basis for vehicular fuel.

The third type of alternative fuel is electricity, which can be used for certain types of battery-operated vehicles and also fits in the no-combustion category. The only waste products of this type of vehicle are heat and water. The primary problem with this type of fuel (mainly generated from fossil fuels) is the limited battery charge, meaning that cars can only travel a small distance before having to "plug in" and recharge their batteries. A vehicle of this type would have to be plugged into a source of electricity every night in order to run it during the day. Solar-powered electrical cells can fuel a car that has panels fixed on the vehicle to collect sunlight. This type of transportation has not been sold commercially to date; however, solar-powered machinery and homes will enjoy a greater share of the market as the price of oil and gas continues to increase. Most houses can be covered in solar panels that will provide all the electricity needed except for air conditioning.


An external combustion type of vehicle would be one fueled by steam, coal or organic waste. Coal can be used to produce gasoline or diesel, but the process is highly expensive and is not worthwhile for most countries due to a limited coal supply. Coal is dangerous to mine, and it is also a fossil fuel which will eventually run out just as petroleum will. In World War II, when the Germans were cut off from the oil and gas trade through embargos, they used the country's rich supply of coal to produce gasoline for their vehicles. Steam power is another version of the external combustion vehicle. The Stanley Steamer, for example, effectively used steam to power the car, just as steamboats did. However, steam powered automobiles require a burner and are slow; gasoline powered automobiles continue to be much more efficient.
In the end, everything comes down to supply and demand. If the price of oil and gas continues to rise, say doubling and doubling again, the economics of alternative fuels will improve, and mass production technology will be employed to make vehicles that can take advantage of them, perhaps as cheaply as the vehicles we buy today.

Latest update: August 2004

Natural gas, a colorless, odorless, gaseous hydrocarbon, may be stored in a number of different ways. It is most
commonly held in inventory underground under pressure in three types of facilities. These are: (1) depleted
reservoirs in oil and/or gas fields, (2) aquifers, and (3) salt cavern formations. (Natural gas is also stored in
liquid form in above-ground tanks. A discussion of liquefied natural gas (LNG) is beyond the scope of this report.
For more information about LNG, please see the EIA report, The Global Liquefied Natural Gas Market: Status &
Outlook.) Each storage type has its own physical characteristics (porosity, permeability, retention capability)
and economics (site preparation and maintenance costs, deliverability rates, and cycling capability), which
govern its suitability to particular applications. Two of the most important characteristics of an underground
storage reservoir are its capacity to hold natural gas for future use and the rate at which gas inventory can be
withdrawn, its deliverability rate (see Storage Measures, below, for key definitions).

Most existing gas storage in the United States is in depleted natural gas or oil fields that are close to
consumption centers. Conversion of a field from production to storage duty takes advantage of existing wells,
gathering systems, and pipeline connections. Depleted oil and gas reservoirs are the most commonly used
underground storage sites because of their wide availability.

In some areas, most notably the Midwestern United States, natural aquifers have been converted to gas
storage reservoirs. An aquifer is suitable for gas storage if the water bearing sedimentary rock formation is
overlaid with an impermeable cap rock. While the geology of aquifers is similar to depleted production fields,
their use in gas storage usually requires more base (cushion) gas and greater monitoring of withdrawal and
injection performance. Deliverability rates may be enhanced by the presence of an active water drive.

Salt caverns provide very high withdrawal and injection rates relative to their working gas capacity. Base gas
requirements are relatively low. The large majority of salt cavern storage facilities have been developed in salt
dome formations located in the Gulf Coast states. Salt caverns have also been leached from bedded salt
formations in Northeastern, Midwestern, and Southwestern states. Cavern construction is more costly than
depleted field conversions when measured on the basis of dollars per thousand cubic feet of working gas
capacity, but the ability to perform several withdrawal and injection cycles each year reduces the per-unit cost
of each thousand cubic feet of gas injected and withdrawn.
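The per-unit cost argument in that last sentence can be sketched as follows; all dollar figures and capacities are hypothetical:

```python
# Sketch of the per-unit cost comparison described above: a facility's
# annualized cost spread over total gas actually moved in a year.
# All figures are hypothetical, chosen only to illustrate the trade-off.

def cost_per_mcf(annual_cost_dollars: float,
                 working_gas_capacity_mcf: float,
                 cycles_per_year: float) -> float:
    """Cost per thousand cubic feet injected and withdrawn in a year."""
    return annual_cost_dollars / (working_gas_capacity_mcf * cycles_per_year)

# A depleted field is cheap per Mcf of capacity but cycles once a year;
# a salt cavern costs more per Mcf of capacity but can cycle several times.
depleted_field = cost_per_mcf(5_000_000, 10_000_000, 1)  # one cycle/year
salt_cavern = cost_per_mcf(4_000_000, 2_000_000, 6)      # six cycles/year
```

With these assumed numbers the cavern is more expensive per Mcf of capacity, yet cheaper per Mcf of gas actually moved, which is the point the text makes about multiple withdrawal and injection cycles.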

There have been efforts to use abandoned mines to store natural gas, with at least one such facility having
been in use in the United States in the past. Further, the potential for commercial use of hard-rock cavern
storage is currently undergoing testing. None are commercially operational as natural gas storage sites at the
present time.

Figure 1 is a stylized representation of the various types of underground storage facilities, while Figure 2 shows
the location of the nearly 400 active storage facilities in the Lower 48 States.
Glossary of Oil and Gas Terms

General Terms

These general terms will help the general public understand terminology used in the oil and gas business. They will also help a person new to the business, or an experienced person, better understand terms and forms when filing reports or documents with the Railroad Commission.

The Railroad Commission of Texas shall not be held liable for improper or incorrect use of the following
public information, which is provided via this Web site as a public service for informational purposes
only. PLEASE NOTE that the following is not a legal document and is not intended to be used as such.

Users of this information are responsible for checking the accuracy, completeness, currency and/or
suitability of this information themselves. The Railroad Commission of Texas makes no representation,
guarantees, or warranties as to the accuracy, completeness, currency, or suitability of this information.
The Railroad Commission of Texas specifically disclaims any and all warranties, representations or
endorsements, express or implied, with regard to this information, including, but not limited to, all
implied warranties of merchantability, fitness for a particular purpose, or noninfringement.

Abandoned Well - A well no longer in use, whether dry, inoperable or no longer productive, and for which the previous operator has intentionally relinquished its interest in the well.
Acre-Feet - Unit of volume; one acre of producing formation one foot thick. One acre-foot equals 7,758 barrels, 325,829 gallons or 43,560 cubic feet.

Adjacent Estuarine Zones - This term embraces the area inland from the coastline of
Texas and is comprised of the bays, inlets, and estuaries along the gulf coast.

Administrative Penalty - Statutory penalty imposed by the RRC for violation of a rule.

Allowable - Amount of oil or gas which a well, leasehold or field may produce per month
under proration orders of the RRC.

Associated Reservoir - Oil and gas reservoir with a gas cap. Gas production from
these reservoirs is generally restricted in order to preserve the gas cap energy thereby
increasing ultimate recovery.

Basic Sediment and Water (BS&W) - Impurities and water contained in the fluid produced by an oil well.

Bay Well - Any well under the jurisdiction of the Commission for which the surface location is either: (a) located in or on a lake, river, stream, canal, estuary, bayou, or other inland navigable waters of the state; or (b) located on state lands seaward of the mean high tide line of the Gulf of Mexico in water of a depth at mean tide of not more than 100 feet that is sheltered from the direct action of the open seas of the Gulf of Mexico.

Bbl, Barrel - In the energy industry, a barrel is 42 U.S. gallons measured at 60 degrees Fahrenheit.

Owners and Operators of Storage Facilities

The principal owners/operators of underground storage facilities are (1) interstate pipeline companies, (2) intrastate pipeline companies, (3) local distribution companies (LDCs), and (4) independent storage service providers. There are about 120 that currently operate the nearly 400 active underground storage facilities in the lower 48 states. In turn, these operating entities are owned by, or are subsidiaries of, fewer than 80 corporate entities. If a storage facility serves interstate commerce, it is subject to the jurisdiction of the Federal Energy Regulatory Commission (FERC); otherwise, it is state-regulated.

Owners/operators of storage facilities are not necessarily the owners of the gas held in storage. Indeed, most working gas held in storage facilities is held under lease with shippers, LDCs, or end users who own the gas. On the other hand, the type of entity that owns/operates the facility will determine to some extent how that facility's storage capacity is utilized.
For example, interstate pipeline companies rely heavily on underground storage to facilitate load balancing
and system supply management on their long haul transmission lines. FERC regulations allow interstate pipeline
companies to reserve some portion of their storage capacity for this purpose. Nonetheless, the bulk of their
storage capacity is leased to other industry participants. Intrastate pipeline companies also use storage
capacity and inventories for similar purposes, in addition to serving end-user customers.

In the past, LDCs have generally used underground storage exclusively to serve customer needs directly.
However, some LDCs have both recognized and been able to pursue the opportunities for additional revenues.
These LDCs, which tend to be the ones with large distribution systems and a number of storage facilities, have
been able to manage their facilities such that they can lease a portion of their storage capacity to third parties
(often marketers) while still fully meeting their obligations to serve core customers. (Of course, these
arrangements are subject to approval by the LDCs' respective state-level regulators.)

The deregulation of underground storage has combined with other factors, such as the growth in the number of gas-fired electricity generating plants, to place a premium on high-deliverability storage facilities. Many salt formation and other high-deliverability sites, both existing and under development, have been initiated by independent storage service providers, often smaller, more nimble and focused companies started by entrepreneurs who recognized the potential profitability of these specialized facilities. They are utilized almost exclusively to serve third-party customers who can most benefit from the characteristics of these facilities, such as marketers and electricity generators.

BCF - The abbreviation for billion cubic feet of gas. (see "cubic foot of gas")

BCF/D - The abbreviation for billion cubic feet of gas per day.

BHP - The abbreviation for bottom-hole pressure.
Blind Nipple - Nipple (pipe with threads at both ends) that can be blocked off from
formation pressure and give a false pressure measurement.

Storage Measures
There are several volumetric measures used to quantify the fundamental characteristics of an underground
storage facility and the gas contained within it. For some of these measures, it is important to distinguish
between the characteristic of a facility, such as its capacity, and the characteristic of the gas within the facility,
such as the actual inventory level. These measures are as follows:

Blowout Prevention - Casinghead equipment that prevents the uncontrolled flow of oil,
gas and mud from the well by closing around the drillpipe or sealing the hole.
BOPD - The abbreviation for barrels of oil per day.
Total gas storage capacity is the maximum volume of gas that can be stored in an underground storage
facility in accordance with its design, which comprises the physical characteristics of the reservoir, installed
equipment, and operating procedures particular to the site.

Bradenhead Completion - A head, screwed into the top of the casing, used to confine
gas in the well until release through an outlet into a pipeline.

Total gas in storage is the volume of storage in the underground facility at a particular time.

Bridge Plug - A downhole tool (composed primarily of slips, a plug mandrel, and a
rubber sealing element) that is run and set in casing to isolate a lower casing interval
while testing an upper section.

Base gas (or cushion gas) is the volume of gas intended as permanent inventory in a storage reservoir to
maintain adequate pressure and deliverability rates throughout the withdrawal season.
Brine Well - A well used for injecting fresh water into a geologic formation comprised
mainly of salt. The injected fresh water dissolves the salt and is pumped back to the
surface as a saturated sodium chloride brine solution used as a feedstock in
petrochemical refineries and in oil and gas well drilling and workover operations.

Working gas capacity refers to total gas storage capacity minus base gas.
Working gas is the volume of gas in the reservoir above the level of base gas. Working gas is available to the
marketplace.

BTU, British Thermal Unit(s) -- The amount of heat required to raise the temperature
of one pound of water one degree Fahrenheit under standard conditions of pressure and
temperature.
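The volumetric measures defined above are related by two simple identities (working gas capacity = total capacity minus base gas; working gas = total gas in storage minus base gas). A minimal sketch of those identities; the function names and the volumes are illustrative, not taken from any particular facility:

```python
# Relationships among the storage measures defined above (volumes in Bcf).
# All names and values are illustrative assumptions for this sketch.

def working_gas_capacity(total_capacity, base_gas):
    """Working gas capacity = total gas storage capacity minus base gas."""
    return total_capacity - base_gas

def working_gas(total_in_storage, base_gas):
    """Working gas = total gas in storage above the level of base gas."""
    return total_in_storage - base_gas

total_capacity = 10.0    # total gas storage capacity
base_gas = 4.0           # permanent (cushion) inventory
total_in_storage = 7.0   # total gas in storage at a particular time

print(working_gas_capacity(total_capacity, base_gas))  # prints 6.0
print(working_gas(total_in_storage, base_gas))         # prints 3.0
```

Note that by these identities a facility holding only its base gas has zero working gas, even though its total gas in storage is far from zero.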
Deliverability is most often expressed as a measure of the amount of gas that can be delivered (withdrawn)
from a storage facility on a daily basis. Also referred to as the deliverability rate, withdrawal rate, or withdrawal
capacity, deliverability is usually expressed in terms of millions of cubic feet per day (MMcf/day). Occasionally,
deliverability is expressed in terms of equivalent heat content of the gas withdrawn from the facility, most often
in dekatherms per day (a therm is 100,000 Btu, which is roughly equivalent to 100 cubic feet of natural gas; a
dekatherm is the equivalent of one thousand cubic feet (Mcf)). The deliverability of a given storage facility
is variable, and depends on factors such as the amount of gas in the reservoir at any particular time, the
pressure within the reservoir, compression capability available to the reservoir, the configuration and capabilities
of surface facilities associated with the reservoir, and other factors. In general, a facility's deliverability rate
varies directly with the total amount of gas in the reservoir: it is at its highest when the reservoir is most full
and declines as working gas is withdrawn.

Casing - Pipe cemented in the well to seal off formation fluids or the hole from caving in.

Casinghead Gas - Gas found naturally in oil and produced with the oil.

Casing-Tubing Annulus - Space between the surface casing and the production casing.

Christmas Tree - The system of pipes, valves, gauges and related equipment that is
located on the well at ground level and that controls the flow of gas and other petroleum
products produced from the well.

Injection capacity (or rate) is the complement of the deliverability or withdrawal rate; it is the amount of gas
that can be injected into a storage facility on a daily basis. As with deliverability, injection capacity is usually
expressed in MMcf/day, although dekatherms/day is also used. The injection capacity of a storage facility is also
variable, and is dependent on factors comparable to those that determine deliverability. By contrast, the
injection rate varies inversely with the total amount of gas in storage: it is at its lowest when the reservoir is
most full and increases as working gas is withdrawn.

Commission - The Railroad Commission of Texas.

Common Reservoir - A pool or accumulation of oil or gas that is produced in more than
one well.
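The heat-content conversion used for deliverability above (a therm is 100,000 Btu, roughly 100 cubic feet of gas; a dekatherm is roughly one Mcf) can be sketched as follows. The rate value is illustrative, and the 1 dekatherm per Mcf factor is the rough equivalence stated in the text, not an exact heat-content measurement:

```python
# Converting a withdrawal rate between MMcf/day and dekatherms/day,
# using the text's approximation that 1 dekatherm ~ 1 Mcf of natural gas.

MCF_PER_MMCF = 1_000   # 1 MMcf = 1,000 Mcf
DTH_PER_MCF = 1.0      # rough equivalence from the text (assumption)

def mmcf_per_day_to_dth_per_day(rate_mmcf):
    """Approximate a daily withdrawal rate in dekatherms per day."""
    return rate_mmcf * MCF_PER_MMCF * DTH_PER_MCF

# Illustrative facility withdrawing 500 MMcf/day:
print(mmcf_per_day_to_dth_per_day(500))  # prints 500000.0
```

The actual dekatherm figure reported for a real facility would depend on the measured heat content of its gas, which varies around the 1,000 Btu per cubic foot this approximation assumes.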

None of these measures for any given storage facility are fixed or absolute. The rates of injection and
withdrawal change as the level of gas varies within the facility. Additionally, in practice a storage facility may be
able to exceed certificated total capacity in some circumstances by exceeding certain operational parameters.
But the facility's total capacity can also vary, temporarily or permanently, as its defining parameters vary.
Further, the measures of base gas, working gas, and working gas capacity can also change from time to time.

Condensate (also called Lease Condensate) -- Liquid hydrocarbons separated from gas.

Crude Oil - Liquid petroleum as it comes out of the ground as distinguished from refined
oils manufactured out of it.
This occurs, for example, when a storage operator reclassifies one category of gas to the other, often as a result
of new wells, equipment, or operating practices (such a change generally requires approval by the appropriate
regulatory authority). Also, storage facilities can withdraw base gas for supply to market during times of
particularly heavy demand, although by definition, this gas is not intended for that use.

Underground Natural Gas Storage Data

The Energy Information Administration (EIA) collects a variety of data on the storage measures discussed
above, and publishes selected data on a weekly, monthly, and annual basis. For example, EIA uses Form EIA-
912, the Weekly Underground Natural Gas Storage Report, to collect data on working gas in storage at the
national and regional level from a sample of all underground natural gas storage operators. The sample is drawn from
the respondents to the EIA-191, Monthly Underground Gas Storage Report, which, among other things, collects
data on total capacity, base gas, working gas, injections, and withdrawals, by reservoir and storage facility,
from all underground natural gas storage operators. Data from the EIA-912 survey are tabulated and published
at regional (see Figure 2 for depiction of regions) and national levels on a weekly basis. Data derived from the
EIA-191 survey are published on a monthly basis in the Natural Gas Monthly. These data include tabulations of
base gas, total inventories, total storage capacity, injections, and withdrawals at state and regional levels.
Figure 3 below depicts some basic storage statistics compiled by EIA.

Cubic Foot of Gas or Standard Cubic Foot of Gas - As a unit of volume, 1,728 cubic
inches. As applied to water, 7.48 gallons. As applied to natural gas, the volume of gas
which, when saturated with water vapor at 60F and at a pressure of 30 inches of
mercury, occupies one cubic foot of volume.

Diagonal - Farthest distance between two points on a proration unit.

Discovery Date - Date assigned to discovery of a new field.

Discovery Well - The first oil or gas well drilled in a new field. The discovery well is the
well that is drilled to reveal the actual presence of a petroleum-bearing reservoir.
Subsequent wells are development wells.

Disposal Well - Well used for disposal of saltwater into an underground formation.

Dissolved Gas -- Commonly referred to as solution gas. (Refer to solution gas)

District Office - The Commission designated office for the geographic area in which the
property or act subject to regulation is located or arises.

Downstream -- This term is used in describing operations performed after those at a
point of reference.

Dry Gas - Natural gas that does not have a significant content of liquid hydrocarbons or
water vapor.

Dry Hole -- Any well that fails to discover oil or gas in paying quantities.

Electric Logs - Recording that indicates the well's rock formation characteristics by
different responses to electric current.

Enhanced Oil Recovery (EOR) - The use of any process for the displacement of oil
from the reservoir other than primary recovery.
Relative Measures of Gas Inventories
For some analytic purposes, there is interest in relative inventory status, expressed in terms of how nearly "full"
are the nation's storage facilities. There are different approaches to measure "percent full." The remainder of
this section discusses three ways of computing an estimate of how full are the nation's storage facilities,
resulting in three numbers, each of which has a different meaning or interpretation.

Ethane -- A colorless odorless gaseous hydrocarbon with the characteristics of the
predominant molecule, CH3CH3.

1. Total Gas in Storage Relative to Total Gas Storage Capacity
This measure of full is obtained by dividing the total amount of gas in the facility by its total gas storage
capacity. This measure is not often used, because by combining the values for base and working gas, this
statistic does not provide information about the potential gas available to the market.

Extraction Loss -- The reduction in volume and energy content of natural gas
from the removal of natural gas liquid constituents.

2. Working Gas Relative to Working Gas Capacity
Percent full for a given region based on working gas capacity is obtained by dividing the sum of estimates of
working gas volumes in storage by the total working gas capacity of the relevant storage facilities. This
measure is based on the physical capabilities of storage facilities to hold working gas. Although working gas
capacity is not measured directly, a reasonable estimate is total capacity minus base gas for the facility(ies).
Hence, working gas capacity will change as its components change.

Farmout - Assignment or partial assignment of an oil and gas lease from one lessee to
another lessee.

Field - Area of oil and gas production with at least one common reservoir for the entire
area.
3. Working Gas Relative to Historical Maximums
An approach popularized by the American Gas Association
(AGA) was to estimate storage "percent full" by comparing current inventory to the maximum amount of
gas held in storage during a given time period. The regional historical maximum used by AGA for its
weekly storage report (no longer published) was the sum of the largest volumes held in storage for each
facility in a region at any time during 1992-March 2000. The total U.S. historical maximum was the sum
of the three regional numbers. It is important to note that the respective historical maximum volumes for
storage facilities did not all necessarily occur during the same week, or even the same year. Thus, AGA's
regional and total U.S. historical maximums were non-coincident peak volumes. AGA's U.S. historical
maximum volume determined this way was 3,294 Bcf. This historical maximum volume is virtually
identical to EIA's historical maximum volume from the period 1992-2000.

Field Rules - Spacing and production rules for the common reservoir in an area.

Formation -- A separate layer of rock or group of intermingled beds.

The three measures vary with the level of working gas in storage. These relationships may be illustrated in the
following scenarios, which use actual EIA storage data from the recent past. The scenarios represent end-of-month
storage data for the traditional start and end of a heating season, and were chosen so as to accentuate the effect of
working gas levels (i.e., highest to begin the heating season, lowest at the end) on the different measures of percent
full. All figures are in billion cubic feet (Bcf):

Frac -- High pressure or explosive method of fracturing rock formations.
Fuel and Shrinkage - The difference between the amount of gas produced at the
wellhead and the gas that enters a pipeline that can be associated with energy to on
lease equipment or removal of solution gas. The losses include but are not limited
to those from the separation process and field use, as well as fuel, flare gas and plant
liquids extraction.

Gas Lift - The process of raising or lifting fluid from a well by means of gas injected
down the well through tubing or tubing casing annulus. Injected gas aerates the fluid to
make it exert less pressure than the formation pressure, consequently forcing the fluid
out of the wellbore.

Scenario                    Total Capacity   Base Gas   Working Gas   Working Gas Capacity   Total Gas in Storage
                                                                      (Total Capacity        (Base Gas plus
                                                                      minus Base Gas)        Working Gas)
A: as of October 31, 2003   8,265            4,327      3,130         3,938                  7,457
B: as of March 31, 2004     8,219            4,283      1,058         3,936                  5,341

Sources: October 31, 2003: Energy Information Administration (EIA), Natural Gas Monthly (DOE/EIA-0130),
December 2003, Table 14. March 31, 2004: Natural Gas Monthly (DOE/EIA-0130), May 2004, Table 14.

The various estimates of storage "percent full" for the two scenarios, according to the computation methods described
above, are as follows:

Gas-Oil Ratio (G.O.R.) - Number of cubic feet of gas produced per barrel of oil.
Gas Well - Any well:

(A) which produces natural gas not associated or blended with crude petroleum oil at the time of
production;

(B) which produces more than 100,000 cubic feet of natural gas for each barrel of crude petroleum oil
from the same producing horizon; or

(C) which produces natural gas from a formation or producing horizon productive of gas only
encountered in a wellbore through which crude petroleum oil also is produced through the inside of
another string of casing or tubing. A well which produces hydrocarbon liquids, a part of which is formed
by a condensation from a gas phase and a part of which is crude petroleum oil, shall be classified as a
gas well unless there is produced one barrel or more of crude petroleum oil per 100,000 cubic feet of
natural gas; and that the term "crude petroleum oil" shall not be construed to mean any liquid mixture
or portion thereof which is not in the liquid phase in the reservoir, removed from the reservoir in such
liquid phase, and obtained at the surface as such.

                            Method 1                 Method 2               Method 3
Percent Full Computed as:   Total Gas in Storage /   Working Gas /          Working Gas / AGA
                            Total Capacity           Working Gas Capacity   Historical Maximum (3,294 Bcf)
Scenario A                  90%                      79%                    95%
Scenario B                  65%                      27%                    32%

While the amount of working gas in storage in a given scenario is fixed, the "percent full" measures vary.
For example, in Scenario A, the Method 3 calculation indicates that working stocks are only 5
percent below AGA's historical non-coincident maximum, while the 79 percent from Method 2 indicates that 21
percent of working gas capacity is available if needed. On the other hand, Method 1 shows that only 10 percent
of total capacity is available. (In this scenario, Methods 1 and 3 yield percentages that are close in value, but
this result is conditional on the relatively high level of working gas, as one can see from the results in Scenario
B for these two methods.) In Scenario B, Method 2 indicates that gas equivalent to only 27 percent of available
working gas capacity remains in storage, while Method 1 shows that storage facilities as a whole are over half
"full." Yet the same amount of capacity is available from the two methods (Total Capacity minus the sum of
Base and Working Gas volumes).

Gatherer - Includes any pipeline, truck, motor vehicle, boat, barge, or person authorized
to gather or empty oil, gas, or geothermal resources from the lease production or lease
storage.
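The three "percent full" computations described above can be reproduced directly from the scenario table. A minimal sketch using the volumes (in Bcf) and the AGA historical maximum given in the text; the dictionary layout and label strings are illustrative:

```python
# Reproducing the three "percent full" methods for the two scenarios in the
# text (volumes in Bcf). Scenario values are taken from the table above.

AGA_HISTORICAL_MAX = 3294  # AGA non-coincident U.S. historical maximum, Bcf

scenarios = {
    "A (Oct 31, 2003)": {"total_capacity": 8265, "base_gas": 4327, "working_gas": 3130},
    "B (Mar 31, 2004)": {"total_capacity": 8219, "base_gas": 4283, "working_gas": 1058},
}

for name, s in scenarios.items():
    total_in_storage = s["base_gas"] + s["working_gas"]
    working_gas_capacity = s["total_capacity"] - s["base_gas"]
    m1 = total_in_storage / s["total_capacity"]   # Method 1: total gas / total capacity
    m2 = s["working_gas"] / working_gas_capacity  # Method 2: working gas / working gas capacity
    m3 = s["working_gas"] / AGA_HISTORICAL_MAX    # Method 3: working gas / historical maximum
    print(f"Scenario {name}: {m1:.0%} {m2:.0%} {m3:.0%}")

# Scenario A (Oct 31, 2003): 90% 79% 95%
# Scenario B (Mar 31, 2004): 65% 27% 32%
```

Running the sketch reproduces the 90/79/95 and 65/27/32 percent figures shown in the results table, which illustrates how the same inventory level yields very different "percent full" readings depending on the denominator chosen.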
Gathering Line -- A pipeline that transports oil or gas from a central point of production
to a transmission line or mainline.

It is important to note that a given measure for percent full for the total U.S., regardless of computation
method, may have limited usefulness in assessing the adequacy of inventories going into a heating season. This
is true because most storage facilities are located near, and are designed for the most part to serve, local
market areas. Storage facilities have therefore tended to cluster in a number of areas (Figure 2). There are
impediments to sharing inventories between or among regions. Working gas stocks in the Producing Region can
be directed to either of the other two regions, but sharing between the two Consuming regions is limited at
best. Thus, inventory status is more meaningful on a regional basis.

Geothermal Energy and Associated Resources -- (A) All products of geothermal
processes, embracing indigenous steam, hot water and hot brines, and geopressured
water; (B) Steam and other gases, hot water and hot brines resulting from water, gas, or
other fluids artificially introduced into formations; (C) Heat or other associated energy
found in geothermal formations; (D) Any by-product derived from them.

Shifts in Use Impact Inventories and Storage Activities

The natural gas industry has experienced significant changes in inventory management practices and storage
utilization over the past decade as a result of market restructuring. During that time, the operational
practices of many U.S. underground storage sites became much more market oriented. Seasonal factors are
less important now in the use of underground storage inventories. Many gas storage owners (marketers and
other third parties) are attempting to synchronize their buying and selling activities more effectively with market
needs while minimizing their business costs.

Geothermal Resource Well - A well drilled within the established limits of a designated
geothermal field.
(A) A geopressured geothermal well must be completed within a geopressured aquifer.

(B) A geopressured aquifer is a water-bearing zone with a pressure gradient in excess of 0.5 pounds
per square inch per foot and a temperature gradient in excess of 1.6° F per 100 feet of depth.

"Open Access" to Storage Capacity

Prior to 1994, interstate pipeline companies, which are subject to the jurisdiction of the FERC, owned all of the
gas flowing through their systems, including gas held in storage, and had exclusive control over the capacity
and utilization of their storage facilities. With the implementation of FERC Order 636, jurisdictional pipeline
companies were required to operate their storage facilities on an open-access basis. That is, the major portion
of working gas capacity (beyond what may be reserved by the pipeline/operator to maintain system integrity
and for load balancing) at each site must be made available for lease to third parties on a nondiscriminatory
basis.

Today, in addition to the interstate storage sites, many storage facilities owned/operated by large LDCs,
intrastate pipelines, and independent operators also operate on an open-access basis, especially those sites
affiliated with natural gas market centers. Open access has allowed storage to be used other than simply as
backup inventory or a supplemental seasonal supply source. For example, marketers and other third parties
may move gas into and out of storage (subject to the operational capabilities of the site or the tariff limitations)
as changes in price levels present arbitrage opportunities. Further, storage is used in conjunction with various
financial instruments (e.g. futures and options contracts, swaps, etc.) in ever more creative and complex ways
in an attempt to profit from market conditions. Reflecting this change in focus within the natural gas storage
industry during recent years, the largest growth in daily withdrawal capability has been from high deliverability
storage sites, which include salt cavern storage reservoirs as well as some depleted oil or gas reservoirs. These
facilities can cycle their inventories (i.e., completely withdraw and refill working gas, or vice versa) more rapidly
than other types of storage facilities.

Henry Hub -- Located in Erath, LA, the Henry Hub is a pipeline interchange and the
delivery point for the New York Mercantile Exchange (NYMEX) active natural gas
contracts. Natural gas from the Gulf of Mexico moves through the Henry Hub onto
interstate pipelines serving the Midwest and the Northeast.
Horizontal Drilling - A well which is not vertically drilled as defined in Statewide Rule

Hub -- A location where several pipelines interconnect. Also known as a market center.

Hydrocarbon -- An organic chemical compound of hydrogen and carbon, called
petroleum. The molecular structure of hydrocarbon compounds varies from the simplest,
methane (CH4), a constituent of natural gas, to the very heavy and very complex.
Octane, for example, a constituent of crude oil, is one of the heavier, more complex
compounds.

Independent Producer - An energy company, usually in the exploration and production
segment of the industry and generally with no marketing, transportation or refining
operations. A non-integrated producing company in the oil industry.

Injection Well - Well used to inject fluids (usually water) into a subsurface formation.

Kelly Bushing - Drilling rig equipment that fits inside the rotary table and is also used as
a reference point on logs to calculate depth.

Long String - Last string of casing set in the well, covering the productive zone.

Low Temperature Extraction (LTX) Unit - Condensation of gas into a liquid by lowering
its temperature.


Mcf -- One thousand cubic feet of natural gas measured at standard pressure and
temperature conditions (see "cubic foot of gas").

MMbo -- Million barrels of oil.

MMBtu -- One million British thermal units, 252,000 Kilocalories or 293 Kilowatt Hours.

MMcf - One million cubic feet.

Mud - Drilling fluid used to lubricate the drill string, line the walls of the well, flush
cuttings to the surface and create enough weight to prevent blowouts.

Multiple 49(b) - Rule governing gas well production from an oil reservoir gas cap.

Multiple Completion -- The completion of a single well into more than one producing
horizon. Such a well may produce simultaneously from the different horizons, or
alternatively from each.

Natural Gas or Gas - A naturally occurring mixture of hydrocarbon and non-
hydrocarbon gases in porous formations beneath the earth's surface, often in
association with petroleum. The principal constituent is methane.
Natural Gasoline - Gasoline manufactured from casinghead gas or from any natural gas.

NGL, Natural Gas Liquids - Hydrocarbon liquids extracted from natural gas.

Odorant - Any malodorous substance added to natural or LP-gas in small
concentrations for the purpose of making the presence of the gas detectable.

Offshore Well - (SWR 14) Any well subject to Commission jurisdiction for which the
surface location is on state lands in or on the Gulf of Mexico, that is not a bay well. (see
bay well)

Oil Well - Any well which produces one barrel or more crude petroleum oil to each
100,000 cubic feet of natural gas.

Open-flow Test - A test made to determine the volume of gas that will flow from a well
during a given time span with minimum restrictions.

Operator - A person, acting for himself or as an agent for others and designated to the
Commission as the one who has the primary responsibility for complying with its rules
and regulations in any and all acts subject to the jurisdiction of the Commission.

Overproduction - Production in excess of the well's monthly allowable.

PPM -- Parts per million.

Perforations - Holes through casing and cement into the productive formation.

Permeability - Ability of rock to transmit fluids through pore spaces.

Person - Any natural person, corporation, association, partnership, receiver, trustee,
guardian, executor, administrator, and a fiduciary or representative of any kind.

Pit - Hole dug out in the ground surface for temporary storage of fluids during drilling

Plug - Seal off formations to stop open communication of formation fluids within a well.

Pollution - Unauthorized contamination of surface or subsurface waters or land.

Pooled Unit - Unit created by combining separate mineral interests under the pooling
clause of lease or agreement.

Porosity - Percentage of the rock volume that can be occupied by oil, gas or water.

Proration Unit - Acreage allocated to a well for the purpose of determining an allowable.


PSIA - Pounds of pressure per square inch absolute, using absolute zero as a base.

PSIG - Pounds of pressure per square inch gauge, using atmospheric pressure as a
base.

PSI (pounds per square inch) - An English system of measure of the amount of
pressure on an area that is 1 inch square.

Processing Plant -- A plant to remove liquefiable hydrocarbons.

Product - Includes refined crude oil, crude tops, topped crude, processed crude
petroleum, residue from crude petroleum, cracking stock, uncracked fuel oil, fuel oil,
treated crude oil, residuum, casinghead gasoline, natural gas gasoline, gas oil, naphtha,
distillate, gasoline, kerosene, benzine, wash oil, waste oil, blended gasoline, lubricating
oil, blends or mixtures of petroleum and/or any and all liquid products or by-products
derived from crude petroleum oil or gas, whether hereinabove enumerated or not.

Propane -- A gaseous hydrocarbon with the characteristics of the predominant
molecule, C3H8.


Quad -- One quadrillion (1,000,000,000,000,000) British Thermal Units (BTU)

Rat Hole - Hole adjacent to well bore for storage of the kelly joint when not in use during
drilling operations.

Reclamation Plant - Plant that treats and reclaims waste oil.

Regular Permit - Permit to drill, plug back or deepen that does not require an exception.

Reserve Pit - Pit used to collect spent drilling fluids, cuttings and wash water during
drilling operations.

Reservoir - A porous, permeable sedimentary rock containing commercial quantities of
oil and gas.

Residue Gas - Gas remaining after processing and extraction of NGL.

Salt Water Disposal Well (SWD) - A well used for the purpose of injecting produced
water back into the ground.

Secondary Recovery - Hydrocarbons produced in one well bore by increasing reservoir
pressure with water injected into an adjacent well bore.

Solution Gas - Gas which is dissolved in oil in the reservoir under pressure.

Sour Gas - (SWR 79) Any natural gas containing more than 1-1/2 grains of hydrogen
sulfide per 100 cubic feet or more than 30 grains of total sulfur per 100 cubic feet, or gas
which in its natural state is found by the Commission to be unfit for use in generating
light or fuel for domestic purposes.

Spot Market -- Short term, non-regulated, arms length contract sales of natural gas,
crude oil, refined products, or liquid petroleum gas.

Spud Date - Date that drilling begins.

Storage Gas - A gas that is stored in an underground reservoir. (see underground
hydrocarbon storage)
Storage Tank - Tank for the accumulation of oil pending transferal to a pipeline
company or other purchaser.

Stratigraphic Cross Section - Series of electric logs that illustrate formation correlation
with one formation as a datum.

Structural Cross Section - Series of electric logs that illustrate subsurface structure by
placing the logs relative to sea level.

Substandard Acreage - Amount of acreage that is less than the standard amount for a
proration unit for a field.

Surface Casing - Outer casing cemented in the upper portion of the wellbore to protect
fresh water formations from contamination.

Sweet Gas - All natural gas except sour gas and casinghead gas.

Tank Battery - Tanks for oil storage before delivery to a refinery.

Texas Offshore - This term embraces the area in the Gulf of Mexico seaward of the
coast line of Texas comprised of:

(A) the three league area confirmed to the State of Texas by the Submerged Land Act (43 United
States Code 1301-1315); and

(B) the area seaward of such three league area owned by the United States.

Therm - A unit of heat equivalent to 100,000 British Thermal Units (Btu).

3-D, Three Dimensional Seismic --Advanced method for collecting, processing, and
interpreting seismic data in three dimensions. Three-dimensional seismic data are
collected from closely spaced lines over an area and the data are processed as a
volume. The advantages of three-dimensional seismic methodology include increased
resolution (through 3-d migration and deconvolution) as well as improved interpretational
tools and data displays (such as closely spaced parallel seismic lines and horizontal time slices).

Tidal Disposal - Discharge of produced water or other waste materials into tide
influenced waters.

Tolerance Acreage - Small amounts of acreage assigned to a proration unit after the
unit is already established.

Transportation or to Transport - The movement of any crude petroleum oil or products
of crude petroleum oil or the products of either from any receptacle in which any such
crude petroleum or products of crude petroleum oil or the products of either has been
stored to any other receptacle by any means or method whatsoever, including the
movement by any pipeline, railway, truck, motor vehicle, barge, boat, or railway tank car.
It is the purpose of this definition to include the movement or transportation of crude
petroleum oil and products of crude petroleum oil and the products of either by any
means whatsoever from any receptacle containing the same to any other receptacle
anywhere within or from the State of Texas, regardless of whether or not possession or
control or ownership change. (See gatherer in general terms section and transmission
line in section pertaining to oil and gas well production reporting)
Transporter or transporting agency - Includes any common carrier by pipeline,
railway, truck, motor vehicle, boat, or barge, and/or any person transporting oil or a
product by pipeline, railway, truck, motor vehicle, boat, or barge.

Tubing - String of pipe set inside the well casing, through which the oil or gas is
produced.

Underground Hydrocarbon Storage -- The use of sub-surface geologic formations for
storing liquid, liquefied or gaseous hydrocarbons, such as natural gasoline, propane and
natural gas.

Underproduction - Production that is less than the allowable assigned to a proration
unit.


Unitization - Joint operations to maximize recovery among separate operators within a
common reservoir.

Unitization Tract - Land subject to a unitization agreement.

Waterflood - Injecting water in one well causing oil not recovered by primary production
to migrate to an adjacent well.


Development Well -- A well drilled to a known producing formation in a previously discovered field.

Exploratory Well - Any well drilled for the purpose of securing geological or geophysical information to
be used in the exploration or development of oil, gas, geothermal, or other mineral resources, except
coal and uranium, and includes what is commonly referred to in the industry as "slim hole tests," "core
hole test," or "seismic holes".

Wildcat Well - A well drilled for the purpose of discovering a new field or reservoir.

Zone - Interval of subsurface formation.