
SPE International
Society of Petroleum Engineers

SPE 38441

Reservoir Simulation: Past, Present, and Future

J. W. Watts, SPE, Exxon Production Research Company

Copyright 1997, Society of Petroleum Engineers, Inc.


This paper was prepared for presentation at the 1997 SPE Reservoir Simulation Symposium held in Dallas, Texas, 8-11 June 1997.

This paper was solicited by the Program Committee of the 1997 SPE Reservoir Simulation Symposium for presentation as the Symposium's keynote address. Contents of the paper, as presented, have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author. The material, as presented, does not necessarily reflect any position of the Society of Petroleum Engineers, its officers, or members. Papers presented at SPE meetings are subject to publication review by Editorial Committees of the Society of Petroleum Engineers. Electronic reproduction, distribution, or storage of any part of this paper for commercial purposes without the written consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of where and by whom the paper was presented. Write Librarian, SPE, P.O. Box 833836, Richardson, TX 75083-3836, U.S.A., fax 01-972-952-9435.

Abstract

Reservoir simulation is a mature technology, and nearly all major reservoir development decisions are based in some way on simulation results. Despite this maturity, the technology is changing rapidly. It is important for both providers and users of reservoir simulation software to understand where this change is leading. This paper takes a long-term view of reservoir simulation, describing where it has been and where it is now. It closes with a prediction of what the reservoir simulation state of the art will be in 2007 and speculation regarding certain aspects of simulation in 2017.

Introduction

Today, input from reservoir simulation is used in nearly all major reservoir development decisions. This has come about in part through technology improvements that make it easier to simulate reservoirs on one hand and possible to simulate them more realistically on the other. Although reservoir simulation has come a long way from its beginnings in the 1950's, substantial further improvement is needed, and this is stimulating continual change in how simulation is performed.

Given that this change is occurring, both developers and users of simulation have an interest in understanding where it is leading. Obviously, developers of new simulation capabilities need this understanding in order to keep their products relevant and competitive. However, people who use simulation also need this understanding; how else can they be confident that the organizations that provide their simulators are keeping up with advancing technology and moving in the right direction?

In order to understand where we are going, it is helpful to know where we have been. Thus, this paper begins with a discussion of historical developments in reservoir simulation. Then it briefly describes the current state of the art in terms of how simulation is performed today. Finally, it closes with some general predictions.

The Past

This paper views the past as a progression, a series of advances. Sometimes this takes the form of a graph, such as CPU speed versus time. In other cases, it is a listing of events grouped by decade. In either case, the intent is to convey an impression of how rapidly development has occurred, with the hope that this will assist in estimating the future rate of change.
Computing. The earliest computers were little more than adding machines by
today's standards. Even the fastest computers available in the 1970's and early
1980's were slower and had less memory than today's PC's. Without substantial
progress in computing, progress in reservoir simulation would have been
meaningless.
Figure 1 gives the computing speed in millions of floating point operations per second for the three fastest CPU's of their times: the Control Data 6600 in 1970, the Cray 1S in 1982, and a single processor on a Cray T94 in 1996. The performance figures are taken from Dongarra's compilation of LINPACK results¹ and should be reasonably representative of reservoir simulation computations.
Figure 1 shows single processor performance. Today,
high-performance computing is achieved by using multiple
processors in parallel. The resulting performance varies widely
depending on problem size and the number of processors used. Use of, for
example, 32 processors should lead to speedup factors of 15-25 in large
reservoir models. As a result, although Figure 1 shows rate of performance
improvement to be slowing, if parallelization is factored in, it may actually have
accelerated somewhat. No attempt is made to incorporate parallel
performance into Figure 1 because of its wide variability.
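Those speedup figures imply a very high degree of parallelism in the underlying code. As a rough illustration (my own arithmetic, not from the paper), Amdahl's law,

\[ S(N) = \frac{1}{(1-f) + f/N}, \]

where f is the parallelizable fraction of the work and N is the number of processors, gives S(32) between 15 and 25 only if f is roughly 0.96 to 0.99; that is, only a few percent of the computation can remain serial.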
Consider Table 1, which compares the Cray 1S to today's Intel Pentium Pro. The 1S was the second version of Cray's first supercomputer. Not only was it the state of the art of its time, but it represented a tremendous advance over its contemporaries. As such, it had the standard supercomputer price, a little under $20 million. It comprised large, heavy pieces of equipment whose installation required extensive building and electrical modifications. The Pro, on the other hand, costs a few thousand dollars, can be purchased by mail order or from any computer store, and can be plugged into a simple wall outlet. The 200 MHz Pro is over twice as fast as the Cray (according to Dongarra's numbers) and is commonly available with up to four times as much memory.

Model Size. Over time, model size has grown with computing speed. Consider the maximum practical model size to be the largest (in terms of number of gridblocks) that could be used in routine simulation work. In 1960, given the computers available at the time, this maximum model size was probably about 200 gridblocks. By 1970, it had grown to about 2000 gridblocks. In 1983, it was 33,000; Exxon's first application of the MARS program was on a model of this size running on the Cray 1S². In 1997, it is roughly 500,000 gridblocks for simulation on a single processor. Figure 2 plots these values. This semi-log plot is a nearly straight line, indicating a fairly constant rate of growth in model size. The growth in model size is roughly consistent with the growth in computing speed shown in Figure 1.
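For a sense of the rate implied by Figure 2 (again my own arithmetic), growth from about 200 gridblocks in 1960 to about 500,000 in 1997 corresponds to an annual growth factor of

\[ r = \exp\!\left(\frac{\ln(500{,}000/200)}{37}\right) \approx 1.24, \]

that is, maximum practical model size has grown roughly 24 percent per year, doubling about every 3.3 years.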
Technical Advances. The first "reservoir simulator" modeled single-phase flow in one dimension. Today, the norm is three-phase flow in three dimensions with many gridblocks and complex fluid representation. Many advances in computational methods made this transition possible. Table 2 lists some of these by decade. In general, the advances in Table 2 are chosen because they are still in use today or they paved the way for methods used today. Also, they are methods that are used in many types of simulation, as opposed to techniques for modeling specific phenomena such as relative permeability hysteresis or flow in fractured reservoirs. The high points in the table are discussed below.

Reservoir simulation began in 1954 with the radial gas flow computations of Aronofsky and Jenkins³. The first work to receive notice outside the field of reservoir engineering was Peaceman and Rachford's development of the alternating-direction implicit (ADI) procedure⁴. More than 40 years later, ADI is still being used, though seldom in reservoir simulation.

Today, it is hard to appreciate how little was known at the beginning of the 1960's. Even concepts that today seem obvious, such as upstream weighting, were topics of debate. Yet, by the end of the decade, the first true general-purpose simulators had come into being.

One of the difficulties in the 1960's was solving the matrix equations. Today, a solver must be fast and easy to use. Then, there were problems that could not be solved at all. The first effective solver was SIP⁸. Though it was sometimes troublesome to use, it nearly always could be made to work. Another mathematical breakthrough of the time was the development of implicit-in-time methods, which made it practical to solve high flow velocity problems such as well coning⁹.

The 1970's saw publication of Stone's three-phase relative permeability models¹¹,¹². These continue to be widely used. Another innovation that has stood the test of time was the two-point upstream method¹⁵. Despite widespread efforts since then, only incremental improvements to it have been found. Also having tremendous lasting impact was the development of solvers that used approximate factorizations accelerated by orthogonalization and minimization. These made possible methods that were largely parameter-free. Finally, Peaceman's well correction²² for determining bottom-hole pressure from gridblock pressure and well rate is almost universally used today.

In the early 1980's, a significant advance occurred with the development of nested factorization²⁴. Both fast and very robust, nested factorization may be today's most widely used matrix solver method. Another major step occurred in compositional simulation. Although the first compositional simulators were developed in the late 1960's, their formulations included an inherent inconsistency that hurt their performance. The volume balance²⁵,²⁶ and Young-Stephenson²⁷ formulations solved this problem and, at the same time, made it practical to write a simulator that can efficiently solve both black-oil and compositional problems. Development of cornerpoint geometry³⁰ made it possible to use non-rectangular gridblocks, providing a capability that was useful in a variety of applications.

In the late 1980's, efforts shifted to issues related to geologic modeling, geostatistics, upscaling, flexible grids, and parallelization, and this emphasis has continued in the 1990's. The list of accomplishments for the 1990's is much shorter than those for the preceding three decades. This is, of course, partly because the decade is not finished yet, but it may also be partly because not enough time has elapsed to make lasting contributions recognizable. Perhaps it also stems in part from the diversion of effort from general computational work into the nuts and bolts of interactive software and parallelization. Finally, it also may be, unfortunately, a result of the reductions in research effort that began in the mid-1980's.
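For reference, the Peaceman correction mentioned above can be written (in consistent units, for a square gridblock of side Δx with isotropic permeability k and thickness h) as

\[ p_{wf} = p_{gridblock} - \frac{q\,\mu}{2\pi k h} \ln\!\frac{r_o}{r_w}, \qquad r_o \approx 0.2\,\Delta x, \]

where q is the well rate, μ the fluid viscosity, r_w the wellbore radius, and r_o the equivalent radius at which the computed gridblock pressure applies.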
Simulator Capabilities. The preceding section discusses technical advances. They would not be of interest had they not led to improvements in capabilities from the user's standpoint. Table 3 lists, again by decade, the state of the art capabilities that were available in simulators of the time. No attempt is made to cite the literature in this discussion. These capabilities tended to become available in several simulators at about the same time, and often there was no external publication describing them.

The computers of the 1950's permitted use of only the crudest models. Three-dimensional simulation was out of the question, and only small two-dimensional models were possible. Everything had to be kept simple, so single-phase flow or incompressible two-phase flow and very simple geometry were used.

The more powerful computers of the 1960's enabled more realistic description of the reservoir and its contents. Three-phase, black-oil fluid treatment became the norm. It became possible to run small three-dimensional models. Multiple wells were allowed for, they could be located where desired, and their rates could be varied with time. Gridblock sizes could vary, and gridblocks could be "keyed out," or eliminated from the system. By the end of the decade, implicit computational methods were available, permitting practical well coning modeling.

The 1970's seem to have been the enhanced oil recovery (EOR) decade. The first compositional simulators were developed. Computing limitations forced these to use large gridblocks, so numerical dispersion was a problem. Also, there were weaknesses in the initial formulations used. Nonetheless, they made it possible to begin to model phenomena that had up to then been ignored. Because of heavy interest in EOR, much effort went into modeling miscible and chemical recovery. Finally, advances in implicit computational methods provided the solution stability required to model thermal processes.

In the 1980's, it became no longer adequate for the user to tell the simulator where to put the wells and how much to produce from them. In prediction runs, the simulator became responsible for making such decisions on its own. This led to development of complex well management software that, among other things, determined when to work over existing wells or drill new ones, adjusted rates so as to adhere to constraints imposed by separator capacities, maintained computed reservoir pressures at desired values, and allocated produced gas to injection and gas lift. Other developments led to approaches for modeling fractured reservoirs. The normal restriction of topologically rectangular grid connectivity was lifted to allow taking into account shifting of layers across faults. Finally, work began on interactive data preparation and display and on graphical user interfaces in general.

The dominant efforts of the 1990's have been on various ways to make simulators easier to use. These have included continuation of the work on graphical user interfaces, widespread attempts at data integration, and development of automatic gridding packages. The use of numerical geologic models, often depicting fine-scale property variation generated statistically, has become widespread. There has been considerable work on methods for "upscaling" reservoir properties from these models' cells to the much larger gridblocks that simulators can use. Gridding flexibility has increased through use of local grid refinement and more complex grid geometries. A current thrust is integration of simulation with non-reservoir computational tools such as facilities models and economics packages.
Evolution of Software and Support. As simulators became more powerful and flexible, their user communities changed. As this happened, developers and supporters had to change the way they did their jobs. As a result, software development and support practices progressed through several stages. The progression is still under way, with one stage left to be accomplished.

1. Developer use. Initially, a very small team of developers devised computational methods, implemented them in a simulator, and applied this simulator themselves. The first applications were intended to test the simulator and its concepts and the next ones to demonstrate its usefulness on real problems. After these tests, the developers made production runs intended to benefit the corporation. Portability of this simulator was not an issue, because there was only one computer for it to run on.

2. Team use. By the next stage, the team had grown and developed internal specialization. Within it, one group developed the simulator and another applied it. The simulator required customization for each new application. Despite the group's specialization, developers were still frequently involved in applications because of the simulator's need for continual support. The simulator frequently failed, and it was essentially undocumented. The simulator ran on a single computer, and portability was still not an issue.

3. Local use. In this stage, the simulator was used by people who were located near to its developers but worked in other parts of the organization. The simulator still required customization for most new studies. Other support was required frequently but not continually. Failures still occurred, but less frequently than before. Documentation was adequate to permit use of the simulator with some assistance from developers. The simulator ran on several computers, but they were all of the same type.

4. Widespread use. In this stage, the simulator first began to receive use by people at remote locations. It seldom required customization, but it still needed occasional support. It rarely failed. Documentation was thorough, but training was required for effective use of the simulator. Most applications were performed by specialists in the use of the simulator. The simulator ran on a small variety of computers.

5. General use. By this stage, the simulator will have become widely used by people with varying expertise. It rarely will need customization, will require support only infrequently, and seldom will fail. Its documentation will be thorough and easily understood. Its user interfaces will be intuitive and standardized. Little training will be required to use the simulator. The user will need knowledge of reservoir engineering, but he will not need to be a simulation expert.
Each transition to a new stage changes what is required of the simulator and those who support it. Each transition has been more difficult than the ones that preceded it. A transition that was surprisingly difficult was from local use to
widespread use. In the local use stage, the simulator was being used frequently and for
the most part was functioning correctly. As a result, it seemed safe to send it to remote,
perhaps overseas, locations. Doing so led to many more problems than expected. The
remote computer differed slightly from the developer's, its operating system was at a
different release level, and it was configured differently. The remote users tried the simulator
once or twice, and it did not work. These users did not know the developers personally, and they
had no really convenient way to communicate with them. Typically the users abandoned
attempts to use the simulator, and the developer was slow to learn of this failure.
Interestingly, vendors were forced by their business to make the transition to widespread use before petroleum companies' in-house developers had to. The vendors have been dealing with the related problems since the early 1970's; in-house developers began addressing them later, with varying degrees of success. This transition is made difficult by the high standards in usability, functionality, and robustness that must be met. Developing software and documentation of the quality needed is very time-consuming and requires different sets of skills than those traditional to reservoir simulator development.
Vendor History and Role. Researchers working at major oil company laboratories developed the first reservoir simulators. It was not until the middle 1960's that vendors started to appear. Following is a brief history of certain of these vendors. Inclusion of a particular vendor in this discussion is not intended to imply endorsement, and exclusion is not intended to imply criticism. The firms discussed are those with which the author is familiar; for this reason North American-based firms are more likely to be included than those based elsewhere.

The first to market a reservoir simulator was D. R. McCord and Associates in 1966. Shortly thereafter, Core Laboratories also had a reservoir simulator that they used in consulting work.

The year 1968 saw the founding of INTERCOMP and Scientific Software Corporation, two companies that dominated the reservoir simulation market in the 1970's. Despite their market success, a number of new companies were formed in the 1970's and early 1980's. The first was INTERA, which was formed in 1973 by merging an INTERCOMP spinoff with the environmental firm ERA. INTERA initially focused on environmental work, but eventually got into reservoir simulation as discussed below. In 1977, the Computer Modelling Group was formed in Calgary with support from the province of Alberta. J. S. Nolen and Associates and Todd, Dietrich, and Chase were formed in 1979. In 1981, a reservoir simulation business was formed at the existing exploration-related firm Exploration Consultants Limited (ECL). Finally, SimTech was founded in 1982 and Reservoir Simulation Research Corporation in 1984.

The founding of these firms was followed by a series of mergers and acquisitions. These began in 1977 with the acquisition of INTERCOMP by Kaneb Services. In 1983, Kaneb Services sold INTERCOMP to Scientific Software Corporation, the two firms merging to form Scientific Software-Intercomp, or SSI.

In the middle 1980's, J. S. Nolen and Associates was acquired by what came to be Western Atlas. In 1996, Landmark Graphics acquired Western Atlas' reservoir simulation business. Since then, Landmark has in turn been acquired by and become a division of Halliburton.

In the middle 1980's, INTERA acquired ECL's reservoir simulation business. A few years later, INTERA split into two completely separate companies, one based in the United States and the other in Canada. The reservoir simulation business went with the Canadian INTERA. In 1995, Schlumberger's GeoQuest subsidiary acquired INTERA's reservoir simulation business. Shortly later, the United States-based INTERA, which had become part of Duke Engineering and Services, reentered the reservoir simulation business by acquiring SimTech.

Finally, in 1996, the Norwegian firm Smedvig acquired Reservoir Simulation Research Corporation.

As of the second half of the 1990's, vendor simulators are very widely used. As vendor products have improved, some petroleum companies have reduced or dropped altogether their in-house efforts. Those companies found that vendors could provide tools that were at least as good as those that they could develop themselves, and that they could lower their development and support costs by using the vendors' products. On the other hand, several large companies continue to find it in their best interests to develop proprietary tools, perhaps incorporating certain vendor components into their systems.
The Present

The following discussion briefly reviews today's common practices in reservoir simulation. It is intended to depict what people actually do when performing state of the art simulation. It focuses on large, field-scale models.

Geologic Modeling. The reservoir is defined using a numerical geologic model. Often the fine detail of this model is generated statistically. The model typically has on the order of one to ten million cells. It is based on geologists' beliefs about the reservoir's depositional environments and data from a variety of sources such as seismic, well logs, and cores. At times, several randomized versions of the model, called "realizations," are simulated in order to quantify uncertainty in the simulation results.

Gridding. The user defines major reservoir units to which the simulation grid must
conform. Within these units, the grid is generated automatically with some guiding
input from the user. The gridblocks are generally rectangular or nearly so. Local refinement
of the grid in key parts of the model is used. Special computations account for flow across
faults between gridblocks that are not topologically neighbors.

Upscaling. The typical simulation gridblock consolidates several tens to several


hundreds of geologic model cells. Effective permeabilities on the simulation grid are
determined from the geologic model permeabilities using either an appropriate
numerical average or flow-based upscaling. Sometimes finely-gridded segment models
are run prior to field-scale simulation, with the results of these runs being used to
determine pseudo relative permeabilities, thus effectively accomplishing multiphase
upscaling.
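As a concrete illustration of the simplest kind of single-phase upscaling, the sketch below (Python; illustrative only, with hypothetical data) computes the arithmetic, harmonic, and geometric averages of the cell permeabilities within one coarse block. The arithmetic and harmonic means are exact for flow parallel to and across perfect layers, respectively, and bound the true effective permeability; flow-based upscaling instead solves a local flow problem on the fine cells.

    import numpy as np

    def upscale_block(k_cells):
        # Simple averages of fine-cell permeabilities within one coarse block.
        k = np.asarray(k_cells, dtype=float).ravel()
        arithmetic = k.mean()                    # exact for flow along layers
        harmonic = k.size / np.sum(1.0 / k)      # exact for flow across layers
        geometric = np.exp(np.mean(np.log(k)))   # common estimate for random media
        return arithmetic, harmonic, geometric

    # Hypothetical block of 100 log-normally distributed cell permeabilities (md).
    rng = np.random.default_rng(0)
    print(upscale_block(rng.lognormal(mean=4.0, sigma=1.0, size=100)))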
Simulation. The following characterizes today's reservoir simulation based on certain aspects of how it is performed.

Type. The large majority of simulations are of conventional recovery processes, and most of these simulations use black-oil fluid treatment with variable bubblepoint pressure. However, use of compositional fluid representation is growing and has become significant. When a compositional representation is used, fluid behavior is normally computed using an equation of state.

Full compositional fluid representations are also used in simulating miscible recovery, particularly at laboratory scale. For field-scale processes, the norm is to use simpler approaches, particularly in modeling carbon dioxide flooding. Simulation of steam injection processes is common, but simulation of in situ combustion is rare, partly because the process itself is rarely used and partly because simulating it is difficult and expensive.
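To make the equation-of-state treatment mentioned under "Type" concrete, the following minimal sketch evaluates the Peng-Robinson compressibility factor for a single component; a compositional simulator does the multicomponent analogue of this, plus a phase-equilibrium flash, in every gridblock. The component constants are standard handbook values for methane; everything else about the snippet is illustrative.

    import numpy as np

    R = 8.314                                # J/(mol K)
    Tc, Pc, omega = 190.6, 4.599e6, 0.011    # methane critical constants

    def pr_z_factor(T, P):
        # Peng-Robinson parameters for one component at temperature T, pressure P.
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
        a = 0.45724 * R**2 * Tc**2 * alpha / Pc
        b = 0.07780 * R * Tc / Pc
        A = a * P / (R * T)**2
        B = b * P / (R * T)
        # Cubic in Z: Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
        coeffs = [1.0, -(1.0 - B), A - 3.0*B**2 - 2.0*B, -(A*B - B**2 - B**3)]
        roots = np.roots(coeffs)
        return roots[np.isreal(roots)].real.max()   # vapor root

    print(pr_z_factor(300.0, 10.0e6))   # methane at 300 K, 10 MPa: Z of about 0.83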
Most simulation effort probably goes into modeling entire reservoirs or substantial
portions of them, but in terms of number of models run there may be more simulations
of single wells. This is being driven largely by the growth in use of horizontal wells.
Model Size. Field-scale black-oil models of 100,000 gridblocks are common, and models several times larger are not unheard of. Compositional models tend to be smaller by perhaps a factor of three to five.
Typical vertical-well coning models have about a thousand to several thousand gridblocks,
while models of horizontal wells are a factor of ten larger.
Most steam process models are of a single well and have at most a few thousand
gridblocks.
Computing Platform. Most reservoir simulation is performed on Unix workstations and servers. Simulation performed on other platforms is probably split fairly evenly between PC's and supercomputers. Currently very little is done in parallel, but that is changing rapidly. Simulation pre- and post-processing are done predominantly on Unix workstations, but use of high-end Windows-based PC's is growing.
History Matching. History matching is still essentially a manual process, assisted
perhaps by software that organizes and displays results of history matches. Fully
automatic history matching is still not feasible. Some optimization of small sets of
parameters, such as regional permeability multipliers, is performed automatically.
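The sketch below illustrates the kind of small-scale automatic optimization referred to above: tuning a single regional permeability multiplier to minimize the mismatch with observed pressures. The run_simulation function is a hypothetical toy proxy standing in for a real simulator run, and all data are invented.

    import numpy as np
    from scipy.optimize import minimize_scalar

    observed = np.array([3500.0, 3410.0, 3290.0, 3180.0])   # psia, invented history

    def run_simulation(m):
        # Toy proxy for a simulator run: pressure declines faster as the
        # regional permeability multiplier m increases.
        t = np.arange(1, 5)
        return 3600.0 - 80.0 * m * t

    def mismatch(m):
        return np.sum((run_simulation(m) - observed) ** 2)   # least squares

    best = minimize_scalar(mismatch, bounds=(0.1, 10.0), method="bounded")
    print(best.x)   # multiplier giving the closest match, about 1.3 here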
Predictive Well Management. During prediction, the simulator in effect operates the reservoir. It drills new wells, recompletes existing ones, restricts rates to satisfy facility constraints, switches wells to low-pressure gathering systems, etc. It decides when to perform these actions based on rules and other information provided by the user. However, modeling of flow in the tubing and surface network is less sophisticated. Pressure drops are normally determined only in the wellbore, and these computations are typically made using tables. Most network computations are based entirely on material balance. In effect, physics is largely ignored after the fluid leaves the wellhead. This is changing, and some simulators can model these flows more rigorously.
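As a minimal illustration of one such rule, the sketch below cuts all well rates proportionally when their total exceeds a separator capacity. Real well management logic is far richer (priorities, workovers, drilling queues, gas allocation), and the well names and rates here are hypothetical.

    def apply_facility_limit(well_rates, capacity):
        # Scale back all rates by a common factor so the total honors capacity.
        total = sum(well_rates.values())
        if total <= capacity:
            return dict(well_rates)
        factor = capacity / total
        return {well: rate * factor for well, rate in well_rates.items()}

    rates = {"P-1": 4000.0, "P-2": 2500.0, "P-3": 1500.0}   # STB/D, hypothetical
    print(apply_facility_limit(rates, capacity=6000.0))     # each rate cut by 25%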
Data Import. Well data exist in databases. Historical production and injection
rates are exported from these databases in a form that can be read by the
simulator or converted by utilities into simulator input. Most of this process is
essentially automatic in the sense that it requires little intervention by the user. On the
other hand, the user must work with historical pressure data and data describing the
wells (such as completion intervals and tubing sizes) to get them into the form needed
by the simulator.
Fluid property data are also available in databases. Laboratory-measured results
vary from sample to sample, however, and the user must manually average them in
some way. Similar manual averaging is also needed for relative permeability and
capillary pressure data. In addition, pseudofunctions are often used. Use of
pseudofunctions is declining as finer grids become possible.
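One simple way to carry out the averaging described above is to interpolate each laboratory curve onto a common saturation grid and average pointwise, as in the sketch below. The sample data are hypothetical, and real workflows may normalize the curves by their saturation endpoints first.

    import numpy as np

    def average_curves(curves, n_points=20):
        # Interpolate each (Sw, krw) sample onto a shared grid, then average.
        s_lo = max(sw[0] for sw, _ in curves)    # use only the overlapping range
        s_hi = min(sw[-1] for sw, _ in curves)
        s = np.linspace(s_lo, s_hi, n_points)
        kr = np.mean([np.interp(s, sw, krw) for sw, krw in curves], axis=0)
        return s, kr

    sample1 = ([0.20, 0.40, 0.60, 0.80], [0.00, 0.05, 0.20, 0.45])
    sample2 = ([0.25, 0.45, 0.65, 0.80], [0.00, 0.08, 0.25, 0.50])
    print(average_curves([sample1, sample2], n_points=5))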

Use of Results. Reservoir simulation is most often used in making development decisions. These begin with initial development and continue with decisions such as those relating to in-fill drilling and implementation of enhanced oil recovery processes. Simulation is not commonly used in day-to-day operations, but it is used at times in making major operating decisions.

The Future

Before attempting to predict the future, it is good to be aware of the business drivers, the most important of which are discussed below. These lead into current technical objectives, also discussed below. These objectives then form the basis of several predictions.

Business Goals. In the view of petroleum company management, research and development related to reservoir simulation should address the four goals discussed below. The order in which they are listed does not indicate priority, and which is most important varies from situation to situation.
Compress calendar time. Time is money, and the more quickly a study can be
performed, the more valuable it is. Deadlines associated with possible property
acquisitions are getting tighter, and competitive pressures are growing. In addition, a
study that does not take long probably will not cost much.
Decrease cost. Petroleum companies continue to be under pressure to reduce costs.
Despite reservoir simulation's computing-intensive nature, its major cost comes
from the engineering labor it requires.
Reduce expertise required. In a sense, this goal is closely related to the
preceding one, since maintaining expertise is expensive. However, it also relates to
the widespread push to decentralize and flatten organizations. As this occurs, more
simulation is being done locally by generalist reservoir engineers, rather
than by centralized groups of specialists. These reservoir engineers cannot be
expected to be simulation experts.
Improve accuracy and realism. These are the traditional goals of simulation research
and development. New challenges have been created by the grids coming out of
geologic models and by more complex well geometries.
Technical Objectives and Efforts to Address Them. A good technical objective must meet two criteria: it must address one or more of the business goals, and it must be achievable with reasonable effort in the intended time frame. Substantial progress on the following objectives should be possible within 10 years, and achievement of them should be possible within 20 years. The following states each objective, expands upon it very briefly, and describes current work addressing it.

Require the user to provide only data describing the physical system, its fluids, and its history. Do not require him to specify computational data such as solver selection, iteration parameters, and timestep controls. Create the simulation grid for him with no intervention on his part.

Several organizations are working on gridding, and some of their work relates to automating the gridding process. Current linear equation solver work relates primarily to parallelization, but within this work there is an ongoing attempt to improve robustness and reduce the effort required by the user.

Automate history matching. Determine the best possible fit to existing historical data, consistent with the presumed depositional environment and other characteristics of the geologic model.

Recent work has led to an economical way to compute derivatives for use in gradient-based optimization methods. These will become more commonly applied, but much more is needed for truly automatic history matching.

Minimize the time and effort required to access information required to perform the study and to generate the results that are the study's objective. Integrate data when doing so is practical and provide efficient import capabilities when it is not. Likewise, integrate downstream computations when practical and provide efficient export capabilities when not.

The Petrotechnical Open Software Corporation (POSC) and its members are addressing the data integration issue. Recent acquisitions of reservoir simulation vendors by larger service organizations are leading to integration with computing tools both upstream and downstream of simulation.
Predictions. Following are predictions regarding the year 2007 state of the art of reservoir simulation and related technologies.

1. The dominant high-end computing platform for simulation calculations will be a Unix server comprising multiple nodes, with each node having a small number of processors. Where high performance is not needed, the dominant platform will be a multiprocessor PC running NT.
2. The dominant pre- and post-processing and visualization platform will be the top of the line version of whatever the PC has become.
3. Integration of reservoir simulators with the software and databases that provide their input data will be much better than it is today.
4. Reservoir simulation will be essentially automatic, given the geologic model, fluid data, and rock (i.e., relative permeability and capillary pressure) data.
5. The largest black-oil simulations will use at least 10 million gridblocks.
6. Most simulators will be based on unstructured grids.
7. Integrated reservoir-surface network calculations will be common.
8. Use of history matching tools will be widespread, but the history matching process will not be automatic.

The above predictions are made with some confidence. It seems reasonable to expect most of them to be generally correct, with perhaps one or two turning out wrong. Attempting to predict further into the future is more problematic. Nonetheless, it may be instructive to consider

what may happen by 2017. The following are offered as speculations intended to provoke thought rather than as serious predictions.

1. The largest black-oil simulations will use about 100 million gridblocks.
2. History matching will still not be automatic, but it will be closer.
3. A critical issue will be integration of reservoir simulation with earth modeling.
4. Computing and other developments will have occurred that cannot at all be foreseen at present. These will have a dramatic impact on reservoir simulation.

Acknowledgments

I thank the 1997 SPE Reservoir Simulation Symposium Program Committee for the invitation to present this paper and Exxon Production Research Company for permission to do so; K. Aziz, I. M. Cheshire, K. H. Coats, J. S. Nolen, A. Settari, M. R. Todd, and S. Trippet for providing information on vendor histories; and I. M. Cheshire, J. M. Hutfilz, L. K. Thomas, and S. J. Webb for reviewing a draft of this paper and making helpful comments. Any opinions expressed in the paper are mine and not necessarily those of the people acknowledged above or of Exxon Production Research Company.
References

1. Dongarra, J. J.: "Performance of Various Computers Using Standard Linear Equations Software," December 19, 1996 version, available as a PostScript file from http://www.netlib.org/benchmark/performance.ps.
2. Kendall, R. P., Morrell, G. O., Peaceman, D. W., Silliman, W. J., and Watts, J. W.: "Development of a Multiple Application Reservoir Simulator for Use on a Vector Computer," paper SPE 11483 presented at the SPE Middle East Oil Technical Conference, Manama, Bahrain, March 14-17, 1983.
3. Aronofsky, J. S. and Jenkins, R.: "A Simplified Analysis of Unsteady Radial Gas Flow," Trans., AIME 201 (1954) 149-154.
4. Peaceman, D. W. and Rachford, H. H.: "The Numerical Solution of Parabolic and Elliptic Differential Equations," Soc. Ind. Appl. Math. J. 3 (1955) 28-41.
5. Sheldon, J. W., Harris, C. D., and Bavly, D.: "A Method for Generalized Reservoir Behavior Simulation on Digital Computers," paper SPE 1521-G presented at the 35th Annual SPE Fall Meeting, Denver, Colorado, October 1960.
6. Stone, H. L. and Garder, A. O., Jr.: "Analysis of Gas-Cap or Dissolved-Gas Drive Reservoirs," Soc. Pet. Eng. J. 1 (June 1961) 92-104; Trans., AIME 222.
7. Lantz, R. B.: "Quantitative Evaluation of Numerical Diffusion (Truncation Error)," Soc. Pet. Eng. J. 11 (September 1971) 315-320; Trans., AIME 251.
8. Stone, H. L.: "Iterative Solution of Implicit Approximations of Multidimensional Partial Differential Equations," SIAM J. Numer. Anal. 5 (September 1968) 530-558.
9. MacDonald, R. C. and Coats, K. H.: "Methods for Numerical Simulation of Water and Gas Coning," Soc. Pet. Eng. J. 10 (December 1970) 425-436; Trans., AIME 249.
10. Watts, J. W.: "An Iterative Matrix Solution Method Suitable for Anisotropic Problems," Soc. Pet. Eng. J. 11 (March 1971) 47-51; Trans., AIME 251.
11. Stone, H. L.: "Probability Model for Estimating Three-Phase Relative Permeability," J. Pet. Tech. 22 (February 1970) 214-218; Trans., AIME 249.
12. Stone, H. L.: "Estimation of Three-Phase Relative Permeability and Residual Oil Data," J. Can. Pet. Tech. 12 (October-December 1973) 53-61.
13. Coats, K. H., Dempsey, J. R., and Henderson, J. H.: "The Use of Vertical Equilibrium in Two-Dimensional Simulation of Three-Dimensional Reservoir Performance," Soc. Pet. Eng. J. 11 (March 1971) 63-71; Trans., AIME 251.
14. Todd, M. R. and Longstaff, W. J.: "The Development, Testing, and Application of a Numerical Simulator for Predicting Miscible Flood Performance," J. Pet. Tech. 24 (July 1972) 874-882; Trans., AIME 253.
15. Todd, M. R., O'Dell, P. M., and Hirasaki, G. J.: "Methods for Increased Accuracy in Numerical Reservoir Simulators," Soc. Pet. Eng. J. 12 (December 1972) 515-530.
16. Price, H. S. and Coats, K. H.: "Direct Methods in Reservoir Simulation," Soc. Pet. Eng. J. 14 (June 1974) 295-308; Trans., AIME 257.
17. Spillette, A. G., Hillestad, J. H., and Stone, H. L.: "A High-Stability Sequential Solution Approach to Reservoir Simulation," paper SPE 4542 presented at the 48th Annual Fall Meeting of the Society of Petroleum Engineers of AIME, Las Vegas, Nevada, September 30-October 3, 1973.
18. Kyte, J. R. and Berry, D. W.: "New Pseudo Functions to Control Numerical Dispersion," Soc. Pet. Eng. J. 15 (August 1975) 269-276.
19. Thomas, L. K., Lumpkin, W. B., and Reheis, G. M.: "Reservoir Simulation of Variable Bubble-Point Problems," Soc. Pet. Eng. J. 16 (February 1976) 10-16; Trans., AIME 261.
20. Meijerink, J. A. and Van der Vorst, H. A.: "An Iterative Solution Method for Linear Systems of Which the Coefficient Matrix is a Symmetric M-Matrix," Mathematics of Computation 31 (January 1977) 148-162.
21. Vinsome, P. K. W.: "Orthomin, an Iterative Method for Solving Sparse Banded Sets of Simultaneous Linear Equations," paper SPE 5729 presented at the Fourth SPE Symposium on Reservoir Simulation, Los Angeles, California, February 19-20, 1976.
22. Peaceman, D. W.: "Interpretation of Well-Block Pressures in Numerical Reservoir Simulation," Soc. Pet. Eng. J. 18 (June 1978) 183-194; Trans., AIME 253.
23. Yanosik, J. L. and McCracken, T. A.: "A Nine-Point Finite Difference Reservoir Simulator for Realistic Prediction of Unfavorable Mobility Ratio Displacements," Soc. Pet. Eng. J. 19 (August 1979) 253-262; Trans., AIME 267.
24. Appleyard, J. R. and Cheshire, I. M.: "Nested Factorization," paper SPE 12264 presented at the Seventh SPE Symposium on Reservoir Simulation, San Francisco, California, November 16-18, 1983.
25. Acs, G., Doleschall, S., and Farkas, E.: "General Purpose Compositional Model," Soc. Pet. Eng. J. 25 (August 1985) 543-553.
26. Watts, J. W.: "A Compositional Formulation of the Pressure and Saturation Equations," SPERE 1 (May 1986) 243-252.


27. Young, L. C. and Stephenson, R. E.: "A Generalized Compositional Approach for Reservoir Simulation," Soc. Pet. Eng. J. 23 (October 1983) 727-742; Trans., AIME 275.
28. Thomas, G. W. and Thurnau, D. H.: "Reservoir Simulation Using an Adaptive Implicit Method," Soc. Pet. Eng. J. 23 (October 1983) 759-768.
29. Wallis, J. R., Kendall, R. P., and Little, T. E.: "Constrained Residual Acceleration of Conjugate Residual Methods," paper SPE 13536 presented at the Eighth SPE Reservoir Simulation Symposium, Dallas, Texas, February 10-13, 1985.
30. Ponting, D. K.: "Corner Point Geometry in Reservoir Simulation," Proc., First European Conference on the Mathematics of Oil Recovery (ECMOR I), Cambridge, 1989.
31. Heinemann, Z. E., Brand, C. W., Munka, M., and Chen, Y. M.: "Modeling Reservoir Geometry with Irregular Grids," SPERE 6 (1991) 225-232.
32. Palagi, C. L. and Aziz, K.: "Use of Voronoi Grid in Reservoir Simulation," SPE Advanced Technology Series 2 (April 1994) 69-77.

Table 1. Comparison of the Cray 1S and the Intel Pentium Pro 200

                            Cray 1S        Intel Pro 200
  Cost                      $17,000,000    $2,899
  Performance (Mflop/s)     27             61
  Memory (Mbytes)           32             64

Table 2. Significant Reservoir Simulation Technical Advances

  1950's: Aronofsky and Jenkins radial gas model³; alternating-direction implicit (ADI) procedure⁴; IMPES computational method⁵,⁶.
  1960's: Upstream weighting; understanding of numerical dispersion⁷; strongly implicit procedure (SIP)⁸; implicit computational methods⁹.
  1970's: Additive correction to line-successive overrelaxation¹⁰; Stone relative permeability models¹¹,¹²; vertical equilibrium concept¹³; Todd-Longstaff miscible displacement computation¹⁴; two-point upstream weighting¹⁵; D4 direct solution method¹⁶; total-velocity sequential implicit method¹⁷; pseudofunctions¹⁸; variable bubblepoint black-oil treatment¹⁹; conjugate gradients and ORTHOMIN²⁰,²¹; iterative methods based on approximate factorizations; Peaceman well correction²²; nine-point method for grid orientation effect²³.
  1980's: Code vectorization; nested factorization²⁴; volume balance formulation²⁵,²⁶; Young-Stephenson formulation²⁷; adaptive implicit method²⁸; constrained residuals²⁹; local grid refinement; cornerpoint geometry³⁰.
  1990's: Geostatistics; domain decomposition; code parallelization; upscaling; Voronoi grid³¹,³².

Table 3. State of the Art Reservoir Simulation Capabilities

  1950's: Two dimensions; two incompressible phases; simple geometry.
  1960's: Three dimensions; three phases; black-oil fluid model; multiple wells; realistic geometry; well coning.
  1970's: Compositional; miscible; chemical; thermal.
  1980's: Complex well management; fractured reservoirs; special gridding at faults; graphical user interfaces.
  1990's: Improved ease of use; geologic models and upscaling; local grid refinement; complex geometry; integration with non-reservoir computations.
[Figure 1. State of the Art CPU Performance. (Log-scale plot of Mflop/s versus year, 1970-2000; graphic not reproduced.)]

[Figure 2. Maximum Practical Model Size. (Semi-log plot of gridblocks, 100 to 1,000,000, versus year, 1960-2000; graphic not reproduced.)]
