
Distinguished Author Series

Megacell Reservoir Simulation


Ali H. Dogru, SPE, Saudi Arabian Oil Co.

Copyright 2000 Society of Petroleum Engineers. This is paper SPE 57907. Distinguished Author Series articles are general, descriptive representations that summarize the state of the art in an area of technology by describing recent developments for readers who are not specialists in the topics discussed. Written by individuals recognized as experts in the area, these articles provide key references to more definitive work and present specific details only to illustrate the technology. Purpose: to inform the general readership of recent advances in various areas of petroleum engineering.

Summary
Reservoir simulation has moved into a new era with the rapid advancements in parallel-computer hardware and software technologies. In the past decade, the total number of gridblocks used in a typical reservoir study increased from thousands to millions. Mega (million)-cell simulation capability allows accurate representation of reservoirs because of its ability to include detailed information provided by logs, 3D seismic, horizontal wells, and stratigraphic studies. One of the main advantages of megacell simulation is that it minimizes or eliminates upscaling effects, making more-accurate reservoir-performance predictions possible. Furthermore, the speed of parallel computers significantly reduces the manpower-intensive history-matching process. Megacell simulators typically are developed on parallel computers. With the current low-cost parallel computers, a million-cell simulation run covering 20 years of production history takes less than 1 hour. State-of-the-art computer graphics provide an effective means of visualizing large data sets covering million-cell descriptions of a reservoir and its performance. This paper presents an overview of megacell simulation technologies in the oil industry and academia.

Introduction
The development of parallel computers allowed faster reservoir simulation, which places heavy demands on number crunching and data storage. Parallel computers are classified in two categories: shared- and distributed-memory computers. Shared-memory computers, such as Cray vector supercomputers, are composed of multiple central processing units (CPU's) sharing a common memory. Each CPU of this type of machine is usually very fast and uses vector processing. In contrast, in distributed-memory machines, each CPU has its own memory and communicates with the other nodes (CPU's) through a high-speed network. The number of CPU's in shared-memory machines cannot be increased beyond a specific number (usually 4, 8, or 16), but distributed-memory machines have no such limitations. Distributed-memory machines [sometimes called massively parallel processors (MPP's)] can be made up of hundreds or thousands of CPU's. Each CPU can be very low cost (the cost of a PC); however, network communication and the programming language become major challenges in this type of architecture. Both major and small hardware companies manufacture massively parallel computers. The collection (cluster) of workstations and, more recently, the collection of PC's with a high-speed switch box provide both significant cost savings and attractive performance. Ref. 1 gives a comprehensive description of parallel computers and the associated programming languages, and Abate et al.* discuss the latest efforts and provide an example of a reservoir simulator running on cost-effective PC clusters.

*J. Abate, P. Wang, and K. Sepehrnoori: "Parallel Compositional Reservoir Simulation on a Cluster of PCs," Dept. of Petroleum Engineering, U. of Texas, Austin, Texas (December 1999).

Early Oil Industry Efforts
As the developments described were taking place in the computer industry, researchers in the oil industry began investigating the benefits of parallelization in reservoir simulation. The research effort on parallel reservoir simulation during the 1980's is widely discussed in the literature.

Shared Memory. Refs. 2 through 4 present early work on parallelization of reservoir simulators for shared-memory/vector computers. The results presented in these works demonstrate the potential benefits of parallel processing.

Distributed Memory. Wheeler5 developed a black-oil simulator on a hypercube, Killough and Bhogeswara6 presented a compositional simulator on an Intel iPSC/860, and Rutledge et al.7 constructed a black-oil simulator for the CM-2 and showed that reservoir models with more than 2 million gridblocks could be run on this type of machine with 65,536 processors.

The mid-1990's brought further distributed-memory publications. Kaarstad et al.8 presented a 2D oil/water simulator for a 16,384-processor MasPar MP-2. They showed that a model problem with 1 million gridblocks could be solved in a few minutes. Rame and Delshad9 parallelized a chemical-flooding code and tested scalability on a variety of systems.

Distributed-memory simulators began to move from research to production in 1997 with the presentation of real field applications with massively parallel simulators. Shiralkar et al.10 presented FALCON, a distributed-memory simulator that uses several programming languages to handle data distribution and interprocessor communication. These languages include high-performance FORTRAN and FORTRAN 90 coupled with message passing [parallel virtual machine or message-passing interface (MPI)].1 The simulator was tested on various computing platforms, including TMC CM-5, IBM SP-2, SGI Power Challenge, Cray T3D and T3E, and Origin 2000.

Chien et al.11 presented a distributed-memory simulator based on an existing FORTRAN 77 simulator. They used domain decomposition and MPI message-passing libraries on an IBM SP-2. Their largest reported field was more than 1 million gridblocks running on a 16- or 32-node SP-2 system. Killough et al.12 presented locally refined grids based on domain decomposition for a distributed-memory simulator. The largest field model featured implicit pressure/explicit saturations (black oil and compositional) for more than 1 million gridblocks and up to 32 SP-2 processors.
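The domain-decomposition and message-passing approach mentioned above is easiest to see in a toy example. The sketch below is illustrative only and is not taken from any of the cited simulators (which are production Fortran/MPI codes); it uses Python with mpi4py and hypothetical array sizes to show the basic pattern: each process owns a slab of gridblocks and swaps one layer of "ghost" cells with its neighbors before each solver pass.

```python
# Illustrative sketch only: 1D domain decomposition with ghost-cell exchange,
# the communication pattern distributed-memory reservoir simulators rely on.
# Requires numpy and mpi4py; run with, e.g., "mpiexec -n 4 python halo.py".
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()

nz_local = 16  # hypothetical number of grid layers owned by each process

# Local array with one ghost layer at each end (indices 0 and -1).
pressure = np.full(nz_local + 2, float(rank), dtype=np.float64)

lower = rank - 1 if rank > 0 else MPI.PROC_NULL           # neighbor holding deeper layers
upper = rank + 1 if rank < nprocs - 1 else MPI.PROC_NULL  # neighbor holding shallower layers

# Send my bottom real layer down, receive my top ghost layer from above ...
comm.Sendrecv(sendbuf=pressure[1:2], dest=lower, recvbuf=pressure[-1:], source=upper)
# ... and send my top real layer up, receive my bottom ghost layer from below.
comm.Sendrecv(sendbuf=pressure[-2:-1], dest=upper, recvbuf=pressure[0:1], source=lower)

print("rank %d ghosts: below=%s above=%s" % (rank, pressure[0], pressure[-1]))
```

In a real simulator the same exchange is performed for every conserved variable and in all three grid directions, and it is this communication step that the high-speed network or switch box mentioned earlier must keep cheap.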

Hwang and Xu1 provide useful information on parallel computers and programming languages, and Gropp et al.13 describe MPI for parallel programming in detail.

Upscaling Concerns
As previously stated, parallel reservoir simulation technology offers the potential to include the detailed reservoir description available from the geological models. Geological models usually cover millions of gridblocks that contain the core, log, seismic, and geologically interpolated data. If we consider only the vertical direction, we see that logs alone provide foot-by-foot vertical heterogeneity (layering) information. Owing to the limitations in the current simulation process, describing all the layers seen in the logs in a simulation model is impossible. Instead, flow layers are defined by lumping the log information. For example, for a 200-ft-thick reservoir, defining 200 vertical simulation layers is not practical; therefore, the information is lumped into 10 to 20 flow layers. This process, called upscaling, carries the risk that vertical barriers and heterogeneity are not properly defined. Upscaling also changes the locations of perforations, which can be moved up or down 10 to 20 ft, depending on the layering. This can significantly affect calculation of the arrival of water or gas at the wells. Simulation engineers then must change gridblock permeabilities and introduce pseudorelative permeability curves (well functions) to match the water or gas breakthrough times and behavior. This often results in unrealistic property values and causes disagreements between the simulation engineers and reservoir management.
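To make the lumping step concrete, the sketch below coarsens a 200-layer, foot-by-foot permeability description into 20 flow layers. It is an illustration only, with hypothetical permeability values rather than data from any real study; the averaging rules shown (arithmetic for horizontal permeability, harmonic for vertical) are common textbook choices, and the contrast between them shows exactly the risk described above: a one-foot barrier survives a harmonic average but essentially disappears in an arithmetic one.

```python
# Illustrative sketch: lumping 200 one-ft log layers into 20 flow layers.
# Permeability values are hypothetical; the averaging rules are the usual
# textbook choices for layered media.
import numpy as np

rng = np.random.default_rng(0)
kh = rng.lognormal(mean=3.0, sigma=1.0, size=200)  # horizontal permeability, md, per 1-ft layer
kv = 0.1 * kh                                      # vertical permeability, md
kv[95] = 1.0e-4                                    # a single 1-ft tight barrier

n_coarse = 20
n_fine = 200 // n_coarse                           # 10 ft lumped into each flow layer

kh_coarse = kh.reshape(n_coarse, n_fine).mean(axis=1)                    # arithmetic average
kv_harmonic = n_fine / (1.0 / kv.reshape(n_coarse, n_fine)).sum(axis=1)  # harmonic average
kv_arithmetic = kv.reshape(n_coarse, n_fine).mean(axis=1)

i = 95 // n_fine                                   # the flow layer that swallows the barrier
print("barrier kv at the log scale:     %.1e md" % kv[95])
print("upscaled kv, harmonic average:   %.1e md" % kv_harmonic[i])
print("upscaled kv, arithmetic average: %.1e md" % kv_arithmetic[i])
# The harmonic average keeps the flow layer nearly sealed; the arithmetic
# average smears the barrier away, which is how vertical baffles get lost.
```

A megacell model that keeps the foot-by-foot layering avoids having to make this choice at all.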
Another adverse effect of areal upscaling is more pronounced when simulating large oil fields, such as those in the Arabian Gulf, where reservoirs usually contain billions of barrels of oil. In such reservoirs, wells generally are located 0.62 miles apart and geological models have a grid spacing of 820 ft. If a simulation engineer wants to place at least a few gridblocks between the wells and wishes to use 20 to 30 layers to describe thick vertical pay, the simulation model will have a few million cells. Because of this, current simulation practices force the engineer to use coarser areal grid sizes and 20 or fewer vertical layers to fit the reservoir into a conventional simulator running on a mainframe or high-speed workstation. These limitations also force engineers to study a section of the reservoir with fine-grid definition rather than the full-field representation. Full-field representations are more representative, especially for the large, high-permeability reservoirs found in the Middle East. Unlike sector models, full-field models do not use boundary-condition assumptions and, therefore, represent total reservoir behavior better.
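The cell-count arithmetic behind that statement is easy to check. In the sketch below, the well spacing (0.62 miles) and areal grid spacing (820 ft) come from the text; the field dimensions and layer count are hypothetical round numbers standing in for a giant Middle East field, so the result is only an order-of-magnitude estimate.

```python
# Order-of-magnitude check of the cell counts quoted in the text.
# Well spacing (0.62 miles) and grid spacing (820 ft) are from the text;
# the field dimensions below are hypothetical, chosen to represent a giant field.
FT_PER_MILE = 5280.0

well_spacing_ft = 0.62 * FT_PER_MILE   # roughly 3,274 ft between wells
grid_spacing_ft = 820.0                # areal grid spacing of the geological model

blocks_between_wells = well_spacing_ft / grid_spacing_ft
print("gridblocks between adjacent wells: %.1f" % blocks_between_wells)  # about 4

field_length_ft = 150.0 * FT_PER_MILE  # hypothetical long, narrow giant field
field_width_ft = 15.0 * FT_PER_MILE
layers = 25                            # within the 20 to 30 layers quoted for thick pay

nx = field_length_ft / grid_spacing_ft
ny = field_width_ft / grid_spacing_ft
cells = nx * ny * layers
print("full-field cells at 820-ft spacing, %d layers: %.1f million" % (layers, cells / 1e6))
```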
Impact of Parallel Simulation
Fig. 1 illustrates the conventional process of upscaling with sequential computer technology. As the figure shows, geological models containing more than 1 million gridblocks typically are reduced to 100,000 or fewer cells for reservoir simulation. Depending on the procedures used in the upscaling process, it is possible to end up with a different representation of the reservoir for the simulation process. The process may change from one company to another and from one individual to another, depending on the style and approach and on computer hardware availability. As a result, different reservoir behavior could be generated.

Fig. 1—Conventional simulation process.

In contrast, megacell simulation carried out on various platforms, such as MPP's, a cluster of workstations (PC's), or distributed/shared-memory machines, can overcome or minimize upscaling effects (Fig. 2). As Fig. 2 shows, the detailed description of the reservoir contained in the geological models can be incorporated directly into the simulation model, or critical features of the geology can be transferred to a simulation model with minimum upscaling. The size of the simulation model generated from the geological model depends on the hardware configuration. Assuming that sufficient nodes are available in the parallel computer, it is possible to include the entire geological model as a reservoir simulation model.

Fig. 2—Megacell simulation process.

Industrial Applications
Several oil companies and universities currently are developing and implementing parallel simulation technologies. Among these are Saudi Arabian Oil Co. (Saudi Aramco),14 Amoco,10 and Chevron.11 Consulting companies (e.g., Landmark12 and Geoquest) also provide technologies in parallel simulation. Here, we present Saudi Aramco's data to highlight the megacell simulation experience and results.

Saudi Aramco is a dedicated user of reservoir simulation technologies for its giant hydrocarbon reservoirs. Currently, we use three well-known commercial simulators. All the simulators use conventional finite-difference schemes and have options for black-oil, compositional, and dual-porosity features. The simulators are operated on IBM SP-2 high-end superworkstations. The company previously used Cray supercomputers for reservoir simulation.

Parallel with the development of computer technologies, the number of models used increased from 25 in 1988 to 100 in 1998. In all these applications, we used conventional simulators running on supercomputers and, more recently, high-end workstations.

To examine reservoir complexities in the giant oil reservoirs better, Saudi Aramco developed its own MPP parallel simulator called POWERS.14 Fig. 3 illustrates the average number of gridblocks used in the simulation models in the past 14 years. As the figure shows, the average number of gridblocks increased from 30,000 in 1988 to 100,000 in 1996 and, with the development of the parallel simulator, jumped an order of magnitude to more than 1 million cells in 1998.

Fig. 3—Impact of parallel reservoir simulator.

Note that the megacell simulation process includes an efficient pre- and post-processing environment. This means users can easily generate million-cell models from the geological models, run them in a practical time frame, and analyze the results using efficient visualization packages, which makes it comparable with building, running, and analyzing the results of a 100,000-cell model on a workstation. This is considered a major achievement during the past decade.

Academic Applications/PC Clusters
The U. of Texas (UT), Austin, is active in parallel reservoir simulation.15 The Center for Subsurface Modeling of the Texas Inst. of Computational & Applied Mathematics (TICAM) developed a parallel reservoir simulator called IPARS that is heavily used for R&D of new mathematical algorithms in reservoir simulation. Similarly, the Center for Petroleum & Geosystems Engineering (CPGE) is active in developing parallel compositional simulation models.

TICAM researchers built a cluster of PC's as a parallel computer and successfully ran their parallel simulator on the same machine. The PC cluster is called a Beowulf system and is composed of 16 nodes. Another such system, called Wonderland, was easily built in a student office by an undergraduate student (Fig. 4). This system is composed of 16 PC's connected by an ethernet network, one switch box, and a monitor. Its primary advantage is cost. The system costs at least an order of magnitude less than a commercial parallel computer. TICAM also built a 64-node cluster called Longhorn with a fast Myrinet network, and they currently run their parallel simulator on this new system.

Fig. 4—TICAM's Wonderland 16-CPU cluster.

CPGE also built a 16-node PC cluster. They developed a fully implicit, parallel compositional simulator and studied its performance on both a cluster of PC's and massively parallel supercomputers (Fig. 5). By using a fast Myrinet network (1.28 gigabits/sec) rather than the 100-megabit/sec ethernet network, they reached a speed comparable with a 16-node IBM SP-2. Fig. 5 shows the CPGE PC cluster and also the results of a cluster of 16 PC's (each PC is a 400-cycle/sec Pentium II with 512 Mbytes of memory).* They compared this system with a 16-node IBM SP-2 of the Maui High Performance Computing Center. The SP-2 system had thin nodes (160-cycle/sec P2SC, 512 Mbytes with a high-performance switch box). The reservoir simulator problem used for the comparison was the fifth SPE comparative problem with 30,720 gridblocks, three hydrocarbon components, and 100 days of gas injection.16 The compositional simulator was run on two different systems with the same data set.

The scaleup runs consisted of running the same problem on 4, then 8, then 12, then 16 processors. Total time for the 16-processor run was 82 seconds for the cluster and 77 seconds for the SP-2. Fig. 5 shows that the PC clusters and the SP-2 system gave very similar scaleup results. The figure also shows that the scaleup was linear, making it potentially very attractive. The main comparison, however, is cost. PC clusters cost only a small fraction of what a commercial parallel computer costs. CPGE has also run more-realistic problems consisting of 200,000 gridblocks and six components.17

Fig. 5—PC clusters parallel computer, CPGE Beowulf system.
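Scaleup results like these are usually summarized as strong-scaling speedup and parallel efficiency. The sketch below shows that calculation; only the 16-processor totals (82 seconds for the PC cluster, 77 seconds for the SP-2) come from the comparison above, while the 4-, 8-, and 12-processor times are hypothetical placeholders included purely to make the example runnable.

```python
# Strong-scaling measures for a fixed-size problem (here, the 30,720-cell SPE5 case).
# Only the 16-processor times (82 s PC cluster, 77 s SP-2) come from the text;
# the 4-, 8-, and 12-processor times are hypothetical placeholders.
def scaling(times_by_nproc):
    """Return (nproc, speedup, efficiency) relative to the smallest processor count."""
    base_n, base_t = min(times_by_nproc.items())
    rows = []
    for n, t in sorted(times_by_nproc.items()):
        speedup = base_t / t
        efficiency = speedup / (n / base_n)
        rows.append((n, speedup, efficiency))
    return rows

pc_cluster = {4: 300.0, 8: 155.0, 12: 108.0, 16: 82.0}  # seconds (4-12 proc: hypothetical)
ibm_sp2    = {4: 285.0, 8: 148.0, 12: 103.0, 16: 77.0}  # seconds (4-12 proc: hypothetical)

for name, times in (("PC cluster", pc_cluster), ("IBM SP-2", ibm_sp2)):
    print(name)
    for n, s, e in scaling(times):
        print("  %2d processors: speedup %.2f, efficiency %.0f%%" % (n, s, e * 100.0))
```

Near-constant efficiency as the processor count grows is what the text means by linear scaleup.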
Benefits of Megacell Simulation
The following are the major benefits of megacell simulation on parallel computers.
1. Adequate description of reservoir heterogeneities in both areal and vertical directions.
2. Full-field description with sufficient resolution in areal and vertical directions.
3. Speed of simulation.
4. Integrated reservoir simulation (coupled reservoir and surface-facility simulation).
These factors all can reduce the time-consuming history-matching process.14 Fig. 6 shows the pressure and water-cut performance match for a giant offshore oil reservoir. These results were from the initial run and indicate close agreement. This allows less effort to go into obtaining a final history match.

Fig. 7 illustrates one major effect of the use of fine-grid megacell simulation. A bypassed oil zone is detected by the megacell simulation with 67 vertical layers (a total of 2.5 million cells), while the conventional simulator with five vertical layers (40,500 cells) misses the oil pockets left behind the flood front. This is in spite of the fact that both models matched the water-cut behavior for Well A (Fig. 8).

Fig. 6—Early runs of megacell simulation of offshore oil reservoir showing pressure and water-cut match.
Fig. 7—Comparison of conventional simulation (40,500 cells, 5 layers) and megacell simulation (2.45 million cells, 67 layers).

Table 1 provides an example of the speed of the megacell simulation. The table shows execution times with the fully implicit black-oil option of Saudi Aramco's parallel simulator for four different full-field models representing large reservoirs. To demonstrate the effect of the number of processors on speed, the total number of processors was increased from 64 to 128 nodes of a CM5E parallel computer, and the resulting CPU times are reported. Each node has a modest processing speed of 128 MFLOPS and 128 megabytes of memory. The speed of each model is nearly doubled by doubling the number of nodes. In each case, megacell simulators with 20 to 50 years of production/injection history involving hundreds of horizontal and vertical wells can be run within hours (like conventional simulators) with an order of magnitude more gridblocks.

Computational speed achieved in megacell simulation offers great promise for coupling surface-facility simulators (from sandface to separator) with reservoir simulators. Such capability would produce more-accurate reservoir-performance computations and eliminate the need to use hydraulic flow tables.
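The hydraulic flow tables being replaced are, in essence, precomputed answers to a wellbore and surface pressure-drop calculation. As a purely schematic illustration of what a coupled calculation does instead (none of these formulas or coefficients come from the paper; they are hypothetical, textbook-style stand-ins), the sketch below balances a linear inflow-performance relationship for the reservoir against a crude tubing pressure-drop model and solves for the operating rate at which the two agree.

```python
# Schematic reservoir/wellbore coupling: find the rate q at which the flowing
# bottomhole pressure demanded by the tubing (sandface to separator) equals the
# pressure the reservoir can deliver. All coefficients are hypothetical.
def pwf_from_reservoir(q, p_res=3500.0, productivity_index=2.0):
    """Linear inflow performance: p_wf = p_res - q / J (psi, STB/D)."""
    return p_res - q / productivity_index

def pwf_required_by_tubing(q, p_separator=250.0, hydrostatic=1800.0, friction_coeff=2.0e-4):
    """Crude vertical-lift model: separator pressure + hydrostatic head + friction ~ q^2."""
    return p_separator + hydrostatic + friction_coeff * q * q

def operating_rate(q_low=0.0, q_high=7000.0, tol=1e-3):
    """Bisection on the pressure imbalance between reservoir and tubing."""
    imbalance = lambda q: pwf_from_reservoir(q) - pwf_required_by_tubing(q)
    assert imbalance(q_low) > 0.0 > imbalance(q_high), "bracket does not contain a solution"
    while q_high - q_low > tol:
        q_mid = 0.5 * (q_low + q_high)
        if imbalance(q_mid) > 0.0:
            q_low = q_mid
        else:
            q_high = q_mid
    return 0.5 * (q_low + q_high)

q = operating_rate()
print("operating rate: %.0f STB/D at p_wf = %.0f psi" % (q, pwf_from_reservoir(q)))
```

A coupled megacell simulator performs an analogous balance for every well at every timestep, with the full network from sandface to separator in place of the one-line tubing model.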

TABLE 1—SPEED OF MEGACELL MODELS ON A CM5E PARALLEL COMPUTER, POWERS SIMULATOR

                          Model Size      History     CPU Hours    CPU Hours
Reservoir                 (millions of    Length      on 64        on 128
                          gridblocks)     (years)     Nodes        Nodes
Carbonate                 1.12            27          3            1.7
Sandstone                 1.3             49          4.5          2.5
Carbonate with gas cap    3.9             10          —            2.0
Carbonate                 2.5             25          —            4.0
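The "nearly doubled" observation can be read directly off the table; the short sketch below simply does the arithmetic for the two models that were run on both node counts.

```python
# CPU hours from Table 1 for the two models run on both 64 and 128 nodes.
cases = {
    "Carbonate (1.12 million cells)": (3.0, 1.7),
    "Sandstone (1.3 million cells)": (4.5, 2.5),
}

for name, (hours_64, hours_128) in cases.items():
    speedup = hours_64 / hours_128   # gain from doubling the node count
    efficiency = speedup / 2.0       # ideal speedup for 64 -> 128 nodes is 2
    print("%s: speedup %.2f, parallel efficiency %.0f%%" % (name, speedup, efficiency * 100.0))
```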

Fig. 8—Water-cut match for Well A.

Conclusions
Megacell simulation is becoming a feasible process with the development of fast, cost-effective parallel computers. It allows detailed description of the reservoir in a simulation model, leading to a close history match in a short time. Having millions of cells minimizes and, in some cases, eliminates the upscaling effects and process. Technology offers the potential to couple reservoir and surface-facility simulators. On parallel computers, megacell simulation models can be run in a matter of hours, and graphical tools are available to analyze and construct these large models in a very practical time frame. The megacell description is a better reservoir management tool. JPT

Acknowledgments
We thank the Saudi Aramco management for permitting the publication of this paper and the POWERS team members, L.S. Fung and W.T. Dreiman at Saudi Aramco, TICAM, and the UT Dept. of Petroleum Engineering for providing data.

References
1. Hwang, K. and Xu, Z.: Scalable Parallel Computing: Technology, Architecture, Programming, WCB/McGraw-Hill Book Co. Inc., Boston (1998).
2. Scott, S.L. et al.: "Application of Parallel (MIMD) Computers to Reservoir Simulation," paper SPE 16020 presented at the 1987 SPE Reservoir Simulation Symposium, San Antonio, Texas, 1–4 February.
3. Chien, M.C.H. and Northrup, E.J.: "Vectorization and Parallel Processing of Local Grid Refinement and Adaptive Implicit Schemes in a General Purpose Reservoir Simulator," paper SPE 25258 presented at the 1993 SPE Reservoir Simulation Symposium, New Orleans, 28 February–3 March.
4. Li, K.G. et al.: "Improving the Performance of MARS Reservoir Simulator on Cray-2 Supercomputer," paper SPE 29856 presented at the 1995 SPE Middle East Oil Show, Bahrain, 11–14 March.
5. Wheeler, J.A. and Smith, R.A.: "Reservoir Simulation on a Hypercube," SPERE (November 1990) 544.
6. Killough, J.E. and Bhogeswara, R.: "Simulation of Compositional Reservoir Phenomena on a Distributed Memory Parallel Computer," JPT (November 1991) 1368; Trans., AIME, 291.
7. Rutledge, J.M. et al.: "The Use of Massively Parallel SIMD Computer for Reservoir Simulation," paper SPE 21213 presented at the 1991 SPE Reservoir Simulation Symposium, Anaheim, California, 17–20 February.
8. Kaarstad, T. et al.: "A Massively Parallel Reservoir Simulator," paper SPE 29139 presented at the 1995 SPE Reservoir Simulation Symposium, San Antonio, Texas, 12–15 February.
9. Rame, M. and Delshad, M.: "A Compositional Reservoir Simulator on Distributed Memory Parallel Computers," paper SPE 29103 presented at the 1995 SPE Reservoir Simulation Symposium, San Antonio, Texas, 12–15 February.
10. Shiralkar, G.S. et al.: "FALCON: A Production Quality Distributed Memory Reservoir Simulator," SPEREE (October 1998) 400.
11. Chien, M.C.H. et al.: "A Scalable Parallel Multipurpose Reservoir Simulator," paper SPE 37976 presented at the 1997 SPE Reservoir Simulation Symposium, Dallas, 8–11 June.
12. Killough, J.E., Camilleri, D., and Darlow, B.: "A Parallel Simulator on Local Grid Refinement," paper SPE 37978 presented at the 1997 SPE Reservoir Simulation Symposium, Dallas, 8–11 June.
13. Gropp, W., Lusk, E., and Skjellum, A.: Using MPI—Portable Parallel Programming with the Message-Passing Interface, MIT Press, Cambridge, Massachusetts (1994).
14. Dogru, A.H. et al.: "A Massively Parallel Reservoir Simulator for Large Scale Reservoir Simulation," paper SPE 51886 presented at the 1999 SPE Reservoir Simulation Symposium, Houston, 14–17 February.
15. Parashar, M. et al.: "A New Generation EOS Compositional Reservoir Simulator: Part II—Framework and Multiprocessing," paper SPE 37977 presented at the 1997 SPE Reservoir Simulation Symposium, Dallas, 8–11 June.
16. Killough, J. and Kossack, C.: "Fifth SPE Comparative Solution Project: Evaluation of Miscible Flood Simulators," paper SPE 16000 presented at the 1987 SPE Reservoir Simulation Symposium, San Antonio, Texas, 1–4 February.
17. Wang, P. et al.: "A Fully Implicit Parallel EOS Compositional Simulator for Large Scale Reservoir Simulation," paper SPE 51885 presented at the 1999 SPE Reservoir Simulation Symposium, Houston, 14–17 February.

SI Metric Conversion Factors
bbl × 1.589 873 E−01 = m³
cycle/sec × 1.0* E+00 = Hz
ft × 3.048* E−01 = m
mile × 1.609 344* E+00 = km
psi × 6.894 757 E+00 = kPa
*Conversion factor is exact.

Ali H. Dogru is General Supervisor of the Technology Development Div. of Saudi Arabian Oil Co. in Dhahran, Saudi Arabia, with responsibility for development of new reservoir and production technologies. His areas of interest are numerical reservoir simulation, fluid mechanics, heat transfer, and applied mathematics. He previously worked at Mobil R&D Co. and Core Laboratories and held teaching positions at UT, Austin; California Inst. of Technology; Norwegian Inst. of Technology; and Technical U. of Istanbul (TUI). Dogru holds an MS degree from TUI and a PhD degree from UT, both in petroleum engineering. He served as a 1984–90 member of the Editorial Review Committee, a 1994–95 member of the Forum Series in the Middle East Steering Committee, and on Annual Meeting technical committees.

