Training Guide
© 1998–2015 Paradigm B.V. and/or its affiliates and subsidiaries. All rights reserved.
The information in this document is subject to change without notice and should not be construed as a commitment by Paradigm B.V.
and/or its affiliates and subsidiaries (collectively, "Paradigm"). Paradigm assumes no responsibility for any errors that may appear in
this document.
The Copyright Act of the United States, Title 17 of the United States Code, Section 501 prohibits the reproduction or transmission of
Paradigm’s copyrighted material in any form or by any means, electronic or mechanical, including photocopying and recording, or by
any information storage and retrieval system without permission in writing from Paradigm. Violators of this statute will be subject to civil
and possible criminal liability. The infringing activity will be enjoined and the infringing articles will be impounded. Violators will be
personally liable for Paradigm’s actual damages and any additional profits of the infringer, or statutory damages in the amount of up to
$150,000 per infringement. Paradigm will also seek all costs and attorney fees. In addition, any person who infringes this copyright
willfully and for the purpose of commercial advantage or private financial gain, or by the reproduction or distribution of one or more
copies of a copyrighted work with a total retail value of over $1,000 shall be punished under the criminal laws of the United States of
America, including fines and possible imprisonment.
The following are trademarks or registered trademarks of Paradigm B.V. and/or its affiliates and subsidiaries (collectively, "Paradigm")
in the United States or in other countries: Paradigm, Paradigm logo, and/or other Paradigm products referenced herein. For a complete
list of Paradigm trademarks, visit our Web site at www.pdgm.com. All other company or product names are the trademarks or
registered trademarks of their respective holders.
Alea and Jacta software under license from TOTAL. All rights reserved.
Some components or processes may be licensed under one or more of U.S. Patent Numbers 6,765,570 and 6,690,820.
Some components or processes are patented by Paradigm and/or one or more of its affiliates under U.S. Patent Numbers 5,563,949;
5,629,904; 5,838,564; 5,892,732; 5,930,730 (RE 38,229); 6,055,482; 6,092,026; 6,430,508; 6,819,628; 6,820,043; 6,859,734;
6,873,913; 7,095,677; 7,123,258; 7,295,929; 7,295,930; 7,328,139; 7,584,056; 7,711,532; 7,844,402; 8,095,319; 8,120,991;
8,150,663; 8,582,825; 8,600,708; 8,635,052; 8,711,140; 8,743,115; 8,744,134; and 8,792,301. In addition, there may be patent
protection in other foreign jurisdictions for these and other Paradigm products.
All rights not expressly granted are reserved.
Recommended Reading
For further study after the class, we recommend the following:
• Caers, Jef. Petroleum Geostatistics. Richardson, Texas: Society of Petroleum Engineers, 2005.
• Deutsch, Clayton V. Geostatistical Reservoir Modeling. Oxford: Oxford University Press, 2002.
• Deutsch, Clayton V., and Andre G. Journel. GSLIB: The Geostatistical Software Library. Oxford:
Oxford University Press, 1998.
Overview
Before you begin building a reservoir model, you need to understand the main concepts and
challenges of reservoir data analysis and reservoir modeling.
A reservoir model is only as good as the parameters and data used to build it.
• In order to build a consistent and robust reservoir model, we recommend that you use the Data and
Trend Analysis Workflow to analyze the properties of interest first. This analysis establishes
representative statistics and identifies the probable inconsistencies in your data.
• Afterward, you can use the Reservoir Properties Workflow to populate a 3D reservoir grid with
petrophysical properties using the data generated in the Data and Trend Analysis Workflow. These
two workflows are used in conjunction with each other.
Background
• Huge investments in exploration and production, such as seismic acquisition campaigns, drilling
programs, field development plans or enhanced oil recovery usage, are made on the basis of 3D
numerical representations of the subsurface.
• To be as precise as possible, numerical models need to integrate all the data and knowledge
collected and interpreted by geoscientists, from geophysical interpretation to flow simulation.
• These models support the representation of several elements of the subsurface: the reservoir
structure and stratigraphy (that is, faults, horizons and their relationships and hierarchy), the rock
property distribution and petrophysical content of the reservoir, and the fluid content of the reservoir.
Introduction
The construction of these realistic models that are economically optimal and consistent requires:
• Integrating and combining:
• Exploration data with geologic interpretation. For accurate models, we need to make sure
that the interpretation is correct and that the geological model (at least) honors the input
data.
• Data from various sources (core data, well log data, outcrops, production data, seismic-
derived structural interpretation, 3D surveys, seismic-derived attributes, etc.) and resolutions.
• Expertise from different fields (geology, petroleum/reservoir engineering, economics, etc.).
• Accounting for the inherent uncertainty in the spatial distribution of reservoir properties and the
structure.
• Predicting rock properties at unsampled locations and forecasting the future flow behavior of the
reservoir.
The picture above [Jef Caers, Petroleum Geostatistics (Richardson, Texas: Society of Petroleum
Engineers, 2005)] shows a comparison of the scale of observation, the typical resolution for geologic
modeling (geocellular) and reservoir flow simulation models, and the operations between the various
models. The reference unit resolution is the core support. If this reference resolution is on the order of
1, a 3D geocellular model is typically 6 orders of magnitude larger, a flow-simulation model is 8 orders
of magnitude larger, and the entire reservoir is 12 orders of magnitude larger.
3D Grid construction
Despite the various natures of the reservoir models introduced previously, today geologists, reservoir
modelers and engineers all tend to use the same 3D reservoir grid definition to construct their
respective reservoir models. That grid is the pillar-based grid.
Pillar Gridding Technique
Pillars are defined as columns of grid cells and must respect two fundamental principles:
• #1: Pillars must be aligned along faults, which implies that columns of cells cannot cross faults.
• #2: Pillars must connect top and base horizons, which implicitly forces the same number of cells at
the top as at the base of the reservoir structure.
Though this technique works well in simple, vertically faulted, layer-cake stratigraphy, it raises many
questions when the geology becomes more complex.
NOTE For more information about how to create structural and stratigraphic models, geologic grids
and flow simulation grids with SKUA, sign up for the Modeling Reservoir Architecture with SKUA
course.
Challenges
Oversampling must be identified and sampling bias removed by weighting raw data, in order not to
establish biased scenarios (and biased estimates) that would lead to wrong strategies.
Well data is also often too sparse to yield histograms representative of the underlying geologic
phenomena. Smoothing data distributions allows you to remove artifacts (spikes, holes, etc.) and
provides representative information about the true underlying geologic distribution.
Data blocking (upscaling) is recommended before performing stochastic simulation to:
• Speed up algorithms
• Enable verification of conditioning
What to remember
How does SKUA-GOCAD help you manage both data analysis and property modeling?
• Integrated workflow-driven solutions
• Use output from DTA as input for the Reservoir Properties workflow
• Create and test alternative scenarios of models
Overview
When you model reservoir properties, it is important to know both the data and the geologic context of
the area of interest.
In this chapter, you will become familiar with the available data, geologic setting, and the process for
analyzing raw data by using the Data and Trend Analysis Workflow.
Selecting data
You can select different objects and choose more than one object property at a time, because the
same property to analyze may have a different name in different objects.
If more than one property is chosen and the same object has more than one of the properties, the
workflow uses only the first property.
Data representation
Once both objects and object properties are selected, you need to specify where you assume that the
data is valid:
• As a point measurement (single data value) – valid only locally. You don’t know what happens in
the interval and you don’t want to influence computations by missing data.
• As a continuous interval – the measured value can be propagated along the well path below the
point. You assume that what you measure is valid until the next measured point.
Select modeling grid and seismic data for blocking and trend analysis
8. In the Modeling grid box, select the DTA_PM_Field grid.
9. In the Seismic grid box, select the DTA_PM_Field grid.
10. Click Next.
Histogram definition
• For a discrete property, the histogram shows the proportion of each category, that is, the number of
times a given category occurs divided by the total number of data points.
• For a continuous property, the histogram shows the number of data points that fall within each bin
divided by the total number of data points.
You can edit plot size, axes, and graphic appearance for better visualization and in-depth data
analysis as necessary. What would be a relevant plot for Lithology?
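The two definitions above can be sketched in a few lines of Python. This is purely illustrative (the array values are hypothetical and the workflow performs these computations internally):

```python
import numpy as np

# Discrete property: proportion of each category, i.e. the number of
# times a category occurs divided by the total number of data points.
facies = np.array(["Sand", "Shale", "Sand", "ShalySand", "Sand", "Shale"])
categories, cat_counts = np.unique(facies, return_counts=True)
proportions = cat_counts / facies.size
for cat, p in zip(categories, proportions):
    print(f"{cat}: {p:.2f}")

# Continuous property: fraction of data points that fall in each bin.
porosity = np.array([0.08, 0.12, 0.15, 0.21, 0.22, 0.25, 0.27, 0.30])
counts, edges = np.histogram(porosity, bins=4, range=(0.0, 0.4))
fractions = counts / porosity.size
```

For a discrete property such as Lithology, a bar chart of these proportions per category is the relevant plot; binning makes no sense for categories.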
Domain selection
• Use all data option displays all data corresponding to the objects selected in the Select Data
panel.
• Use data from all selected domains option displays only data that corresponds to the domain of
analysis specified in the Define Domains of Analysis panel.
• Show by domain of analysis option lets you customize the display of the histogram and group
the data by the specified domains of interest in the matrix plots.
Data selection
You can select:
• Only one type of data to be plotted along the first dimension (vertical or horizontal).
• As many data types (sub-objects) as you want to be plotted along the second dimension.
Calculate raw porosity distribution for the entire model and per stratigraphic unit
1. In the Navigation panel, expand POROSITY and click Calculate Statistics to open the Calculate
Statistics (POROSITY) panel.
2. Apply what you have learned to calculate and display the histograms for raw porosity data
everywhere, and then per stratigraphic unit.
Calculate and analyze raw porosity distribution per facies category and stratigraphic unit
1. Apply what you have learned to edit plot layout and display raw porosity distribution per
stratigraphic unit and facies category.
2. Edit plot layout and display raw porosity distribution per facies category only.
What can you observe? Discuss the results with your instructor.
3. After you are finished with your analysis, click Next until you reach the Calculate Vertical Trend
Curves (POROSITY) panel.
NOTE In the Navigation panel, an icon appears to the right of the step to show that you have added a
note.
What to remember
What is the objective of defining a property in the first DTA panel (Specify Properties to Analyze
panel)?
• Definition of a container associated with a list of objects and properties that represent the
data
• Act as a filter
What is the difference between the Point and Interval representation? Is that synonymous with
continuous and discrete properties?
• The data representation depends on the way you want to extrapolate the local
measurement
• The continuous and discrete terms represent the nature of the property values
What are the different manners to display the statistics with DTA?
• Per unit, facies, etc., with the Plot Layout Manager
In our case study, we want to analyze the lithology, porosity and permeability distribution in the
reservoir. In which order should you analyze these three properties? Why?
• Porosity and permeability are influenced by the rock lithology
• Permeability models are often built using porosity values
Overview
Analyzing lithology is an important step in reservoir modeling because most of the properties of
interest are highly correlated with the lithology facies.
In this chapter, you will learn how to build a realistic 3D model of lithology that you can use later in
making decisions and developing strategies regarding the reservoir.
Blocking challenge
The challenge lies in determining the appropriate method for upscaling the well data that intersects
the grid to the grid resolution, by blocking the values and calculating new statistics for those values.
Capturing heterogeneity is much more important than reproducing the raw statistics, which are input
separately as global proportions or distributions.
• NDV
If data is sparsely sampled, cells lacking data must be considered and assigned an interpolated
property value.
• Include only cells that the well path intersects through opposite faces
Excluding the cells that the well paths do not intersect through opposite faces is not recommended,
as it leads to ignoring many data points.
• Calculate one value for each layer (Cell layer averaging)
- When not selected (default setting): each individual cell contains an upscaled value. Successive
cells in the same layer can have strongly different values, which can lead to strong local artifacts
when running a simulation.
- When turned on: adjoining cells in the same layer are treated as a single cell, and the mean
computed over all of them is assigned to the cell that contains the longest well path intersection.
The other cells remain undefined.
Such settings:
• Affect the number of well data points which are used and locally assigned to the grid. See next
slide.
• Are especially relevant when working with deviated/horizontal wells.
Top pictures - When no option is selected, each individual cell will contain an upscaled value.
Successive cells in the same layer could have strongly different values, which would lead to strong
local artifacts when computing the variograms and running a simulation.
Middle pictures – When selected, the Include only cells that are intersected through opposite
faces option avoids imposing features that might not be representative of the reservoir.
Bottom pictures – When selected, the Calculate one value per layer option avoids high horizontal
variations and allows you to reproduce the variogram.
Conclusion: For the exact same well log, depending on the blocking options that are selected, the
blocked data will be different, as will be the output model.
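The blocking logic described above can be sketched as follows. This is an illustrative Python sketch, not SKUA-GOCAD code; the mapping of log samples to cells, the majority rule for discrete logs, and the arithmetic mean for continuous logs are simplifying assumptions:

```python
import numpy as np
from collections import Counter

def block_discrete(cell_ids, facies_log):
    """Upscale a discrete log: assign each cell the most frequent
    (majority) facies among the log samples that fall inside it."""
    blocked = {}
    for cid in np.unique(cell_ids):
        samples = facies_log[cell_ids == cid]
        blocked[cid] = Counter(samples).most_common(1)[0][0]
    return blocked

def block_continuous(cell_ids, log_values):
    """Upscale a continuous log: arithmetic mean of the samples per cell."""
    return {cid: log_values[cell_ids == cid].mean()
            for cid in np.unique(cell_ids)}

# Hypothetical well samples tagged with the grid cell they intersect.
cells = np.array([0, 0, 0, 1, 1, 2, 2, 2])
facies = np.array(["Sand", "Sand", "Shale",
                   "Shale", "Shale",
                   "Sand", "ShalySand", "Sand"])
phi = np.array([0.25, 0.22, 0.05, 0.04, 0.06, 0.21, 0.14, 0.23])

blocked_facies = block_discrete(cells, facies)
blocked_phi = block_continuous(cells, phi)
```

Changing either rule (for example, taking the sample with the longest intersection instead of the mean) changes the blocked data and therefore the output model, which is exactly the point of the conclusion above.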
NOTE For more information, watch the video Visualizing Blocked Data in 2D in SKUA-GOCAD,
available in Paradigm Online University under Training SKUA-GOCAD > Video Learning Library.
Data oversampling
Determining the global proportions in the reservoir is not easy to do as we often have a view of only
the over-sampled zones.
Wells are drilled in areas with the greatest probability of high production (cores are taken preferentially
from good-quality reservoir rock). Such data collection practices lead to the best economics and the
greatest number of data in portions of the study area that are the most important. These practices
should not be changed, but subsequent bias should be considered.
→ Oversampling must be identified and sampling bias removed by weighting raw data, in order
not to establish biased scenarios (and biased estimates) that would lead to wrong strategies.
In the figures above, the proportions calculated from well data (a) are not representative of the global
proportions in the reservoir (b). We could even create a model that honors other global proportions (c)
and the well data. That model would be valid if we don’t have other data.
Cell size
The weights assigned by cell declustering depend on the cell size. If the cell size is very small,
then every sample occupies its own cell and the result is equal weighting, that is, the naïve sample
distribution. If the cell size is very large, then all samples reside in the same cell and the result is once
again equal weighting.
Procedure
When it is difficult to make a choice, a common procedure is to assign a cell size that maximizes or
minimizes the declustered mean (the declustered mean is maximized if the data is clustered in low-
valued areas, and it is minimized if the data is clustered in high-valued areas). This procedure is
applied when the sample values are clearly clustered in a low or high range. Automatically assigning
the minimizing or maximizing cell size may lead to less representative results than simply using the
original distribution. Choosing the optimal grid origin, cell shape and size requires some sensitivity
studies.
The cell size for declustering is assigned to an intermediate grid. It is generally not the cell size used
for geologic or flow modeling, but rather a size chosen so that roughly one datum resides in each cell
in the most sparsely sampled areas.
The histograms and summary statistics (mean, standard deviation, etc.) are then calculated with these
declustering weights so that they are representative of the entire volume of interest.
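The cell-declustering idea can be sketched as follows. This is an illustrative Python sketch under simplifying assumptions (2D cells, hypothetical coordinates and cell size); it is not the software's implementation:

```python
import numpy as np

def decluster_weights(x, y, cell_size):
    """Cell-declustering weights: samples that share a cell split that
    cell's weight, so clustered samples count less individually."""
    ix = np.floor(x / cell_size).astype(int)
    iy = np.floor(y / cell_size).astype(int)
    cell_of = list(zip(ix, iy))
    counts = {}
    for c in cell_of:
        counts[c] = counts.get(c, 0) + 1
    w = np.array([1.0 / counts[c] for c in cell_of])
    return w / w.sum()   # normalize so the weights sum to 1

# Hypothetical data: a cluster of 3 wells in a high-porosity zone
# plus 2 isolated wells in lower-porosity zones.
x = np.array([1.0, 1.2, 1.1, 8.0, 15.0])
y = np.array([1.0, 1.1, 0.9, 7.0, 14.0])
phi = np.array([0.30, 0.28, 0.31, 0.15, 0.10])

w = decluster_weights(x, y, cell_size=2.0)
naive_mean = phi.mean()
declustered_mean = np.sum(w * phi)
# The declustered mean is lower than the naive mean because the
# cluster, drilled in the high-valued zone, is down-weighted.
```

This reproduces the behavior described above: the three clustered wells end up sharing one cell's weight, so the oversampled high-porosity zone no longer dominates the statistics.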
Calculate weights by cell declustering and apply them to raw lithology data
1. Access the Apply Weights (LITHOLOGY) panel.
2. Select Calculate weights from cell-declustering. In this case, you do not have existing weights,
so you will use the workflow to calculate them using cell-declustering techniques.
3. Make sure Raw is selected in the Apply Weights on box.
4. Select Sand in the Reference facies box and make sure Over-sampled is selected.
5. Click Calculate and Display to display the histogram for the weighted proportions of lithology data
everywhere in the grid.
NOTE You can delete a model at any time. When you delete a model, the workflow deletes the row
from the Summary Statistics table. But deleting models does not delete models saved as resources in
the Object tree.
NOTE You can also create 2D proportion maps from deposition azimuth or from facies
boundaries outside the workflow, from the Resources browser.
When you specify one map value per well, the values for the intersected cells are averaged and only
one cell contains the averaged value. The location of that cell is the average areal location of the
intersected cells. For deviated wells, the location of the averaged cell will not necessarily be exactly at
the well location.
NOTE For more information about each method, please check the Online Help.
Tip All properties (most probable facies, random facies and facies proportions) are stored on the
map. To change the display, open the map Style Editor, click Property in the left pane and select
the property from the Display menu.
Create a resource object from the interpolated proportion map for each unit
1. In the workflow panel:
a. Select Raw(LITHOLOGY,UNITA)[192,227](single value)::DSI#1.
b. Click Create as Resource.
2. Open the Resources browser and expand 2D Trend.
3. Right-click Raw(LITHOLOGY,UNITA)[192,227](single value) DSI#1 and select Rename.
4. In the Rename Statistic Object dialog box, do the following:
a. Enter LithologyMap_DSI_UNITA as the new name and click Apply.
b. Select Raw(LITHOLOGY,UNITB)[192,227](single value) DSI#1 as the Statistic
object and enter LithologyMap_DSI_UNITB as the new name.
c. Click OK.
NOTE From the VPC and Map(s) boxes in the panel, you can either select the data object that was
calculated in the workflow, or the one you saved as a Resource. Because their names are very similar,
it is a good idea to rename the Resource objects as you create them, as you did in the previous
exercise.
Variogram parameters
Definition
The variogram (a spatial correlation function) is a measure of how geology varies with distance.
Geologic properties at two locations are correlated if the distance that separates them is less than the
range of the variogram.
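The experimental semivariogram behind this definition can be sketched for regularly spaced samples as follows (an illustrative Python sketch with hypothetical values, not the software's estimator, which also handles irregular spacing, tolerances, and directions):

```python
import numpy as np

def semivariogram(values, max_lag):
    """Experimental semivariogram for regularly spaced 1D samples:
    gamma(h) = mean of 0.5 * (z(i) - z(i+h))**2 over all pairs
    separated by lag h."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

# Hypothetical porosity samples along a well (regular spacing).
z = np.array([0.20, 0.22, 0.21, 0.25, 0.19, 0.23, 0.24, 0.20])
gamma = semivariogram(z, max_lag=3)
# gamma typically rises with lag and levels off near the sample
# variance once the separation exceeds the correlation range.
```

Two locations are correlated when their separation is smaller than the lag at which gamma levels off, which is the range referred to above.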
Advantage of using SIS
The main advantage of SIS is that it allows you to have different variogram models for different
property ranges. This means that it can better handle properties that show multiple correlation
patterns. Geologic facies is the most common example: each facies category has its own correlation
pattern, as it represents a distinctive sedimentary body or environment.
SIS requires as input a variogram model for each facies category. Each facies may have a different
variogram with different correlation lengths and anisotropy characteristics, reflecting the difference in
spatial continuity of the various facies.
Top unit
k-layer #2. Marine deltaic system in the top unit consisting of distributary channels with a main trend
oriented WSW/ENE. More specifically delta front deposits with little tidal influence have been identified
in SW and West parts of the area.
Bottom unit
k-layer #21. Terrestrial system in the base unit consisting of braided streams with a SSW/NNE
orientation.
What is the difference between raw, weighted and blocked data, and when should you use each
one?
• Raw = initial
• Weighted data = sampling bias has been removed from initial data
• Blocked data = data has been upscaled to the grid resolution
What is the difference between interpolation and simulation? And between cell-based and object-
based algorithms?
• Interpolation method = deterministic approach
• Simulation method = stochastic approach
Overview
Because petrophysical properties within facies are more homogeneous than within the reservoir as a
whole, these properties are usually modeled on a by-lithology basis. Even though non-net lithology
(shale, for example) may be assigned arbitrary low values, porosity and permeability within most
lithology must be simulated using geostatistical algorithms to reproduce the representative histogram,
variogram, and correlation with related secondary variables when available.
In this chapter, you will build realistic 3D models of porosity and permeability that you can use later in
making decisions and developing strategies regarding the reservoir.
When you use the blocked lithology data as a filter, only the porosity data values that correspond to
the facies blocked in each grid cell are considered for the average computation. This prevents you
from overestimating or underestimating the porosity value that is assigned to each grid cell and avoids
subsequent inconsistencies in the output models.
• Min/Mean/Max Blocked-3 values increase in Shale lithology. Property value in this region is
overestimated.
• Min/Mean/Max Blocked-3 values decrease in Sand lithology. Property value is underestimated.
• Min/Mean/Max Blocked-4 values are more relevant within each lithology category.
NOTE See the Online Help for a more detailed description of the other forms of kriging.
Top unit
Marine deltaic system in the top unit consisting of distributary channels with a main trend oriented
WSW/ENE. More specifically delta front deposits with little tidal influence have been identified in SW
and West parts of the area.
Bottom unit
Terrestrial system in the base unit consisting of braided streams with a SSW/NNE orientation.
NOTE Bivariate analysis is also required as permeability is usually correlated with porosity and models
of permeability must also account for any relationship with porosity.
Usually, the logarithm of permeability is used because of the approximately lognormal character of
many permeability histograms. Based on this advanced analysis, alternative scenarios of models can
be considered and compared in order to determine which modeling method would best fit your data
and context.
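A minimal sketch of working in log-permeability space (illustrative Python with hypothetical values):

```python
import numpy as np

# Permeability values typically span orders of magnitude and are
# roughly lognormal, so analysis and modeling are done on log10(k).
k_md = np.array([0.5, 2.0, 10.0, 50.0, 200.0, 800.0])  # millidarcies
log_k = np.log10(k_md)

# Statistics in log space are better behaved (closer to symmetric).
mean_log = log_k.mean()

# Back-transforming the log-space mean gives the geometric mean of
# the raw values, which is much smaller than the arithmetic mean
# dominated by the few high values.
k_back = 10.0 ** mean_log
```

This is why the advanced (bivariate) analysis described above is usually run on porosity versus log-permeability rather than on the raw permeability values.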
Create a new Data and Trend Analysis Workflow and define the new properties
1. Create a new Data and Trend Analysis Workflow named K_Core_Analysis.
2. Create new properties and select input data as indicated in the table above.
Cloud transform
The cloud transform allows the conversion of one property (porosity) to another property (permeability)
via a calibration scatterplot.
A scatterplot is a 2D crossplot between an independent variable X and a dependent variable Y. It is
presented as an ASCII file of data pairs with no headers: for a given X, there can be several Y values.
It provides the correlation between the input data and the dependent property.
The key element is that for a given input property value, there are several possible outcomes
(converted property values). The output property value is obtained through sampling a number of
possible values (instead of a one-to-one conversion).
This approach:
• Preserves the uncertainty in the relationship between the two types of data.
• Allows you to reproduce nonlinear relationships between two properties.
Figure Step #A: The number of bins depends on the dispersion of the data. Usually 10 or more bins
are differentiated.
Three options to bin the input property: number of bins, number of data points per bin, and discrete
independent variable (when input data is a discrete property).
Figure Step #B: For each bin, a CDF is constructed using the data pairs from the scatterplot.
Process
The permeability value at a location can be drawn by Monte-Carlo (p-field) simulation from the
conditional distribution of permeability given the porosity at that location. A series of conditional
distributions are constructed. In general, 10 or more conditional distributions are used. The histogram
of permeability and the full scatter between porosity and permeability is reproduced with this approach.
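The binning and sampling steps above can be sketched as follows. This is an illustrative Python sketch: the calibration data is synthetic, and the quantile-based bin edges and boundary handling are simplifying assumptions, not the exact workflow behavior:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical calibration scatterplot: (porosity, permeability) pairs
# with scatter around a lognormal trend.
phi_cal = rng.uniform(0.05, 0.30, 500)
k_cal = 10 ** (10 * phi_cal - 1) * rng.lognormal(0.0, 0.3, 500)

def cloud_transform(phi_in, phi_cal, k_cal, n_bins=10, rng=rng):
    """For each input porosity, find its calibration bin, then draw a
    permeability from that bin's conditional distribution (CDF) by
    sampling a random quantile, Monte-Carlo style."""
    edges = np.quantile(phi_cal, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, phi_in, side="right") - 1,
                   0, n_bins - 1)
    out = np.empty_like(phi_in)
    for i, b in enumerate(bins):
        in_bin = k_cal[(phi_cal >= edges[b]) & (phi_cal <= edges[b + 1])]
        p = rng.random()                 # random quantile (p-field value)
        out[i] = np.quantile(in_bin, p)  # sample the bin's conditional CDF
    return out

k_sim = cloud_transform(np.array([0.10, 0.20, 0.28]), phi_cal, k_cal)
# Higher porosity tends to draw higher permeability, but repeated
# calls give different values, so the scatter is preserved.
```

Because each call samples a different quantile, the full cloud of the calibration scatterplot is reproduced over many cells rather than collapsed to a one-to-one regression line.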
NOTES
• The permeability variability usually follows the porosity variability.
• The term kriging is traditionally reserved for linear regression using data with the same attribute as
the one being estimated. For example, an unsampled porosity value is estimated from neighboring
porosity sample values defined on the same volume support.
• The term cokriging is reserved for linear regression that also uses data defined on different
attributes. For example, the porosity value may be estimated from a combination of porosity
samples and related acoustic impedance values.
NOTE A new variogram should be computed with the residuals data. However, for training purposes
existing permeability variograms are used here.
At this stage, you have created a new permeability property in the grid using the trend applied to the
porosity property POROSITY 1. The property is defined in the regions Sand and ShalySand. In the region
Shale, there is only the No Data Value (NDV) for the moment. In the following task, you set the NDV to
0.001 and add the residuals to the results.
Combine permeability trend and permeability residuals into a new permeability model
1. In the Objects browser, under DTA_PM_Field, right-click properties and select Apply script.
2. In the Properties Script Editor:
a. Make sure DTA_PM_Field is selected in the Objects box and that the Check no-data values
automatically check box is selected.
b. Type in the script as indicated in the figure above.
c. Click Define Variables and specify the property settings of Permeability (category and type)
as appropriate.
d. Check the script and click OK to apply it.
This script enables you to add the residuals to the results, set the values of Permeability to 0.001 in the
region Shale (where the No Data Value (NDV) has been defined), and clip the values to zero.
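The actual script syntax appears in the course figure; the logic it implements can be sketched in Python as follows (hypothetical per-cell arrays standing in for grid properties, not the Properties Script Editor syntax):

```python
import numpy as np

# Hypothetical per-cell values; NaN marks the NDV cells (Shale region).
perm_trend = np.array([120.0, 85.0, np.nan, -3.0, 40.0])
perm_residuals = np.array([10.0, -90.0, 5.0, 1.0, -50.0])

# 1. Add the residuals to the trend.
perm = perm_trend + perm_residuals

# 2. In the Shale region, where only the NDV is defined, assign 0.001.
perm[np.isnan(perm_trend)] = 0.001

# 3. Clip any negative combined values to zero.
perm = np.clip(perm, 0.0, None)
```

The clipping step matters because adding residuals to a low trend value can produce physically meaningless negative permeabilities.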
Overview
In this chapter, you will learn how to compute reservoir volumes based on the simulated properties of
interest, your understanding of geology, and your knowledge of the reservoir.
You will also learn about the post-processing functionality so you can compute parameters of
significant interest for decision-making purposes.
You can easily run a calculation, update parameters, and then re-compute at any time to see how the
volumes have been updated. All the requested volumes are computed on the fly for each of the
specified regions and in the desired units.
NOTE In this course you keep the default settings on the Reporting tab.
Nesting simulations
A nesting column is available for use only if the user has a Reservoir Risk Assessment license.
However, the Reservoir Risk Assessment module doesn’t need to be loaded.
In cases where two properties are to be simulated:
• If the Reservoir Risk Assessment module is available, then the two properties can be nested,
meaning that they are simulated one after the other in a single run. Each simulation uses a
different random path and yields different results.
• If the Reservoir Risk Assessment module is not available, you cannot launch nested simulations,
and you must perform each property simulation separately.
Post-processing
• Summary statistics are allowed for simulated properties or any scalar properties. Post-
processing operations are not allowed on vectorial properties.
• When you perform the computation, the result is stored in a new grid property with the
specified name. Default names can be changed as necessary.
Compute connectivity
3. In the Connectivity computation panel, select POROSITY1_between0_2And0_3 in the Region
for connectivity computation box.
4. In the Geobody rank property box, leave the default name selected.
5. Leave the Save geobody volume check box cleared.
6. In the Connectivity type area, leave Faces selected.
7. Click Process to perform the calculations.
Next you visualize the connectivity of the cells within the selected region.