
SPE-171202-MS

Algorithms of Automatic Core-Log Depth-Shifting in Problems of


Petrophysical Model Construction
O. Nadezhdin, E. Zairullina, D. Efimov, and V. Savichev, “BashNIPIneft” LLC

Copyright 2014, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Russian Oil and Gas Exploration and Production Technical Conference and Exhibition held in Moscow, Russia,
14 –16 October 2014.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents
of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect
any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written
consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may
not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
The problem of core-log depth matching/shifting is an integral part of any petrophysical interpretation and
modeling. A correct and robust solution to this task predetermines the quality of the resultant petrophysical model; however, this solution is not straightforward. In this paper the authors present specific
algorithms aimed at automated processing, depth shifting and calibration of downhole wireline logs and
petrophysical properties measured in core analysis.

Introduction
The paper reviews two different approaches to core-log depth shifting, i.e. per interval and point-to-point.
Both approaches are based on the same assumption of a functional connection between physical properties of core samples and amplitude values of geophysical signals. The principal difference lies not only in the elements which are tied for calibration but also in the hierarchy of calibration parameters.
Common practice is to calibrate core and log data per interval and only in the case of high recovery
from that interval (over 70%) [1]. At the same time the bulk of the dataset (up to 90% of core analysis
results) as a rule comes from intervals with low recovery which makes it difficult to use such data for
calibration.
Point-to-point calibration is more complex and resource-intensive than per-interval matching. However, it allows matching not only cored intervals but also individual samples, thus significantly increasing the number of possible solutions and consequently improving the quality of the rock physics model itself.
Point-to-point calibration can also resolve the issue of core depth shifting based on the results from routine
core analysis for the intervals with low recovery. The input data are sourced from wireline logging, core
analysis and drilling reports (driller’s depth, tops and bottoms of cored intervals, recovery, porosity and
permeability measurements, etc.). The paper presents the findings obtained from the application of these
calibration procedures to different datasets.

Background and previous studies


Core-log depth shifting becomes problematic in a number of cases such as fragmentation of core
(incomplete recovery), lengthening of wireline or logging cables, loss of information on cored depths, etc.

The three main techniques for core-log calibration are summarized below [1]:
● matching a certain physical attribute measured on core and in a well, i.e. the same log run on core and in a borehole. As a rule the physical attribute for matching in such a case is selected as the most differentiated in a given interval, such as dt in sonic logging or natural radioactivity of rocks in GR logging, etc.;
● matching log curves to core description (on a macro scale);
● matching log interpretation to core analysis results (φ, k, Swr, etc.).

The first technique is suitable for cases when logs were run on whole core with high recovery. In this
case logs on core and logs in a borehole are consistent and can be compared by the shape of the curves,
thus matching is feasible.
If the recovery is low, the curve recorded on core is not continuous but consists of a large number of separate individual fragments. In this case matching logs from core and logs from a borehole is challenging. For example, such a situation may occur when core was taken from a brittle shaly interval and the rock broke into pieces. Then gamma ray logging (GR) in a borehole would show high radioactivity in this shale interval, whereas logging the destroyed and fragmented core would be difficult, and even if recording were successful the logs would not display any GR anomaly.
The first technique can also be applied if logs were run not on whole core but on samples (plugs), such
as density log, acoustics (P-waves and S-waves) and the like, but even in this case the requirement on high
recovery should be observed.
The second and third techniques listed above are applicable for large scale core analysis, i.e. numerous
core samples from intervals with high recovery (above 70%) are analyzed for porosity, permeability and
residual water saturation and representative values are obtained. Representative sampling entails values of
petrophysical properties for all the possible lithological varieties present in a target sequence (e.g. values
for shales, siltstones, tight and permeable sandstones, tight and permeable limestones, etc).
In study [1] the following workflow is recommended for core-log matching:
1. cored intervals and their description in conventional symbols are added to a composite log or a well log crossplot: if the recovery is less than 100%, the core is initially tied to the top of the corresponding cored interval;
2. all the marker beds identified from well logs and core description are tied, the stratified section is
divided into layers from log data and lithology description;
3. calibration of core description on a macro scale against well logs controls the high-level depth shift
of the core;
4. for the intervals with less than 100% recovery the high-level shift is corrected from logs and core
description in a given interval of coring;
5. core analysis charts for porosity, permeability, residual water and other petrophysical values are
plotted against the depths taken from logs; these parameters are used to fine-tune the calibration;
6. all the tied samples are listed for each of the identified layers or beds with core data; the average
values of reservoir properties are calculated.
Core-log depth shifting has been investigated by different authors [1,2,3]. At the same time it should
be noted that in many popular commercially available software packages for processing core data and well
logs together (e.g. [4-8]) there are no options for automated calibration and depth shifting; however, there is an option for manual correction.
There are in-house corporate algorithms developed by various upstream research centers, service
companies and operators aimed at facilitating core-log calibration. For example, [9] presents results from
cross-correlation and depth shifting of core and downhole logs from magnetic susceptibility measured on
whole core and in a borehole. The core log was stretch and/or compressed and depth shifted for the best

match to the downhole log. The review of the findings in [9] leads to the conclusion that the initial task
was reduced to two-parameter fitting: finding a parameter to control stretching/compression of core logs
vs downhole logs and finding a parameter to control depth shifting to achieve the highest possible
correlation coefficient. Therefore it was assumed that within a target interval in a given borehole the
stretching/compression index is a fixed number. However it should be noted that this index (the value of
which is generally close to one) can be assumed fixed only for matching logs to cored intervals with high
recovery. If the recovery is low, the depth shifting of core is faulty and unreliable, which may lead to a poor outcome from automated calibration. The algorithm developed in [9] is intended for sequential core-to-log calibration from one well to another. The use of the correlation coefficient as a calibration criterion may result in erroneous placement of a cored interval and a mistie, i.e. a log on core and a downhole log may have the same shape but differ in the values of the measured parameters.
In practice, correct stratification of an analyzed section (a very important step in depth matching) is often possible only on a suite of logs, not on a single log. This holds true for carbonate
as well as for sandstone sequences which may have different mineralogy, texture and structure (such as
compound polymineral rocks, heterogeneity like vugs or molds). It is obvious that the use of one single
log for matching can lead to incorrect depth shifts. Then a specialist will be required for manual adjustment of core depths. This issue results in significant time and effort spent on finding a solution for such calibration tasks, thus making computerized algorithms inefficient or impractical. The
authors believe that the approach presented in this paper resolves this problem.
As for approaches to core-log calibration, an essential advantage of the first approach as compared to
the second and the third one is that the parameter for calibration is one petrophysical property directly
measured on core and in a borehole (e.g. radioactivity from GR and core gamma, magnetic susceptibility
from core and downhole measurements, etc.).
At the same time the second and third approaches employ additional models for depth calibration of
core data and wireline logs. Therefore, for automatic processing, the second technique requires an established correlation between logs and core description on a macro scale, so that logs can be linked to a core description derived from logs (a synthetic core description) and, vice versa, the core description can be linked to logs synthesized from this macro-scale description. For instance, shales are characterized by high natural (gamma) radioactivity, longer P-wave travel time, and high hydrogen content. Sandstones are characterized by rather low natural radioactivity, etc.
Similarly in the third approach correlation is based on the calibration of core laboratory measurements
and rock properties synthesized from wireline logs. Hence it implies the same accompanying intermediate
task which involves finding the parameters for a core-log rock physics model to establish correlation
between petrophysical properties of a rock and downhole measurements (logging). In particular cases this petrophysical interpretation can be defined a priori by an expert, and then the problem of the combined search for parameters of a petrophysical model and optimum core-log depth shifts is reduced to the single task of finding only the optimum shifts (without any calibration or fine-tuning of the petrophysical
interpretation). Nevertheless in general core-log depth shifting from petrophysical properties includes a
simultaneous solution of two problems: adjusting petrophysical interpretation and matching optimum
shifts for core samples to get the best fit of the petrophysical model to depth shifted reservoir properties
from core analysis.

Automatic interval core-log depth shifting and matching


The authors developed algorithms for automatic per interval core-log depth shifting and matching [10].
The calibration was achieved by depth shifting of cored intervals, then establishing the correlation
between petrophysical properties and wireline log data and comparing the values of the measured rock
properties from logs to laboratory data from core analysis.

The solution to the interval core-log depth calibration problem is suitable and operative only provided there is a representative sampling of petrophysical properties measured on core with high recovery (more than 70%). In this case samples within a cored interval are not shifted individually; the depth shifting is applied only to the cored intervals themselves. In fact, the number of optimization parameters is controlled by the number of coring jobs or cored intervals (as a rule, about 1–40 interval shifts, i.e. parameters) for a cluster of wells under study and by the number of calibration parameters used to adjust a core-log rock physics model (as a rule, about 2–12 parameters depending on the type of logs available, the mineral composition of the analyzed rock, etc.).
The problem of searching for cored interval shifts and parameters of core-log rock physics model is
mathematically reduced to an optimization task with constraints. The global optimization algorithms which may be applied include evolutionary and genetic algorithms. Simpler search algorithms may be used, but they generally provide only a local optimum (e.g., gradient methods) or involve a lot of time and effort (e.g., stochastic methods).
The algorithms of core-log coordination were developed and implemented as an operational tool (MATLAB-based). The input data uploaded include downhole log data as mat/LAS files and core analysis as xls files, along with information on preset parameter values and constraints.
The file with laboratory data from core analysis should have the following format and headings.
● well name;
● top of cored interval (m);
● bottom of cored interval (m);
● recovery (%);
● sample/plug depth (m);
● petrophysical property 1 (PPP1);
● ...
● petrophysical property n (PPPn).
For interval calibration the samples should be from cored sections with high recovery and high point density; otherwise there is a substantial risk of significant inaccuracy in depth determination.
When the reading, structuring and data intersection procedures are complete for the core data and log data, the next step is to match the depth shift of core to logged depths. The range of rock physical properties to be correlated with logs is set by the user.
For interval core-log depth matching a depth shift is sought for each of the intervals in the analysis (i.e. for all the sampling points in one interval the depth shift will be the same). Then the corresponding points in the log data domain are interpolated to the shifted points of the core data domain, and the correlation is established between rock physics and log measurements. The obtained crossplots or fitting parameters are used to synthesize the values of petrophysical properties.
The core shift vector Δopt should provide the minimum mean square deviation between synthetic values of petrophysical properties and the actual values measured on core:

Δopt = argmin_Δ Σ_{j=1..m} wj Σ_{i=1..n} (ci − ĉi(di + Δk))²   (1)

where ci – value of point i sampled from core for PPPj;
ĉi – synthetic value of point i sampled from core for PPPj;
n – number of points sampled from core with values for PPPj;
wj – weight of PPPj in the objective function;
m – number of PPP for calibration/matching;
Δk – shift for interval k of core;

di – depth of point i sampled from core for PPPj.
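For illustration, a minimal Python sketch of this objective is given below; the container names (core, log_depth, synthetic_ppp, weights) are assumptions for the example rather than the authors' MATLAB implementation.

```python
import numpy as np

def interval_objective(shifts, core, log_depth, synthetic_ppp, weights):
    """shifts        : depth shift Delta_k for every cored interval (1-D array)
       core          : one dict per PPP_j with arrays 'depth' (d_i), 'value' (c_i)
                       and 'interval' (index k of the cored interval of each sample)
       log_depth     : 1-D array of log sample depths (increasing)
       synthetic_ppp : synthetic PPP_j on the log depth grid, one array per PPP_j
       weights       : weight w_j of each PPP in the objective"""
    total = 0.0
    for j, ppp in enumerate(core):
        shifted = ppp['depth'] + shifts[ppp['interval']]             # d_i + Delta_k
        synthetic = np.interp(shifted, log_depth, synthetic_ppp[j])  # synthetic value at shifted depth
        total += weights[j] * np.sum((ppp['value'] - synthetic) ** 2)
    return total
```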


ĉi = f(gi, aj)   (2)

where f – dependence function of PPP ci on the log suite gi;
aj – the corresponding parameters of the dependence.
In the case of a linear dependence the parameters are derived with the least-squares method (LSM). The task solved with this technique is the selection of a parameter vector that minimizes the sum of squared errors: the parameter search is run by minimizing the discrepancy between the actual and calculated synthetic values.
(3)

(4)
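A hedged sketch of this least-squares step, assuming a linear dependence with an intercept term; the function and variable names are illustrative.

```python
import numpy as np

def fit_linear_dependence(logs_at_samples, core_values):
    """logs_at_samples : (n_samples, n_logs) log readings interpolated to core sample depths (g_i)
       core_values     : (n_samples,) PPP values measured on core (c_i)"""
    design = np.column_stack([np.ones(len(core_values)), logs_at_samples])  # intercept + log terms
    coeffs, *_ = np.linalg.lstsq(design, core_values, rcond=None)           # minimizes the sum of squared errors
    synthetic = design @ coeffs                                             # synthetic PPP values
    return coeffs, synthetic
```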

Constraints can be applied to the desired dependence parameters. For example, it is very common that
high gamma anomalies correspond to low neutron porosity and high resistivity, whereas an increase in residual water saturation results in lower gamma and lower resistivity.
The general form of the inequation for defining constraints is as follows:

A·x ≤ b   (5)

where A – a matrix of constraints on the increase/decrease trend in logs for an increase of a given PPP (in matrix A the elements aii, where i is the row and column number corresponding to the selected log, equal -1 if the trend is downward and 1 if the trend is upward; all other elements equal zero);
b – column matrix of constraints on the corresponding log coefficients;
x – the required parameter vector.
If there is a dependence of porosity (φ) on GR and a neutron log (NGR) and the trend is known and established (e.g. higher porosity – lower gamma), the constraints will be set as follows:
(6)

(7)

(8)

(9)

The general form of the equation for constraints is given below:

Aeq·x = beq   (10)

where Aeq – a matrix of constraints for log values;


beq – a matrix of constraints for PPP values;
x – the required parameter vector.
Then for dependence (9) constraints will be expressed as follows:
(11)

(12)

(13)

where {GR1, NGR1, φ1} – characteristic parameters for lithology type 1 (e.g., true shale);
{GR2, NGR2, φ2} – characteristic parameters for lithology type 2 (e.g., tight limestone).
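A sketch of how such constraints could be applied in practice, assuming a linear porosity model with coefficients [a0, aGR, aNGR], the non-positivity (trend) constraints and the two characteristic lithology points quoted in the test example later in the paper; the SciPy-based solver is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

def fit_constrained_porosity(gr, ngr, phi_core):
    """gr, ngr  : normalized GR and NGR values at core sample depths
       phi_core : porosity (fraction) measured on core"""
    X = np.column_stack([np.ones_like(gr), gr, ngr])   # parameter vector x = [a0, aGR, aNGR]
    sse = lambda x: np.sum((X @ x - phi_core) ** 2)

    # Inequation (5): porosity must not grow with GR or NGR, i.e. aGR <= 0 and aNGR <= 0
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    trend = LinearConstraint(A, -np.inf, 0.0)

    # Equation (10): anchor the two characteristic lithology points
    Aeq = np.array([[1.0, 1.0, 0.1],    # shale:           GR = 1, NGR = 0.1 -> porosity 0.10
                    [1.0, 0.0, 0.9]])   # tight limestone: GR = 0, NGR = 0.9 -> porosity 0.03
    anchors = LinearConstraint(Aeq, [0.10, 0.03], [0.10, 0.03])

    res = minimize(sse, x0=np.zeros(3), method="trust-constr", constraints=[trend, anchors])
    return res.x
```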
Genetic algorithms of optimization are used to search for the vector of shifts. Constraints on the shift
magnitude are applied.
The algorithm of interval core-log depth calibration and search for optimum parameters of core-log
dependence includes the following steps:
1. Data on coring jobs are collected (cored depths, recovery).
2. Core data are filtered by recovery and sampling density. Only data with high recovery and high density (e.g. recovery of 70% and above and a density of at least 5 points per meter of recovered core) are taken for further analysis. The recommended filter values can be changed, e.g. the recovery filter may be set below 70%; however, this may lead to more time and effort required to find a solution and to poorer quality of the end result.
3. Wireline logs for the target group of wells are collected with the initial assumption on a single
core-log dependence, i.e. the same petrophysical interpretation.
4. For each cored interval a search range for depth shifts is defined.
5. The optimization algorithm for the shift parameters is then selected. The recommended optimization algorithm is genetic, which can provide a global optimum within reasonable time and computing resources.
6. Each iteration generates an ensemble of probable shifts for cored intervals.
7. For each probable shifting variant points sampled from wireline logs are interpolated to the shifted
core.
8. For each probable shifting variant core-log dependence parameters are identified. The numerical model can be linear as well as nonlinear (e.g., polynomial or neural network). The identification is reduced to quality criterion optimization (correlation coefficient, sum of squared errors, etc.).
9. Quality function for each of the variants from the ensemble is calculated. The sum squared error
is recommended.
10. The stopping criterion of the optimum shift search is checked. The stop may be conditioned by a set number of iterations or by the value of the quality function.
11. The output is the optimum depth shifts for cored intervals and the corresponding optimum core-log dependence parameters. When such a core-log dependence is established, petrophysical rock properties synthesized from wireline logs may be used for the intervals with no core data (a simplified sketch of this search loop is given below).
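A compact sketch of steps 4-11 above for a single PPP (porosity), with a plain random search standing in for the genetic algorithm; the data containers and the shift range are illustrative assumptions.

```python
import numpy as np

def calibrate_intervals(core, log_depth, logs, shift_range=2.0, n_iter=2000, seed=0):
    """core      : dict with arrays 'depth', 'phi' (measured porosity) and 'interval'
                   (index of the cored interval each sample belongs to)
       log_depth : log depth grid; logs : (n_depth, n_logs) log readings"""
    rng = np.random.default_rng(seed)
    n_intervals = int(core['interval'].max()) + 1
    best = (np.inf, None, None)
    for _ in range(n_iter):                                   # ensemble of candidate shift vectors
        shifts = rng.uniform(-shift_range, shift_range, n_intervals)
        d = core['depth'] + shifts[core['interval']]          # shifted sample depths
        g = np.column_stack([np.interp(d, log_depth, logs[:, j]) for j in range(logs.shape[1])])
        X = np.column_stack([np.ones(len(d)), g])
        a, *_ = np.linalg.lstsq(X, core['phi'], rcond=None)   # core-log dependence for this variant
        sse = np.sum((X @ a - core['phi']) ** 2)              # quality function (sum of squared errors)
        if sse < best[0]:
            best = (sse, shifts, a)
    return best                                               # (misfit, interval shifts, model parameters)
```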
As noted earlier, interval calibration algorithms are operational and efficient only subject to the availability of a representative sampling of PPP from core with high recovery (the depth from where the plug was drilled should be scribed on the sample). If the samples are not representative, then to reduce the uncertainty it is desirable to use the maximum of a priori information available for an adequate petrophysical interpretation and model (e.g., to input information on geophysical characteristics of given lithology types as constraints in the form of inequation (5) or equation (10)).
A considerable volume of the core dataset is represented by PPP of samples from low recovery intervals. Such data can exceed the data from high recovery intervals several times over (in practice the ratio could be as high as 10:1). When samples from low recovery intervals are calibrated, the depth shift is required

not only for the cored intervals but also for the samples themselves within a given interval. Therefore the
number of calibration parameters for low recovery cases is comparable to the number of samples, which results in an optimization task for a function with a huge number of parameters (e.g. if there are 500 samples/plugs from low recovery intervals the optimization function will have more than 500 parameters).
A function optimization problem with such a number of parameters is very demanding in terms of computational speed and power (the dependence on the number of parameters can be estimated as quadratic). Therefore the algorithms discussed in [10] for interval calibration are ineffective for point-to-point coordination of core. One of the solutions to this task is an approach based on decomposition of the optimization problem into simpler and conditionally independent optimization problems.
Automatic point-to-point core-log depth shifting and matching
Development of point-to-point matching algorithms was initially targeted at improving the quality of core-log coordination and of the petrophysical core-log interpretation and model by expanding the volume of the core dataset employed for analysis (including data on core samples from intervals with low recovery) and by enhancing the quality of core-log matching.
As mentioned above the bulk of the dataset on PPP from laboratory core analysis comes from intervals
with low recovery. On the one hand, uncertainties on depths for samples from low recovery intervals are several times higher than for samples from high recovery intervals. In the latter case the depth shift is the same for each such interval, whereas in the former case the shifts are determined individually for each of those
samples. At the same time for core-log petrophysical modeling the petrophysical properties of samples
from low recovery intervals help to expand the use and maximize the volume of the interpreted dataset
from laboratory core analysis.
Point-to-point algorithms differ from interval calibration algorithms in the workflow sequence or order
of operations aimed at decomposition of core-log matching problem. In algorithm [10] developed for
interval calibration the high level task is the search for the shift vector to apply to cored intervals to obtain
at the low level a petrophysical model with the best quality characteristics. As stated earlier, if the number of cored intervals is too big (e.g. 50–100), the run time of the high level task becomes impractical, let alone the task of depth shifting for every individual sample.
However it is possible to change the order of operations. Then at the high level the task will be to search
and calibrate the parameters of a petrophysical model (e.g., parameters of formulas in the petrophysical
interpretation). At the low level the task will include independent matching of core PPP to synthetic
log-derived PPP for each well. For instance if there are n wells with the same number k of core samples
with analyzed PPP, then in the first case one optimization problem will be solved for a given function with
n*k variables. In the second case the optimization problem will be solved for n independent functions with
k variables in each of them. It is obvious that in the second case the problem of optimization is much
simpler. In principle, the interval calibration algorithm may also be used for point-to-point coordination; however, it is clear that it would require significantly more computing power, speed and time.
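As a sketch of this decomposition, the independent per-well matching problems can simply be mapped over worker processes; match_well is a placeholder for the low-level per-well alignment routine (for example, the constrained alignment sketched further below).

```python
from concurrent.futures import ProcessPoolExecutor

def match_all_wells(wells, model_params, match_well):
    """wells        : list of per-well data containers
       model_params : current petrophysical model parameters
       match_well   : function(well, model_params) -> (misfit, sample_shifts)"""
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(match_well, wells, [model_params] * len(wells)))
    return results  # per-well misfits and shifts, aggregated afterwards at the high level
```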
The proposed algorithm of point-to-point core-log coordination uses the idea behind the dynamic time warping (DTW) method [11], but subject to constraints on sample depth shifting intervals. The direct application of the DTW method is impossible; corrections should be introduced for the specific features of the problem.
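A minimal sketch of the underlying idea, i.e. dynamic time warping with a Sakoe-Chiba-style band [11] that limits how far a sample may be moved from its original position; the actual algorithm adds the problem-specific corrections described above, so this is only an illustration.

```python
import numpy as np

def constrained_dtw_cost(core_vals, log_vals, window):
    """Accumulated alignment cost between core PPP values and synthetic log PPP
       values sampled on comparable depth grids; `window` limits how far (in
       samples) a core point may be moved from its original position."""
    n, m = len(core_vals), len(log_vals)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):   # band constraint
            cost = (core_vals[i - 1] - log_vals[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]   # backtracking through D would recover the per-sample shifts
```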
Structurally the coordination problem includes two subtasks of high level and low level optimization.
To calibrate PPP measured on core and by logs, synthetic PPP from logs are required first. The search for parameters of the core-log interpretation to derive synthetic PPP is a separate high-level optimization problem. Parameters are searched with stochastic, gradient or combined methods (evolutionary strategies and genetic algorithms). The low level optimization problem is to match core PPP to synthetic PPP from logs within the framework of one particular core-log interpretation. Low level matching (as well as high level matching) is easily transformed into a number of parallel tasks; the computing speed accelerates proportionately to the number of wells used for core-log matching.
Thus, at the low level PPP data from core and PPP data from logs are aligned by depth for each model from an ensemble of probability models. At the high level the characteristics of each model are assessed and a new ensemble or set of models (and synthetic PPP) is generated to improve the quality of the outcome. For a new set of models, at the low level the synthetic PPP and actual PPP from core are aligned by depth. Then at the high level the characteristics of each model are assessed again. If one of the models meets the quality criteria, the search stops. If these quality criteria are not satisfied, another cycle of core-log model generation is initiated.
However, it should be noted that the problem of core-log coordination is generally multidimensional, and in certain cases other parameters apart from porosity may, or even should, be considered, such as permeability, mineral composition, etc.
Figure 1—Point-to-point core-log depth coordination of samples from low recovery intervals

Similar to interval calibration, constraints may be imposed on the required model in the form of inequation (5) or equation (10). These constraints play an essential role when core data sampling is not
representative. This can be the case when sampling includes few (if any) lithological types with frequent
distribution in the target section. For instance core analysis is performed mostly for reservoir rocks.
Therefore the PPP dataset includes little information on PPP of non-reservoir rocks (shales, tight or
impermeable sandstones/limestones/dolomites, etc.). At the same time, PPP and logs for non-reservoir
units constitute essential information as the distribution of synthetic PPP can be required for non-reservoir
intervals with little data from them. Thus it is recommended to introduce information a priori known as
constraints in the form of equations/inequations. For example it is possible to set that for lithological type
1 (shale), characterized by the following values from downhole logs: aGR = 1, aNGR = 0.1, porosity equals 10%, and for lithological type 2 (tight limestone), characterized by aGR = 0 and aNGR = 0.9, porosity equals 3%.
Possible shifting intervals for points are defined automatically from recovery values, i.e. the lower the recovery for a given cored interval, the bigger the range of shifting variants within this cored interval. An incompressibility constraint on the spacing between the samples is also imposed, i.e. no sample may be shifted closer to the sample above it than their initial distance. In the implementation, the values of the shifting intervals are exported as an xls file, allowing the user to manually correct the shifting intervals as well as to adjust the option for reducing the initial distance between samples/points.
The user can group or cluster samples if it is known that a certain number of samples or plugs were taken
from a whole solid piece of rock and the distance between those sampling points is also known.
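A sketch of how per-sample shift windows might be derived from recovery and how the incompressibility constraint can be enforced; the field names and the linear widening rule are illustrative assumptions rather than the published procedure.

```python
import numpy as np

def shift_windows(sample_depth, interval_top, interval_bottom, recovery):
    """Lower recovery -> wider admissible shift window, clipped to the cored interval.
       `recovery` is a fraction (0-1); the linear widening rule is an assumption."""
    gap = (interval_bottom - interval_top) * (1.0 - recovery)   # unrecovered length, m
    lower = np.maximum(interval_top, sample_depth - gap)
    upper = np.minimum(interval_bottom, sample_depth + gap)
    return lower, upper

def enforce_incompressibility(shifted_depth, original_depth):
    """Push samples down so that no pair ends up closer than its original spacing."""
    out = np.array(shifted_depth, dtype=float)
    for i in range(1, len(out)):
        min_gap = original_depth[i] - original_depth[i - 1]
        out[i] = max(out[i], out[i - 1] + min_gap)
    return out
```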
The genetic algorithm is employed to generate values of parameters for the required model. For each set of parameter values point shifts are searched for by minimizing the objective function (the sum of squared deviations of the analytically computed PPP values from the PPP values obtained by direct laboratory measurements on core). The resulting model is accepted as the one with the minimum value of the objective function.
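A schematic of this two-level search, with random sampling of model parameters standing in for the genetic algorithm; sample_params and align_well are hypothetical placeholders for the high-level model generator and the low-level per-well alignment.

```python
import numpy as np

def point_to_point_calibration(wells, sample_params, align_well, n_models=200, seed=0):
    """wells         : list of per-well data containers
       sample_params : function(rng) -> candidate petrophysical model parameters
       align_well    : function(well, params) -> (misfit, sample_shifts)"""
    rng = np.random.default_rng(seed)
    best = (np.inf, None)
    for _ in range(n_models):
        params = sample_params(rng)                              # high level: propose a model
        misfit = sum(align_well(w, params)[0] for w in wells)    # low level: per-well alignment
        if misfit < best[0]:
            best = (misfit, params)
    return best
```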

It should be noted that in stratified thin-bedded sections synthetic PPP may significantly diverge from the actual PPP measured on core if no deconvolution was applied to the initial data with corrections for the geometrical factor and noise during measurements and recording. Therefore core samples from thin-bedded formations are not recommended for core-log calibration due to the significant influence of logging tool geometry on the quality of the recorded downhole data. For successful core-log depth shifting and matching in highly stratified sections (0.5–1 m thick beds) deconvolution should be performed on log data to correct for the vertical geometry of the acquisition tool and background noise in recording channels.
Testing point-to-point core-log depth shifting and matching algorithm on a real dataset
The described algorithm for point-to-point coordination was tested on an actual dataset from wells on the fields located in the Republic of Bashkortostan and operated by JSOC Bashneft.
The data sampling for this test application included samples from intervals with recovery not exceeding 70%. The total number of samples for the test was 2829. The volume of data on PPP measured on core from high recovery intervals was about 10% of all core data. Data on samples from high recovery intervals (over 70%) were used to control the quality of the resulting core-log model.

Figure 2—Distribution of the core-log model based on wells with low recovery on wells with high recovery (further core-log coordination was performed without any adjustments in the petrophysical model)
Parameters of the linear dependence model for porosity in relation to GR and NGR (9) were identified.
The values of those two logs were preprocessed by double differential normalization with adjusted limits.
The search was run with constraints on nonpositivity of coefficient values set for NGR and GR subject
to the physical nature of the solution. For every target geological feature in the section, type lithological varieties were established from logs (in this case shales and sandstones were under study); for each of these types constraints were applied to porosity values in the form of inequations.
Results of the test point-to-point core-log calibration for low recovery intervals are given in Figure 1.
The resulting core-log interpretation and model was applied through logs to samples from high recovery intervals. The outcome of this operation of extending the PPP to wells with samples from high recovery
intervals is illustrated in Figure 2.
In the given test example only one parameter was used for calibration, namely porosity. However
additional parameters may be used for this matching procedure such as shaliness (Vclay) and volume
content of sandstone (Vsand). These characteristics can be compared against the macroscale core description and used for fine-tuning of a petrophysical model and core-log depth shifting and matching as an additional element in the objective criterion function.

Conclusion
The use of samples from thin-bedded intervals is not recommended for core-log coordination due to the
significant influence of acquisition geometry (the geometrical factor of the logging tools) on the quality
of the data.
An automated algorithm of point-to-point core-log depth shifting and calibration was developed and implemented. The algorithm finds not only optimum shifts of core samples but also optimum dependence parameters for PPP versus downhole log data. One of its advantages is the simplicity and ease of running parallel computations. Thus the use of this algorithm provides automated core-log depth shifting and calibration for big fields, employing the maximum available data on PPP within a short and reasonable time.
One of the features of this algorithm is that it employs significantly more information due to the inclusion of PPP measured on core from low recovery intervals to control and fine-tune the petrophysical model (the volume of useful information for processing is several times higher).
The developed algorithm can be applied for depth shifting of PPP measured on core to synthetic PPP
derived from logs and for depth shifting of downhole logs to logs on core.
Based on the proposed algorithm recommendations were formulated on additional core data required
for the dataset to reduce the uncertainty on core properties and parameters.

BIBLIOGRAPHY
1. Methodical recommendations on calculation of oil and gas reserves originally in place by
volumetric method. Edited by Petersilye N. I., Poroskun V.I., Yatsenko G.G. Moscow, Tver:
VNIGNI, NPTs “Tvergeofizika”, 2003. 261 pages.
2. Fontana E., Iturrino G.J., Tartarotti P. Depth-shifting and orientation of core data using a core–log integration approach: A case study from ODP–IODP Hole 1256D // Tectonophysics, Volume 494, Issues 1–2, 29 October 2010, pp. 85–100.
3. Agrinier P., Agrinier B. On the knowledge of the depth of a rock sample from a drilled core //
Comptes rendus de l’Academie des sciences. Serie 2, Mecanique, physique, chimie, sciences de
l’univers, sciences de la terre, 318(12), 1994, pp. 1615–1622.
4. URL: http://geoglobesoft.ru.
5. URL: http://prime.geotec.ru.
6. URL: http://www.amplituda.ru/ru/petrophysics/Progrobesp/saphir.html
7. URL: http://pangea.ru/ru/index.php?option=com_content&task=view&id=20&Itemid=237
8. URL: http://sis.slb.ru/sis/techlog
9. http://www.ldeo.columbia.edu/BRG/ODP/ODP/CLIP/clip.html
10. Nadezhdin O.V., Akbasheva A.I., Savichev V.I., Emchenko O.V. Algoritmy sinteza petro-
fizicheskih modelei v usloviyah estestvennoi neopredelennosti dannih izuchenia kerna i
geofizicheskih issledovaniy skvazhin [Algorithms for synthesis of petrophysical models under
natural variability of core analysis and well log data] // Neftyanoe hozyastvo, 2012, No. 4, pp. 29–31.
11. Sakoe H., Chiba S. Dynamic programming algorithm optimization for spoken word recognition
// IEEE Transactions on Acoustics, Speech and Signal Processing, 26(1), pp. 43–49, 1978
