g(x) = (c/x) exp[−(ln x)²/(2σ²)]. . . . . . . . . . . . . . . . . . . . . . . . (14)

The parameter σ is the dispersion parameter in both of those equations. Tarantola argues that, for consistency, as σ → ∞, the probability densities should approach the associated homogeneous probability density for that parameter. We see that

lim_{σ→∞} f(x) = c . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (15)

and

lim_{σ→∞} g(x) = c/x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (16)

metries of the problem. In fact, only rarely will these correspond to a Cartesian coordinate system, where the distance is the Euclidean distance:

‖x⃗₁ − x⃗₂‖² = (x₁,₁ − x₂,₁)² + ⋯ + (x₁,N − x₂,N)² . . . . . . . . . . . . . (19)

In fact, the homogeneous probability distribution that corresponds to a Cartesian space (i.e., uniform distribution) is a very special distribution. One may, of course, assume a uniform prior probability distribution when there is no other information. This assumption is only as good as any other assumption. In well testing, typical parameters we work with are permeability, wellbore-storage coefficient, skin, dual-porosity parameters, transmissivity and storativity ratios, and initial pressure. This list, of course, can
Permeability, k
Wellbore storage, C
Skin factor, s
Storativity ratio, ω
Transmissivity ratio, λ
Distance to boundary, re
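The limiting behavior in Eqs. 15 and 16 can be checked numerically. A minimal sketch, assuming f is a Gaussian kernel (defined before Eq. 14, not shown in this excerpt) and g is the log-normal kernel of Eq. 14; the constant c and the center x0 of f are arbitrary choices here:

```python
import math

# Assumed kernels: f for a Cartesian parameter, g (Eq. 14) for a Jeffreys
# parameter; c is an arbitrary constant, x0 an arbitrary center for f.
def f(x, sigma, c=1.0, x0=1.0):
    return c * math.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2))

def g(x, sigma, c=1.0):
    return (c / x) * math.exp(-(math.log(x) ** 2) / (2.0 * sigma ** 2))

# As sigma -> infinity, f -> c (Eq. 15) and g -> c/x (Eq. 16), i.e., the
# homogeneous densities for Cartesian and Jeffreys parameters.
for x in (0.5, 2.0, 10.0):
    assert abs(f(x, sigma=1e8) - 1.0) < 1e-9
    assert abs(g(x, sigma=1e8) - 1.0 / x) < 1e-9
```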
Fig. 2—Histograms of untransformed and transformed parameters showing posterior probability distributions through Monte Carlo simulations.
Fig. 3—2D scatter plots of untransformed and transformed parameters. Uniform and normal-like distributions for transformed parameters verify that the transformation is Cartesian. Transformed variables are indicated by symbols with overbars.
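The behavior verified in Figs. 2 and 3 can be reproduced in miniature: a parameter with multiplicative (Jeffreys-type) scatter has a skewed histogram, while its logarithm, the assumed Cartesian pair, is normal-like. A toy sketch (the value 100 md and the spread are made up):

```python
import numpy as np

rng = np.random.default_rng(7)
# Multiplicative (Jeffreys-like) scatter around k = 100 md:
k = 100.0 * np.exp(rng.normal(0.0, 0.5, size=10_000))
k_bar = np.log(k)   # assumed Cartesian transform for permeability

def skewness(x):
    x = x - x.mean()
    return (x ** 3).mean() / (x ** 2).mean() ** 1.5

# Untransformed k is strongly right-skewed; transformed k is near-symmetric.
assert skewness(k) > 1.0
assert abs(skewness(k_bar)) < 0.1
```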
Fig. 4—Contour plots of the objective function for transformed and untransformed parameters. The white asterisk shows the function minimum. The ranges for the parameters are the same. For the transformed case, the objective function can be approximated better with a quadratic function.
Fig. 5—Scatter plot of starting guesses. Red dots show successful results, and black dots show unsuccessful results. Success was counted as convergence within 20 iterations to within a tolerance of 2% of the true parameter value. With transformed parameters, the algorithm successfully converged 39.4% of the time, whereas the convergence rate was 17.5% when no transformation was used. Starting guesses that lead to successful convergence span the entire range of initial guesses for the transformed case, but only small starting guesses for k lead to success for the untransformed case.
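The success counting behind Fig. 5 can be sketched as follows, with a toy one-parameter exponential model and a Gauss-Newton step standing in for the well-test model and regression (the model, the range of starts, and the true value k = 3 are all made up):

```python
import numpy as np

t = np.linspace(0.1, 10.0, 40)
K_TRUE = 3.0
data = np.exp(-K_TRUE * t)            # toy "pressure" signal

def converges(k0, max_iter=20, tol=0.02):
    """Gauss-Newton on a single parameter k; success = within 2% of truth."""
    k = k0
    for _ in range(max_iter):
        m = np.exp(-k * t)            # model response
        r = data - m                  # residual
        J = -t * m                    # dm/dk
        k = k + (J @ r) / (J @ J)     # Gauss-Newton update
        if abs(k - K_TRUE) / K_TRUE < tol:
            return True
    return False

rng = np.random.default_rng(1)
starts = rng.uniform(0.5, 6.0, size=200)
rate = np.mean([converges(k0) for k0 in starts])  # fraction of successes
```

The same bookkeeping, run over a 2D grid of starting guesses, produces a success map like Fig. 5.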
The wavelet transform is a linear transform. Each linear operation can be expressed conveniently in terms of matrix operations. Eq. 24 can be written as

Wf = A_W f , . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (26)

where each row of A_W is a wavelet-basis function.

One can choose from a number of different wavelet bases to perform a wavelet transform. Daubechies wavelets (Daubechies 1992) have minimum size support for a given number of vanishing moments, allowing for representing discrete functions accurately with a small number of wavelet coefficients. In our analyses, we used Daubechies wavelets with four vanishing moments.

In the discrete wavelet transform (Eq. 26), in a strict sense, the function f must be evenly spaced in time. Pressure gauges in some systems record pressure at fixed time intervals, but, in some systems, pressure is recorded logarithmically in time or recorded only when there is sufficient change in its value. Using even sampling is important for wavelet-based techniques that eventually rely on reconstructing the signal. In our technique, as we will discuss, we use a substantially reduced basis, which is far from providing a sensible reconstruction. The reduced basis contains only a few wavelets selected on the basis of their magnitude or their sensitivities with respect to reservoir parameters. Hence, for the applications presented here, uniform sampling is not necessary. Uniform sampling is important in applications requiring a reconstruction of the signal (e.g., following noise removal). In our analyses, we did not use any interpolation and assumed that the existing sampling in time provided the optimal representation of changes in the pressure transient.

Nonlinear Regression in the Wavelet Domain. Least-squares analysis aims to minimize the objective function in Eq. 27.

E(x⃗) = Σ_{i=1}^{N} [p_i − p_calc(t_i, x⃗)]² , . . . . . . . . . . . . . . . . . . . (27)

where p_i are the pressure data, t_i are the corresponding time data, x⃗ is the vector of reservoir parameters, and p_calc(t, x⃗) is the model pressure function. As also described by Awotunde and Horne (2008), the basic idea here is to replace p_i and p_calc(t, x⃗) with their wavelet transforms and carry out least-squares regression in the transformed domain:

E_W(x⃗) = Σ_{k=1}^{N} (Wp_k − Wp_calc,k)² , . . . . . . . . . . . . . . . . . . (28)

where E_W(x⃗) is the new objective function in the wavelet domain. If we express Eq. 28 as a matrix operation,

E_W(x⃗) = (Wp − Wp_calc)ᵀ (Wp − Wp_calc) , . . . . . . . . . . . . . . . . . (29)

using Eq. 26 in Eq. 29, we obtain

E_W(x⃗) = (p − p_calc)ᵀ A_Wᵀ A_W (p − p_calc) , . . . . . . . . . . . . . . . (30)

where A_W is a matrix with rows consisting of the orthonormal wavelet basis functions. Because we use an orthonormal basis, there are as many wavelets as the number of data points; hence, A_W is a full-rank, N×N matrix. A_WᵀA_W gives the identity matrix, making Eq. 30 the same objective function as in Eq. 27, hence yielding exactly the same performance as regular least squares in nonlinear regression.
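The chain from Eq. 27 to Eq. 30 can be verified numerically with a small orthonormal wavelet matrix (a Haar matrix is used here as a convenient stand-in for the Daubechies basis of the paper; the data vectors are random placeholders):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix for n a power of two (rows = basis)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                    # scaling (average) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # wavelet (detail) rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

n = 8
A_W = haar_matrix(n)
assert np.allclose(A_W.T @ A_W, np.eye(n))   # orthonormal: A_W^T A_W = I

rng = np.random.default_rng(0)
p = rng.normal(size=n)          # placeholder "data"
p_calc = rng.normal(size=n)     # placeholder "model response"
r = p - p_calc

E = r @ r                       # Eq. 27: ordinary least-squares objective
E_W = (A_W @ r) @ (A_W @ r)     # Eq. 30: objective in the wavelet domain
assert np.isclose(E, E_W)       # identical for a full orthonormal basis
```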
Fig. 6—Scatter plot of number of iterations. Average number of iterations is 12.1 for the untransformed case and 10.9 for the transformed case. Note also that there are twice as many points successfully converging in the transformed-parameter case.
Fig. 7—Pressure-transient fit (a) and progression of the objective function (b). The number of wavelets used is shown in dashed curves. Regular least-squares regression (LS) follows more of a steepest-descent path, ending up in a local minimum. The wavelet strategies converge to the global minimum, resulting in a good fit.
Fig. 8—(a) Synthetic data and fits using the least-squares and the wavelet approaches (Strategy 4). Strategy 4 gives a better match between the data and the model curve. (b) Wavelet sensitivities for each parameter. The three wavelets in Group 1 were used in the first three iterations. Subsequently, wavelets in Group 2 (wavelets sensitive to re) also were added to the basis. The colors show the relative magnitudes for the quantities in each column such that dark red corresponds to the maximum value and dark blue corresponds to zero in each column.
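The wavelet sensitivities of Fig. 8b are derivatives of the wavelet coefficients of the model response with respect to each parameter; they can be approximated by forward differences. A sketch (the two-parameter `model`, the 4-point Haar matrix, and the step size are illustrative stand-ins, not the dual-porosity model of the paper):

```python
import numpy as np

def wavelet_sensitivities(model, theta, t, A_W, rel_step=1e-6):
    """Forward-difference d(W p_calc)/d(theta_j); column j = parameter j.
    Strategy 4 keeps, for each parameter, the wavelets with the largest
    absolute sensitivity."""
    theta = np.asarray(theta, dtype=float)
    base = A_W @ model(t, theta)
    S = np.empty((t.size, theta.size))
    for j in range(theta.size):
        step = rel_step * max(abs(theta[j]), 1.0)
        bumped = theta.copy()
        bumped[j] += step
        S[:, j] = (A_W @ model(t, bumped) - base) / step
    return S

# Toy stand-in for the pressure model, linear in its two parameters:
model = lambda t, th: th[0] * np.log1p(t) + th[1]
t = np.linspace(0.0, 1.0, 4)
r2 = 1.0 / np.sqrt(2.0)
A_W = np.array([[0.5, 0.5, 0.5, 0.5],      # orthonormal Haar rows
                [0.5, 0.5, -0.5, -0.5],
                [r2, -r2, 0.0, 0.0],
                [0.0, 0.0, r2, -r2]])
S = wavelet_sensitivities(model, [1.0, 2.0], t, A_W)
group1 = np.argmax(np.abs(S), axis=0)  # most-sensitive wavelet per parameter
```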
regressions shown in Fig. 6, to demonstrate the capabilities of wavelets, we intentionally picked a poor starting guess (k = 100 md, C = 0.5 STB/psi, s = 0, and pi = 7,500 psi). All three wavelet strategies converged to the global minimum at k = 11.45 md, C = 9.1 × 10⁻³ STB/psi, s = 7.78, and pi = 3,877 psi, whereas regular least squares converged to a distant local minimum. A reasonable starting guess would lead regular least squares to the same global minimum.

In Strategy 1, the number of wavelets was chosen to be four (i.e., equal to the number of parameters). With five or more wavelets, Strategy 1 followed the exact same path as the regular least squares, giving a wrong answer. Using fewer wavelets than the number of parameters renders the regression underdetermined and so is not applicable to Strategy 1. However, in Strategies 2 and 3, we started with three wavelets (an underdetermined case) to avoid undesired local minima. In Strategy 2, the number of wavelets was increased to 10, starting with the fifth iteration, after which the regression quickly converged to the global minimum. In Strategy 3, the wavelets were increased one by one at each iteration up to 10 wavelets, after which the number of wavelets was kept constant. Strategy 3 followed a steepest-descent path between Iterations 3 and 10 but then converged to the global minimum. The analysis shows that the performance of regression can be improved by controlling the amount of information taken into account (i.e., the number of wavelets) as the regression progresses.

Choosing the wavelet basis on the basis of the magnitudes of wavelets, as in Strategies 1 through 3, has a potential drawback. When there is an abrupt change in the pressure transient, the wavelet coefficients in the vicinity of the steep change tend to take large values. Using large-magnitude wavelet coefficients corresponding to unrealistic abrupt changes (e.g., outliers) can reduce the performance of nonlinear regression. This problem can be avoided by first detecting and then eliminating outliers using wavelet-based techniques (Athichanagorn et al. 2002).

Strategy 4: Estimation on the Basis of Sensitivity Coefficients. The last strategy demonstrates how wavelet-sensitivity coefficients can be used to select a reduced wavelet basis. The basic idea in Strategy 4 is to limit the degrees of freedom initially until a better guess is achieved. The degrees of freedom can be limited by using a basis that includes wavelets sensitive to a subset of the reservoir parameters only. After a few iterations, a better guess is obtained and wavelets sensitive to other parameters can be added to the basis to converge to the global minimum. Strategy 4 helps improve the stability of nonlinear regression, especially for reservoir models with many parameters.

To test Strategy 4, we generated a synthetic-data set using the dual-porosity model (Warren and Root 1963) with boundary effects (pseudosteady state). Dual-porosity models are known to be difficult to match reliably, hence our choice of this model as a test. The true values for the parameters were k = 200 md, C = 0.015 STB/psi, s = 3.5, ω = 0.2, λ = 5 × 10⁻⁷, and re = 2,000 ft. For this synthetic-data set, the time data were sampled logarithmically between t = 0.01 hours and t = 100 hours. Fig. 8a shows the synthesized data, and Fig. 8b shows the magnitudes of the wavelet coefficients and the wavelet sensitivities. In Fig. 8b, the leftmost column shows the relative magnitudes of the 12 largest wavelets. The magnitudes of the remaining wavelets were significantly lower than the ones shown here. The wavelets were numbered from 1 (top, largest) to 12 (bottom, smallest). The columns to the right show the wavelet sensitivities of the wavelets for each parameter, calculated at the starting guess of k = 1,000 md, C = 0.1 STB/psi, s = 1, ω = 0.1, λ = 1 × 10⁻⁶, and re = 1,500 ft. A good fit could not be found with regular least-squares analysis, as shown with the blue curve in Fig. 8a. Especially, the dual-porosity parameters ω and λ deviated significantly from the true answer.

We applied Strategy 4 in the following way: In the first three iterations, we used only the three wavelets in Group 1. This set represents the wavelets most sensitive to each of the parameters, except for re. Leaving re out reduced the complexity of the problem initially. In the first three iterations, we obtained better guesses for k, C, and s. Fig. 9 shows how the parameters changed with the progress of nonlinear regression. Starting with the fourth iteration, we also included the wavelets in Group 2, which are the wavelets most sensitive to changes in re. As seen in Fig. 8a, there is a good match between the model curve and the data. Strategy 4 improved the stability of nonlinear regression and allowed for accurate estimation of reservoir parameters for this synthetic-data set.

Discussion. In this section, we described several different examples capitalizing on the multiresolution property of wavelet analysis. Wavelet transformation provides a suitable platform to control the amount of detail included in the analysis at any moment. We have seen that the stability of the nonlinear regression can be improved greatly by choosing appropriate strategies. As mentioned, what we have included here is only a small subset of possible wavelet-based techniques. Many other suitable strategies could be designed on the basis of the data set being analyzed. We should also note that,
because a reduced basis set is used, noise elimination is achieved automatically in all of the strategies discussed in this section.

Fig. 9—The change in the parameter values during nonlinear regression using Strategy 4. The wavelet basis included the three wavelets in Group 1 up to Iteration 3 (shown with vertical dotted line). In the subsequent iterations, three additional wavelets from Group 2 were also added. Using Strategy 4, k, C, and s were refined first, and then the remaining parameters were handled.

On the other hand, parameter estimates found using the wavelet transform are much closer to the true answer.
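The magnitude-based selection of Strategies 1 through 3 amounts to ranking the data's wavelet coefficients and regressing on the top few, then growing the count as iterations proceed. A toy sketch of the selection step (the 4-point Haar matrix and the data vectors are placeholders):

```python
import numpy as np

def reduced_objective(A_W, p, p_calc, m):
    """Least-squares objective restricted to the m largest-magnitude wavelet
    coefficients of the data (Strategy 1-3 style magnitude selection)."""
    Wp = A_W @ p
    keep = np.argsort(np.abs(Wp))[::-1][:m]   # indices of the m largest
    r = Wp[keep] - (A_W @ p_calc)[keep]
    return r @ r

r2 = 1.0 / np.sqrt(2.0)
A_W = np.array([[0.5, 0.5, 0.5, 0.5],         # orthonormal Haar rows
                [0.5, 0.5, -0.5, -0.5],
                [r2, -r2, 0.0, 0.0],
                [0.0, 0.0, r2, -r2]])
p = np.array([10.0, 9.0, 5.0, 4.0])       # toy data
p_calc = np.array([9.0, 8.5, 5.5, 4.5])   # toy model response

full = reduced_objective(A_W, p, p_calc, 4)   # all wavelets kept
# With an orthonormal basis, keeping all N wavelets recovers the ordinary
# least-squares objective (Eq. 30); keeping fewer can only decrease it.
assert np.isclose(full, np.sum((p - p_calc) ** 2))
assert reduced_objective(A_W, p, p_calc, 2) <= full
```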
Fig. 10—Noise performance comparison of wavelet analysis (red) to regular least-squares analysis (blue). The noise in both the time data and the pressure data is normally distributed. The horizontal axis shows the standard deviation of the noise. The vertical axis shows the estimate (red and blue dashed lines) and the confidence intervals (shaded regions).
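The noise study summarized in Fig. 10 follows a generic Monte Carlo recipe: add Gaussian-noise realizations to the data, re-estimate, and read the confidence interval off the spread of the estimates. A minimal sketch with a linear toy model in place of the well-test model (all values here are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(1.0, 10.0, 50)
TRUE = np.array([2.0, 5.0])               # toy parameters (a, b)
clean = TRUE[0] * np.log(t) + TRUE[1]     # stand-in for the pressure model

def estimate(p):
    """Least-squares fit of p ~ a*log(t) + b."""
    A = np.column_stack([np.log(t), np.ones_like(t)])
    return np.linalg.lstsq(A, p, rcond=None)[0]

# 100 noisy realizations with 1-psi standard deviation, as in the study:
ests = np.array([estimate(clean + rng.normal(0.0, 1.0, t.size))
                 for _ in range(100)])
lo, hi = np.percentile(ests, [2.5, 97.5], axis=0)  # 95% confidence interval
```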
two different model functions were used to fit the same pressure data, in an effort to analyze the effects of ambiguity. We considered several reservoir models including dual-porosity reservoirs (Data Sets 1, 8, 10, 15, and 20), reservoirs with rectangular and pseudosteady-state boundaries (Data Sets 9 and 17), cyclic production tests (Data Set 16), acidized wells (Data Set 18), hydraulically fractured wells (Data Set 19), falloff tests (Data Set 11), and transition data analysis (Data Set 5). Data Sets 1, 16, and 17 were synthetically generated, and the rest were real well-test data. Specific references to the sources of the data sets can be found in Dastan and Horne (2009).

We considered regular least squares and all four of the wavelet strategies discussed in this paper.
• Strategy 1: We used np + 3 wavelets, where np is the number of parameters used in the fit function.
• Strategy 2: We used np + 1 wavelets in the first step (up to the sixth iteration) and np + 3 wavelets in the second step.
• Strategy 3: We started with three wavelets and incremented until reaching np + 3 wavelets.
• Strategy 4: Wavelets most sensitive to k and s were used in the first three iterations. From the fourth iteration through the sixth iteration, wavelets most sensitive to the remaining parameters (excluding k and s) were used. After the sixth iteration, we continued with np + 3 wavelets chosen by magnitude, hence representing all the parameters in the basis.

We analyzed each strategy using both untransformed (Jeffreys) and transformed (Cartesian) parameters, making a total of 10 cases for each data set. A Monte Carlo analysis with 100 realizations was used to calculate 95% confidence intervals by adding to the original data realizations of normally distributed noise with 1-psi standard deviation. We have developed a graphical method to summarize the results for all the data sets. The graphs compare the strategies in terms of the confidence intervals and the shifts in the mean values of the parameters.

Fig. 12 shows the results for Data Set 1 and can be used as a guide to understanding Figs. 13 and 14, which give the full results for all 20 data sets. The upper row shows the results for no parameter transformation (Jeffreys parameters), and the lower row shows
Fig. 11—Parameter-space map of starting guesses that successfully converged to the optimal point for regular-least-squares (Regular LS) and wavelet (Strategy 3) approaches. There are four parameters—k, C, s, and pi—but only the k–C plane is shown, for brevity. The color corresponds to the number of iterations it takes to reach the optimal point. Of the 1,000 starting guesses, the wavelet-transform approach was successful for 258, while the regular-least-squares case was successful for 196.
Fig. 12—Analysis results for Data Set 1. Mean value of the parameters and the confidence intervals are shown with respect to regular least squares with no parameter transformation. The upper row shows the results for the untransformed case, and the lower row shows the results for Cartesian transformations. The horizontal dashed line in the center provides a reference to the mean values of the untransformed, regular least-squares case. The other two dashed lines show the limits for acceptable confidence intervals for parameters. Note that the vertical-axis scaling is different for each parameter.
the results for Cartesian-transformed regression. The columns show the results for regular least squares, Strategy 1, Strategy 2, Strategy 3, and Strategy 4. There are three horizontal dotted lines in each row to help with the comparison of confidence intervals and shifts in mean values. The regular least-squares case with no parameter transformation was used as the reference for the other cases. The dotted line in the center shows the reference for mean values. The vertical shift of the center of a colored rectangle shows how much the mean deviated with respect to the reference case. The height of each rectangle shows the confidence intervals. The vertical scale can be inferred from the upper and lower dotted lines, which show the acceptable confidence intervals. The acceptable limits were assumed to be ±20% for the dual-porosity parameters (ω and λ), ±1 for the skin factor, and ±10% for all other parameters (Horne 1995). For visual convenience, the vertical scale for each parameter was set such that the acceptable confidence intervals align. Hence, for example, a rectangle that fits exactly between the upper and lower dotted lines would correspond to a ±10% confidence interval for wellbore storage, a ±20% confidence interval for storativity, and a confidence interval of ±1 (in absolute value) for the skin factor. The arrow points at the ends of bars are used to denote the cases for which the confidence intervals were too wide to be displayed on the graph (i.e., the confidence interval for the corresponding parameter was wider than twice the acceptable limit for that parameter). The white triangle in the center of a rectangle shows that the mean deviated so much that the rectangle would actually fall outside the graph limits (i.e., the mean shifted more than ±20% for ω and λ, more than ±2 for s, and more than ±10% for the rest of the parameters).

Figs. 13 and 14 summarize the results of the analysis for all 20 data sets. We see that the wavelet transformation provides an advantage over least squares when the reservoir description is relatively complex, such as dual-porosity reservoirs (Data Sets 1, 8, 10, 15, and 20) and reservoirs with boundaries (Data Sets 9 and 17). For relatively simple reservoir descriptions, wavelet-transform analysis and regular least squares showed generally comparable performance. Among the different strategies used for data transformation, Strategy 4 can yield narrow confidence intervals, especially when the number of parameters is large.

Examination of the p-vs.-t graphs for the parameter estimates reveals that both least squares and wavelet transforms result in a good fit for all cases except Data Sets 12, 15, 17, and 19. In Data Sets 12 and 19, both least-squares and wavelet approaches failed. (The very narrow confidence intervals for the least-squares cases are because of the regression failing to move forward from the starting guess.) In Data Set 15, both wavelet strategies provided a good fit, whereas the regular least-squares approach failed. In Data Set 17, both the regular least squares and Strategy 3 provide a visually good fit, whereas Strategy 1 failed.

The objective function used in nonlinear regression is identical for transformed and untransformed parameters. Hence, the global and local minima are in identical positions in the parameter space. Only the gradient and Hessian are different, resulting in a different path in nonlinear regression. In a comparison of untransformed and transformed cases in Figs. 12 and 13, it is possible to observe wider confidence intervals for transformed parameters in some of the data sets. It should be noted that wider confidence intervals do not necessarily mean that the parameter transformation did not work well. On the contrary, wider confidence intervals show that there were indeed multiple plausible solutions that the untransformed case missed. Data Sets 2, 3, 5, 8, 9, and 10 are short of reaching radial-flow behavior, which creates ambiguity in the estimation. Indeed, Data Sets 2 and 10 are the same pressure data, fit using different model functions. Similarly, Data Sets 3 and 8 are the same data sets fit using different model functions. For such ambiguous data, having smaller confidence intervals does not necessarily mean the fit is more reliable, and a good visual match does not guarantee a good answer. In these data sets, because there are insufficient data in the infinite-acting part, in many cases, one can choose many different values of permeability and match to a corresponding value of skin, resulting in large confidence intervals in estimation. Relative performance of the various techniques was analyzed further in Dastan and Horne (2010).

Conclusions
In this work, we analyzed parameter-space and data-space transformations during nonlinear regression in well-test interpretation. We showed that Cartesian transformations in the parameter space and wavelet transformation in the data space improve the performance of nonlinear regression significantly.

We proved the necessity of parameter transformations for reservoir parameters. We showed that most physical parameters are Jeffreys quantities, whereas nonlinear regression works best with Cartesian parameters. For the first time, we proposed suitable transformation pairs for commonly used reservoir parameters. We
Fig. 13—Analysis of Data Sets 1 through 10. Fig. 12 can be used as a guide to understand this graph.
verified the validity of the transformations using a Monte-Carlo-based threshold algorithm. We have shown that the probability distributions of transformed parameters usually exhibit normal distribution, a strong indication of being Cartesian.

For data transformations, we conducted nonlinear regression on the wavelet transform of the pressure signal and obtained significant performance improvement. We showed that the wavelet transform is not only useful for data reduction and noise elimination but also is suitable for performance improvement. The performance improvement through the wavelet transform is related directly to the choice of the reduced wavelet basis. By appropriate choice of the wavelets to be included in the reduced basis, we achieved direct control over the nonlinear regression. We have proposed a number of strategies to control the information contained in the wavelet basis and have shown examples of how these strategies can be used effectively. We have also discussed the statistics of initial guesses and noise performance of the wavelet-transform-based nonlinear regression. Finally, we compared the performance of some of the wavelet-based strategies on some 20 real- and synthetic-data sets.
Fig. 14—Analysis of Data Sets 11 through 20. Fig. 12 can be used as a guide to understand this graph.
Our conclusions can be summarized as
• Regression using Cartesian variables yields a more quadratic-like objective function, and the regression can be completed in fewer iterations.
• Cartesian transformation increases the probability of convergence substantially when a good starting guess is not available.
• Using Cartesian transformations is especially useful for detecting ambiguous data sets with insufficient infinite-acting data. Transformed analysis finds possible fits over a larger volume, resulting in wide confidence intervals for ambiguous data (as should be the case for uncertain problems).
• Wavelets provide appropriate reweighting of data and improve the stability and performance of nonlinear regression.
• Wavelets provide automatic noise elimination and outlier removal.
• A reduced wavelet basis provides effective reduction of data points in the analysis without having to delete actual data points.
• The reduced wavelet basis should be selected on the basis of wavelet amplitudes and sensitivity coefficients. Selection on the basis of sensitivity coefficients (Strategy 4) allows for decoupling the regression on the basis of parameters and, hence, improves the performance.