
Nonlinear Analysis 74 (2011) 5987–5998


Inverse problem of groundwater modeling by iteratively regularized Gauss–Newton method with a nonlinear regularization term

Alexandra Smirnova a,∗, Necibe Tuncer b

a Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303, USA
b Department of Mathematics, University of Florida, Gainesville, FL 32611, USA
∗ Corresponding author. Tel.: +1 4044136409. E-mail addresses: asmirnova@gsu.edu (A. Smirnova), tuncer@ufl.edu (N. Tuncer).
doi:10.1016/j.na.2011.05.075

Article history: Received 10 May 2011; Accepted 25 May 2011; Communicated by Ravi Agarwal
MSC: 47A52; 65F22
Keywords: Ill-posed problem; Regularization; Gauss–Newton algorithm; Inverse problem of groundwater modeling

Abstract. A nonlinear minimization problem ‖F(d) − u‖ → min, ‖u − uδ‖ ≤ δ, is a typical mathematical model of various applied inverse problems. In order to solve this problem numerically in the absence of regularity, we introduce an iteratively regularized Gauss–Newton procedure with a nonlinear regularization term (IRGN–NRT). The new algorithm combines two very powerful features: iterative regularization and the most general stabilizing term, which can be updated at every step of the iterative process. The convergence analysis is carried out in the presence of noise in the data and in the modified source condition. Numerical simulations for a parameter identification ill-posed problem arising in groundwater modeling demonstrate the efficiency of the proposed method.

1. Introduction

The original iteratively regularized Gauss–Newton (IRGN) process [1,2]

$$d_{n+1} = d_n - \alpha_n\left[F'^*(d_n)F'(d_n) + \tau_n I\right]^{-1}\left(F'^*(d_n)(F(d_n) - u_\delta) + \tau_n(d_n - \xi)\right) \qquad (1.1)$$


was introduced by Bakushinsky in 1993 for solving a nonlinear operator equation
$$F(d) = u, \qquad \|u - u_\delta\| \le \delta, \qquad (1.2)$$
or a more general problem of minimizing a nonlinear functional
$$\|F(d) - u\| \longrightarrow \min \qquad (1.3)$$
with F : D ⊂ X → Y acting between two Hilbert spaces. The idea of regularizing the Gauss–Newton algorithm iteratively proved to be extremely effective. Method (1.1) was successfully applied to a number of nonlinear ill-posed problems [3–6].
One of the remarkable features of this scheme is the absence of the requirement that d0 be rather close to the exact solution d̂. The larger the norm of d0 − d̂, the larger the value of τ0 that must be used at the initial step. However, since τn → 0 as n → ∞, one can still obtain a reconstruction whose quality is consistent with the level of noise in the data. Moreover, for iteration (1.1) to converge, the nonlinear operator F does not need to be monotone, nor are any other restrictions imposed on its spectrum. In order to carry out the convergence analysis, Bakushinsky used the so-called source conditions

$$\xi - \hat d = F'^*(\hat d)v, \qquad v \in Y, \quad \|v\| \le \varsigma, \qquad (1.4)$$


or

$$\xi - \hat d = \theta\!\left(F'^*(\hat d)F'(\hat d)\right)\omega, \qquad \omega \in D \subset X, \quad \|\omega\| \le \varepsilon, \qquad (1.5)$$

which enforce a special structure on ξ − d̂ and can also be viewed as a measure of the ill-posedness of the problem. Even though assumptions (1.4) and (1.5) are not algorithmically verifiable for most inverse problems, it has been shown [5] that process (1.1) is stable with respect to noise in (1.5):

$$\xi - \hat d = \theta\!\left(F'^*(\hat d)F'(\hat d)\right)\omega + \nu, \qquad \|\omega\| \le \varepsilon, \quad \|\nu\| \le \eta, \qquad (1.6)$$

as long as the stopping time N = N(δ, η) is chosen properly. The convergence rates of the IRGN scheme were further studied in [7–16] and many other papers under the Hölder
$$\theta(\lambda) = \lambda^p, \qquad p > 0, \qquad (1.7)$$
and logarithmic (µ > 0)
$$\theta(\lambda) = \begin{cases} \left(-\ln\dfrac{\lambda}{eN}\right)^{-\mu}, & 0 < \lambda \le N,\\[6pt] 0, & \lambda = 0, \end{cases} \qquad (1.8)$$
source conditions. Unfortunately, the logarithmic source condition, as well as condition (1.5), (1.7) with p < 1/2, imposes certain restrictions on the nonlinearity of F, limiting their applicability.
Up until recently, algorithm (1.1) was justified only for the case of the regularizer τn(dn − ξ), which is the derivative of the penalty term (1/2)τn‖dn − ξ‖² in the corresponding Tikhonov functional. In [17], Smirnova et al. replaced τn(dn − ξ) with τnL*L(dn − ξ):
$$d_{n+1} = d_n - \alpha_n\left[F'^*(d_n)F'(d_n) + \tau_n L^*L\right]^{-1}\left(F'^*(d_n)(F(d_n) - u_\delta) + \tau_n L^*L(d_n - \xi)\right),$$
where L : D ⊂ X → Z is a linear operator mapping between the Hilbert spaces X and Z, and τnL*L(dn − ξ) is the derivative of the penalty term with a seminorm generated by L, i.e., (1/2)τn‖L(dn − ξ)‖². Convergence analysis of the above process is based on the adjusted source condition [18]:
$$L^*L(\xi - \hat d) = F'^*(\hat d)v, \qquad v \in Y, \quad \|v\| \le \varsigma, \qquad (1.9)$$
which takes the form (1.4) for L = I. The new algorithm was successfully applied to an exponentially ill-posed problem in optical tomographic imaging, for which the original IRGN process fails.
In this paper, iterative scheme (1.1) is modified further, and the following iteratively regularized Gauss–Newton method with a nonlinear regularization term (IRGN–NRT) is proposed:
$$d_{n+1} = d_n - \alpha_n\left[F'^*(d_n)F'(d_n) + \tau_n G'_n(d_n)\right]^{-1}\left(F'^*(d_n)(F(d_n) - u_\delta) + \tau_n G_n(d_n)\right), \qquad d_0 \in D \subset X. \qquad (1.10)$$

As one can see, L*L is replaced with a (in general) nonlinear operator Gn(·). Additionally, a 'feedback' feature [19] is introduced since, unlike L*L in [17], Gn(·) is allowed to depend on n. The convergence analysis of algorithm (1.10) is carried out under the assumption that both the right-hand side and the source condition are contaminated with errors. Apart from the stability of the process with respect to noise in the data and in the source condition, the dependence of the convergence rates on both errors is examined.
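To make the structure of (1.10) concrete, here is a minimal finite dimensional sketch of a single IRGN–NRT step. It is not part of the paper: the callables F, F_jac, G and G_jac are hypothetical placeholders for the forward map, its Jacobian, the stabilizer Gn and its derivative G'n.

```python
import numpy as np

def irgn_nrt_step(d, u_delta, F, F_jac, G, G_jac, tau, alpha=1.0):
    """One step of the IRGN-NRT iteration (1.10) in R^m (illustrative sketch)."""
    J = F_jac(d)                                    # F'(d_n) as a matrix
    lhs = J.T @ J + tau * G_jac(d)                  # F'*F' + tau_n G'_n(d_n)
    rhs = J.T @ (F(d) - u_delta) + tau * G(d)       # F'*(F(d_n) - u_delta) + tau_n G_n(d_n)
    return d - alpha * np.linalg.solve(lhs, rhs)
```

A driver would let τn decay, possibly update the anchor element hidden inside Gn (for instance ξn = dn−1, as in Section 4), and stop once the noise levels dominate, in the spirit of the stopping rule of Section 2.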
Numerical algorithm (1.10) combines two very important features: iterative regularization and the most general
stabilizer Gn (dn ) (assumptions on Gn (dn ) are listed in Theorem 2.1 below). This combination gives the following advantages:

• For various parameter identification problems in PDEs, the penalty term (1/2)τn‖dn − ξ‖² often regularizes the solution with respect to its spline expansion coefficients, while in practice a priori information is available in the physical space itself [17]. One can regularize directly in the physical space by using τn‖L(dn − ξ)‖² or τn‖L(dn − ξn)‖². Here ξn ∈ D ⊂ X; one can take, for example, ξn = dn−1.
• For (1.10), the penalty term in the corresponding Tikhonov functional can contain more than one seminorm τn‖Li(dn − ξ)‖² to incorporate various types of a priori information. In particular, one of the seminorms may be generated by a differential operator Li in L².
• If a solution d̂ of F(d) = u is formed by two or more unknown parameters, the coordinates of dn can be on different scales. For example, in the diffusion-based optical tomography problem [20], the vector dn consists of diffusion and absorption coefficients, which are between one and two orders of magnitude apart. In that case, the absorption coordinates of dn must be properly weighted to balance the sensitivities with respect to both parameters [17]. The stabilizer Gn(dn) can easily take care of that, while dn − ξ cannot.

• For images with discontinuities, edges and oscillations, a finite dimensional analog of Total Variation (TV) [21–23],
$$\int \sqrt{|\nabla(d_n - \xi_n)|^2 + \lambda^2}\,, \qquad \lambda > 0, \qquad (1.11)$$
can be used as the penalty term (a minimal discrete sketch is given after this list).


• The convergence analysis of (1.10) is carried out under the modified Hölder-type source condition (2.2) with index 1/2 (see below). Condition (2.2) is less restrictive than the original condition (1.4) introduced in [7], since the set of admissible elements vn ∈ Y in (2.2) is larger than the set of admissible elements v ∈ Y in (1.4).
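The discrete TV penalty in (1.11) can be sketched on a uniform pixel grid as follows; the grid spacing h and the smoothing parameter lam are assumptions made for this illustration and are not prescribed in the paper.

```python
import numpy as np

def smoothed_tv(v, lam=1e-3, h=1.0):
    """Discrete counterpart of (1.11): sum over pixels of sqrt(|grad v|^2 + lam^2) * h^2.

    v is a 2D array representing d_n - xi_n sampled on a uniform grid with spacing h."""
    dx = np.diff(v, axis=0, append=v[-1:, :]) / h    # forward differences in x
    dy = np.diff(v, axis=1, append=v[:, -1:]) / h    # forward differences in y
    return float(np.sum(np.sqrt(dx**2 + dy**2 + lam**2)) * h * h)
```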

The paper is organized as follows. In Section 2, the regularizing properties of algorithm (1.10) are investigated.
In Section 3, we present the inverse groundwater flow model [24] and formulate the numerical method. Results of
computational experiments are given in Section 4. Section 5 provides a brief summary of the work done and outlines future
plans.

2. Convergence and stability

In this section, we investigate the stability of the IRGN–NRT method (1.10) with respect to noise in the data:
$$\|u - u_\delta\| \le \delta \qquad (2.1)$$
and in the modified source condition:
$$G_n(\hat d) = F'^*(d_n)v_n + \nu_n, \qquad v_n \in Y, \quad \nu_n \in X, \quad \|v_n\| \le \rho, \quad \|\nu_n\| \le \sigma. \qquad (2.2)$$

Theorem 2.1. 1. Let X and Y be Hilbert spaces and D ⊂ X be open. Suppose
$$\hat X := \{d \in D \subset X : F(d) = u\} \neq \emptyset,$$
and d̂ ∈ X̂ is a solution of interest.
2. Assume that F : D ⊂ X → Y is a nonlinear differentiable operator and its Fréchet derivative is locally Lipschitz continuous:
$$\|F'(d) - F'(\tilde d)\| \le P\|d - \tilde d\|, \qquad P \ge 0, \quad d, \tilde d \in B_\eta(\hat d), \qquad (2.3)$$
where
$$B_\eta(\hat d) := \{d \in D : \|d - \hat d\| \le \eta\}, \qquad \eta = l\sqrt{\tau_0}, \qquad (2.4)$$
with l defined in (2.8) below.
3. The family of operators Gn is Fréchet differentiable, and the derivatives G′n are Lipschitz continuous, surjective and self-adjoint:
$$\|G'_n(h) - G'_n(\tilde h)\| \le K\|h - \tilde h\|, \qquad K \ge 0, \qquad (2.5)$$
$$G_n'^*(h) = G'_n(h), \qquad h, \tilde h \in X.$$
There exists m > 0 such that
$$\left(G'_n(g)\tilde g, \tilde g\right) \ge m\|\tilde g\|^2, \qquad g, \tilde g \in X. \qquad (2.6)$$
4. The regularization sequence {τn} and the line search sequence {αn} satisfy the assumptions
$$\tau_n > 0, \qquad 1 \le \frac{\tau_n}{\tau_{n+1}} \le D < \infty, \quad n = 0, 1, 2, \ldots, \qquad \lim_{n\to\infty}\tau_n = 0, \qquad 0 < \alpha \le \alpha_n \le 1. \qquad (2.7)$$

5. The right-hand side and the source condition are contaminated with noise, and (2.1)–(2.2) are fulfilled.
6. The initial value of the regularization parameter τ0 is chosen so that
$$\frac{\|d_0 - \hat d\|}{\sqrt{\tau_0}} \le \frac{2\alpha D\rho}{\sqrt{m}\,(1 - (1-\alpha)D)} =: l, \qquad (2.8)$$
and for the constants m, D and α the following is true:
$$(1-\alpha)D + \alpha D\sqrt{\frac{\rho}{m}\left(P + 2K\sqrt{\frac{\tau_0}{m}}\right)} \le 1. \qquad (2.9)$$

7. Assume that N = N(δ, σ) is the minimal positive integer with
$$\max\left\{\frac{\delta}{2\tau_n}, \frac{\sigma}{\sqrt{\tau_n m}}\right\} > \frac{\rho}{4}. \qquad (2.10)$$
Then
$$(1)\quad \frac{\|d_n - \hat d\|}{\sqrt{\tau_n}} \le l \quad \text{for any } n \le N(\delta, \sigma), \qquad (2.11)$$
and
$$(2)\quad \|d_{N(\delta,\sigma)} - \hat d\| = O(\delta^{1/2} + \sigma), \qquad (2.12)$$
where {dn} is defined by (1.10).

Proof of Theorem 2.1. Assumption 3 of the theorem implies that iteration (1.10) is well defined as long as dn ∈ Bη(d̂). Indeed, from inequality (2.6) it follows that for any element h ∈ X,
$$\left(\left[F'^*(d_n)F'(d_n) + \tau_n G'_n(d_n)\right]h, h\right) \ge \tau_n m\|h\|^2, \qquad \tau_n > 0. \qquad (2.13)$$
Since G′n(dn) is onto, the operator F′*(dn)F′(dn) + τnG′n(dn) is continuously invertible and
$$\left\|\left[F'^*(d_n)F'(d_n) + \tau_n G'_n(d_n)\right]^{-1}\right\| \le \frac{1}{\tau_n m}. \qquad (2.14)$$
Lipschitz continuity of F′ and G′n yields
$$F(d_n) = u + F'(d_n)(d_n - \hat d) + U_n \qquad (2.15)$$
and
$$G_n(d_n) = G_n(\hat d) + G'_n(d_n)(d_n - \hat d) + W_n \qquad (2.16)$$
with
$$\|U_n\| \le \frac{P}{2}\|d_n - \hat d\|^2 \quad \text{and} \quad \|W_n\| \le \frac{K}{2}\|d_n - \hat d\|^2. \qquad (2.17)$$
From (2.1), (2.2), (2.16) and (2.17), one concludes
$$F'^*(d_n)(F(d_n) - u_\delta) + \tau_n G_n(d_n) = \left[F'^*(d_n)F'(d_n) + \tau_n G'_n(d_n)\right](d_n - \hat d) + F'^*(d_n)(u - u_\delta + U_n + \tau_n v_n) + \tau_n(\nu_n + W_n). \qquad (2.18)$$

Identity (2.18) together with (1.10) results in the following estimate:
$$\|d_{n+1} - \hat d\| \le (1 - \alpha_n)\|d_n - \hat d\| + \alpha_n\left\|\left[F'^*(d_n)F'(d_n) + \tau_n G'_n(d_n)\right]^{-1}F'^*(d_n)\right\|\left(\delta + \frac{P\|d_n - \hat d\|^2}{2} + \tau_n\rho\right)$$
$$\qquad + \alpha_n\tau_n\left\|\left[F'^*(d_n)F'(d_n) + \tau_n G'_n(d_n)\right]^{-1}\right\|\left(\frac{K\|d_n - \hat d\|^2}{2} + \sigma\right). \qquad (2.19)$$

In order to evaluate the norm of
$$D_n := \left[F'^*(d_n)F'(d_n) + \tau_n G'_n(d_n)\right]^{-1}F'^*(d_n), \qquad (2.20)$$
we use the polar decomposition of the linear bounded operator F′(dn):
$$F'(d_n) = U|F'(d_n)| = U\left(F'^*(d_n)F'(d_n)\right)^{1/2}. \qquad (2.21)$$
Here U : X → Y is a partial isometry, that is,
$$\|Ud\| = \|d\| \quad \text{for any } d \in N^\perp, \qquad N := \{h \in X : Uh = 0\}. \qquad (2.22)$$

Introduce the notations
$$A_n := F'^*(d_n)F'(d_n), \qquad B_n := G'_n(d_n) \qquad \text{and} \qquad C_n := A_n^{1/2}B_n^{-1/2}. \qquad (2.23)$$



Taking into account condition (2.6), one derives [17]
$$\|D_n\| = \left\|[A_n + \tau_n B_n]^{-1}A_n^{1/2}U^*\right\| \le \left\|[A_n + \tau_n B_n]^{-1}A_n^{1/2}\right\| = \left\|\left[B_n^{1/2}\left(B_n^{-1/2}A_nB_n^{-1/2} + \tau_n I\right)B_n^{1/2}\right]^{-1}A_n^{1/2}\right\| = \left\|B_n^{-1/2}\left[C_n^*C_n + \tau_n I\right]^{-1}C_n^*\right\|. \qquad (2.24)$$

Now, using the polar decomposition of the linear operator Cn, one gets
$$\|D_n\| \le \left\|B_n^{-1/2}\right\|\,\left\|\left[C_n^*C_n + \tau_n I\right]^{-1}(C_n^*C_n)^{1/2}\right\| \le \frac{1}{\sqrt m}\,\max_{\lambda \ge 0}\frac{\sqrt\lambda}{\lambda + \tau_n} \le \frac{1}{2\sqrt{\tau_n m}}. \qquad (2.25)$$

Combining (2.14), (2.19) and (2.25), one obtains
$$\|d_{n+1} - \hat d\| \le (1 - \alpha_n)\|d_n - \hat d\| + \frac{\alpha_n}{2\sqrt{\tau_n m}}\left(\delta + \frac{P\|d_n - \hat d\|^2}{2} + \tau_n\rho\right) + \frac{\alpha_n}{m}\left(\frac{K\|d_n - \hat d\|^2}{2} + \sigma\right)$$
$$\le (1 - \alpha_n)\|d_n - \hat d\| + \frac{\alpha_n}{2\sqrt m}\left(\frac{P}{2\sqrt{\tau_n}} + \frac{K}{\sqrt m}\right)\|d_n - \hat d\|^2 + \alpha_n\sqrt{\frac{\tau_n}{m}}\left(\frac{\delta}{2\tau_n} + \frac{\rho}{2} + \frac{\sigma}{\sqrt{\tau_n m}}\right). \qquad (2.26)$$

Take an arbitrary n < N(δ, σ) and suppose that for any j such that 0 ≤ j ≤ n < N(δ, σ) the induction assumption
$$\gamma_j := \frac{\|d_j - \hat d\|}{\sqrt{\tau_j}} \le l \qquad (2.27)$$

holds. From (2.7) and (2.26), one concludes

$$\gamma_{n+1} \le (1 - \alpha_n)D\gamma_n + \frac{\alpha_n D}{2\sqrt m}\left(\frac{P}{2} + K\sqrt{\frac{\tau_0}{m}}\right)\gamma_n^2 + \frac{\alpha_n D}{\sqrt m}\left(\frac{\delta}{2\tau_n} + \frac{\rho}{2} + \frac{\sigma}{\sqrt{\tau_n m}}\right). \qquad (2.28)$$

Inequalities (2.8), (2.10) and (2.28) yield
$$\gamma_{n+1} \le (1 - \alpha_n)D\,l + \frac{\alpha_n D}{2\sqrt m}\left(\frac{P}{2} + K\sqrt{\frac{\tau_0}{m}}\right)l^2 + \frac{\alpha_n D}{\sqrt m}\rho =: b_n l + a_n l^2 + c_n. \qquad (2.29)$$

Estimate (2.29), together with conditions (2.7), (2.8) and (2.9), implies inequality (2.11). Indeed, by (2.8) it follows that
$$(b_n - 1)l + a_n l^2 + c_n \le \big((1 - \alpha_n)D - 1\big)\frac{2\alpha D\rho}{\sqrt m\,(1 - (1-\alpha)D)} + \frac{\alpha_n D}{2\sqrt m}\left(\frac{P}{2} + K\sqrt{\frac{\tau_0}{m}}\right)\left[\frac{2\alpha D\rho}{\sqrt m\,(1 - (1-\alpha)D)}\right]^2 + \frac{\alpha_n D}{\sqrt m}\rho.$$
The function $f(\alpha) := \frac{\alpha}{\sqrt m\,(1 - (1-\alpha)D)}$ is non-increasing in α since D ≥ 1. From (2.7) and (2.9), one gets 1 − (1 − αn)D ≥ 0. Hence, one can estimate
$$\big((1 - \alpha_n)D - 1\big)\frac{2\alpha D\rho}{\sqrt m\,(1 - (1-\alpha)D)} \le \big((1 - \alpha_n)D - 1\big)\frac{2\alpha_n D\rho}{\sqrt m\,(1 - (1-\alpha_n)D)} = -\frac{2\alpha_n D}{\sqrt m}\rho.$$

Therefore,
$$(b_n - 1)l + a_n l^2 + c_n \le -\frac{2\alpha_n D}{\sqrt m}\rho + \frac{\alpha_n D}{\sqrt m}\rho + \frac{\alpha_n D}{2\sqrt m}\left(\frac{P}{2} + K\sqrt{\frac{\tau_0}{m}}\right)\frac{4\alpha^2D^2\rho^2}{m(1 - (1-\alpha)D)^2}$$
$$\le \frac{-\alpha_n D\rho\, m(1 - (1-\alpha)D)^2 + \alpha_n D\left(\frac{P}{2} + K\sqrt{\frac{\tau_0}{m}}\right)2\alpha^2D^2\rho^2}{\sqrt m\, m(1 - (1-\alpha)D)^2}$$
$$\le \frac{\alpha_n D\rho}{\sqrt m\,(1 - (1-\alpha)D)^2}\left[\frac{\alpha^2D^2\rho}{m}\left(P + 2K\sqrt{\frac{\tau_0}{m}}\right) - (1 - (1-\alpha)D)^2\right]. \qquad (2.30)$$

Fig. 1. Mesh used in computation.

Condition (2.9) yields
$$\frac{\alpha^2D^2\rho}{m}\left(P + 2K\sqrt{\frac{\tau_0}{m}}\right) \le (1 - (1-\alpha)D)^2,$$
which means that γn+1 ≤ l, and inequality (2.11) is satisfied. If one uses (2.11) for n = N(δ, σ) along with (2.10), one obtains convergence rate (2.12), hence proving the second statement of the theorem. □
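Although the proof is purely analytical, the stopping rule (2.10) translates directly into a few lines of code; the sketch below is only illustrative, with δ, σ, ρ and m supplied by the user and taus a precomputed sequence satisfying (2.7).

```python
from math import sqrt

def stopping_index(taus, delta, sigma, rho, m):
    """First index n with max(delta/(2*tau_n), sigma/sqrt(tau_n*m)) > rho/4, cf. (2.10).

    Returns len(taus) if the threshold is never exceeded within the given sequence."""
    for n, tau in enumerate(taus):
        if max(delta / (2.0 * tau), sigma / sqrt(tau * m)) > rho / 4.0:
            return n
    return len(taus)
```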

3. Groundwater flow problem

The mathematical model of groundwater flow is given by the following linear initial-boundary value problem (IBVP):
$$u_t - \nabla\cdot(a\nabla u) = f \quad \text{in } \Omega, \qquad (3.1)$$
$$u = \psi \quad \text{on } \partial\Omega, \qquad (3.2)$$
$$u(0) = u_0 \quad \text{in } \Omega, \qquad (3.3)$$
where u = u(t, x⃗) is the piezometric head, which expresses the energy per unit weight of the water through the aquifer Ω. The domain Ω is bounded in Rⁿ with smooth boundary ∂Ω. The right-hand side f = f(t, x⃗) represents water sources and sinks, while a = a(t, x⃗) is the hydraulic conductivity. IBVP (3.1)–(3.3) is equipped with a Dirichlet-type boundary condition on ∂Ω.
It is of practical importance to find a = a(t, x⃗) in order to explore the internal structure of the aquifer Ω. To this end, wells are drilled to measure u = u(t, x⃗) at finitely many points in Ω. The goal of the inverse groundwater flow problem is to reconstruct a = a(t, x⃗), given f = f(t, x⃗), u = u(t, x⃗) and ψ = ψ(t). If one considers the steady-state case (u_t = 0), then
$$-\nabla\cdot(a\nabla u) = f \quad \text{in } \Omega, \qquad (3.4)$$
$$u = \psi \quad \text{on } \partial\Omega, \qquad (3.5)$$
where a, f and ψ are time independent. It has been established in [25] that for given f ∈ L²(Ω) and a(x⃗) ∈ L∞(Ω) such that a(x⃗) ≥ ã > 0 for all x⃗ ∈ Ω, boundary value problem (3.4)–(3.5) has a weak solution u(x⃗) ∈ H¹(Ω).
To simulate the data, we first transform (3.4)–(3.5) to a BVP with homogeneous boundary conditions: take U(x⃗) ∈ H¹(Ω) such that it coincides with ψ on ∂Ω and set w := u − U. One gets
$$-\nabla\cdot(a\nabla w) = f + \nabla\cdot(a\nabla U) \quad \text{in } \Omega, \qquad (3.6)$$
$$w = 0 \quad \text{on } \partial\Omega. \qquad (3.7)$$
Now solve the forward problem by the finite element method (FEM) using the weak formulation of (3.6)–(3.7): find w ∈ H¹₀(Ω) such that for all v ∈ H¹₀(Ω)
$$\int_\Omega a\nabla w\cdot\nabla v\, d\Omega = \int_\Omega f v\, d\Omega - \int_\Omega a\nabla U\cdot\nabla v\, d\Omega. \qquad (3.8)$$
The Galerkin approximation of (3.8) is based on constructing the discrete weak equation on a finite dimensional subspace V of H¹₀(Ω). To define V, we first partition the domain Ω into triangles. Let T denote a partition of Ω into triangles such that no vertex of any triangle lies in the interior of a side of another triangle. We assume that T is a uniform mesh in the sense that the triangles in T are of the same size. The uniform mesh generated on Ω is given in Fig. 1. We define V to be the space of continuous functions on Ω which are linear polynomials in each triangle of T:
$$V = \{v : v \text{ is a piecewise linear polynomial such that } v|_{\partial\Omega} = 0\}.$$

Let {ϕi}, i = 1, ..., l, be the basis for V; then w(x⃗) ∈ V is uniquely determined by
$$w(\vec x) = \sum_{i=1}^{l} c_i\varphi_i(\vec x).$$
Since V is a finite dimensional subspace of H¹₀(Ω), identity (3.8) reduces to a system of linear equations
$$AC = F,$$
where A is a positive definite matrix with components
$$A_{ij} = \int_\Omega \nabla\varphi_i \cdot a\nabla\varphi_j \, d\Omega,$$
C = (c₁, c₂, ..., c_l) is the vector of unknown coefficients, and
$$F_i = \int_\Omega f\varphi_i \, d\Omega - \int_\Omega a\nabla U\cdot\nabla\varphi_i \, d\Omega$$
is the forcing vector.
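For readers who want to reproduce the forward solve, a compact sketch of the P1 assembly of A and F is given below. It assumes a homogeneous boundary condition (U = 0, as in Section 4), a piecewise constant conductivity per triangle and one-point quadrature; nodes, tris, a_tri and interior are hypothetical inputs, not data structures from the paper.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def solve_forward(nodes, tris, a_tri, f, interior):
    """P1 FEM sketch for -div(a grad w) = f in Omega, w = 0 on the boundary (U = 0).

    nodes: (n, 2) vertex coordinates; tris: (nt, 3) vertex indices of the triangles;
    a_tri: (nt,) conductivity value per triangle; f: callable f(x, y);
    interior: indices of the non-boundary nodes. Returns the nodal vector of w.
    """
    n = nodes.shape[0]
    A = lil_matrix((n, n))
    F = np.zeros(n)
    gref = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])   # reference hat-function gradients
    for tri, a_e in zip(tris, a_tri):
        p = nodes[tri]                                        # 3 x 2 vertex coordinates
        B = np.column_stack((p[1] - p[0], p[2] - p[0]))       # affine map of the reference element
        area = 0.5 * abs(np.linalg.det(B))
        grads = gref @ np.linalg.inv(B)                       # physical gradients of the 3 hat functions
        fc = f(*p.mean(axis=0))                               # centroid quadrature for the load
        for il, i in enumerate(tri):
            F[i] += fc * area / 3.0
            for jl, j in enumerate(tri):
                A[i, j] += a_e * area * float(grads[il] @ grads[jl])
    K = A.tocsr()[interior, :][:, interior]                   # impose w = 0 at boundary nodes
    w = np.zeros(n)
    w[interior] = spsolve(K, F[interior])
    return w
```

With the wells placed at mesh nodes, the observables entering (3.9) below are simply the entries of the returned vector w at those node indices.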
The inversion procedure can be summarized as follows: solve the forward problem (FP) by using a high accuracy FEM and a given spatial distribution for a = a(x⃗) to get exact measurements {w_i}, i = 1, ..., q, at the wells. Then set a(x⃗) = Σᵢ₌₁ᵐ dᵢϕᵢ(x⃗) and solve the FP, in general, with fewer basis functions. This yields the set of observables, which approximate {w_i} and depend (nonlinearly) on the unknown coefficients {d_i}, i = 1, ..., m. As a result, one arrives at the finite dimensional minimization problem
$$\min_{d\in\mathbb R^m}\|F(d) - w\|^2 = \min_{d\in\mathbb R^m}\sum_{i=1}^{q}\left(\sum_{j=1}^{l} c_j(d)\varphi_j(\vec x_i) - w_i\right)^2. \qquad (3.9)$$

To calculate the Jacobian of F,
$$\nabla F(d) = \left[\frac{\partial F_i}{\partial d^{(k)}}\right]_{\substack{i=1,\ldots,q\\ k=1,\ldots,m}} = \left[\sum_{j=1}^{l}\varphi_j(\vec x_i)\,\frac{\partial c_j}{\partial d^{(k)}}\right]_{\substack{i=1,\ldots,q\\ k=1,\ldots,m}},$$
one can use the implicit equation for C(d):
$$A(d)\,C(d) = F(d).$$
Solving for ∂C/∂d^(k) in the identity
$$\frac{\partial A}{\partial d^{(k)}}\,C + A\,\frac{\partial C}{\partial d^{(k)}} = \frac{\partial F}{\partial d^{(k)}},$$
one gets
$$\frac{\partial C}{\partial d^{(k)}} = [A(d)]^{-1}\left(\frac{\partial F}{\partial d^{(k)}} - \frac{\partial A}{\partial d^{(k)}}\,[A(d)]^{-1}F(d)\right)$$
with
$$\left(\frac{\partial A}{\partial d^{(k)}}\right)_{ij} = \int_\Omega \nabla\varphi_i\cdot\varphi_k\nabla\varphi_j\, d\Omega, \qquad \left(\frac{\partial F}{\partial d^{(k)}}\right)_i = -\int_\Omega \varphi_k\nabla U\cdot\nabla\varphi_i\, d\Omega.$$
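One column of the Jacobian ∇F(d) can thus be obtained with a single additional sparse solve per coefficient d^(k). The sketch below assumes the assembled matrix A(d), the coefficient vector C(d) and the derivative objects ∂A/∂d^(k), ∂F/∂d^(k) from the formulas above are already available (for instance from an assembly routine like the one sketched earlier); obs is a hypothetical array of node indices corresponding to the wells.

```python
from scipy.sparse.linalg import spsolve

def jacobian_column(A, C, dA_k, dF_k, obs):
    """Column k of the Jacobian of the parameter-to-observation map.

    Implements dC/dd^(k) = A^{-1} (dF/dd^(k) - dA/dd^(k) C) and restricts the
    result to the observation nodes obs (the well locations on the mesh).
    A and dA_k are sparse matrices, C and dF_k vectors; all names are illustrative.
    """
    dC = spsolve(A.tocsr(), dF_k - dA_k @ C)
    return dC[obs]
```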

Note that for the underlying continuous problem, the Fréchet derivative F′(d) is compact as an operator from L∞(Ω) to L²(Ω) [26,24]. Hence, at every Newton step one has to solve an ill-posed linear problem, and the error accumulates drastically unless the process is regularized.

4. Experimental results

For our numerical simulations, we consider the steady-state case with homogeneous boundary conditions (ψ = 0 and w = u):
$$-\nabla\cdot(a\nabla u) = f \quad \text{in } \Omega, \qquad (4.1)$$
$$u = 0 \quad \text{on } \partial\Omega. \qquad (4.2)$$

Fig. 2. The exact hydraulic conductivity â = â1 (x, y) and the corresponding forcing function f = f1 (x, y) for the first experiment.


Fig. 3. The exact hydraulic conductivity â = â2 (x, y) and the corresponding forcing function f = f2 (x, y) for the second experiment.

A uniform mesh on the square aquifer Ω = [0, 1] × [0, 1] with 169 nodes producing 288 elements is used. It is assumed in the experiment that there are q = 121 wells in the region. The IRGN–NRT algorithm (1.10) is applied to solve (3.9) numerically, and two different stabilizing terms are compared: $G_n^{(1)}(d_n) = d_n - \xi_n$ and $G_n^{(2)}(d_n) = L^*L(d_n - \xi_n)$, with L acting between the coefficient and physical spaces: $(Ld)_j = \sum_{i=1}^{m} d_i\varphi_i(\vec x_j)$. For the above stabilizing terms, three numerical experiments have been conducted.
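A minimal sketch of the two stabilizing terms in matrix-vector form follows; the argument L is an assumed (q × m) array of nodal basis values at the well locations, so that it realizes exactly the map (Ld)_j = Σᵢ dᵢϕᵢ(x⃗ⱼ) described above.

```python
import numpy as np

def G1(d, xi):
    """First stabilizer, G_n^(1)(d_n) = d_n - xi_n (acts in coefficient space)."""
    return d - xi

def G2(d, xi, L):
    """Second stabilizer, G_n^(2)(d_n) = L^T L (d_n - xi_n) (acts in physical space).

    L is the (q, m) matrix of basis function values at the wells, e.g. L = np.asarray(phi_at_wells)."""
    return L.T @ (L @ (d - xi))
```

Both stabilizers are affine in d_n, so the derivatives entering (1.10) are simply the identity and L^T L, respectively.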
In the first experiment, the exact values of the hydraulic conductivity and the corresponding forcing function are
$$\hat a_1(x, y) = 2 \quad \text{and} \quad f_1(x, y) = 40\pi^2\sin\pi x\,\sin\pi y,$$
respectively (see Fig. 2). For the second experiment,
$$\hat a_2(x, y) = 1 + x + y$$
and
$$f_2(x, y) = -10\pi\cos\pi x\,\sin\pi y - 10\pi\sin\pi x\,\cos\pi y + 20\pi^2(1 + x + y)\sin\pi x\,\sin\pi y,$$
as illustrated in Fig. 3. The purpose of the last experiment is to reconstruct a piecewise constant hydraulic conductivity

$$\hat a_3(x, y) = \begin{cases} 2, & y \le 0.5,\\ 4, & y > 0.5, \end{cases}$$
for which the right-hand side is
$$f_3(x, y) = \begin{cases} 40\pi^2\sin\pi x\,\sin\pi y, & y \le 0.5,\\ 80\pi^2\sin\pi x\,\sin\pi y, & y > 0.5, \end{cases}$$

Fig. 4. The exact hydraulic conductivity â = â3 (x, y) and the corresponding forcing function f = f3 (x, y) for the third experiment.

Fig. 5. The piezometric head for both experiments: u = u(x, y).

Table 1
Relative errors △1 and △2 for the IRGN–NRT algorithm with stabilizing terms $G_n^{(1)}(d_n)$ and $G_n^{(2)}(d_n)$, respectively.

â(x, y)           d0     ξ0     △1         △2
â = 1 + x + y     0.5    4      0.120      0.105
â = 1 + x + y     0.1    3      0.076      0.069
â = 1 + x + y     0.1    0.1    Diverges   0.102
â = 1 + x + y     2.5    2.5    0.063      0.060
â = 1 + xy        0.5    0.5    0.082      0.080
â = 1 + xy        0.5    3      0.167      0.163
â = 1 + xy        2      0.5    0.087      0.085
â = 1 + xy        0.1    0.1    0.117      0.116
â = 1 + xy        0.1    3      0.169      0.167

as presented in Fig. 4. For all three cases, the exact solution to the forward problem is
$$u(x, y) = 10\sin\pi x\,\sin\pi y,$$
as shown in Fig. 5.
In Fig. 6, one can see numerical approximations of â1(x, y), â2(x, y) and â3(x, y), obtained by method (1.10) with $G_n^{(2)}(d_n) = L^*L(d_n - \xi_n)$, ξ0 = d0 = 3.8, after N(δ, σ) = 7 iterations. The cross-sectional comparison for â = â3(x, y) is given in Fig. 7. For the first picture in Fig. 7, y = 0.167. The second picture in Fig. 7 shows the contour plots for y = 0.417.
Table 1 illustrates how the two stabilizing terms $G_n^{(1)}(d_n)$ and $G_n^{(2)}(d_n)$ perform compared to one another for two model diffusivity parameters, â(x, y) = 1 + x + y and â(x, y) = 1 + xy, and different values of d0 and ξ0. In the last two columns of Table 1, the entries △1 and △2 are the relative errors for algorithm (1.10) with $G_n(d_n) = G_n^{(1)}(d_n)$ and $G_n(d_n) = G_n^{(2)}(d_n)$, respectively. The experiments have been performed in the presence of relative noise up to 3%. In both cases, {τn} decays exponentially with τ0 = 10⁻⁶.
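To indicate how the ingredients interact in practice, the following illustrative driver combines the update (1.10) with the feedback choice ξn = dn−1, a geometrically decaying τn and a stopping check of the type (2.10). All names, default values and the decay factor q are assumptions for this sketch; it is not the authors' code.

```python
import numpy as np

def run_irgn_nrt(d0, u_delta, F, F_jac, L, tau0=1e-6, q=0.5, n_max=30,
                 delta=0.0, sigma=0.0, rho=1.0, m=1.0):
    """Illustrative IRGN-NRT driver with G_n(d) = L^T L (d - xi_n) and xi_n = d_{n-1}."""
    d, xi = d0.copy(), d0.copy()
    LtL = L.T @ L
    tau = tau0
    for _ in range(n_max):
        if max(delta / (2.0 * tau), sigma / np.sqrt(tau * m)) > rho / 4.0:
            break                                        # stopping rule of the type (2.10)
        J = F_jac(d)
        lhs = J.T @ J + tau * LtL                        # G'_n = L^T L for this stabilizer
        rhs = J.T @ (F(d) - u_delta) + tau * (LtL @ (d - xi))
        d, xi = d - np.linalg.solve(lhs, rhs), d         # new iterate and xi_{n+1} = d_n
        tau *= q                                         # exponentially decaying tau_n
    return d
```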

Fig. 6. Experimental results: approximate hydraulic conductivities a = a_i(x, y), i = 1, 2, 3, for IRGN–NRT and $G_n(d_n) = G_n^{(2)}(d_n)$.

Fig. 7. Contour plots for the third experiment.

In Table 2, the impact of τ0 is analyzed, while the rate of decay for {τn} remains the same: τn = τ0/(n + 1) and $G_n(d_n) = G_n^{(2)}(d_n)$. When τ0 = 10⁻¹, the process is over-regularized, and for τ0 = 10⁻¹⁵ the regularization is insufficient. Table 2 shows that the scheme is rather stable with respect to the choice of τ0.

Table 2
Relative errors △2 for different values of τ0 and τn = τ0/(n + 1); â(x, y) = 1 + xy, d0 = 0.5 and ξ0 = 2.

τ0        △2
1E−1      0.178
1E−3      0.086
1E−6      0.085
1E−10     0.081
1E−12     0.082
1E−15     Diverges

Table 3
Relative errors △2 for various regularizing sequences {τn} with τ0 = 10⁻¹⁰; â(x, y) = 1 + xy, d0 = 0.5 and ξ0 = 2.

τn                     △2
τn = τ0 e⁻ⁿ            0.085
τn = τ0/(n + 1)        0.081
τn = τ0/ln(n + e)      0.083
τn = τ0                0.084

Table 3 allows us to compare the numerical performance of various regularizing sequences {τn} when τ0 = 10⁻¹⁰. The choice of {τn} does not seem to matter much. This can probably be explained by the fact that ‖Gn(dn)‖ = ‖L*L(dn − ξn)‖ → 0 as n → ∞, since we take ξn = dn−1 for n = 1, 2, . . . in our experiments.

5. Conclusion

To summarize, a more general stabilizer Gn(dn) in the IRGN–NRT algorithm (1.10) allows greater flexibility in taking into account the nature of a particular inverse problem and in utilizing specific a priori information available for each applied model. For the inverse groundwater flow problem, the use of Gn(dn) makes it possible to regularize the solution in the physical space rather than in the coefficient space, by incorporating an appropriate linear operator in the penalty term. One can also update Gn(dn) at every step by using the information obtained for previous value(s) of n.
In the future, we plan to study the possibility of using less information for computing the hydraulic conductivity a from the sample values of the piezometric head u, since in practice there may be fewer wells available in the region. Moreover, we plan to apply the IRGN–NRT algorithm to the problem of estimating the storage coefficient of the porous medium in the time-dependent case. Last but not least, we will consider a finite dimensional analog of Total Variation (1.11) as a penalty term in the corresponding Tikhonov functional for solving inverse problems with possibly non-smooth and/or discontinuous solutions by the IRGN–NRT scheme.

References

[1] A.B. Bakushinsky, Iterative methods for nonlinear operator equations without regularity. New approach, Dokl. Russian Acad. Sci. 330 (1993) 282–284.
[2] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[3] V.V. Vasin, A.L. Ageev, Ill-Posed Problems with a Priori Information, VNU, Utrecht, 1995.
[4] H. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, Boston, London, 1996.
[5] A.B. Bakushinsky, M.Yu. Kokurin, Iterative Methods for Ill-Posed Operator Equations with Smooth Operators, Springer, Dordrecht, 2004.
[6] B. Kaltenbacher, A. Neubauer, O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, in: Radon Series on Computational and
Applied Mathematics, vol. 6, Walter de Gruyter, Berlin, 2008.
[7] A.B. Bakushinsky, Iterative methods without saturation for solving degenerate nonlinear operator equations, Dokl. Russian Acad. Sci. 334 (1995) 7–8.
[8] B. Blaschke, A. Neubauer, O. Scherzer, On convergence rates for the iteratively regularized Gauss–Newton method, IMA J. Numer. Anal. 17 (1997)
421–436.
[9] T. Hohage, Logarithmic convergence rates of the iteratively regularized Gauss–Newton method for an inverse potential and inverse scattering problem,
Inverse Problems 13 (1997) 1279–1299.
[10] Q. Jin, A convergence analysis of the iteratively regularized Gauss–Newton method under the Lipschitz condition, Inverse Problems 24 (2008) 16 pp.
[11] F. Bauer, T. Hohage, A. Munk, Iteratively regularized Gauss–Newton method for nonlinear inverse problems with random noise, SIAM J. Numer. Anal.
47 (2009) 1827–1846.
[12] S. Langer, Complexity analysis of the iteratively regularized Gauss–Newton method with inner CG-iteration, J. Inverse Ill-Posed Probl. 17 (2009)
871–890.
[13] B. Kaltenbacher, B. Hofmann, Convergence rates for the iteratively regularized Gauss–Newton method in Banach spaces, Inverse Problems 26 (2010) 21 pp.
[14] A. Lechleiter, A. Rieder, Towards a general convergence theory for inexact Newton regularizations, Numer. Math. 114 (2010) 521–548.
[15] C. Toews, B. Nelson, Improving the Gauss–Newton convergence of a certain position registration scheme, Inverse Problems 26 (2010) 18 pp.
[16] M. Burger, B. Kaltenbacher, Regularizing Newton–Kaczmarz methods for nonlinear ill-posed problems, SIAM J. Numer. Anal. 44 (2006) 153–182.
[17] A.B. Smirnova, R.A. Renaut, T. Khan, Convergence and application of a modified iteratively regularized Gauss–Newton algorithm, Inverse Problems 23
(2007) 1547–1563.

[18] K. Kunisch, N. Ring, Regularization of nonlinear ill-posed problems with closed operators, Numer. Funct. Anal. Optim. 14 (1998) 389–404.
[19] A.B. Bakushinsky, Iterative methods with fuzzy feedback for solving irregular operator equations, Dokl. Russian Acad. Sci. 428 (2009) 1–3.
[20] S.R. Arridge, Optical tomography in medical imaging: Topical Review, Inverse Problems 15 (1999) R41–R93.
[21] L. He, M. Burger, S. Osher, Iterative total variation regularization with non-quadratic fidelity, J. Math. Imaging Vision 26 (2006) 167–184.
[22] A. Marquina, S. Osher, Image super-resolution by TV-regularization and Bregman iteration, J. Sci. Comput. 37 (2008) 367–382.
[23] M. Bachmayr, M. Burger, Iterative total variation schemes for nonlinear inverse problems, Inverse Problems 25 (2009) 26 pp.
[24] M. Hanke, A regularizing Levenberg–Marquardt scheme, with applications to inverse groundwater filtration problems, Inverse Problems 13 (1997)
79–95.
[25] R. Dautray, J.-L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology, in: Evolution Problems. I. With the Collaboration of Michel Artola, Michel Cessenat and Hélène Lanchon, vol. 5, Springer-Verlag, Berlin, 1992, Translated from the French by Alan Craig.
[26] K. Ito, K. Kunisch, On the injectivity and linearization of the coefficient-to-solution mapping for elliptic boundary value problems, J. Math. Anal. Appl.
188 (1994) 1040–1066.
