Zentrum für Technomathematik
Fachbereich 3 – Mathematik und Informatik

Morozov's Discrepancy Principle for Tikhonov Regularization of Nonlinear Operators

Ronny Ramlau

Report 01-08
July 2001
Abstract

We consider Morozov's discrepancy principle for Tikhonov regularization of nonlinear operator equations $F(x) = y$. It is shown that the strong continuity of $F$ already guarantees a regularization parameter $\alpha$ such that $\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta$ holds. Additional smoothness assumptions on the solution of $F(x) = y$ ensure a convergence rate of $\|x^\dagger - x_\alpha^\delta\| = O(\sqrt{\delta})$ for strongly continuous operators.
1 Introduction

A large variety of technical and physical problems can be mathematically modeled by an operator equation

$$F(x) = y, \qquad (1)$$

where $F : X \to Y$ is a (nonlinear) operator between Hilbert spaces, $x$ the searched-for information and $y$ the exact data. Typical examples of such problems arise in medical imaging [22] or inverse scattering [5]. The available data usually stems from a measurement process. Due to measurement errors, we have to deal with noisy data $y^\delta$ which satisfy

$$\|y^\delta - y\| \le \delta. \qquad (2)$$

If the solution of (1) does not depend continuously on the data, then the problem is called ill posed. In case of inexact data, this instability requires regularization methods for treating the inverse problem.
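The instability can already be seen in a linear, finite-dimensional caricature of (1): a matrix with rapidly decaying singular values amplifies the data error $\delta$ enormously when inverted naively, while a (Tikhonov-)regularized inversion does not. A minimal numerical sketch; the smoothing matrix, noise level and parameter below are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# A discrete smoothing (convolution-type) operator: severely ill-conditioned.
s, t = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
A = np.exp(-50.0 * (s - t) ** 2) / n

x_true = np.sin(2 * np.pi * np.linspace(0, 1, n))
y = A @ x_true
delta = 1e-3
y_delta = y + delta * rng.standard_normal(n)   # noisy data y^delta

x_naive = np.linalg.solve(A, y_delta)          # unregularized: noise blows up
err_naive = np.linalg.norm(x_naive - x_true)

alpha = 1e-4                                   # Tikhonov-regularized inversion
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)
err_tik = np.linalg.norm(x_tik - x_true)

assert err_tik < err_naive                     # regularization pays off
```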
Because $F$ is a nonlinear operator, equation (1) might have several solutions. We will call $x^\dagger$ an $\bar x$-minimum-norm-solution, if

$$F(x^\dagger) = y \qquad (3)$$

and

$$\|x^\dagger - \bar x\| = \min_{x \in D(F)} \{\|x - \bar x\| : F(x) = y\}. \qquad (4)$$
During the last decade most of the well known regularization methods for linear operator equations have been generalized to special classes of nonlinear operators. E.g., iterative methods like Landweber iteration [15, 25], Levenberg-Marquardt methods [13], Gauss-Newton [1, 3], conjugate gradient [14] and Newton-like methods [2] are easy to implement. Unfortunately, these methods work only under relatively strong conditions on the nonlinear operator and its Fréchet derivative.

Another widely used method is Tikhonov regularization, where the nonlinear equation (1) is replaced by the minimization problem of finding a minimizer $x_\alpha^\delta$ of the Tikhonov functional

$$J_\alpha(x) = \|y^\delta - F(x)\|^2 + \alpha\|x - \bar x\|^2. \qquad (5)$$

Tikhonov regularization works for a reasonably large class of nonlinear operators. In principle, it can be applied to weakly sequentially closed operators with Lipschitz-continuous Fréchet derivative [10]. As for all regularization methods, a main problem is the choice of the regularization parameter.
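For a fixed parameter $\alpha$, a minimizer of (5) can be computed by any descent method. A minimal sketch with a toy componentwise nonlinear operator; the operator $F(x) = x \cdot x$, the step size and the iteration count are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Toy nonlinear operator F(x) = x * x (componentwise) with
# Fréchet derivative F'(x)h = 2 x * h (self-adjoint here).
def F(x):
    return x * x

def Fp_adj(x, r):            # F'(x)^* r
    return 2 * x * r

def tikhonov_minimize(y_delta, alpha, x_bar, steps=2000, lr=0.05):
    """Plain gradient descent on J_alpha(x) = ||y^d - F(x)||^2 + alpha ||x - x_bar||^2."""
    x = x_bar.copy()
    for _ in range(steps):
        grad = 2 * Fp_adj(x, F(x) - y_delta) + 2 * alpha * (x - x_bar)
        x -= lr * grad
    return x

x_true = np.array([0.8, 1.2, 0.5])
y_delta = F(x_true) + 1e-3          # noisy data
x_bar = np.ones(3)                  # a priori guess
x_alpha = tikhonov_minimize(y_delta, alpha=1e-3, x_bar=x_bar)

# the residual should be of the order of the data error
assert np.linalg.norm(F(x_alpha) - y_delta) < 0.1
```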
To obtain convergence rates for Tikhonov regularization, one has to assume a smoothness condition $x^\dagger - \bar x = F'(x^\dagger)^* \omega$ with sufficiently small $\|\omega\|$. With an a priori parameter choice $\alpha = c\delta$, $c > 0$, a convergence rate

$$\|x^\dagger - x_\alpha^\delta\| \le k(c)\sqrt{\delta} \qquad (6)$$

with $k(c) > 0$ can be obtained [11]. An examination of the convergence proof shows that $k(c)$ is minimized by the optimal parameter choice $c = c_{\mathrm{opt}}$, $c_{\mathrm{opt}} = \|\omega\|^{-1}$, and

$$\|x^\dagger - x_\alpha^\delta\| \le \frac{2(\|\omega\|\delta)^{1/2}}{(1 - L\|\omega\|)^{1/2}} \qquad (7)$$

($L$ denotes the Lipschitz constant of the Fréchet derivative). In general, the value of $\|\omega\|$ is not available, and so is $c_{\mathrm{opt}}$. As a consequence, one will never get the optimal constant $k$ for an a priori parameter choice.
An alternative is given by a posteriori parameter strategies. A well studied method is Morozov's discrepancy principle, where a regularization parameter $\alpha$ with

$$\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta, \qquad (8)$$

$\tau_1 \ge 1$, is used. An advantage of Morozov's principle is that, even without knowing $\|\omega\|$, one always gets an estimate

$$\|x^\dagger - x_\alpha^\delta\| \le \frac{(2(1 + \tau_1)\|\omega\|\delta)^{1/2}}{(1 - L\|\omega\|)^{1/2}} \qquad (9)$$

(see Theorem 2.9). For $\tau_1 = 1$, we get the optimal error bound (7); for $\tau_1 > 1$ this bound is multiplied by $\sqrt{(1 + \tau_1)/2}$. A drawback of the discrepancy principle is that a regularization parameter with (8) might not exist for general nonlinear operators $F$. Moreover, even if such a parameter exists, it requires an additional optimization process to find it numerically. The classical algorithm for Morozov's discrepancy principle applied to Tikhonov regularization for linear operators $A$ [18, 12, 19] would be to choose $\tau_1$, $\alpha_0 > 0$, $0 < q < 1$, $\alpha_j = q^j \alpha_0$, and to compute $x_{\alpha_j}^\delta$ until

$$\delta \le \|y^\delta - A x_{\alpha_j}^\delta\| \le \tau_1\delta. \qquad (10)$$

An operator $F$ between Banach spaces is weakly sequentially closed if for every sequence $\{x_n\} \subset D(F)$,

$$x_n \rightharpoonup x \quad \text{and} \quad F(x_n) \rightharpoonup y \quad \text{for } n \to \infty$$

implies $x \in D(F)$ and $F(x) = y$.
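For a linear operator, the classical search over $\alpha_j = q^j\alpha_0$ in (10) can be sketched as follows; since for linear Tikhonov regularization the residual is continuous and monotone in $\alpha$, a bisection step is used once the geometric sweep brackets the trust region. The matrix, noise level and constants are illustrative assumptions:

```python
import numpy as np

def discrepancy_parameter(A, y_delta, delta, tau1=2.0, alpha0=1.0, q=0.5, max_iter=100):
    """Find alpha with delta <= ||y_delta - A x_alpha|| <= tau1*delta
    (geometric sweep alpha_j = q^j alpha_0, then bisection once bracketed)."""
    n = A.shape[1]
    def solve(alpha):
        x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)
        return x, np.linalg.norm(y_delta - A @ x)
    alpha, alpha_hi, alpha_lo = alpha0, alpha0, 0.0
    for _ in range(max_iter):
        x, res = solve(alpha)
        if delta <= res <= tau1 * delta:
            return alpha, x
        if res > tau1 * delta:            # residual too large: decrease alpha
            alpha_hi = alpha
            alpha = q * alpha if alpha_lo == 0.0 else 0.5 * (alpha_lo + alpha)
        else:                             # residual below delta: increase alpha
            alpha_lo = alpha
            alpha = 0.5 * (alpha + alpha_hi)
    raise RuntimeError("no admissible parameter found")

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n)) / n
x_true = rng.standard_normal(n)
delta = 1e-2
y_delta = A @ x_true + delta / np.sqrt(n) * rng.standard_normal(n)  # ||noise|| ~ delta

alpha, x = discrepancy_parameter(A, y_delta, delta)
assert delta <= np.linalg.norm(y_delta - A @ x) <= 2.0 * delta
```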
We summarize some well known results about strongly continuous operators which will be needed later on.

Proposition 2.1 (1) If $\tilde F : X \to Z$ is continuous and $i : X^s \hookrightarrow X$ is a compact embedding, then $F = \tilde F \circ i : X^s \to Z$ is strongly continuous; a typical example is the composition $F : H^{s+\varepsilon}(\Omega_1) \hookrightarrow H^s(\Omega_1) \to H^s(\Omega_2)$. (2) A strongly continuous operator is weakly sequentially closed.
Let $x_\alpha$ denote a minimizing element of the Tikhonov functional $J_\alpha(x)$. By the minimizing property of $x_{\alpha_1}$ we get for $\alpha_1 < \alpha_0$

$$J_{\alpha_1}(x_{\alpha_1}) \le J_{\alpha_1}(x_{\alpha_0}) \le J_{\alpha_0}(x_{\alpha_0}). \qquad (14)$$

It is well known that for general nonlinear operators $F$ the mapping $\alpha \to \|y^\delta - F(x_\alpha)\|$ might be discontinuous. Due to the continuity of $F$, the mapping $\alpha \to x_\alpha$ then has to be discontinuous too. Nevertheless, $\alpha \to J_\alpha(x_\alpha)$ is always continuous:
Proposition 2.2 Let $\alpha_k \to \alpha > 0$ for $k \to \infty$, $\alpha_k > 0$ for all $k \in \mathbb{N}$. Then

$$J_{\alpha_k}(x_{\alpha_k}) \to J_\alpha(x_\alpha) \quad \text{for } k \to \infty.$$

Proof: We have

$$|J_{\alpha_k}(x_{\alpha_k}) - J_\alpha(x_\alpha)| = \begin{cases} J_{\alpha_k}(x_{\alpha_k}) - J_\alpha(x_\alpha) & \text{for } \alpha_k \ge \alpha, \\ J_\alpha(x_\alpha) - J_{\alpha_k}(x_{\alpha_k}) & \text{for } \alpha_k \le \alpha, \end{cases}$$

and thus it follows from (14) for $\alpha_k \ge \alpha$

$$J_{\alpha_k}(x_{\alpha_k}) - J_\alpha(x_\alpha) \le J_{\alpha_k}(x_\alpha) - J_\alpha(x_\alpha) = (\alpha_k - \alpha)\|x_\alpha - \bar x\|^2 \qquad (15)$$

and for $\alpha_k \le \alpha$

$$J_\alpha(x_\alpha) - J_{\alpha_k}(x_{\alpha_k}) \le J_\alpha(x_{\alpha_k}) - J_{\alpha_k}(x_{\alpha_k}) = (\alpha - \alpha_k)\|x_{\alpha_k} - \bar x\|^2. \qquad (16)$$

With

$$\alpha_k \|x_{\alpha_k} - \bar x\|^2 \le J_{\alpha_k}(x_{\alpha_k}) \le J_{\alpha_k}(\bar x) = \|y^\delta - F(\bar x)\|^2$$

we get altogether $J_{\alpha_k}(x_{\alpha_k}) \to J_\alpha(x_\alpha)$.
Now let us assume that no parameter fulfilling the discrepancy principle (13) exists. Under reasonable assumptions on the a priori guess $\bar x$ in $J_\alpha(x)$, parameters $\alpha_0 > \alpha_1$ with

$$\|y^\delta - F(x_{\alpha_0})\| > \tau_1\delta, \qquad (17)$$

$$\|y^\delta - F(x_{\alpha_1})\| < \delta \qquad (18)$$

exist:

Proposition 2.3 Let $\bar x$ be chosen s.t.

$$\|y^\delta - F(\bar x)\| > \tau_1\delta. \qquad (19)$$

If no regularization parameter with (13) exists, then $\alpha_0 > \alpha_1$ with (17), (18) can be found.
Proof: It follows with $y^\delta \ne F(\bar x)$ that $x_{\alpha_0}$ converges to $\bar x$ for $\alpha_0 \to \infty$ and, due to the continuity of $F$,

$$\|y^\delta - F(x_{\alpha_0})\| \to \|y^\delta - F(\bar x)\|.$$

Because of (19), $\|y^\delta - F(x_{\alpha_0})\| > \tau_1\delta$ must hold for $\alpha_0$ big enough. On the other hand, we can estimate $\|y^\delta - F(x_{\alpha_1})\|$ from above by

$$\|y^\delta - F(x_{\alpha_1})\|^2 \le J_{\alpha_1}(x_{\alpha_1}) \le J_{\alpha_1}(x^\dagger) \le \delta^2 + \alpha_1\|x^\dagger - \bar x\|^2,$$

which stays below the admissible bound for $\alpha_1$ small enough; since no parameter with (13) exists, (18) follows.

Assumption (19) is quite natural: Indeed, if $\|y^\delta - F(\bar x)\| \le \tau_1\delta$ held, then $\bar x$ itself would be taken as approximation to the solution $x^\dagger$.

If no parameter with (13) exists, then $\|y^\delta - F(x_\alpha)\|$ has a jump at a certain parameter $\tilde\alpha$. Because $J_\alpha(x_\alpha)$ is continuous, $\alpha\|x_\alpha - \bar x\|^2$ must have a jump of the same size.
Proposition 2.4 Assume that no parameter with (13) exists and (19) holds. Then there exists a parameter $\tilde\alpha$ such that (17), (18) holds for all $\alpha_1 < \tilde\alpha < \alpha_0$, with $\alpha_0, \alpha_1$ arbitrarily close to $\tilde\alpha$. Moreover, we get

$$\|x_{\alpha_1} - x_{\alpha_0}\| \ge \frac{(\tau_1^2 - 1)\,\delta^2}{4\,\alpha_0\,\|x_{\alpha_1} - \bar x\|}. \qquad (20)$$
Proof: Proposition 2.3 ensures the existence of $\alpha_{u,1} > \alpha_{l,1}$ with $\|y^\delta - F(x_{\alpha_{l,1}})\| < \delta \le \tau_1\delta < \|y^\delta - F(x_{\alpha_{u,1}})\|$. Setting $\alpha_{m,j} = \alpha_{l,j} + \frac{\alpha_{u,j} - \alpha_{l,j}}{2}$, $j = 1, 2, \dots$, we have either $\|y^\delta - F(x_{\alpha_{m,j}})\| < \delta$ or $\|y^\delta - F(x_{\alpha_{m,j}})\| > \tau_1\delta$; in the first case we set $\alpha_{l,j+1} = \alpha_{m,j}$, $\alpha_{u,j+1} = \alpha_{u,j}$, in the second case $\alpha_{l,j+1} = \alpha_{l,j}$ and $\alpha_{u,j+1} = \alpha_{m,j}$. According to the construction of both sequences, $\{\alpha_{l,j}\}$ and $\{\alpha_{u,j}\}$ converge to the same limit point, $\alpha_{l,j} \uparrow \tilde\alpha$, $\alpha_{u,j} \downarrow \tilde\alpha$ for $j \to \infty$. Due to Proposition 2.2, we can especially choose $\alpha_0, \alpha_1$ with $\alpha_1 < \tilde\alpha < \alpha_0$ and (17), (18) such that

$$J_{\alpha_1}(x_{\alpha_1}) = J_{\alpha_0}(x_{\alpha_0}) - \epsilon \qquad (21)$$

and $\epsilon < \frac{c\delta^2}{2}$, $c = \tau_1^2 - 1$ hold. From the definition of $J_\alpha(x)$, (17), (18) and (21) it then follows

$$\alpha_1\|x_{\alpha_1} - \bar x\|^2 - \alpha_0\|x_{\alpha_0} - \bar x\|^2 = \|y^\delta - F(x_{\alpha_0})\|^2 - \|y^\delta - F(x_{\alpha_1})\|^2 - \epsilon > \tau_1^2\delta^2 - \delta^2 - \epsilon \ge \frac{c\delta^2}{2},$$

or, as $\alpha_1 < \alpha_0$,

$$\alpha_0\left(\|x_{\alpha_1} - \bar x\|^2 - \|x_{\alpha_0} - \bar x\|^2\right) \ge \alpha_1\|x_{\alpha_1} - \bar x\|^2 - \alpha_0\|x_{\alpha_0} - \bar x\|^2 > \frac{c\delta^2}{2} > 0. \qquad (22)$$

With

$$\|x_{\alpha_1} - \bar x\|^2 - \|x_{\alpha_0} - \bar x\|^2 \le \|x_{\alpha_1} - x_{\alpha_0}\|\left(\|x_{\alpha_1} - \bar x\| + \|x_{\alpha_0} - \bar x\|\right) \le 2\,\|x_{\alpha_1} - x_{\alpha_0}\|\,\|x_{\alpha_1} - \bar x\|$$

(using $\|x_{\alpha_0} - \bar x\| \le \|x_{\alpha_1} - \bar x\|$) we arrive at

$$\|x_{\alpha_1} - x_{\alpha_0}\| \ge \frac{c\delta^2}{4\,\alpha_0\,\|x_{\alpha_1} - \bar x\|},$$

which shows that the distance between $x_{\alpha_1}$ and $x_{\alpha_0}$ for arbitrarily close parameters $\alpha_0 > \alpha_1$ with (17), (18) is bounded from below.
In the following we will show that strong continuity of $F$ ensures the existence of a regularization parameter with (13).

Proposition 2.5 Let $\{\alpha_k\}_{k \in \mathbb{N}}$, $\alpha_k > 0$ for all $k \in \mathbb{N}$, be a sequence with $\alpha_k \to \alpha > 0$ for $k \to \infty$, and $x_{\alpha_k}$ a corresponding minimizing element of the Tikhonov functional $J_{\alpha_k}(x)$. If $F$ is strongly continuous, then there exists a weakly convergent subsequence $x_{\alpha_{k_n}}$ of $x_{\alpha_k}$,

$$x_{\alpha_{k_n}} \rightharpoonup \tilde x,$$

and $\tilde x$ is a minimizer of $J_\alpha(x)$.

Proof: As in (16) we get

$$\alpha_k\|x_{\alpha_k} - \bar x\|^2 \le J_{\alpha_k}(x_{\alpha_k}) \le \|y^\delta - F(\bar x)\|^2,$$

i.e. $\{x_{\alpha_k}\}$ is bounded. Thus, there exists a weakly convergent subsequence of $\{x_{\alpha_k}\}$ (for simplicity of notation, this subsequence will be denoted by $x_{\alpha_k}$ again), $x_{\alpha_k} \rightharpoonup \tilde x$, and by the strong continuity of $F$,

$$y^\delta - F(x_{\alpha_k}) \to y^\delta - F(\tilde x).$$

Using the weak lower semicontinuity of the norm and Proposition 2.2, we conclude

$$J_\alpha(\tilde x) \le \liminf_k \left(\|y^\delta - F(x_{\alpha_k})\|^2 + \alpha_k\|x_{\alpha_k} - \bar x\|^2 + (\alpha - \alpha_k)\|x_{\alpha_k} - \bar x\|^2\right) = \lim_{k \to \infty} J_{\alpha_k}(x_{\alpha_k}) = J_\alpha(x_\alpha),$$

i.e. $\tilde x$ is a minimizer of $J_\alpha(x)$.
We might note that the minimizing element of $J_\alpha(x)$ does not have to be unique; but we are only interested in finding a weakly convergent subsequence to an arbitrary minimizing element, which means we can assign $\tilde x$ to $x_\alpha$. On the other hand, if $J_\alpha(x)$ has a unique minimizer, then every subsequence of $x_{\alpha_k}$ has a subsequence which converges weakly to the unique minimizer $x_\alpha$, and it follows by the convergence principles that the sequence $x_{\alpha_k}$ itself converges weakly to $x_\alpha$.
Theorem 2.6 Let $F$ be a strongly continuous operator and $\bar x$ be chosen such that (19) holds. Then there exists a parameter $\alpha$ for Tikhonov regularization s.t.

$$\delta \le \|y^\delta - F(x_\alpha)\| \le \tau_1\delta \qquad (23)$$

holds.

Proof: Let us assume there exists no such parameter. We set

$$M := \{\alpha : \|y^\delta - F(x_\alpha)\| < \delta\}$$

and $\bar\alpha = \sup M$. According to Proposition 2.3, $0 < \bar\alpha < \infty$ holds. We have to consider two cases:

1. $\bar\alpha \in M$. Then we choose a sequence $\alpha_k \downarrow \bar\alpha$. Due to Proposition 2.5 we can find a subsequence $\{x_{\alpha_{k_n}}\}$ of $\{x_{\alpha_k}\}$ with $x_{\alpha_{k_n}} \rightharpoonup x_{\bar\alpha}$. It is $\alpha_{k_n} > \bar\alpha$, and because no parameter with (13) exists,

$$\|y^\delta - F(x_{\alpha_{k_n}})\| > \tau_1\delta \qquad (24)$$

must hold for all $\alpha_{k_n}$. But due to the strong continuity of $F$ we observe

$$\|y^\delta - F(x_{\alpha_{k_n}})\| \to \|y^\delta - F(x_{\bar\alpha})\| < \delta, \qquad (25)$$

which is a contradiction to (24).

2. $\bar\alpha \notin M$. Then we choose a sequence $\alpha_k \uparrow \bar\alpha$ with $\alpha_k \in M$, i.e.

$$\|y^\delta - F(x_{\alpha_k})\| < \delta. \qquad (26)$$

As above, a subsequence fulfills $\|y^\delta - F(x_{\alpha_{k_n}})\| \to \|y^\delta - F(x_{\bar\alpha})\| > \tau_1\delta$ for $k_n \to \infty$, which is a contradiction to (26).
For the numerical realization of Morozov's discrepancy principle, we can now propose the well known iterative algorithm from linear inverse problems:

    Choose $\alpha_0 > 0$, $0 < q_0 = q < 1$, set $\alpha_u = \alpha_0$.
    For $j = 1, 2, \dots$ until $\delta \le \|y^\delta - F(x_{\alpha_j})\| \le \tau_1\delta$:
        If $\|y^\delta - F(x_{\alpha_{j-1}})\| \ge \delta$ then $q_j = q_{j-1}$,
        else $q_j = q_{j-1} + (1 - q_{j-1})/2$; set $\alpha_j = q_j\,\alpha_u$.
        Compute $x_{\alpha_j}$.
        If $\|y^\delta - F(x_{\alpha_j})\| > \tau_1\delta$ then $\alpha_u = \alpha_j$.
    end
The algorithm produces a monotone decreasing sequence of regularization parameters $\alpha_j$ at the beginning. Only if the norm of the residual jumps over the trust region, $\|y^\delta - F(x_{\alpha_{j-1}})\| < \delta \le \tau_1\delta < \|y^\delta - F(x_{\alpha_j})\|$, a bigger parameter is used. However, if $q$ is chosen close to 1 or if $\tau_1$ is reasonably big, such a jump will not occur in a numerical realization. In [28] it was outlined that Morozov's discrepancy principle sometimes yields a too small regularization parameter; this can be avoided by choosing $\tau_1$ big enough.
When an iterative algorithm for minimizing $J_\alpha(x)$ is used, then the computational effort might depend on a good starting value for the iteration. For $\alpha_{j+1} = \alpha_j q$, $q < 1$, one would like to take the already computed minimizing element $x_{\alpha_j}$ of $J_{\alpha_j}(x)$ as starting value for the iteration for minimizing $J_{\alpha_{j+1}}(x)$. For $q \approx 1$ this can be justified if the mapping $\alpha \to x_\alpha$ is continuous.
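The effect of such a warm start can be illustrated numerically: for a toy componentwise operator (the operator $F(x) = x \cdot x$, step size and tolerance are illustrative assumptions), the final solve started from the previous minimizer needs fewer gradient steps than a cold start from $\bar x$:

```python
import numpy as np

def F(x):
    return x * x                       # toy operator, F'(x)h = 2 x h

def minimize_J(y_delta, alpha, x_bar, x0, lr=0.05, tol=1e-8, max_iter=100000):
    """Gradient descent on J_alpha; returns minimizer and iteration count."""
    x = x0.copy()
    for k in range(max_iter):
        grad = 2 * (2 * x) * (F(x) - y_delta) + 2 * alpha * (x - x_bar)
        x -= lr * grad
        if np.linalg.norm(grad) < tol:
            return x, k
    return x, max_iter

y_delta = np.array([0.64, 1.44]) + 1e-3
x_bar = np.ones(2)
alphas = [1.0 * 0.5 ** j for j in range(8)]   # alpha_{j+1} = q * alpha_j, q = 0.5

x, k = x_bar, 0
for a in alphas:
    x, k = minimize_J(y_delta, a, x_bar, x0=x)   # warm start at previous minimizer

x_cold, k_cold = minimize_J(y_delta, alphas[-1], x_bar, x0=x_bar)  # cold start
assert k <= k_cold                                # warm start is not more expensive
```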
We will now show that for every sequence $\alpha_k \to \alpha$ there exists at least a convergent subsequence s.t. $x_{\alpha_{k_n}} \to x_\alpha$ holds. For this result we have to employ a new property of the operator $F$. We require

$$x_n \rightharpoonup x \text{ for } n \to \infty \implies F'(x_n)^* z \to F'(x)^* z \text{ for } n \to \infty. \qquad (27)$$

In the next section we will give several examples where this condition is fulfilled.

Theorem 2.7 Let the assumptions of Proposition 2.5 hold. If the Fréchet derivative $F'$ of $F$ is Lipschitz continuous,

$$\|F'(x) - F'(z)\| \le L\|x - z\|, \qquad (28)$$

and in addition condition (27) holds, then there exists a convergent subsequence $\{x_{\alpha_{k_n}}\}$ of $\{x_{\alpha_k}\}$,

$$x_{\alpha_{k_n}} \to \tilde x,$$

and $\tilde x$ is a minimizer of $J_\alpha(x)$. If in addition $J_\alpha(x)$ has a unique minimizer, then the whole sequence converges to the minimizer of $J_\alpha(x)$.
Proof: According to Proposition 2.5, we can find a weakly convergent subsequence $\{x_{\alpha_{k_n}}\}$ of $\{x_{\alpha_k}\}$ with weak limit $\tilde x$, where $\tilde x$ is a minimizer of $J_\alpha(x)$. For simplicity of notation, we will again denote $x_{\alpha_{k_n}}$ by $x_{\alpha_k}$. The necessary conditions for a minimum of $J_\alpha(x)$ and $J_{\alpha_k}(x)$ are

$$F'(\tilde x)^*(y^\delta - F(\tilde x)) - \alpha(\tilde x - \bar x) = 0,$$
$$F'(x_{\alpha_k})^*(y^\delta - F(x_{\alpha_k})) - \alpha_k(x_{\alpha_k} - \bar x) = 0. \qquad (29)$$

Abbreviating $y_k := y^\delta - F(x_{\alpha_k})$ and $y := y^\delta - F(\tilde x)$, we have

$$F'(x_{\alpha_k})^*(y_k) - F'(\tilde x)^*(y) = F'(x_{\alpha_k})^*(y_k - y) + \left(F'(x_{\alpha_k})^* - F'(\tilde x)^*\right)(y).$$

By the strong continuity of $F$, $\|y_k - y\| \to 0$ for $k \to \infty$, and so $\|F'(x_{\alpha_k})^*(y_k - y)\| \to 0$. Condition (27) yields

$$F'(x_{\alpha_k})^*(y) \to F'(\tilde x)^*(y) \quad \text{for } k \to \infty,$$

and thus we have shown

$$F'(x_{\alpha_k})^*(y_k) - F'(\tilde x)^*(y) \to 0.$$

From (29) we then conclude $\alpha_k(x_{\alpha_k} - \bar x) \to \alpha(\tilde x - \bar x)$ and, since $\alpha_k \to \alpha > 0$, $x_{\alpha_k} \to \tilde x$.

After the proof of Proposition 2.5 we have seen that in case of a unique minimizer of $J_\alpha(x)$, the whole sequence $x_{\alpha_k}$ converges weakly to $\tilde x$. With the above arguments the convergence is also strong.
At the end of this section we would like to give a convergence and a convergence rate result.

Theorem 2.8 If $y^{\delta_k}$ denotes data with $\|y^{\delta_k} - y\| \le \delta_k$, $\delta_k \to 0$, and $x_{\alpha_k}^{\delta_k}$ is a minimizer of the Tikhonov functional (5) with $y^\delta$ replaced by $y^{\delta_k}$ and the parameter $\alpha_k$ chosen by Morozov's discrepancy principle, then $x_{\alpha_k}^{\delta_k}$ has a convergent subsequence. The limit of every convergent subsequence is an $\bar x$-minimum-norm-solution of (1). If, in addition, the $\bar x$-minimum-norm-solution $x^\dagger$ of (1) is unique, then

$$x_{\alpha_k}^{\delta_k} \to x^\dagger \quad \text{for } k \to \infty.$$

Theorem 2.9 Let the following hold:

1. $x^\dagger - \bar x = F'(x^\dagger)^* \omega$, \qquad (30)

and

2. $L\|\omega\| < 1$.

If the regularization parameter $\alpha$ is chosen s.t.

$$\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta, \qquad (31)$$

then

$$\|x^\dagger - x_\alpha^\delta\| \le \left(\frac{2(1 + \tau_1)\|\omega\|\,\delta}{1 - L\|\omega\|}\right)^{1/2}. \qquad (32)$$

Proof: Theorem 2.6 ensures the existence of a parameter with (31). The proof is a modification of a convergence proof for an a priori parameter choice for Tikhonov regularization. As in [10, p. 246] we obtain

$$\|y^\delta - F(x_\alpha^\delta)\|^2 + \alpha\|x^\dagger - x_\alpha^\delta\|^2 \le \delta^2 + 2\alpha\|\omega\|\left(\delta + \|y^\delta - F(x_\alpha^\delta)\|\right) + \alpha L\|\omega\|\,\|x^\dagger - x_\alpha^\delta\|^2 \qquad (33)$$

or

$$(1 - L\|\omega\|)\,\|x^\dagger - x_\alpha^\delta\|^2 \le \frac{\delta^2 - \|y^\delta - F(x_\alpha^\delta)\|^2}{\alpha} + 2\|\omega\|\left(\delta + \|y^\delta - F(x_\alpha^\delta)\|\right). \qquad (34)$$

With $\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta$, the first term on the right hand side of (34) is nonpositive, and we arrive at

$$\|x^\dagger - x_\alpha^\delta\|^2 \le \frac{2(1 + \tau_1)\|\omega\|\,\delta}{1 - L\|\omega\|}.$$
3 Examples

In the following we will give some examples which meet the conditions of Section 2. For the autoconvolution operator it is shown that the operator is strongly continuous and meets (27) but not Scherzer's conditions (12). Other examples come from bilinear operator equations and medical imaging.
3.1 The autoconvolution operator

The autoconvolution operator is given by

$$\tilde F(x)(s) = \int x(s - t)\,x(t)\,dt. \qquad (35)$$

For $x \in L^2[a, b]$, $-\infty < a < b < \infty$, we get by setting $x(t) = 0$ for $t \notin [a, b]$ that $\operatorname{supp} \tilde F(x) \subset [c, d]$, $-\infty < c < d < \infty$, and we can consider

$$\tilde F : L^2[a, b] \to L^2[c, d].$$

Hölder's inequality yields

$$\|\tilde F(x)\|_{L^2[c,d]} \le (d - c)^{1/2}\,\|x\|_{L^2[a,b]}^2.$$

The operator $\tilde F$ is especially a (symmetric) bilinear operator, $\tilde F(x) = B(x, x)$ with $\|B\| = (d - c)^{1/2}$, and it follows immediately that $\tilde F$ is Fréchet differentiable with derivative

$$\tilde F'(x)h = 2B(x, h). \qquad (36)$$
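The operator and the norm bound above can be checked numerically with a Riemann-sum discretization (grid, test function and interval are illustrative assumptions; with $[a, b] = [0, 1]$ the autoconvolution is supported on $[c, d] = [0, 2]$):

```python
import numpy as np

# Discretize x on [0, 1] with n points; F(x) then lives on [0, 2].
n = 200
h = 1.0 / n
t = np.linspace(0.0, 1.0, n, endpoint=False)
x = np.sin(2 * np.pi * t)               # a test function in L^2[0, 1]

# F(x)(s) = int x(s - t) x(t) dt, approximated by a Riemann sum:
Fx = h * np.convolve(x, x)

norm_x  = np.sqrt(h * np.sum(x ** 2))   # ||x||_{L^2[0,1]}
norm_Fx = np.sqrt(h * np.sum(Fx ** 2))  # ||F(x)||_{L^2[0,2]}

# Hölder bound with c = 0, d = 2: ||F(x)|| <= (d - c)^{1/2} ||x||^2
assert norm_Fx <= np.sqrt(2.0) * norm_x ** 2 + 1e-12
```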
To obtain a weakly sequentially closed operator, we have to assume some smoothness of the solution of the equation $\tilde F(x) = y$. Therefore we consider the autoconvolution operator between a Sobolev space of order $s > 0$ and $L^2[c, d]$: We define $F$ by

$$F : H_0^s[a, b] \overset{i}{\hookrightarrow} L^2[a, b] \overset{\tilde F}{\to} L^2[c, d], \qquad (37)$$

where $i$ denotes the compact embedding from $H_0^s[a, b]$ into $L^2[a, b]$. From Proposition 2.1 it follows that $F$ is strongly continuous and hence weakly sequentially closed.
Proposition 3.1 The Fréchet derivative $F'$ of $F$ is Lipschitz continuous with

$$\|F'(x) - F'(z)\|_{H_0^s[a,b] \to L^2[c,d]} \le 2(d - c)^{1/2}\,\|x - z\|_{L^2[a,b]}. \qquad (38)$$

Proof: We have $(F'(x) - F'(z))h = 2(x - z) * h$, and by Hölder's inequality

$$\frac{1}{4}\|(F'(x) - F'(z))h\|_{L^2[c,d]}^2 = \int_c^d \left(\int_a^b (x - z)(s - t)\,h(t)\,dt\right)^2 ds \le (d - c)\,\|x - z\|_{L^2[a,b]}^2\,\|h\|_{L^2[a,b]}^2. \qquad (39)$$

Together with $\|h\|_{L^2[a,b]} \le \|h\|_{H_0^s[a,b]}$, the last two inequalities prove the Lipschitz continuity.

A by-product of (38) is

Proposition 3.2 If $x_n \rightharpoonup x$ in $H_0^s[a, b]$, then

$$\|F'(x_n) - F'(x)\|_{H_0^s[a,b] \to L^2[c,d]} \to 0. \qquad (40)$$

Proof: From $x_n \rightharpoonup x$ in $H_0^s[a, b]$ follows $x_n \to x$ in $L^2[a, b]$, and thus the proposition follows from (38).
As a consequence, $\|F'(x_n)^* - F'(x)^*\| \to 0$, and we have finally shown that (27) holds. Morozov's discrepancy principle can thus be used as parameter choice for Tikhonov regularization of the autoconvolution operator, and Theorem 2.7 applies. Our results were given in one dimension only, but they easily extend to higher dimensions. Additionally, we might remark that condition (27), which was the only new condition on the operator $F$, was a consequence of the Lipschitz continuity of $F'$.
We will now see that the conditions (11), (12) from [27] will usually not hold. In case of the autoconvolution operator, (11) reads

$$(x - z) * v = z * k. \qquad (41)$$

Applying the Fourier transform, $k$ is determined by

$$\hat k = \frac{\widehat{(x - z)}\,\hat v}{\hat z}. \qquad (42)$$

If $|\hat z(\omega)| > c > 0$ holds, then condition (12) will hold. But whenever $\hat z(\omega)$ has zeros, then $k$ might not even belong to the proper function space. To illustrate this, let us assume the autoconvolution operator between $H_0^s[a, b]$ and $L^2[c, d]$ with $s > 1/2$. Then $k$ has to belong to $H_0^s[a, b]$, in particular $k \in L^1[a, b]$, and it follows that $\hat k$ is a continuous and bounded function. The functions $x$ and $v$ belong to $H_0^s[a, b]$, and $\hat x\,\hat v$ is continuous and bounded too. We choose $x, v, z$ and $\omega_0$ in such a way that $\hat z(\omega_0) = 0$ and $\widehat{(x - z)}\,\hat v(\omega_0) \ne 0$. For a sequence $\omega_n \to \omega_0$ follows $|\hat k(\omega_n)| \to \infty$, which means that $\hat k$ is not bounded and therefore $k$ does not belong to $H_0^s[a, b]$. As a consequence, condition (12) is violated.
3.2 Bilinear operator equations

We consider operators of the form

$$\tilde F(f, \theta) = Af + B(f, \theta) \qquad (43)$$

with $x = (f, \theta) \in X_1 \times X_2$, $X_1, X_2$ Hilbert spaces, $A$ a continuous linear operator in $f$ and $B$ a bilinear operator in $(f, \theta)$:

$$A : X_1 \to Y, \qquad (44)$$
$$B : X_1 \times X_2 \to Y. \qquad (45)$$

Operators of type (43) occur in parameter estimation problems for partial differential operators [6, 9, 16, 24, 17] and in the area of medical imaging (compare the next section).

It is easy to see that $\tilde F$ is Fréchet differentiable with derivative

$$\tilde F'(f, \theta)(h_1, h_2) = Ah_1 + B(h_1, \theta) + B(f, h_2), \qquad (50)$$

and

$$\begin{aligned}
\|(\tilde F'(f, \theta) - \tilde F'(g, \eta))(h_1, h_2)\| &= \|B(h_1, \theta) + B(f, h_2) - B(h_1, \eta) - B(g, h_2)\| \\
&= \|B(h_1, \theta - \eta) + B(f - g, h_2)\| \\
&\le \|B\|\,\|\theta - \eta\|\,\|h_1\| + \|B\|\,\|f - g\|\,\|h_2\| \\
&\le 2\|B\|\,\|(f - g, \theta - \eta)\|\,\|(h_1, h_2)\|.
\end{aligned}$$

Therefore $\tilde F'$ is Lipschitz continuous with constant $L = 2\|B\|$. It remains to show that $\tilde F$ is strongly continuous and that $\tilde F'(f_n, \theta_n)(h_1, h_2) \to \tilde F'(f, \theta)(h_1, h_2)$ if $(f_n, \theta_n) \rightharpoonup (f, \theta)$ for $n \to \infty$ holds. In applications it might happen that $\tilde F$ already meets these conditions as an operator from $X_1 \times X_2$ to $Y$. If not, this can be achieved by assuming more "regularity" of the solution of $\tilde F(x) = y$, which means we have to change the domain of definition of $\tilde F$. Let us assume that there exist function spaces $X_1^s$ and $X_2^s$ and compact embedding operators $i_1^s : X_1^s \to X_1$, $i_2^s : X_2^s \to X_2$. Then we can consider

$$F : X_1^s \times X_2^s \overset{i_1^s \times i_2^s}{\longrightarrow} X_1 \times X_2 \overset{\tilde F}{\to} Y \qquad (51)$$

and get from Proposition 2.1 (2) that $F$ is strongly continuous and weakly sequentially closed. Now, exactly as for the autoconvolution operator, we obtain

$$\|F'(f, \theta) - F'(g, \eta)\| \le 2\|B\|\,\|(f - g, \theta - \eta)\|_{X_1 \times X_2}. \qquad (52)$$

If $(f_n, \theta_n) \rightharpoonup (f, \theta)$ in $X_1^s \times X_2^s$, then $(f_n, \theta_n) \to (f, \theta)$ in $X_1 \times X_2$, and (52) yields $F'(f_n, \theta_n) \to F'(f, \theta)$ and $F'(f_n, \theta_n)^* \to F'(f, \theta)^*$ in the operator norm. This shows that Morozov's discrepancy principle is applicable and Theorem 2.7 holds.
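The derivative formula (50) can be verified in a finite-dimensional toy model, where $A$ is a matrix and $B$ is realized by a third-order tensor; all dimensions and random data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 4, 3, 5

A = rng.standard_normal((m, n1))       # linear part A : X1 -> Y
T = rng.standard_normal((m, n1, n2))   # tensor realizing the bilinear part

def B(f, theta):                       # B(f, theta)_i = sum_jk T_ijk f_j theta_k
    return np.einsum('ijk,j,k->i', T, f, theta)

def F(f, theta):                       # F(f, theta) = A f + B(f, theta)
    return A @ f + B(f, theta)

def dF(f, theta, h1, h2):              # derivative: A h1 + B(h1, theta) + B(f, h2)
    return A @ h1 + B(h1, theta) + B(f, h2)

f, theta = rng.standard_normal(n1), rng.standard_normal(n2)
h1, h2 = rng.standard_normal(n1), rng.standard_normal(n2)

eps = 1e-6                             # finite-difference check of the derivative
fd = (F(f + eps * h1, theta + eps * h2) - F(f, theta)) / eps
assert np.allclose(fd, dF(f, theta, h1, h2), atol=1e-4)
```

The finite-difference error is exactly $\varepsilon\,B(h_1, h_2)$, mirroring the fact that the remainder of $\tilde F$ is quadratic, which is also why the derivative is Lipschitz with constant $2\|B\|$.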
We might remark that our argument applies to arbitrary nonlinear continuous and Fréchet differentiable operators $F : X \to Y$ with Lipschitz continuous derivative as long as a function space $X^s$ with compact embedding into $X$ is available. A common choice for $X^s$ might be a Sobolev space over a bounded region $\Omega_1$, and for $X$ the space $L^2(\Omega_2)$.
3.3 SPECT

Some of the most challenging ill-posed problems arise in the area of medical imaging. In SPECT, one tries to reconstruct the distribution of a radiopharmaceutical inside a human body by measuring the intensity of the radiation outside the body. As the name suggests, SPECT is related to Computerized Tomography (CT), where one has to reconstruct the density of a body by measuring the outcoming intensity of X-rays through the body. In contrast to CT, where the measured intensity depends only on the intensity of the incoming X-ray and the density of the tissue along the path of the X-ray, the measurements for SPECT depend on the activity function $f$ (which describes the distribution of the radiopharmaceutical) and the density $\mu$ of the tissue. The measured data $y$ and the tuple $(f, \mu)$ are linked by the Attenuated Radon Transform (ATRT),

$$y = R(f, \mu)(s, \omega) = \int_{\mathbb{R}} f(s\omega^\perp + t\omega)\,e^{-\int_t^\infty \mu(s\omega^\perp + \tau\omega)\,d\tau}\,dt, \qquad (54)$$

$s \in \mathbb{R}$, $\omega \in S^1$. As for the Radon Transform, the data are represented as line integrals over all possible unit vectors $\omega$. Usually both $f$ and $\mu$ are unknown functions, and $R$ is a nonlinear operator. During the last decade several papers on this problem were published [4, 20, 21, 23, 29]. Dicken [8] examined the mapping properties of the ATRT and concluded that under some reasonable assumptions on the smoothness of $f$ and $\mu$, Tikhonov regularization with a priori parameter choice is applicable to regularize (54). In [26] a bilinear approximation $\tilde R$ to $R$ was introduced:

$$\tilde R(f, \tilde\mu) = \int_{\mathbb{R}} f(s\omega^\perp + t\omega)\,e^{-\int_t^\infty \mu_0(s\omega^\perp + \tau\omega)\,d\tau}\left(1 - \int_t^\infty \tilde\mu(s\omega^\perp + \tau\omega)\,d\tau\right) dt. \qquad (55)$$

In this approximation the exponential term in (54) was simply replaced by the first two terms of its Taylor expansion around a guess $\mu_0$ for the attenuation function $\mu = \mu_0 + \tilde\mu$. Moreover, iterative methods for solving $y = \tilde R(f, \tilde\mu)$ were proposed in this paper. In the following, it shall be shown that Morozov's discrepancy principle for Tikhonov regularization can be applied to SPECT. The analysis will be done for the ATRT operator only; for the bilinearized version (55) a similar result holds.
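A discretization of (54) along a single line can be sketched as follows; the grid, the sampled activity and the constant attenuation are illustrative assumptions, and the inner integral $\int_t^\infty \mu$ is approximated by a right-sided Riemann sum:

```python
import numpy as np

def atrt_line(f_vals, mu_vals, h):
    """Attenuated ray transform along one line s*w_perp + t*w,
    with f and mu sampled at t_k = k*h (Riemann-sum discretization)."""
    # attenuation[k] ~ int_{t_k}^inf mu, via a reversed cumulative sum
    attenuation = h * np.cumsum(mu_vals[::-1])[::-1]
    return h * np.sum(f_vals * np.exp(-attenuation))

n, h = 400, 0.01
t = np.arange(n) * h
f_vals = np.exp(-((t - 2.0) ** 2))   # activity along the line
mu_vals = 0.2 * np.ones(n)           # constant attenuation

y = atrt_line(f_vals, mu_vals, h)
# with mu = 0 the ATRT reduces to the ordinary line integral of f,
# so attenuation can only decrease the measured intensity
assert atrt_line(f_vals, np.zeros(n), h) > y > 0
```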
To get the desired results like strong continuity or Fréchet differentiability for the ATRT operator, it has to be considered between proper function spaces. Additionally, there is some trouble with the unbounded growth of the exponential function for negative arguments. In the following, we will summarize some results from [8]. Dicken introduces an operator $R_\varrho$ by

$$R_\varrho(f, \mu)(s, \omega) = \int_{\mathbb{R}} f(s\omega^\perp + t\omega)\,E\!\left(-\int_t^\infty \mu(s\omega^\perp + \tau\omega)\,d\tau\right) dt, \qquad (56)$$

where $E$ denotes a bounded modification of the exponential function.
Proposition 3.4 Let the conditions of Proposition 3.3 hold. Then condition (27) holds.

Proof: Let $s_1, s_2 > 2/5$ be given. According to Proposition 3.3, the Fréchet derivative of $F$ is Lipschitz continuous for every $s_1, s_2 > 2/5$. Thus we can find $\tilde s_1, \tilde s_2$ with

$$\frac{2}{5} < \tilde s_1 = \tilde s_2 < s_1 = s_2.$$

Therefore

$$\|(R_\varrho'(f, \mu) - R_\varrho'(g, \nu))(h_1, h_2)\| \le c\,\|(f, \mu) - (g, \nu)\|_{H_0^{\tilde s_1} \times H_0^{\tilde s_2}}\,\|(h_1, h_2)\|_{H_0^{\tilde s_1} \times H_0^{\tilde s_2}} \le c\,\|(f, \mu) - (g, \nu)\|_{H_0^{\tilde s_1} \times H_0^{\tilde s_2}}\,\|(h_1, h_2)\|_{H_0^{s_1} \times H_0^{s_2}},$$

and we have again

$$\|R_\varrho'(f, \mu) - R_\varrho'(g, \nu)\|_{H_0^{s_1} \times H_0^{s_2} \to L^2(S^1 \times [-\varrho, \varrho])} \le c\,\|(f, \mu) - (g, \nu)\|_{H_0^{\tilde s_1} \times H_0^{\tilde s_2}}. \qquad (58)$$

The embedding from $H_0^{s_1} \times H_0^{s_2}$ into $H_0^{\tilde s_1} \times H_0^{\tilde s_2}$ is compact, and therefore weak convergence in $H_0^{s_1} \times H_0^{s_2}$ induces norm convergence in $H_0^{\tilde s_1} \times H_0^{\tilde s_2}$. We conclude for a sequence $(f_n, \mu_n) \rightharpoonup (f, \mu)$ for $n \to \infty$ in $D_{s_1, s_2, C} \subset H_0^{s_1} \times H_0^{s_2}$ that

$$\|R_\varrho'(f_n, \mu_n) - R_\varrho'(f, \mu)\|_{H_0^{s_1} \times H_0^{s_2} \to L^2(S^1 \times [-\varrho, \varrho])} \to 0 \quad \text{for } n \to \infty$$

holds, and consequently (27).
3.4 Conclusions

We have shown that Morozov's discrepancy principle for Tikhonov regularization applies to a wide class of problems. The existence of a regularization parameter with (13) can be guaranteed under mild restrictions. If in addition (27) is assumed, then for every sequence $\alpha_k \to \alpha$ there exists a subsequence $\alpha_{k_n}$ with $x_{\alpha_{k_n}} \to x_\alpha$. In the above examples it was demonstrated that (27) can often be concluded from the Lipschitz continuity of the Fréchet derivative of the nonlinear operator. We have shown that our conditions are easy to handle and apply even when (12) fails.
References

[1] A. B. Bakushinskii. The problem of the convergence of the iteratively regularized Gauss-Newton method. Comput. Maths. Math. Phys., (32):1353-1359, 1992.
[2] B. Blaschke. Some Newton type methods for the regularization of nonlinear ill-posed problems. Inverse Problems, (13):729-753, 1997.
[3] B. Blaschke, A. Neubauer, and O. Scherzer. On convergence rates for the iteratively regularized Gauss-Newton method. IMA Journal of Numerical Analysis, (17):421-436, 1997.
[4] Y. Censor, D. Gustafson, A. Lent, and H. Tuy. A new approach to the emission computerized tomography problem: simultaneous calculation of attenuation and activity coefficients. IEEE Trans. Nucl. Sci., (26):2275-79, 1979.
[5] D. Colton and R. Kress. Inverse Acoustic and Electromagnetic Scattering Theory. Springer, Berlin, 1992.
[6] D. Colton and M. Piana. The simple method for solving the electromagnetic inverse scattering problem: the case of TE polarized waves. Inverse Problems, 14(3):597-614, 1998.
[7] C. Cravaris and J. H. Seinfeld. Identification of parameters in distributed parameter systems by regularization. SIAM J. Contr. Opt., (23):217-241, 1985.
[8] V. Dicken. A new approach towards simultaneous activity and attenuation reconstruction in emission tomography. Inverse Problems, 15(4):931-960, 1999.
[9] O. Dorn. A transport-backtransport method for optical tomography. Inverse Problems, 14(5):1107-1130, 1998.
[10] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer, Dordrecht, 1996.
[11] H. W. Engl, K. Kunisch, and A. Neubauer. Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Problems, (5):523-540, 1989.
[12] A. Frommer and P. Maass. Fast cg-based methods for Tikhonov regularization. SIAM J. Sci. Comp., 5(20):1831-1850, 1999.
[13] M. Hanke. A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems. Inverse Problems, (13):79-95, 1997.
[14] M. Hanke. Regularizing properties of a truncated Newton-cg algorithm for nonlinear ill-posed problems. Num. Funct. Anal. Optim., (18):971-993, 1997.
[15] M. Hanke, A. Neubauer, and O. Scherzer. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numerische Mathematik, (72):21-37, 1995.
[16] T. Klibanov, T. R. Lucas, and R. M. Frank. A fast and accurate imaging algorithm in optical/diffusion tomography. Inverse Problems, 13(5):1341-1363, 1997.
[17] K. Kunisch and X.-C. Tai. Sequential and parallel splitting methods for bilinear control problems in Hilbert spaces. SIAM J. Numer. Anal., 34(1):91-118, 1997.
[18] A. K. Louis. Inverse und schlecht gestellte Probleme. Teubner, Stuttgart, 1989.
[19] P. Maass, S. V. Pereverzev, R. Ramlau, and S. G. Solodky. An adaptive discretization scheme for Tikhonov regularization with a posteriori parameter selection. Numerische Mathematik, 87(3):485-502, 2001.
[20] F. Natterer. Numerical solution of bilinear inverse problems. Technical report 19/96, Fachbereich Mathematik der Universität Münster.
[21] F. Natterer. Computerized tomography with unknown sources. SIAM J. Appl. Math., (43):1201-12, 1983.
[22] F. Natterer. The Mathematics of Computerized Tomography. B. G. Teubner, Stuttgart, 1986.
[23] F. Natterer. Determination of tissue attenuation in emission tomography of optically dense media. Inverse Problems, (9):731-736, 1993.
[24] F. Natterer and F. Wübbeling. A propagation-backpropagation method for ultrasound tomography. Inverse Problems, 11(6):1225-1232, 1998.
[25] R. Ramlau. A modified Landweber method for inverse problems. Numerical Functional Analysis and Optimization, 20(1&2), 1999.
[26] R. Ramlau, R. Clackdoyle, F. Noo, and G. Bal. Accurate attenuation correction in SPECT imaging using optimization of bilinear functions and assuming an unknown spatially-varying attenuation distribution. Z. angew. Math. Mech., 80(9):613-621, 2000.
[27] O. Scherzer. The use of Morozov's discrepancy principle for Tikhonov regularization for solving nonlinear ill-posed problems. Computing, (51):45-60, 1993.
[28] O. Scherzer, H. W. Engl, and K. Kunisch. Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal., 30(6):1796-1838, 1993.
[29] A. Welch, R. Clack, F. Natterer, and G. T. Gullberg. Toward accurate attenuation correction in SPECT without transmission measurements. IEEE Trans. Med. Imaging, (16):532-40, 1997.
[30] E. Zeidler. Nonlinear Functional Analysis and its Applications. Springer, New York, 1985.
ISSN 1435-7968
http://www.math.uni-bremen.de/zetem/berichte.html