Zentrum für Technomathematik
Fachbereich 3 - Mathematik und Informatik
Berichte aus der Technomathematik, Report 01-08, Juli 2001

Morozov's Discrepancy Principle for Tikhonov
regularization of nonlinear operators

Ronny Ramlau
University of Bremen, Germany
July 18, 2001

Abstract

We consider Morozov's discrepancy principle for Tikhonov regularization of nonlinear
operator equations. It is shown that minor restrictions on the operator $F$ already
guarantee the existence of a regularization parameter $\alpha$ such that

    \tau\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta

holds. Moreover, some additional smoothness assumptions on the solution of $F(x) = y$
ensure an optimal convergence rate. Finally we investigate some practically relevant
examples, e.g. medical imaging (Single Photon Emission Computed Tomography). It is
illustrated that the introduced conditions on $F$ will be met in general by a large
class of nonlinear operators.

1 Introduction

A large variety of technical and physical problems can be mathematically modeled by an
operator equation

    F(x) = y,                                                              (1)

where $F : X \to Y$ is a (nonlinear) operator between Hilbert spaces, $x$ the searched-for
information and $y$ the exact data. Typical examples of such problems arise in medical
imaging [22] or inverse scattering [5]. The available data usually stem from a measurement
process. Due to measurement errors, we have to deal with noisy data $y^\delta$ which satisfy

    \|y^\delta - y\| \le \delta.                                           (2)

If the solution of (1) does not depend continuously on the data, then the problem is called
ill posed. In case of inexact data, this instability requires regularization methods for
treating the inverse problem.
Because $F$ is a nonlinear operator, equation (1) might have several solutions. We will
call $x^\dagger$ an $\bar{x}$-minimum-norm-solution, if

    F(x^\dagger) = y                                                       (3)

and

    \|x^\dagger - \bar{x}\| = \min_{x \in D(F)} \{ \|x - \bar{x}\| : F(x) = y \}.   (4)

During the last decade most of the well known regularization methods for linear operator
equations have been generalized to special classes of nonlinear operators. E.g. iterative
methods like Landweber iteration [15, 25], Levenberg-Marquardt methods [13], Gauss-Newton
[1, 3], conjugate gradient [14] and Newton-like methods [2] are easy to implement.
Unfortunately, these methods work only under relatively strong conditions on the nonlinear
operator and its Fréchet derivative.
Another widely used method is Tikhonov regularization, where the nonlinear equation (1) is
replaced by the problem of finding a minimizer $x_\alpha^\delta$ of the Tikhonov functional

    J_\alpha(x) = \|y^\delta - F(x)\|^2 + \alpha \|x - \bar{x}\|^2.         (5)

Tikhonov regularization works for a reasonably large class of nonlinear operators. In
principle, it can be applied to weakly sequentially closed operators with
Lipschitz-continuous Fréchet derivative [10]. As for all regularization methods, a main
problem is the choice of the regularization parameter $\alpha$.
To obtain convergence rates for Tikhonov regularization, one has to assume a smoothness
condition $x^\dagger - \bar{x} = F'(x^\dagger)^* \omega$ with sufficiently small
$\|\omega\|$. With an a priori parameter choice $\alpha = c\delta$, $c > 0$, a convergence
rate

    \|x^\dagger - x_\alpha^\delta\| \le k(c) \sqrt{\delta}                  (6)

with $k(c) > 0$ can be obtained [11]. An examination of the convergence proof shows that
$k(c)$ is minimized by the optimal parameter choice $\alpha = \alpha_{opt}$,
$\alpha_{opt} = \delta \|\omega\|^{-1}$, and

    \|x^\dagger - x_{\alpha_{opt}}^\delta\| \le \frac{2 \|\omega\|^{1/2}}{(1 - L\|\omega\|)^{1/2}} \sqrt{\delta}     (7)

($L$ denotes the Lipschitz constant of the Fréchet derivative). In general, the value of
$\|\omega\|$ is not available, and hence neither is $\alpha_{opt}$. As a consequence, one
will never get the optimal constant for an a priori parameter choice.
An alternative are a posteriori parameter strategies. A well studied method is Morozov's
discrepancy principle, where a regularization parameter $\alpha$ with

    \|y^\delta - F(x_\alpha^\delta)\| = \tau\delta,                         (8)

$\tau \ge 1$, is used. An advantage of Morozov's principle is that, even without knowing
$\|\omega\|$, one always gets an estimate

    \|x^\dagger - x_\alpha^\delta\| \le \frac{(2(1 + \tau)\|\omega\|)^{1/2}}{(1 - L\|\omega\|)^{1/2}} \sqrt{\delta}      (9)

(see Theorem 2.9). For $\tau = 1$, we get the optimal error bound (7); for $\tau > 1$ this
bound is multiplied by $\sqrt{(1 + \tau)/2}$. A drawback of the discrepancy principle is
that a regularization parameter with (8) might not exist for general nonlinear operators
$F$. Moreover, even if such a parameter exists, it requires an additional optimization
process to find it numerically. The classical algorithm for Morozov's discrepancy
principle applied to Tikhonov regularization for linear operators $A$ [18, 12, 19] would
be to choose $\tau_1$, $\alpha_0 > 0$, $0 < q < 1$, $\alpha_j = q^j \alpha_0$, and to
compute $x_{\alpha_j}^\delta$ until

    \tau\delta \le \|y^\delta - A x_{\alpha_j}^\delta\| \le \tau_1\delta    (10)

holds. Hence, (5) has to be minimized for a set of regularization parameters.
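To make the repeated minimization of (5) concrete, the sketch below minimizes the Tikhonov functional for an assumed scalar toy operator $F(x) = x^3$ (our choice, not taken from the paper) by brute-force grid search, and shows that the residual $\|y^\delta - F(x_\alpha^\delta)\|$ grows with $\alpha$; the helper names `tikhonov_minimizer` and `residual` are hypothetical.

```python
# Toy model (our assumption, not from the paper): scalar operator F(x) = x^3
# on [0, 3], exact data y = F(2) = 8, noisy data y_delta with noise level delta.
def F(x):
    return x ** 3

y_delta, delta, x_bar = 8.1, 0.1, 0.0

def tikhonov_minimizer(alpha, grid_n=30001):
    """Minimize J_alpha(x) = |y_delta - F(x)|^2 + alpha*|x - x_bar|^2 by grid search."""
    best_x, best_j = 0.0, float("inf")
    for i in range(grid_n):
        x = 3.0 * i / (grid_n - 1)
        j = (y_delta - F(x)) ** 2 + alpha * (x - x_bar) ** 2
        if j < best_j:
            best_x, best_j = x, j
    return best_x

def residual(alpha):
    return abs(y_delta - F(tikhonov_minimizer(alpha)))

# The residual ||y_delta - F(x_alpha)|| increases monotonically with alpha:
alphas = [0.01, 0.1, 1.0, 10.0]
res = [residual(a) for a in alphas]
print(res)
```

For a real operator one would of course replace the grid search by an iterative minimization scheme.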


Let us now review some results concerning the discrepancy principle for Tikhonov
regularization of nonlinear operators. A parameter choice with (8) and $\tau = 1$ was
considered in [7]. It was shown that the existence of a regularization parameter with (8)
is connected with the convexity of a functional in dependence of the noise level and the
minimum norm least squares solution of (1) within the given noise level $\delta$. Scherzer
[27] considered (8) with $\tau > 1$. Under the assumption that for every $x, z, v \in X$
there exists a $k(x, z, v) \in X$ with

    (F'(x) - F'(z)) v = F'(z) k(x, z, v)                                    (11)

and

    \|k(x, z, v)\| \le K_0 \|x - z\| \|v\|,                                 (12)

it was shown that a parameter with (8) always exists. In [28] some examples of operators
which satisfy these conditions were given, but it seemed to us that it is rather difficult
to prove the above conditions for some nonlinear problems of interest, e.g. bilinear
operators or operators arising in medical imaging. We might remark that a priori parameter
choices will work without these conditions, which means that there is a big gap between
the applicability of a priori and a posteriori parameter choices.
The goal of this paper will be a relaxation of (11) and (12). In addition, we will not
consider the classical discrepancy principle: in a numerical realization it will not be
possible to obtain a parameter such that (8) holds exactly. As in the linear case, we will
choose a trust region $[\tau\delta, \tau_1\delta]$, $\tau_1 > \tau$, and determine the
regularization parameter such that the residual $\|y^\delta - F(x_\alpha^\delta)\|$
belongs to the trust region.
In Section 2 it will be shown that for strongly continuous operators such a regularization
parameter always exists, and a convergence rate result is given. Section 3 contains
several applications. The first one fails to fulfill conditions (11), (12), but it can be
shown easily that our results apply. The last two examples stem from bilinear operator
equations and medical imaging (Single Photon Emission Computed Tomography). In both cases
it will be demonstrated that our conditions can be met. As for an a priori parameter
choice, we get these results without severe restrictions on the nonlinear operator.

2 Existence of the regularization parameter

To use Morozov's discrepancy principle, we have to find a parameter $\alpha$ such that

    \tau\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta, \qquad \tau_1 > \tau \ge 1,    (13)

holds. The problem is that for arbitrary nonlinear operators $F$ the mapping
$\alpha \to \|y^\delta - F(x_\alpha^\delta)\|$ might not be continuous (although
$\|y^\delta - F(x_\alpha^\delta)\|$ is monotonically increasing in $\alpha$) [7]. To
guarantee the existence of an $\alpha$ with (13), we need $F$ to have some special
properties.

2.1 Some properties of nonlinear operators

An operator $F$ between Banach spaces is weakly sequentially closed if for every sequence
$\{x_n\} \subset D(F)$,

    x_n \rightharpoonup x \quad \text{and} \quad F(x_n) \rightharpoonup y \quad \text{for } n \to \infty

implies $x \in D(F)$ and $F(x) = y$. $F$ is called strongly continuous, if

    x_n \rightharpoonup x \quad \text{implies} \quad F(x_n) \to F(x).

We might summarize some well known results about strongly continuous operators which will
be needed later on.

Proposition 2.1 Let $X, Y, Z$ be Banach spaces over $\mathbb{R}$.

1. If $F$ is a linear compact operator, then $F$ is strongly continuous.

2. If $F : Y \to Z$ is continuous and the embedding $X \hookrightarrow Y$ is compact, then
$F : X \to Z$ is strongly continuous, and hence also weakly sequentially closed.

For a proof, cf. [30, Propositions 21.29, 21.81].

We might remark that the second case is common in ill posed problems when an operator $F$
acting on Sobolev spaces can be decomposed into

    F : H^{s+\varepsilon}(\Omega_1) \xrightarrow{i} H^s(\Omega_1) \xrightarrow{\tilde{F}} H^s(\Omega_2)

with $\varepsilon > 0$, bounded $\Omega_1$, compact embedding operator $i$ and continuous
operator $\tilde{F}$. Throughout this paper we will therefore consider strongly continuous
operators only.
2.2 Some results about the Tikhonov functional

Let $x_\alpha^\delta$ denote a minimizing element of the Tikhonov functional $J_\alpha(x)$.
By the minimizing property of $x_{\alpha_1}^\delta$ we get for $\alpha_1 < \alpha_0$

    J_{\alpha_1}(x_{\alpha_1}^\delta) \le J_{\alpha_1}(x_{\alpha_0}^\delta) \le J_{\alpha_0}(x_{\alpha_0}^\delta).    (14)

It is well known that for general nonlinear operators $F$ the mapping
$\alpha \to \|y^\delta - F(x_\alpha^\delta)\|$ might be discontinuous. Due to the
continuity of $F$, the mapping $\alpha \to x_\alpha^\delta$ then has to be discontinuous
too. Nevertheless, $\alpha \to J_\alpha(x_\alpha^\delta)$ is always continuous:

Proposition 2.2 Let $\alpha_k \to \alpha > 0$ for $k \to \infty$, $\alpha_k > 0$ for all
$k \in \mathbb{N}$. Then

    J_{\alpha_k}(x_{\alpha_k}^\delta) \to J_\alpha(x_\alpha^\delta) \quad \text{for } k \to \infty.

Proof:
We have

    |J_{\alpha_k}(x_{\alpha_k}^\delta) - J_\alpha(x_\alpha^\delta)| =
        \begin{cases} J_{\alpha_k}(x_{\alpha_k}^\delta) - J_\alpha(x_\alpha^\delta) & \text{for } \alpha \le \alpha_k \\
                      J_\alpha(x_\alpha^\delta) - J_{\alpha_k}(x_{\alpha_k}^\delta) & \text{for } \alpha \ge \alpha_k, \end{cases}

and thus it follows from (14) for $\alpha \le \alpha_k$

    J_{\alpha_k}(x_{\alpha_k}^\delta) - J_\alpha(x_\alpha^\delta)
        \le J_{\alpha_k}(x_\alpha^\delta) - J_\alpha(x_\alpha^\delta)
        = (\alpha_k - \alpha) \|x_\alpha^\delta - \bar{x}\|^2                (15)

and for $\alpha \ge \alpha_k$

    J_\alpha(x_\alpha^\delta) - J_{\alpha_k}(x_{\alpha_k}^\delta)
        \le J_\alpha(x_{\alpha_k}^\delta) - J_{\alpha_k}(x_{\alpha_k}^\delta)
        = (\alpha - \alpha_k) \|x_{\alpha_k}^\delta - \bar{x}\|^2.

With

    \alpha_k \|x_{\alpha_k}^\delta - \bar{x}\|^2 \le J_{\alpha_k}(x_{\alpha_k}^\delta) \le J_{\alpha_k}(\bar{x}) = \|y^\delta - F(\bar{x})\|^2    (16)

we get altogether

    |J_{\alpha_k}(x_{\alpha_k}^\delta) - J_\alpha(x_\alpha^\delta)|
        \le |\alpha - \alpha_k| \max\Big\{ \frac{1}{\min_k\{\alpha_k\}} \|y^\delta - F(\bar{x})\|^2,\ \|x_\alpha^\delta - \bar{x}\|^2 \Big\}
        \to 0 \quad \text{for } k \to \infty.  \qquad \Box

Now let us assume that no parameter fulfilling the discrepancy principle (13) exists.
Under reasonable assumptions on the a priori guess $\bar{x}$ in $J_\alpha(x)$, parameters
$\alpha_0 > \alpha_1$ with

    \|y^\delta - F(x_{\alpha_0}^\delta)\| > \tau_1\delta                    (17)

    \|y^\delta - F(x_{\alpha_1}^\delta)\| < \tau\delta                      (18)

exist:

Proposition 2.3 Let $\bar{x}$ be chosen s.t.

    \|y^\delta - F(\bar{x})\| > \tau_1\delta.                               (19)

If no regularization parameter with (13) exists, then $\alpha_0 > \alpha_1$ with (17),
(18) can be found.

Proof:
We have

    \|\bar{x} - x_{\alpha_0}^\delta\|^2
        \le \frac{1}{\alpha_0} \left( \|y^\delta - F(x_{\alpha_0}^\delta)\|^2 + \alpha_0 \|\bar{x} - x_{\alpha_0}^\delta\|^2 \right)
        \le \frac{1}{\alpha_0} \|y^\delta - F(\bar{x})\|^2.

It then follows with $y^\delta \ne F(\bar{x})$ that $x_{\alpha_0}^\delta$ converges to
$\bar{x}$ for $\alpha_0 \to \infty$ and, due to the continuity of $F$,

    \|y^\delta - F(x_{\alpha_0}^\delta)\| \to \|y^\delta - F(\bar{x})\|.

Because of (19), $\|y^\delta - F(x_{\alpha_0}^\delta)\| > \tau_1\delta$ must hold for
$\alpha_0$ big enough. On the other hand, we can estimate
$\|y^\delta - F(x_{\alpha_1}^\delta)\|$ from above by

    \|y^\delta - F(x_{\alpha_1}^\delta)\|^2
        \le \|y^\delta - F(x^\dagger)\|^2 + \alpha_1 \|\bar{x} - x^\dagger\|^2
        \le \delta^2 + \alpha_1 \|\bar{x} - x^\dagger\|^2,

which means for small $\alpha_1$ either $\|y^\delta - F(x_{\alpha_1}^\delta)\| < \tau\delta$
or $\tau\delta \le \|y^\delta - F(x_{\alpha_1}^\delta)\| \le \tau_1\delta$. Because we
have assumed that no parameter with (13) exists, we get
$\|y^\delta - F(x_{\alpha_1}^\delta)\| < \tau\delta$.  \qquad \Box

Assumption (19) is quite natural: indeed, if $\|y^\delta - F(\bar{x})\| \le \tau_1\delta$
held, then $\bar{x}$ would be taken as approximation to the solution $x^\dagger$.
If no parameter with (13) exists, then $\|y^\delta - F(x_\alpha^\delta)\|$ has a jump at a
certain parameter $\tilde{\alpha}$. Because $J_\alpha(x_\alpha^\delta)$ is continuous,
$\alpha \|x_\alpha^\delta - \bar{x}\|^2$ must have a jump of the same size.
Proposition 2.4 Assume that no parameter with (13) exists and (19) holds. Then there
exists a parameter $\tilde{\alpha}$ such that (17), (18) hold for
$\alpha_1 < \tilde{\alpha} < \alpha_0$, with $\alpha_0, \alpha_1$ arbitrarily close to
$\tilde{\alpha}$. Moreover, we get

    \|x_{\alpha_1}^\delta - x_{\alpha_0}^\delta\| \ge \frac{(\tau_1^2 - \tau^2)\delta^2}{4 \alpha_0 \|x_{\alpha_1}^\delta - \bar{x}\|}.    (20)

Proof:
Proposition 2.3 ensures the existence of $\alpha_{u,1} > \alpha_{l,1}$ with
$\|y^\delta - F(x_{\alpha_{l,1}}^\delta)\| < \tau\delta < \tau_1\delta < \|y^\delta - F(x_{\alpha_{u,1}}^\delta)\|$.
Setting $\alpha_{m,j} = \alpha_{l,j} + (\alpha_{u,j} - \alpha_{l,j})/2$, $j = 1, 2, \ldots$,
we have either $\|y^\delta - F(x_{\alpha_{m,j}}^\delta)\| < \tau\delta$ or
$\|y^\delta - F(x_{\alpha_{m,j}}^\delta)\| > \tau_1\delta$; in the first case we set
$\alpha_{l,j+1} = \alpha_{m,j}$, $\alpha_{u,j+1} = \alpha_{u,j}$, in the second case
$\alpha_{l,j+1} = \alpha_{l,j}$ and $\alpha_{u,j+1} = \alpha_{m,j}$. According to the
construction of both sequences, $\{\alpha_{l,j}\}$ and $\{\alpha_{u,j}\}$ converge to the
same limit point, $\alpha_{l,j} \uparrow \tilde{\alpha}$,
$\alpha_{u,j} \downarrow \tilde{\alpha}$ for $j \to \infty$. Due to Proposition 2.2, we
can in particular choose $\alpha_0, \alpha_1$ with $\alpha_1 < \tilde{\alpha} < \alpha_0$
and (17), (18) such that

    J_{\alpha_1}(x_{\alpha_1}^\delta) = J_{\alpha_0}(x_{\alpha_0}^\delta) - \varepsilon    (21)

and $\varepsilon < \gamma\delta^2/2$, $\gamma = \tau_1^2 - \tau^2$, hold. From the
definition of $J_\alpha(x)$, (17), (18) and (21) it then follows

    \alpha_1 \|x_{\alpha_1}^\delta - \bar{x}\|^2 - \alpha_0 \|x_{\alpha_0}^\delta - \bar{x}\|^2
        = \|y^\delta - F(x_{\alpha_0}^\delta)\|^2 - \|y^\delta - F(x_{\alpha_1}^\delta)\|^2 - \varepsilon
        \ge \tau_1^2\delta^2 - \tau^2\delta^2 - \varepsilon
        = \gamma\delta^2 - \varepsilon \ge \frac{\gamma\delta^2}{2},

and because of $\alpha_0 > \alpha_1$

    \alpha_0 \left( \|x_{\alpha_1}^\delta - \bar{x}\|^2 - \|x_{\alpha_0}^\delta - \bar{x}\|^2 \right)
        \ge \alpha_1 \|x_{\alpha_1}^\delta - \bar{x}\|^2 - \alpha_0 \|x_{\alpha_0}^\delta - \bar{x}\|^2,

i.e.

    \|x_{\alpha_1}^\delta - \bar{x}\|^2 - \|x_{\alpha_0}^\delta - \bar{x}\|^2 \ge \frac{\gamma\delta^2}{2\alpha_0} > 0.    (22)

Using (22) and

    \|x_{\alpha_1}^\delta - \bar{x}\|^2 - \|x_{\alpha_0}^\delta - \bar{x}\|^2
        = (\|x_{\alpha_1}^\delta - \bar{x}\| - \|x_{\alpha_0}^\delta - \bar{x}\|)(\|x_{\alpha_1}^\delta - \bar{x}\| + \|x_{\alpha_0}^\delta - \bar{x}\|)
        \le 2 \|x_{\alpha_1}^\delta - \bar{x}\| (\|x_{\alpha_1}^\delta - \bar{x}\| - \|x_{\alpha_0}^\delta - \bar{x}\|)
        \le 2 \|x_{\alpha_1}^\delta - \bar{x}\| \|x_{\alpha_1}^\delta - x_{\alpha_0}^\delta\|,

we arrive at

    \|x_{\alpha_1}^\delta - x_{\alpha_0}^\delta\| \ge \frac{\gamma\delta^2}{4 \alpha_0 \|x_{\alpha_1}^\delta - \bar{x}\|},

which shows that the distance between $x_{\alpha_1}^\delta$ and $x_{\alpha_0}^\delta$ for
arbitrarily close parameters $\alpha_0 > \alpha_1$ with (17), (18) is bounded from below.
\qquad \Box
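The nested intervals $\alpha_{l,j} \uparrow \tilde{\alpha}$, $\alpha_{u,j} \downarrow \tilde{\alpha}$ from the proof can be traced numerically. Since the construction only matters when the residual actually jumps, the residual map below is a purely synthetic, discontinuous toy function with an assumed jump point $\tilde{\alpha} = 0.37$ (nothing here comes from a real operator):

```python
# Synthetic toy model of a discontinuous alpha -> ||y_delta - F(x_alpha)|| map
# with a jump at alpha_tilde = 0.37 (entirely an assumption for illustration).
delta, tau, tau1 = 0.1, 1.0, 2.0
alpha_tilde = 0.37

def residual(alpha):
    # below the jump the residual stays under tau*delta, above it exceeds tau1*delta
    return 0.05 if alpha < alpha_tilde else 0.30

# nested bisection as in the proof of Proposition 2.4
a_l, a_u = 0.1, 1.0   # residual(a_l) < tau*delta < tau1*delta < residual(a_u)
for _ in range(40):
    a_m = a_l + (a_u - a_l) / 2
    if residual(a_m) < tau * delta:
        a_l = a_m      # first case: move the lower bound up
    else:
        a_u = a_m      # second case: move the upper bound down
print(a_l, a_u)  # both sequences home in on alpha_tilde = 0.37
```

The residual never lands in the trust region $[\tau\delta, \tau_1\delta]$, which is exactly the "no parameter with (13)" situation the proposition addresses.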

In the following we will show that strong continuity of $F$ ensures the existence of a
regularization parameter with (13).

Proposition 2.5 Let $\{\alpha_k\}_{k \in \mathbb{N}}$, $\alpha_k > 0$ for all
$k \in \mathbb{N}$, be a sequence with $\alpha_k \to \alpha > 0$ for $k \to \infty$, and
$x_{\alpha_k}^\delta$ a corresponding minimizing element of the Tikhonov functional
$J_{\alpha_k}(x)$. If $F$ is strongly continuous, then there exists a weakly convergent
subsequence $x_{\alpha_{k_n}}^\delta$ of $x_{\alpha_k}^\delta$,

    x_{\alpha_{k_n}}^\delta \rightharpoonup \tilde{x},

and $\tilde{x}$ is a minimizing element of $J_\alpha(x)$.

Proof:
As in (16) we get

    \|x_{\alpha_k}^\delta - \bar{x}\|^2 \le \frac{1}{\min_k\{\alpha_k\}} \|y^\delta - F(\bar{x})\|^2,

i.e. $x_{\alpha_k}^\delta$ is bounded. Thus, there exists a weakly convergent subsequence
of $\{x_{\alpha_k}^\delta\}$ (for simplicity of notation, this subsequence will be denoted
by $x_{\alpha_k}^\delta$ again), $x_{\alpha_k}^\delta \rightharpoonup \tilde{x}$, and

    \|\bar{x} - \tilde{x}\| \le \liminf_k \|\bar{x} - x_{\alpha_k}^\delta\|.

Moreover, due to the strong continuity of $F$, we observe

    y^\delta - F(x_{\alpha_k}^\delta) \to y^\delta - F(\tilde{x}),

and according to Proposition 2.2 we get

    J_{\alpha_k}(x_{\alpha_k}^\delta) \to J_\alpha(x_\alpha^\delta).

Altogether this yields

    J_\alpha(\tilde{x}) = \lim_{k \to \infty} \|y^\delta - F(x_{\alpha_k}^\delta)\|^2 + \alpha \|\tilde{x} - \bar{x}\|^2
        \le \liminf_k \left( \|y^\delta - F(x_{\alpha_k}^\delta)\|^2 + \alpha_k \|\bar{x} - x_{\alpha_k}^\delta\|^2
              + (\alpha - \alpha_k) \|\bar{x} - x_{\alpha_k}^\delta\|^2 \right)
        = \lim_{k \to \infty} J_{\alpha_k}(x_{\alpha_k}^\delta) = J_\alpha(x_\alpha^\delta).

Now we have shown $J_\alpha(\tilde{x}) \le J_\alpha(x_\alpha^\delta)$ and hence
$\tilde{x}$ is a minimizer of $J_\alpha(x)$.  \qquad \Box

We might note that the minimizing element of $J_\alpha(x)$ does not have to be unique; but
we are only interested in finding a weakly convergent subsequence to an arbitrary
minimizing element, which means we can assign $\tilde{x}$ to $x_\alpha^\delta$. On the
other hand, if $J_\alpha(x)$ has a unique minimizer, then every subsequence of
$x_{\alpha_k}^\delta$ has a subsequence which converges weakly to the unique minimizer
$x_\alpha^\delta$, and it follows by the convergence principles that the sequence
$x_{\alpha_k}^\delta$ itself converges weakly to $x_\alpha^\delta$.
Theorem 2.6 Let $F$ be a strongly continuous operator and $\bar{x}$ be chosen such that
(19) holds. Then there exists a parameter $\alpha$ for Tikhonov regularization s.t.

    \tau\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta       (23)

holds.

Proof:
Let us assume there exists no such parameter. We set

    M := \{ \alpha : \|y^\delta - F(x_\alpha^\delta)\| < \tau\delta \}

and $\bar{\alpha} = \sup M$. According to Proposition 2.3, $0 < \bar{\alpha} < \infty$
holds. We have to consider two cases:
1. $\bar{\alpha} \in M$. Then we choose a sequence $\alpha_k \downarrow \bar{\alpha}$. Due
to Proposition 2.5 we can find a subsequence $\{x_{\alpha_{k_n}}^\delta\}$ of
$\{x_{\alpha_k}^\delta\}$ with $x_{\alpha_{k_n}}^\delta \rightharpoonup x_{\bar{\alpha}}^\delta$.
It is $\alpha_{k_n} > \bar{\alpha}$, and because no parameter with (13) exists,

    \|y^\delta - F(x_{\alpha_{k_n}}^\delta)\| > \tau_1\delta                (24)

must hold for all $\alpha_{k_n}$. But due to the strong continuity of $F$ we observe

    \|y^\delta - F(x_{\alpha_{k_n}}^\delta)\| \to \|y^\delta - F(x_{\bar{\alpha}}^\delta)\| < \tau\delta,    (25)

which is a contradiction to (24).
2. $\bar{\alpha} \notin M$. Here we choose $\alpha_k \uparrow \bar{\alpha}$,
$\alpha_k \in M$, and we can find a subsequence $\{x_{\alpha_{k_n}}^\delta\}$ of
$\{x_{\alpha_k}^\delta\}$ with $x_{\alpha_{k_n}}^\delta \rightharpoonup x_{\bar{\alpha}}^\delta$.
For all $\alpha_{k_n}$ the inequality

    \|y^\delta - F(x_{\alpha_{k_n}}^\delta)\| < \tau\delta                  (26)

then holds, and $\|y^\delta - F(x_{\alpha_{k_n}}^\delta)\| \to \|y^\delta - F(x_{\bar{\alpha}}^\delta)\| > \tau_1\delta$
for $k_n \to \infty$, which is a contradiction to (26).  \qquad \Box

For the numerical realization of Morozov's discrepancy principle, we can now propose the
well known iterative algorithm from linear inverse problems:

  - choose $\tau_1 > \tau \ge 1$, $0 < q_0 < 1$ and $\alpha_0$ with (17)
  - set $\alpha_u = \alpha_0$, $j = 0$
  - while not ($\tau\delta \le \|y^\delta - F(x_{\alpha_j}^\delta)\| \le \tau_1\delta$):
      - if $\|y^\delta - F(x_{\alpha_j}^\delta)\| > \tau_1\delta$ then $q_{j+1} = q_j$ and
        $\alpha_{j+1} = q_{j+1} \alpha_j$,
        else $q_{j+1} = q_j + (1 - q_j)/2$ and $\alpha_{j+1} = q_{j+1} \alpha_u$
      - set $j = j + 1$ and compute $x_{\alpha_j}^\delta$
      - if $\|y^\delta - F(x_{\alpha_j}^\delta)\| > \tau_1\delta$ then $\alpha_u = \alpha_j$
  - end
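A minimal runnable sketch of the algorithm above, under the same kind of toy assumptions as before: a scalar operator $F(x) = x^3$, noisy data $y^\delta$, and grid-search minimization of $J_\alpha$ (all our choices, not from the paper); the iteration cap is an added safeguard, not part of the algorithm.

```python
# Toy nonlinear operator (assumption for illustration): F(x) = x^3,
# exact solution 2, noisy data y_delta, a priori guess x_bar = 0.
def F(x):
    return x ** 3

y_delta, delta, x_bar = 8.1, 0.1, 0.0
tau, tau1 = 1.0, 2.0

def tikhonov_minimizer(alpha, grid_n=30001):
    # minimize J_alpha(x) = |y_delta - F(x)|^2 + alpha*|x - x_bar|^2 by grid search
    return min((3.0 * i / (grid_n - 1) for i in range(grid_n)),
               key=lambda x: (y_delta - F(x)) ** 2 + alpha * (x - x_bar) ** 2)

def residual(alpha):
    return abs(y_delta - F(tikhonov_minimizer(alpha)))

# choose tau1 > tau >= 1, 0 < q0 < 1 and alpha0 with (17)
q, alpha, alpha_u = 0.5, 10.0, 10.0
r = residual(alpha)
assert r > tau1 * delta                  # (17): start above the trust region
for _ in range(100):                     # safeguard against an endless loop
    if tau * delta <= r <= tau1 * delta:
        break
    if r > tau1 * delta:
        alpha = q * alpha                # residual too large: decrease alpha
    else:
        q = q + (1 - q) / 2              # jumped below the band: damp the step
        alpha = q * alpha_u              # and restart below the last large alpha
    r = residual(alpha)
    if r > tau1 * delta:
        alpha_u = alpha
print(alpha, r)  # residual lands in [tau*delta, tau1*delta]
```

In a practical implementation the warm start discussed below (reusing $x_{\alpha_j}^\delta$ as the initial guess for the next minimization) would replace the grid search.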
The algorithm produces a monotonically decreasing sequence of regularization parameters
$\alpha_j$ at the beginning. Only if the norm of the residual jumps past the trust region,
$\|y^\delta - F(x_{\alpha_j}^\delta)\| < \tau\delta < \tau_1\delta < \|y^\delta - F(x_{\alpha_{j-1}}^\delta)\|$,
a bigger parameter is used. However, if $q$ is chosen close to 1 or if $\tau_1$ is
reasonably big, such a jump will not occur in a numerical realization. In [28] it was
outlined that Morozov's discrepancy principle sometimes yields a too small regularization
parameter; this can be avoided by choosing $\tau_1$ big enough.
When an iterative algorithm for minimizing $J_\alpha(x)$ is used, the computational effort
might depend on a good starting value for the iteration. For $\alpha_{j+1} = q \alpha_j$,
$q < 1$, one would like to take the already computed minimizing element of
$J_{\alpha_j}(x)$, $x_{\alpha_j}^\delta$, as starting value for the iteration for
minimizing $J_{\alpha_{j+1}}(x)$. For $q \approx 1$ this can be justified if the mapping
$\alpha \to x_\alpha^\delta$ is continuous. We will now show that for every sequence
$\alpha_k \to \alpha$ there exists at least a convergent subsequence s.t.
$x_{\alpha_{k_n}}^\delta \to x_\alpha^\delta$ holds. For this result we have to employ a
new property of the operator $F$. We require

    x_n \rightharpoonup x \text{ for } n \to \infty \implies F'(x_n)^* z \to F'(x)^* z \text{ for } n \to \infty.    (27)

In the next section we will give several examples where this condition is fulfilled.
Theorem 2.7 Let the assumptions of Proposition 2.5 hold. If the Fréchet derivative $F'$
of $F$ is Lipschitz continuous,

    \|F'(x) - F'(z)\| \le L \|x - z\|,                                      (28)

and in addition condition (27) holds, then there exists a convergent subsequence
$\{x_{\alpha_{k_n}}^\delta\}$ of $\{x_{\alpha_k}^\delta\}$,

    x_{\alpha_{k_n}}^\delta \to \tilde{x},

and $\tilde{x}$ is a minimizer of $J_\alpha(x)$. If in addition $J_\alpha(x)$ has a unique
minimizer, then the whole sequence converges to the minimizer of $J_\alpha(x)$.

Proof:
According to Proposition 2.5, we can find a weakly convergent subsequence
$\{x_{\alpha_{k_n}}^\delta\}$ of $\{x_{\alpha_k}^\delta\}$ with weak limit $\tilde{x}$,
where $\tilde{x}$ is a minimizer of $J_\alpha(x)$; as remarked above, we may assign
$\tilde{x}$ to $x_\alpha^\delta$. For simplicity of notation, we will again denote
$x_{\alpha_{k_n}}^\delta$ by $x_{\alpha_k}^\delta$. The necessary conditions for a minimum
of $J_\alpha(x)$ and $J_{\alpha_k}(x)$ are

    F'(x_\alpha^\delta)^* (y^\delta - F(x_\alpha^\delta)) - \alpha (x_\alpha^\delta - \bar{x}) = 0,
    F'(x_{\alpha_k}^\delta)^* (y^\delta - F(x_{\alpha_k}^\delta)) - \alpha_k (x_{\alpha_k}^\delta - \bar{x}) = 0.

It then follows

    \alpha_k x_{\alpha_k}^\delta - \alpha x_\alpha^\delta - (\alpha_k - \alpha) \bar{x}
        = F'(x_{\alpha_k}^\delta)^* (y^\delta - F(x_{\alpha_k}^\delta)) - F'(x_\alpha^\delta)^* (y^\delta - F(x_\alpha^\delta)).    (29)

The left hand side of (29) can be rewritten as

    \alpha_k x_{\alpha_k}^\delta - \alpha x_\alpha^\delta - (\alpha_k - \alpha) \bar{x}
        = (\alpha_k - \alpha) x_{\alpha_k}^\delta + \alpha (x_{\alpha_k}^\delta - x_\alpha^\delta) - (\alpha_k - \alpha) \bar{x}.

We have already shown that $\|x_{\alpha_k}^\delta\|$ is bounded and therefore
$\|(\alpha_k - \alpha) x_{\alpha_k}^\delta\| \to 0$ as well as
$\|(\alpha_k - \alpha) \bar{x}\| \to 0$ for $k \to \infty$. In order to prove the strong
convergence of $x_{\alpha_k}^\delta$ to $x_\alpha^\delta$ it is sufficient to show that
the right hand side of (29) converges to zero. By setting
$y_\alpha^\delta := y^\delta - F(x_\alpha^\delta)$ and
$y_{\alpha_k}^\delta := y^\delta - F(x_{\alpha_k}^\delta)$ we have

    F'(x_{\alpha_k}^\delta)^* y_{\alpha_k}^\delta - F'(x_\alpha^\delta)^* y_\alpha^\delta
        = F'(x_{\alpha_k}^\delta)^* (y_{\alpha_k}^\delta - y_\alpha^\delta)
          + (F'(x_{\alpha_k}^\delta) - F'(x_\alpha^\delta))^* y_\alpha^\delta

and

    \|F'(x_{\alpha_k}^\delta)^* (y_{\alpha_k}^\delta - y_\alpha^\delta)\|
        \le \|F'(x_{\alpha_k}^\delta)\| \|y_{\alpha_k}^\delta - y_\alpha^\delta\|.

By the Lipschitz continuity of $F'$, the norm of $F'(x_{\alpha_k}^\delta)$ is uniformly
bounded:

    \|F'(x_{\alpha_k}^\delta)\| \le \|F'(x_{\alpha_k}^\delta) - F'(x_\alpha^\delta)\| + \|F'(x_\alpha^\delta)\|
        \le L \|x_{\alpha_k}^\delta - x_\alpha^\delta\| + \|F'(x_\alpha^\delta)\| < C.

Due to the weak convergence of $x_{\alpha_k}^\delta$ to $x_\alpha^\delta$ and the strong
continuity of $F$ we conclude

    \|y_{\alpha_k}^\delta - y_\alpha^\delta\| \to 0 \quad \text{for } k \to \infty,

and so $\|F'(x_{\alpha_k}^\delta)^* (y_{\alpha_k}^\delta - y_\alpha^\delta)\| \to 0$.
Condition (27) yields

    F'(x_{\alpha_k}^\delta)^* y_\alpha^\delta \to F'(x_\alpha^\delta)^* y_\alpha^\delta \quad \text{for } k \to \infty,

and thus we have shown

    F'(x_{\alpha_k}^\delta)^* y_{\alpha_k}^\delta - F'(x_\alpha^\delta)^* y_\alpha^\delta \to 0.

After the proof of Proposition 2.5 we have seen that in case of a unique minimizer of
$J_\alpha(x)$, the whole sequence $x_{\alpha_k}^\delta$ converges weakly to $\tilde{x}$.
With the above arguments the convergence is also strong.  \qquad \Box

At the end of this section we would like to give a convergence and a convergence rate
result.

Theorem 2.8 Let $F : X \to Y$ be a completely continuous operator and $\delta_k \to 0$
for $k \to \infty$. If $y^{\delta_k}$ denotes data with $\|y^{\delta_k} - y\| \le \delta_k$
and $x_{\alpha_k}^{\delta_k}$ is a minimizer of the Tikhonov functional (5) with
$y^\delta$ replaced by $y^{\delta_k}$ and the parameter $\alpha_k$ chosen by Morozov's
discrepancy principle, then $x_{\alpha_k}^{\delta_k}$ has a convergent subsequence. The
limit of every convergent subsequence is an $\bar{x}$-minimum-norm-solution of (1). If,
in addition, the $\bar{x}$-minimum-norm-solution $x^\dagger$ of (1) is unique, then

    x_{\alpha_k}^{\delta_k} \to x^\dagger \quad \text{for } k \to \infty.

For a proof, we refer to Proposition 3.5 in [28].

In general, the convergence might be arbitrarily slow. It is therefore of interest to have
a convergence rate result.
Theorem 2.9 Let $F : X \to Y$ be a completely continuous operator with convex definition
area $D(F)$, let $x^\dagger$ be an $\bar{x}$-minimum-norm-solution of $F(x) = y$ and
$\|y - y^\delta\| \le \delta$. Assume that (19) and (28) hold. Moreover, we require the
following range conditions:
1. there exists $\omega \in Y$ satisfying

    x^\dagger - \bar{x} = F'(x^\dagger)^* \omega                            (30)

and
2. $L \|\omega\| < 1$.
If the regularization parameter $\alpha$ is chosen s.t.

    \tau\delta \le \|y^\delta - F(x_\alpha^\delta)\| \le \tau_1\delta       (31)

holds, then we obtain

    \|x^\dagger - x_\alpha^\delta\| \le \left( \frac{2 (1 + \tau_1) \|\omega\| \delta}{1 - L\|\omega\|} \right)^{1/2}.    (32)

Proof:
Theorem 2.6 ensures the existence of a parameter $\alpha$ with (31). The proof is a
modification of a convergence proof for an a priori parameter choice for Tikhonov
regularization. As in [10, p. 246] we obtain

    \|y^\delta - F(x_\alpha^\delta)\|^2 + \alpha \|x^\dagger - x_\alpha^\delta\|^2
        \le \delta^2 + 2\alpha\delta\|\omega\| + 2\alpha\|\omega\| \|y^\delta - F(x_\alpha^\delta)\|
            + \alpha L \|\omega\| \|x^\dagger - x_\alpha^\delta\|^2         (33)

or

    (1 - L\|\omega\|) \|x^\dagger - x_\alpha^\delta\|^2
        \le \frac{\delta^2 - \|y^\delta - F(x_\alpha^\delta)\|^2}{\alpha}
            + 2\|\omega\| (\delta + \|y^\delta - F(x_\alpha^\delta)\|),     (34)

and because of (31) we have $\delta^2 - \|y^\delta - F(x_\alpha^\delta)\|^2 \le 0$ and
$\delta + \|y^\delta - F(x_\alpha^\delta)\| \le (1 + \tau_1)\delta$. Altogether we arrive
at

    \|x^\dagger - x_\alpha^\delta\|^2 \le \frac{2 (1 + \tau_1) \|\omega\| \delta}{1 - L\|\omega\|}.  \qquad \Box

3 Examples

In the following we will give some examples which meet the conditions of Section 2. For
the autoconvolution operator it is shown that the operator is strongly continuous and
meets (27) but not Scherzer's conditions (11), (12). Other examples come from bilinear
operator equations and medical imaging.
3.1 The autoconvolution operator

We consider the operator

    \tilde{F}(x)(s) := (x * x)(s) = \int x(s - t) x(t) \, dt.               (35)

For $x \in L^2[a,b]$, $-\infty < a < b < \infty$, we get by setting $x(t) = 0$ for
$t \notin [a,b]$ that $\operatorname{supp} \tilde{F}(x) \subseteq [c,d]$,
$-\infty < c < d < \infty$, and we can consider

    \tilde{F} : L^2[a,b] \to L^2[c,d].

Hölder's inequality yields

    \|\tilde{F}(x)\|_{L^2[c,d]} \le (d - c)^{1/2} \|x\|_{L^2[a,b]}^2.

The operator $\tilde{F}$ is in particular a (symmetric) bilinear operator,
$\tilde{F}(x) = B(x, x)$ with $\|B\| = (d - c)^{1/2}$, and it follows immediately that
$\tilde{F}$ is Fréchet differentiable with derivative

    \tilde{F}'(x) h = 2 B(x, h).                                            (36)

To obtain a weakly sequentially closed operator, we have to assume some smoothness of the
solution of the equation $\tilde{F}(x) = y$. Therefore we consider the autoconvolution
operator between a Sobolev space of order $\beta > 0$ and $L^2[c,d]$: we define $F$ by

    F : H_0^\beta[a,b] \xrightarrow{i} L^2[a,b] \xrightarrow{\tilde{F}} L^2[c,d],    (37)

where $i$ denotes the compact embedding from $H_0^\beta[a,b]$ into $L^2[a,b]$. From
Proposition 2.1 it follows that $F$ is strongly continuous and hence weakly sequentially
closed.

Proposition 3.1 The Fréchet derivative $F'$ of $F$ is Lipschitz continuous with

    \|F'(x) - F'(z)\|_{H_0^\beta[a,b] \to L^2[c,d]} \le 2 (d - c)^{1/2} \|x - z\|_{L^2[a,b]}     (38)
        \le 2 (d - c)^{1/2} \|x - z\|_{H_0^\beta[a,b]}.

Proof:
We have $(F'(x) - F'(z)) h = 2 (x - z) * h$ and by Hölder's inequality

    \frac{1}{4} \|(F'(x) - F'(z)) h\|_{L^2[c,d]}^2
        = \int_c^d \left( \int_a^b (x - z)(s - t) h(t) \, dt \right)^2 ds   (39)
        \le (d - c) \|x - z\|_{L^2[a,b]}^2 \|h\|_{L^2[a,b]}^2
        \le (d - c) \|x - z\|_{L^2[a,b]}^2 \|h\|_{H_0^\beta[a,b]}^2
        \le (d - c) \|x - z\|_{H_0^\beta[a,b]}^2 \|h\|_{H_0^\beta[a,b]}^2,

and the last two inequalities prove the Lipschitz continuity.  \qquad \Box
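On a uniform grid the autoconvolution (35) becomes a finite discrete convolution. Because $\tilde{F}$ is quadratic, the Taylor remainder of the linearization (36) equals $h * h$ exactly, which the sketch below verifies (the discretization and the sample vectors are our assumptions):

```python
def conv(u, v):
    # discrete analogue of the convolution (x * y)(s) = int x(s - t) y(t) dt
    out = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] += ui * vj
    return out

def F(x):          # autoconvolution F(x) = x * x, cf. (35)
    return conv(x, x)

def dF(x, h):      # Frechet derivative F'(x)h = 2 (x * h), cf. (36)
    return [2.0 * c for c in conv(x, h)]

x = [0.5, -1.0, 2.0, 0.25]
h = [0.1, 0.0, -0.2, 0.05]

# Since F is quadratic: F(x + h) - F(x) - F'(x)h = h * h exactly.
lhs = [a - b - c for a, b, c in
       zip(F([xi + hi for xi, hi in zip(x, h)]), F(x), dF(x, h))]
rhs = conv(h, h)
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # ~ 0 up to rounding
```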

A by-product of (38) is

Proposition 3.2 Let $x_n \rightharpoonup x$ in $H_0^\beta[a,b]$ for $n \to \infty$. Then

    \|F'(x_n) - F'(x)\|_{H_0^\beta[a,b] \to L^2[c,d]} \to 0 \quad \text{for } n \to \infty.    (40)

Proof:
From $x_n \rightharpoonup x$ in $H_0^\beta[a,b]$ follows $x_n \to x$ in $L^2[a,b]$, and
thus the proposition follows from (38).  \qquad \Box

As a consequence, $\|F'(x_n)^* - F'(x)^*\| \to 0$ and we have finally shown that (27)
holds. Morozov's discrepancy principle can be used as parameter choice for Tikhonov
regularization of the autoconvolution operator, and Theorem 2.7 applies. Our results were
given in one dimension only, but they easily extend to higher dimensions. Additionally, we
might remark that condition (27), which was the only new condition on the operator $F$,
was a consequence of the Lipschitz continuity of $F'$.
We will now see that the conditions (11), (12) from [27] will usually not hold. In case of
the autoconvolution operator, (11) reads

    (x - z) * v = z * k.                                                    (41)

Applying the Fourier transform gives
$\widehat{(x - z)} \cdot \hat{v} = \hat{z} \cdot \hat{k}$. Formally, $k$ is then given by

    \hat{k} = \frac{\widehat{(x - z)} \cdot \hat{v}}{\hat{z}}.              (42)

If $|\hat{z}(\omega)| > c > 0$ holds, then condition (12) will hold. But whenever
$\hat{z}(\omega)$ has zeros, then $k$ might not even belong to the proper function space.
To illustrate this, let us consider the autoconvolution operator between $H_0^\beta[a,b]$
and $L^2[c,d]$ with $\beta > 1/2$. Then $k$ has to belong to $H_0^{1/2}[a,b]$, in
particular $k \in L^1[a,b]$, and it follows that $\hat{k}$ is a continuous and bounded
function. The functions $x$ and $v$ belong to $H_0^{1/2}[a,b]$, and
$\hat{x} \cdot \hat{v}$ is continuous and bounded too. We choose $x, v, z$ and $\omega_0$
in such a way that $\hat{z}(\omega_0) = 0$ and $\hat{x}(\omega_0) \hat{v}(\omega_0) \ne 0$.
For a sequence $\omega_n \to \omega_0$ it follows that $|\hat{k}(\omega_n)| \to \infty$,
which means that $\hat{k}$ is not bounded and therefore $k$ does not belong to
$H_0^{1/2}[a,b]$. As a consequence, condition (12) is violated.
3.2 Bilinear operator equations

Of great interest are operators which can be decomposed into

    \tilde{F}(x) = A f + B(f, \gamma),                                      (43)

with $x = (f, \gamma) \in X_1 \times X_2$, $X_1, X_2$ Hilbert spaces, $A$ a continuous
linear operator in $f$ and $B$ a bilinear operator in $(f, \gamma)$:

    A : X_1 \to Y                                                           (44)
    B : X_1 \times X_2 \to Y                                                (45)
    B(\lambda f_1 + f_2, \gamma) = \lambda B(f_1, \gamma) + B(f_2, \gamma)  (46)
    B(f, \lambda \gamma_1 + \gamma_2) = \lambda B(f, \gamma_1) + B(f, \gamma_2)    (47)
    \|B(f, \gamma)\| \le \|B\| \|f\| \|\gamma\|.                            (48)

The function space $X_1 \times X_2$ is turned into a Hilbert space by setting

    \langle (f, \gamma), (g, \mu) \rangle_{X_1 \times X_2} := \langle f, g \rangle_{X_1} + \langle \gamma, \mu \rangle_{X_2}.    (49)

Operators of type (43) occur in parameter estimation problems for partial differential
operators [6, 9, 16, 24, 17] and in the area of medical imaging (compare the next
section).
It is easy to see that $\tilde{F}$ is Fréchet differentiable with derivative

    \tilde{F}'(f, \gamma)(h_1, h_2) = A h_1 + B(h_1, \gamma) + B(f, h_2),   (50)

and

    \|(\tilde{F}'(f, \gamma) - \tilde{F}'(g, \mu))(h_1, h_2)\|
        = \|B(h_1, \gamma) + B(f, h_2) - B(h_1, \mu) - B(g, h_2)\|
        = \|B(h_1, \gamma - \mu) + B(f - g, h_2)\|
        \le \|B\| \|\gamma - \mu\| \|h_1\| + \|B\| \|f - g\| \|h_2\|
        \le 2 \|B\| \|(f - g, \gamma - \mu)\| \|(h_1, h_2)\|.

Therefore $\tilde{F}'$ is Lipschitz continuous with constant $L = 2\|B\|$. It remains to
show that $\tilde{F}$ is strongly continuous and that
$\tilde{F}'(f_n, \gamma_n)(h_1, h_2) \to \tilde{F}'(f, \gamma)(h_1, h_2)$ if
$(f_n, \gamma_n) \rightharpoonup (f, \gamma)$ for $n \to \infty$ holds. In applications it
might happen that $\tilde{F}$ already meets these conditions as an operator from
$X_1 \times X_2$ to $Y$. If not, this can be achieved by assuming more "regularity" of the
solution of $\tilde{F}(x) = y$, which means we have to change the definition area of
$\tilde{F}$. Let us assume that there exist function spaces $X_1^s$ and $X_2^s$ and
compact embedding operators $i_1^s : X_1^s \to X_1$, $i_2^s : X_2^s \to X_2$. Then we can
consider

    F : X_1^s \times X_2^s \xrightarrow{i_1^s \times i_2^s} X_1 \times X_2 \xrightarrow{\tilde{F}} Y    (51)

and get from Proposition 2.1 (2) that $F$ is strongly continuous and weakly sequentially
closed. Now, exactly as for the autoconvolution operator, we obtain

    \|F'(f, \gamma) - F'(g, \mu)\|_{X_1^s \times X_2^s \to Y} \le L \|(f, \gamma) - (g, \mu)\|_{X_1 \times X_2}    (52)
        \le L \|(f, \gamma) - (g, \mu)\|_{X_1^s \times X_2^s}.              (53)

If $(f_n, \gamma_n) \rightharpoonup (f, \gamma)$ in $X_1^s \times X_2^s$, then
$(f_n, \gamma_n) \to (f, \gamma)$ in $X_1 \times X_2$, and (52) yields
$F'(f_n, \gamma_n) \to F'(f, \gamma)$ as well as
$F'(f_n, \gamma_n)^* \to F'(f, \gamma)^*$ in the operator norm. This shows that Morozov's
discrepancy principle is applicable and Theorem 2.7 holds.
We might remark that our argument applies to arbitrary nonlinear continuous and Fréchet
differentiable operators $F : X \to Y$ with Lipschitz continuous derivative as long as a
function space $X^s$ with compact embedding into $X$ is available. A common choice might
be a Sobolev space over a bounded region $\Omega$ for $X^s$ and the space $L^2(\Omega)$
for $X$.
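A finite-dimensional sketch of (43)-(50), with $A$ a matrix and $B$ a third-order coefficient array (both randomly generated, purely illustrative): the Taylor remainder of $\tilde{F}$ at $(f, \gamma)$ is exactly the bilinear term $B(h_1, h_2)$, confirming the derivative formula (50).

```python
import random

random.seed(1)
m, n1, n2 = 3, 4, 2
A = [[random.uniform(-1, 1) for _ in range(n1)] for _ in range(m)]
B = [[[random.uniform(-1, 1) for _ in range(n2)] for _ in range(n1)] for _ in range(m)]

def matvec(M, f):
    return [sum(M[i][j] * f[j] for j in range(len(f))) for i in range(len(M))]

def bil(f, g):
    # bilinear map B(f, gamma)_i = sum_{j,k} B[i][j][k] f_j gamma_k, cf. (45)-(48)
    return [sum(B[i][j][k] * f[j] * g[k] for j in range(n1) for k in range(n2))
            for i in range(m)]

def F(f, g):            # F(f, gamma) = A f + B(f, gamma), cf. (43)
    return [a + b for a, b in zip(matvec(A, f), bil(f, g))]

def dF(f, g, h1, h2):   # F'(f, gamma)(h1, h2) = A h1 + B(h1, gamma) + B(f, h2), cf. (50)
    return [a + b + c for a, b, c in zip(matvec(A, h1), bil(h1, g), bil(f, h2))]

f = [1.0, -0.5, 2.0, 0.3]; g = [0.7, -1.2]
h1 = [0.1, 0.2, -0.1, 0.0]; h2 = [-0.3, 0.05]

# Taylor remainder of the bilinear part is exactly B(h1, h2):
lhs = [a - b - c for a, b, c in
       zip(F([x + y for x, y in zip(f, h1)], [x + y for x, y in zip(g, h2)]),
           F(f, g), dF(f, g, h1, h2))]
rhs = bil(h1, h2)
print(max(abs(a - b) for a, b in zip(lhs, rhs)))  # ~ 0 up to rounding
```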
3.3 Single Photon Emission Computerized Tomography (SPECT)

Some of the most challenging ill-posed problems arise in the area of medical imaging. In
SPECT, one tries to reconstruct the distribution of a radiopharmaceutical inside a human
body by measuring the intensity of the radiation outside the body. As the name suggests,
SPECT is related to Computerized Tomography (CT), where one has to reconstruct the density
of a body by measuring the outcoming intensity of X-rays through the body. In contrast to
CT, where the measured intensity depends only on the intensity of the incoming X-ray and
the density $\mu$ of the tissue along the path of the X-ray, the measurements for SPECT
depend on the activity function $f$ (which describes the distribution of the
radiopharmaceutical) and the density $\mu$ of the tissue. The measured data $y$ and the
tuple $(f, \mu)$ are linked by the Attenuated Radon Transform (ATRT),

    y = R(f, \mu)(s, \omega) = \int_{\mathbb{R}} f(s\omega^\perp + t\omega)
          \, e^{-\int_t^\infty \mu(s\omega^\perp + \sigma\omega) \, d\sigma} \, dt,    (54)

$s \in \mathbb{R}$, $\omega \in S^1$. As for the Radon Transform, the data are represented
as line integrals over all possible unit vectors $\omega$. Usually both $f$ and $\mu$ are
unknown functions, and $R$ is a nonlinear operator. During the last decade several papers
on this problem were published [4, 20, 21, 23, 29]. Dicken [8] examined the mapping
properties of the ATRT and concluded that under some reasonable assumptions on the
smoothness of $f$ and $\mu$, Tikhonov regularization with a priori parameter choice is
applicable to regularize (54). In [26] a bilinear approximation $\tilde{R}$ to $R$ was
introduced:

    \tilde{R}(f, \tilde{\mu}) = \int_{\mathbb{R}} f(s\omega^\perp + t\omega)
          \, e^{-\int_t^\infty \mu_0(s\omega^\perp + \sigma\omega) \, d\sigma}
          \left( 1 - \int_t^\infty \tilde{\mu}(s\omega^\perp + \sigma\omega) \, d\sigma \right) dt.    (55)

In this approximation the exponential term in (54) was simply replaced by the first two
terms of its Taylor expansion around a guess $\mu_0$ for the attenuation function
$\mu = \mu_0 + \tilde{\mu}$. Moreover, iterative methods for solving
$y = \tilde{R}(f, \tilde{\mu})$ were proposed in this paper. In the following, it shall be
shown that Morozov's discrepancy principle for Tikhonov regularization can be applied to
SPECT. The analysis will be done for the ATRT operator only; a similar result holds for
the bilinearized version (55).
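A naive discretization of the ATRT (54) for the single direction $\omega = (0, 1)$, where a ray is a grid column and the inner integral $\int_t^\infty \mu$ becomes a partial sum along the ray; the grid, step size and the piecewise constant $f, \mu$ are toy assumptions of ours. With $\mu \equiv 0$ the value reduces to the plain, unattenuated Radon transform (a column sum).

```python
import math

N, h = 8, 1.0  # toy grid size and step

# hypothetical activity f and attenuation mu on an N x N grid
f  = [[1.0  if 2 <= i <= 5 and 2 <= j <= 5 else 0.0 for j in range(N)] for i in range(N)]
mu = [[0.02 if 2 <= i <= 5 and 2 <= j <= 5 else 0.0 for j in range(N)] for i in range(N)]

def atrt_column(f, mu, col):
    # discretization of (54) for omega = (0, 1): the ray is grid column `col`;
    # the inner integral int_t^infty mu is the cumulative sum beyond row t
    total = 0.0
    for t in range(N):
        tail = sum(mu[r][col] for r in range(t + 1, N)) * h
        total += f[t][col] * math.exp(-tail) * h
    return total

radon = sum(f[t][3] for t in range(N)) * h    # mu = 0 reduces (54) to a column sum
attenuated = atrt_column(f, mu, 3)
print(radon, attenuated)  # attenuation strictly decreases the measured intensity
```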
To get the desired results like strong continuity or Fréchet differentiability for the
ATRT operator, it has to be considered between proper function spaces. Additionally, there
will be some trouble with the unbounded growth of the exponential function for negative
arguments. In the following, we will summarize some results from [8]. Dicken introduces an
operator $R_\varrho$ by

    R_\varrho(f, \mu)(s, \omega) = \int_{\mathbb{R}} f(s\omega^\perp + t\omega)
          \, E\left( \int_t^\infty \mu(s\omega^\perp + \sigma\omega) \, d\sigma \right) dt.    (56)

The function $E \in C^2(\mathbb{R})$ is chosen such that

    E(x) = \exp(-x) \quad \text{for } x \in \mathbb{R}^+

and $|E|$, $|E'|$ and $|E''|$ are bounded. For SPECT, the functions $f$ and $\mu$ will be
nonnegative with compact support. If we assume that $f$ has its support in a disc with
radius $\varrho$, then the operator $R_\varrho$ coincides with $R$ for admissible sets
$(f, \mu)$. Fortunately, $R_\varrho$ has much better mapping properties than $R$. If the
definition area $D(R_\varrho)$ is given by

    D(R_\varrho) := D_{s_1, s_2, C} = \{ (f, \mu) \in H_0^{s_1} \times H_0^{s_2} \mid \|f\|_\infty \le C \},    (57)

then the following Proposition holds:

Proposition 3.3 Let $R_\varrho : D_{s_1, s_2, C} \to L^2(S^1 \times [-\varrho, \varrho])$.
If $s_1, s_2 > 2/5$, then $R_\varrho$ is strongly continuous. The Fréchet derivative
exists for all $(f, \mu) \in D_{s_1, s_2, C}$ and is Lipschitz continuous.

For a proof, cf. Theorem 4.10 in [8]. Strong continuity of $R_\varrho$ follows from the
decomposition of $R_\varrho$ into a linear compact and a continuous operator. There is
even a more detailed version of the above Proposition, with more possible combinations of
$s_1, s_2$ such that the Proposition still holds. An important point is that one may
choose $s_1 < 1/2$, because for our medical application the activity function $f$ will
usually not belong to $H_0^{s_1}$ for $s_1 \ge 1/2$.
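One possible concrete choice for such an $E$ (our construction; [8] need not use this one) is $E(x) = e^{-x}$ for $x \ge 0$ and $E(x) = 3 - 3e^{x} + e^{2x}$ for $x < 0$: the two branches match in value, first and second derivative at $0$, and $E$, $E'$, $E''$ stay bounded ($E \to 3$ as $x \to -\infty$). A quick numerical check:

```python
import math

def E(x):
    # a C^2, bounded extension of exp(-x); our concrete choice, not Dicken's
    if x >= 0:
        return math.exp(-x)
    return 3.0 - 3.0 * math.exp(x) + math.exp(2.0 * x)

# check the C^2 matching at 0 by one-sided finite differences
eps = 1e-5
d_plus   = (E(eps) - E(0.0)) / eps
d_minus  = (E(0.0) - E(-eps)) / eps
dd_plus  = (E(2 * eps) - 2 * E(eps) + E(0.0)) / eps ** 2
dd_minus = (E(0.0) - 2 * E(-eps) + E(-2 * eps)) / eps ** 2
print(d_plus, d_minus)    # both close to -1
print(dd_plus, dd_minus)  # both close to +1
```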
Theorem 3.4 Let the conditions of Proposition 3.3 hold. Then condition (27) holds.

Proof:
Let $s_1, s_2 > 2/5$ be given. According to Proposition 3.3, the Fréchet derivative of
$R_\varrho$ is Lipschitz continuous for every $\tilde{s}_1, \tilde{s}_2 > 2/5$. Thus we
can find $\tilde{s}_1, \tilde{s}_2$ with

    \frac{2}{5} < \tilde{s}_1 < s_1, \qquad \frac{2}{5} < \tilde{s}_2 < s_2.

Therefore

    \|(R_\varrho'(f, \mu) - R_\varrho'(g, \nu))(h_1, h_2)\|
        \le c \|(f, \mu) - (g, \nu)\|_{H_0^{\tilde{s}_1} \times H_0^{\tilde{s}_2}} \|(h_1, h_2)\|_{H_0^{\tilde{s}_1} \times H_0^{\tilde{s}_2}}
        \le c \|(f, \mu) - (g, \nu)\|_{H_0^{\tilde{s}_1} \times H_0^{\tilde{s}_2}} \|(h_1, h_2)\|_{H_0^{s_1} \times H_0^{s_2}},

and we have again

    \|R_\varrho'(f, \mu) - R_\varrho'(g, \nu)\|_{H_0^{s_1} \times H_0^{s_2} \to L^2(S^1 \times [-\varrho, \varrho])}
        \le c \|(f, \mu) - (g, \nu)\|_{H_0^{\tilde{s}_1} \times H_0^{\tilde{s}_2}}.    (58)

The embedding from $H_0^{s_1} \times H_0^{s_2}$ into
$H_0^{\tilde{s}_1} \times H_0^{\tilde{s}_2}$ is compact, and therefore weak convergence in
$H_0^{s_1} \times H_0^{s_2}$ induces norm convergence in
$H_0^{\tilde{s}_1} \times H_0^{\tilde{s}_2}$. We conclude for a sequence
$(f_n, \mu_n) \rightharpoonup (f, \mu)$ for $n \to \infty$ in
$D_{s_1, s_2, C} \subset H_0^{s_1} \times H_0^{s_2}$ that

    \|R_\varrho'(f_n, \mu_n) - R_\varrho'(f, \mu)\|_{H_0^{s_1} \times H_0^{s_2} \to L^2(S^1 \times [-\varrho, \varrho])} \to 0
        \quad \text{for } n \to \infty

holds, and consequently (27).  \qquad \Box

3.4 Conclusions

We have shown that Morozov's discrepancy principle for Tikhonov regularization applies to
a wide class of problems. The existence of a regularization parameter $\alpha$ with (13)
can be guaranteed under mild restrictions. If in addition (27) is assumed, then for every
sequence $\alpha_k \to \alpha$ there exists a subsequence $\alpha_{k_n}$ with
$x_{\alpha_{k_n}}^\delta \to x_\alpha^\delta$. In the above examples it was demonstrated
that (27) can often be concluded from the Lipschitz continuity of the Fréchet derivative
of the nonlinear operator. We have shown that our conditions are easy to handle and apply
even when (12) fails.

Referen es
[1 A. W. Bakushinskii. The problem of the onvergen e of the iteratively regularized gauss{
newton method. Comput. Maths. Math. Phys., (32):1353{1359, 1992.
[2 B. Blas hke. Some newton type methods for the regularization of nonlinear ill{posed
problems. Inverse Problems, (13):729{753, 1997.
[3 B. Blas hke, A. Neubauer, and O. S herzer. On onvergen e rates for the iteratively
regularized gauss{newton method. IMA Journal of Numeri al Analysis, (17):421{436,
1997.
[4 Y. Censor, D. Gustafson, A. Lent, and H. Tuy. A new approa h to the emission omputerized tomography problem: simultaneous al ulation of attenuation and a tivity oe ients.
IEEE Trans. Nu l. S i., (26):2275{79, 1979.
[5 D. Colton and R. Kress. Inverse A ousti and Ele tromagneti S attering Theory. Springer,
Berlin, 1992.
[6 D. Colton and M. Piana. The simple method for solving the ele tromagneti inverse
s attering problem: the ase of TE polarized waves. Inverse Problems, 14(3):597{614,
1998.
[7 C. Cravaris and J. H. Seinfeld. Identi ation of parameters in distributed parameter
systems by regularization. SIAM J. Contr. Opt., (23):217{241, 1985.
[8 V. Di ken. A new approa h towards simultaneous a tivity and attenuation re onstru tion
in emission tomography. Inverse Problems, 15(4):931{960, 1999.
[9 O. Dorn. A transport-ba ktransport method for opti al tomography. Inverse Problems,
14(5):1107{1130, 1998.
[10 H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer,
Dordre ht, 1996.
[11 H.W. Engl, K. Kunis h, and A. Neubauer. Convergen e rates for Tikhonov regularization
of nonlinear ill-posed problems. Inverse Problems, (5):523{540, 1989.
[12] A. Frommer and P. Maass. Fast cg-based methods for Tikhonov regularization. SIAM J. Sci. Comp., 5(20):1831–1850, 1999.
[13] M. Hanke. A regularizing Levenberg–Marquardt scheme, with applications to inverse groundwater filtration problems. Inverse Problems, (13):79–95, 1997.
[14] M. Hanke. Regularizing properties of a truncated Newton–cg algorithm for nonlinear ill-posed problems. Num. Funct. Anal. Optim., (18):971–993, 1997.
[15] M. Hanke, A. Neubauer, and O. Scherzer. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numerische Mathematik, (72):21–37, 1995.
[16] T. Klibanov, T. R. Lucas, and R. M. Frank. A fast and accurate imaging algorithm in optical/diffusion tomography. Inverse Problems, 13(5):1341–1363, 1997.
[17] K. Kunisch and X.-C. Tai. Sequential and parallel splitting methods for bilinear control problems in Hilbert spaces. SIAM J. Numer. Anal., 34(1):91–118, 1997.
[18] A. K. Louis. Inverse und schlecht gestellte Probleme. Teubner, Stuttgart, 1989.
[19] P. Maass, S. V. Pereverzev, R. Ramlau, and S. G. Solodky. An adaptive discretization scheme for Tikhonov regularization with a posteriori parameter selection. Numerische Mathematik, 87(3):485–502, 2001.
[20] F. Natterer. Numerical solution of bilinear inverse problems. Technical report 19/96, Fachbereich Mathematik der Universität Münster.
[21] F. Natterer. Computerized tomography with unknown sources. SIAM J. Appl. Math., (43):1201–12, 1983.
[22] F. Natterer. The Mathematics of Computerized Tomography. B. G. Teubner, Stuttgart, 1986.
[23] F. Natterer. Determination of tissue attenuation in emission tomography of optically dense media. Inverse Problems, (9):731–736, 1993.
[24] F. Natterer and F. Wübbeling. A propagation-backpropagation method for ultrasound tomography. Inverse Problems, 11(6):1225–1232, 1998.
[25] R. Ramlau. A modified Landweber-method for inverse problems. Numerical Functional Analysis and Optimization, 20(1&2), 1999.
[26] R. Ramlau, R. Clackdoyle, F. Noo, and G. Bal. Accurate attenuation correction in SPECT imaging using optimization of bilinear functions and assuming an unknown spatially-varying attenuation distribution. Z. angew. Math. Mech., 80(9):613–621, 2000.
[27] O. Scherzer. The use of Morozov's discrepancy principle for Tikhonov regularization for solving nonlinear ill-posed problems. Computing, (51):45–60, 1993.
[28] O. Scherzer, H. W. Engl, and K. Kunisch. Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal., 30(6):1796–1838, 1993.
[29] A. Welch, R. Clack, F. Natterer, and G. T. Gullberg. Toward accurate attenuation correction in SPECT without transmission measurements. IEEE Trans. Med. Imaging, (16):532–40, 1997.
[30] E. Zeidler. Nonlinear Functional Analysis and its Applications. Springer, New York, 1985.