Mathematics and Computers in Simulation 81 (2011) 1593–1608
Original article
A mollification regularization method for stable analytic continuation

Zhi-Liang Deng^{a,b}, Chu-Li Fu^{a,*}, Xiao-Li Feng^{a}, Yuan-Xiang Zhang^{a}
a School of Mathematics and Statistics, Lanzhou University, TianShui South Road 222, Lanzhou, Gansu 730000, PR China
b School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610054, PR China
Received 7 July 2009; received in revised form 7 November 2010; accepted 29 November 2010
Available online 14 December 2010
Abstract
In this paper, we consider an analytic continuation problem on a strip domain with the data given approximately only on the real axis. The Gauss mollification method is proposed to solve this problem. An a priori error estimate between the exact solution and its regularized approximation is obtained. Moreover, we also propose a new a posteriori parameter choice rule and obtain a good error estimate. Several numerical examples are provided, which show that the method works effectively.
© 2010 IMACS. Published by Elsevier B.V. All rights reserved.
AMS Classification: 30B40; 65J20
Keywords: Analytic continuation; Ill-posed problems; Mollification method; Error estimate; A posteriori
1. Introduction
Analytic continuation is a classical problem in complex analysis that is frequently encountered in many practical applications [6,15,19,22], while stable numerical analytic continuation is a rather difficult problem. In general, this problem is severely ill-posed. Some theoretical and numerical studies have been devoted to it. The earlier works mainly focused on results of conditional stability [12,15]. However, there seem to be few applications of the modern theory of regularization methods, which has been developed intensively in the last few decades. A simple computer algorithm based on the fast Fourier transform was given in [6]. In [2,24], a hypergeometric summation method was used to reconstruct some analytic functions from exponentially spaced samples, where approximation errors and stability estimates were obtained.
In this paper, we consider the following problem of analytic continuation. Let the function $f(z) = f(x+iy)$ be analytic on a strip domain $D$ of the complex plane defined by
$$D := \{z = x+iy \in \mathbb{C} \mid x \in \mathbb{R},\ |y| < y_0,\ y_0 \text{ a positive constant}\}, \qquad (1.1)$$
where $i$ is the imaginary unit. The data are given only on the real axis, i.e., $f(z)|_{y=0} = f(x)$ is known approximately, and we will extend $f$ analytically from these data to the whole domain $D$.

This work is supported by the National Natural Science Foundation of China (Nos. 10671085, 10971089).
* Corresponding author.
E-mail address: fuchuli@lzu.edu.cn (C.-L. Fu).
0378-4754/$36.00 © 2010 IMACS. Published by Elsevier B.V. All rights reserved.
doi:10.1016/j.matcom.2010.11.011
One can refer to the literature for its important practical applications, e.g., medical imaging [5] and integral transformation [1]. It is well known that this problem is ill-posed [7–9] and some regularization method is needed. In the earlier paper [9], D.N. Hào et al. used the mollification method with the Dirichlet kernel for the case $f(\cdot+iy) \in L^2(\mathbb{R})$ and with the de la Vallée Poussin kernel for the case $f(\cdot+iy) \in L^p(\mathbb{R})$, $p \in [1, \infty]$, to solve this problem. Some error estimates were obtained in [9]; however, there were no numerical results. In the recently published papers [7,8], a Fourier method and a modified Tikhonov method were proposed, respectively, in which a priori parameter choice rules and the corresponding error estimates were given. Some numerical examples were implemented to verify the effectiveness of these methods.
In the present paper, we propose a different regularization method, a mollification method, to solve this problem. Mollification methods are well studied and widely used as regularization methods for many ill-posed problems [10,11,18]. The ill-posedness of many such problems is caused by the high-frequency disturbance in the noisy data. The basic idea of mollification methods is to convolve the noisy data with a smooth function depending on a parameter, so as to filter out the high-frequency components of the noise, such that the problem becomes well-posed. Furthermore, by a proper choice of the parameter, the solution of this well-posed problem can approximate the solution of the original one. Meanwhile, there is a very close connection between mollification methods and the approximate inverse method, which is also well studied and used in computerized tomography and many other applications [16,17,20,21]. For simplicity, assume $A : L^2(\mathbb{R}) \to L^2(\mathbb{R})$ is a linear compact operator with non-closed range. Then the problem of solving $Af = g$ is ill-posed [4,13]. It is well known that for a function $f(x) \in L^2(\mathbb{R})$ there exists a smooth function $m_\gamma(x)$ (e.g., a Gauss or Dirichlet function) with parameter $\gamma$ such that
$$f_\gamma(x) := \langle f(x'),\ m_\gamma(x, x') \rangle$$
can approximate $f(x)$, where $m_\gamma(x, x') = m_\gamma(x - x')$ and $\langle \cdot, \cdot \rangle$ denotes the inner product in $L^2(\mathbb{R})$. In the special case that $m_\gamma(x)$ is in the range of $A^*$, with $\psi_\gamma$ being the solution of the equation $A^* \psi_\gamma = m_\gamma$ (which can be given analytically), we have the relation
$$f_\gamma(x) := \langle f(\cdot),\ m_\gamma(x - \cdot) \rangle = \langle f(\cdot),\ (A^*\psi_\gamma)(x - \cdot) \rangle = \langle (Af)(\cdot),\ \psi_\gamma(x - \cdot) \rangle = \langle g(\cdot),\ \psi_\gamma(x - \cdot) \rangle.$$
For the noisy data $g^\delta$, it is easy to see that $f_\gamma^\delta := \langle g^\delta(\cdot),\ \psi_\gamma(x - \cdot) \rangle$, with the reconstruction kernel $\psi_\gamma(x)$ associated with $m_\gamma(x, x')$, is an approximation of the exact solution of the original equation $Af = g$ [13]. This is just the so-called approximate inverse method proposed by A.K. Louis [16,21]. Meanwhile, forming the moment $\langle g^\delta(\cdot),\ \psi_\gamma(x - \cdot) \rangle$ also amounts to smoothing the data $g^\delta$, so it is also a mollification method.


In this paper, we only consider a mollification method with the Gauss kernel [18,21], and one of the highlights is that a new a posteriori parameter choice rule is given. A comparison of the numerical performance of the a priori and a posteriori rules is also provided.
Let $\hat g$ denote the Fourier transform of a function $g(x)$, defined by
$$\hat g(\xi) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-i\xi x} g(x)\, dx, \qquad (1.2)$$
and let $\|\cdot\|$ denote the norm in $L^2(\mathbb{R})$, defined by
$$\|g\| = \left( \int_{\mathbb{R}} |g(x)|^2\, dx \right)^{1/2}. \qquad (1.3)$$
Moreover, suppose $f(\cdot+iy) \in L^2(\mathbb{R})$ for $|y| < y_0$. It is obvious that
$$f(z) = f(x+iy) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi(x+iy)}\, \hat f(\xi)\, d\xi,$$
i.e.,
$$f(z) = f(x+iy) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi x} e^{-\xi y}\, \hat f(\xi)\, d\xi, \qquad |y| < y_0. \qquad (1.4)$$
In order to treat problem (1.4), we assume that both the exact data $f(x)$ and the measured data $f^\delta(x)$ belong to $L^2(\mathbb{R})$ and satisfy
$$\|f - f^\delta\| \le \delta, \qquad (1.5)$$
where $\delta > 0$ denotes the noise level. We rewrite (1.4) in frequency space as the following equivalent problem:
$$\widehat{f(\cdot+iy)}(\xi) = e^{-\xi y}\, \hat f(\xi), \qquad (1.6)$$
or
$$e^{\xi y}\, \widehat{f(\cdot+iy)}(\xi) = \hat f(\xi). \qquad (1.7)$$
Problem (1.7) can also be written as the linear operator equation
$$A(\xi, y)\, \widehat{f(\cdot+iy)}(\xi) = \hat f(\xi), \qquad (1.8)$$
where $A(\xi, y) := e^{\xi y}$ is a multiplication operator. Problem (1.8) is a linear ill-posed operator equation [7,8].
Take the Gauss function
$$\rho_\varepsilon(x) := \frac{1}{\varepsilon\sqrt{\pi}} \exp\left( -\frac{x^2}{\varepsilon^2} \right) \qquad (1.9)$$
as the mollifier kernel, where $\varepsilon$ is a positive constant. Define the operator $J_\varepsilon$ by
$$J_\varepsilon f(x) := (\rho_\varepsilon * f)(x) := \int_{\mathbb{R}} \rho_\varepsilon(t) f(x-t)\, dt = \int_{\mathbb{R}} \rho_\varepsilon(x-t) f(t)\, dt, \qquad \text{for } f \in L^2(\mathbb{R}). \qquad (1.10)$$
We replace the original ill-posed problem by the new problem of finding its approximation $f_{\varepsilon,\delta}(x+iy)$ defined by
$$f_{\varepsilon,\delta}(x+iy) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi x} e^{-\xi y}\, \widehat{(\rho_\varepsilon * f^\delta)}(\xi)\, d\xi, \qquad (1.11)$$
where $\varepsilon$ is, in fact, a regularization parameter. The new problem is well-posed [18,21].
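Although the paper's experiments are carried out in Matlab, the regularized solution (1.11) can be sketched in a few lines of Python via the FFT: since the Fourier transform of the Gauss kernel (1.9) is $e^{-\varepsilon^2\xi^2/4}/\sqrt{2\pi}$, one simply multiplies the transform of the boundary data by $e^{-\varepsilon^2\xi^2/4} e^{-\xi y}$. The grid, sizes and parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def continue_analytic(f_vals, L, y, eps):
    """Gauss-mollified continuation, a sketch of Eq. (1.11):
    multiply the FFT of the boundary data by exp(-eps^2 xi^2/4) exp(-xi y)."""
    M = f_vals.size
    dx = 2.0 * L / M
    xi = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)   # angular frequency grid
    fhat = np.fft.fft(f_vals)
    fhat *= np.exp(-eps**2 * xi**2 / 4.0) * np.exp(-xi * y)
    return np.fft.ifft(fhat)

# Illustration with the boundary data f(x) = exp(-x^2), whose exact
# continuation is f(x+iy) = exp(y^2 - x^2) (cos 2xy - i sin 2xy)
L, M, y, eps = 10.0, 256, 0.3, 0.2
x = np.linspace(-L, L, M, endpoint=False)
approx = continue_analytic(np.exp(-x**2), L, y, eps)
exact = np.exp(y**2 - x**2) * (np.cos(2*x*y) - 1j*np.sin(2*x*y))
print(np.max(np.abs(approx - exact)))   # small for noise-free data
```

For noisy data the same filter applies; the mollification factor then damps the high frequencies that $e^{-\xi y}$ would otherwise amplify.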
The following lemma is necessary; its proof can be found in [23].
Lemma 1.1 ([23]). Let the function $f(\lambda) : (0, a] \to \mathbb{R}$ be given by
$$f(\lambda) = \lambda^b \left[ d \log(1/\lambda) \right]^{c}$$
with a constant $c \in \mathbb{R}$ and positive constants $a < 1$, $b$ and $d$. Then for the inverse function $f^{-1}(\lambda)$ we have
$$f^{-1}(\lambda) = \lambda^{1/b} \left( \frac{d}{b} \log\frac{1}{\lambda} \right)^{-c/b} (1 + o(1)) \quad \text{for } \lambda \to 0. \qquad (1.12)$$
The rest of this paper is organized as follows. In Section 2 we give the a priori parameter choice rule and the corresponding error estimate for the regularized approximate solution. We are especially interested in the a posteriori parameter choice rule, which is discussed in Section 3; to the authors' knowledge, there are few results on a posteriori parameter choice for the Gauss mollification method. In Section 4, some numerical examples are presented. We focus not only on comparing the a priori and the a posteriori rules, but also on comparing with the other methods given in [7,8].
2. The error estimate with an a priori parameter choice
In this section, the error estimate of the mollification regularization method will be derived under an a priori parameter choice rule. Suppose that the following source conditions, as in [9], hold:
$$\|f(\cdot + iy_0)\| \le E, \qquad (2.1)$$
$$\|f(\cdot - iy_0)\| \le E. \qquad (2.2)$$
Let us start with the case $0 < y < y_0$. By the Parseval formula and the triangle inequality, we know that
$$\|f(x+iy) - f_{\varepsilon,\delta}(x+iy)\| = \|\widehat{f(\cdot+iy)} - \widehat{f_{\varepsilon,\delta}(\cdot+iy)}\| \le \|\widehat{f(\cdot+iy)} - \widehat{f_\varepsilon(\cdot+iy)}\| + \|\widehat{f_\varepsilon(\cdot+iy)} - \widehat{f_{\varepsilon,\delta}(\cdot+iy)}\|. \qquad (2.3)$$
Noting that $\widehat{f(\cdot+iy)}(\xi) = e^{-\xi y}\, \hat f(\xi)$ and $\widehat{f_\varepsilon(\cdot+iy)}(\xi) = e^{-\varepsilon^2\xi^2/4}\, e^{-\xi y}\, \hat f(\xi)$, it follows that
$$\|\widehat{f(\cdot+iy)} - \widehat{f_\varepsilon(\cdot+iy)}\|^2 = \int_{\mathbb{R}} \left| e^{-\xi y} (1 - e^{-\varepsilon^2\xi^2/4})\, \hat f(\xi) \right|^2 d\xi = \int_{-\infty}^{0} \left| e^{-\xi y} (1 - e^{-\varepsilon^2\xi^2/4})\, \hat f(\xi) \right|^2 d\xi + \int_{0}^{+\infty} \left| e^{-\xi y} (1 - e^{-\varepsilon^2\xi^2/4})\, \hat f(\xi) \right|^2 d\xi =: I_1 + I_2. \qquad (2.4)$$
Due to the a priori bound (2.1), we have
$$I_1 = \int_{-\infty}^{0} \left| e^{-\xi y} (1 - e^{-\varepsilon^2\xi^2/4})\, \widehat{f(\cdot+iy_0)}(\xi)\, e^{\xi y_0} \right|^2 d\xi \le \max_{\xi < 0} \left[ e^{-\xi y} (1 - e^{-\varepsilon^2\xi^2/4})\, e^{\xi y_0} \right]^2 E^2. \qquad (2.5)$$
Using the inequality $1 - e^{-x} \le x$ for $x \ge 0$, we obtain
$$I_1 \le \frac{e^{-4}\, \varepsilon^4}{(y_0 - y)^4}\, E^2. \qquad (2.6)$$
Similarly to $I_1$, applying (2.2) to $I_2$, the following estimate is obtained:
$$I_2 = \int_{0}^{+\infty} \left| e^{-\xi y} (1 - e^{-\varepsilon^2\xi^2/4})\, \widehat{f(\cdot-iy_0)}(\xi)\, e^{-\xi y_0} \right|^2 d\xi \le \frac{e^{-4}\, \varepsilon^4}{(y_0 + y)^4}\, E^2. \qquad (2.7)$$
Combining (2.4), (2.6) and (2.7), it is easy to see that
$$\|\widehat{f(\cdot+iy)} - \widehat{f_\varepsilon(\cdot+iy)}\| \le \frac{\sqrt{2}\, e^{-2}\, \varepsilon^2\, E}{(y_0 - y)^2}. \qquad (2.8)$$
For the second term on the right-hand side of (2.3), by using condition (1.5) we get
$$\|\widehat{f_\varepsilon(\cdot+iy)} - \widehat{f_{\varepsilon,\delta}(\cdot+iy)}\| = \left( \int_{\mathbb{R}} \left| e^{-\varepsilon^2\xi^2/4}\, e^{-\xi y}\, (\hat f(\xi) - \hat f^\delta(\xi)) \right|^2 d\xi \right)^{1/2} \le \max_{\xi \in \mathbb{R}} e^{-(\varepsilon^2\xi^2/4) - \xi y}\, \delta \le e^{y^2/\varepsilon^2}\, \delta. \qquad (2.9)$$
Combining (2.8) and (2.9), the error estimate between the approximate solution and the exact solution is given by
$$\|f(\cdot+iy) - f_{\varepsilon,\delta}(\cdot+iy)\| \le \frac{\sqrt{2}\, e^{-2}\, \varepsilon^2\, E}{(y_0 - y)^2} + e^{y^2/\varepsilon^2}\, \delta. \qquad (2.10)$$
Taking
$$\varepsilon = y_0 \left( \log \frac{E}{\delta} \right)^{-1/2} \qquad (2.11)$$
in (2.10), we have
$$\|f(\cdot+iy) - f_{\varepsilon,\delta}(\cdot+iy)\| \le \frac{\sqrt{2}\, e^{-2}\, y_0^2\, E}{(y_0 - y)^2} \left( \log \frac{E}{\delta} \right)^{-1} + E^{y^2/y_0^2}\, \delta^{1 - y^2/y_0^2} = \frac{\sqrt{2}\, e^{-2}\, y_0^2\, E}{(y_0 - y)^2} \left( \log \frac{E}{\delta} \right)^{-1} (1 + o(1)), \quad \text{for } \delta \to 0.$$
Therefore, we get the following theorem.
Theorem 2.1. Assume conditions (1.5), (2.1), (2.2) hold. If the regularization parameter is taken as in (2.11), then for $0 < y < y_0$ there holds the error estimate:
$$\|f(\cdot+iy) - f_{\varepsilon,\delta}(\cdot+iy)\| \le \frac{\sqrt{2}\, e^{-2}\, y_0^2\, E}{(y_0 - y)^2} \left( \log \frac{E}{\delta} \right)^{-1} (1 + o(1)), \qquad \delta \to 0. \qquad (2.12)$$
Remark 2.1. In general, the a priori bound $E$ in (2.1) and (2.2) is not known exactly. In this case, replacing $\varepsilon$ in (2.11) by
$$\varepsilon = y_0 \left( \log \frac{1}{\delta} \right)^{-1/2}, \qquad (2.13)$$
there holds the estimate
$$\|f(\cdot+iy) - f_{\varepsilon,\delta}(\cdot+iy)\| \le \frac{\sqrt{2}\, e^{-2}\, y_0^2\, E}{(y_0 - y)^2} \left( \log \frac{1}{\delta} \right)^{-1} (1 + o(1)), \qquad \delta \to 0, \qquad (2.14)$$
where $E$ is only a bounded positive constant that need not be known exactly. This choice is helpful in concrete computation.
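The rule (2.13) is straightforward to evaluate; as a sketch (not the authors' code), with $y_0 = 1$ and $\delta = 0.01$ it reproduces the value $\varepsilon \approx 0.4660$ that appears in the a priori columns of Tables 1 and 4:

```python
import math

def a_priori_eps(y0, delta):
    # a priori rule (2.13): eps = y0 * (log(1/delta))**(-1/2)
    return y0 * math.log(1.0 / delta) ** (-0.5)

print(round(a_priori_eps(1.0, 0.01), 4))  # 0.466
```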
It is obvious that we cannot obtain the convergence of $f_{\varepsilon,\delta}$ at $|y| = y_0$ from estimate (2.12). Now we will give the error estimate at $y = y_0$ under the stronger a priori assumption
$$\|f(\cdot + iy_0)\|_p \le E, \qquad p > 0, \qquad (2.15)$$
where $\|\cdot\|_p$ denotes the norm in the Sobolev space $H^p(\mathbb{R})$ defined by
$$\|f\|_p^2 := \int_{\mathbb{R}} (1 + \xi^2)^p\, |\hat f(\xi)|^2\, d\xi. \qquad (2.16)$$
Theorem 2.2. Assume conditions (1.5) and (2.15) hold. If we take the regularization parameter as
$$\varepsilon = y_0 \left( \log\left\{ \theta(p, y_0)\, \frac{E}{\delta} \left( \log \theta(p, y_0) \frac{E}{\delta} \right)^{-\frac{p}{p+2}} \right\} \right)^{-1/2}, \qquad (2.17)$$
where $\theta(p, y_0) = 2c(p)\, y_0^{\frac{2p}{p+2}}$ and $c(p) = 4^{-\frac{p}{p+2}}$, then there holds the error estimate:
$$\|f(\cdot+iy_0) - f_{\varepsilon,\delta}(\cdot+iy_0)\| \le \theta(p, y_0)\, E \left( \log \frac{\theta(p, y_0)\, E}{\delta} \right)^{-\frac{p}{p+2}} (1 + o(1)), \qquad \delta \to 0. \qquad (2.18)$$
Proof. Combining (2.15) and (2.4), it is easy to see that there holds
$$\|\widehat{f(\cdot+iy_0)} - \widehat{f_\varepsilon(\cdot+iy_0)}\|^2 = \int_{\mathbb{R}} \left| e^{-\xi y_0} (1 - e^{-\varepsilon^2\xi^2/4})\, \widehat{f(\cdot+iy_0)}(\xi)\, e^{\xi y_0}\, (1+\xi^2)^{p/2} (1+\xi^2)^{-p/2} \right|^2 d\xi \le \max_{\xi \in \mathbb{R}} \left| (1 - e^{-\varepsilon^2\xi^2/4}) (1+\xi^2)^{-p/2} \right|^2 E^2 =: \max_{\xi \in \mathbb{R}} A(\xi, \varepsilon, p)\, E^2. \qquad (2.19)$$
For a fixed constant $\xi_0 > 0$, we note that there hold $A(\xi, \varepsilon, p) \le \xi_0^{-2p}$ for $|\xi| \ge \xi_0$, and $A(\xi, \varepsilon, p) \le \left( \frac{\varepsilon^2 \xi_0^2}{4} \right)^2$ for $|\xi| < \xi_0$, respectively. We especially set $\xi_0^{-p} = \frac{\varepsilon^2 \xi_0^2}{4}$ and get $\xi_0 = (4/\varepsilon^2)^{\frac{1}{p+2}}$. This yields
$$A(\xi, \varepsilon, p) \le \left( 4^{-\frac{p}{p+2}}\, \varepsilon^{\frac{2p}{p+2}} \right)^2. \qquad (2.20)$$
Therefore,
$$\|\widehat{f(\cdot+iy_0)} - \widehat{f_\varepsilon(\cdot+iy_0)}\| \le c(p)\, \varepsilon^{\frac{2p}{p+2}}\, E, \qquad (2.21)$$
where $c(p) = 4^{-\frac{p}{p+2}}$. Combining (2.9) and (2.21), it follows that
$$\|f(\cdot+iy_0) - f_{\varepsilon,\delta}(\cdot+iy_0)\| \le c(p)\, \varepsilon^{\frac{2p}{p+2}}\, E + e^{y_0^2/\varepsilon^2}\, \delta. \qquad (2.22)$$
To minimize the right-hand side of (2.22), we let $c(p)\, \varepsilon^{\frac{2p}{p+2}} E = e^{y_0^2/\varepsilon^2}\, \delta$ and denote $e^{-y_0^2/\varepsilon^2} =: \rho$, so that $\varepsilon = y_0 (\log(1/\rho))^{-1/2}$. A simple computation shows
$$\rho \left( \log\frac{1}{\rho} \right)^{-\frac{p}{p+2}} = \frac{1}{c(p)\, y_0^{\frac{2p}{p+2}}} \cdot \frac{\delta}{E}.$$
By using (1.12) in Lemma 1.1, we obtain
$$\varepsilon = y_0 \left( \log\left\{ \theta(p, y_0)\, \frac{E}{\delta} \left( \log \theta(p, y_0) \frac{E}{\delta} \right)^{-\frac{p}{p+2}} \right\} \right)^{-1/2} (1 + o(1)), \qquad (2.23)$$
where $\theta(p, y_0) = 2c(p)\, y_0^{\frac{2p}{p+2}}$. Substituting (2.23) into (2.22) yields the error estimate (2.18). The proof is completed.
From (2.18), it is obvious that
$$\|f(\cdot+iy_0) - f_{\varepsilon,\delta}(\cdot+iy_0)\| \to 0 \quad \text{as } \delta \to 0.$$
3. The error estimate with an a posteriori parameter choice
In this section, we consider an a posteriori regularization parameter choice rule. The most common a posteriori rule is Morozov's discrepancy principle; however, with it, it is too hard to obtain a stable error estimate for our method. A new choice rule will be given below. Choose the regularization parameter $\varepsilon$ as the solution of the equation
$$d(\varepsilon) := \|(e^{-\varepsilon^2\xi^2/4} - 1)\, \hat f^\delta\| = \tau\delta + \frac{1}{\log\log(1/\delta)}, \qquad (3.1)$$
where $\tau > 0$ is a constant. To establish existence and uniqueness of the solution of equation (3.1), we need the following lemma:
Lemma 3.1. If $\delta > 0$, then there hold:
(a) $d(\varepsilon)$ is a continuous function;
(b) $\lim_{\varepsilon \to 0} d(\varepsilon) = 0$;
(c) $\lim_{\varepsilon \to +\infty} d(\varepsilon) = \|\hat f^\delta\|$;
(d) $d(\varepsilon)$ is a strictly increasing function.
The proof is very easy and we omit it here.
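Because $d(\varepsilon)$ is continuous and strictly increasing, equation (3.1) can be solved by simple bisection. The sketch below is illustrative (it assumes the reading $d(\varepsilon) = \|(e^{-\varepsilon^2\xi^2/4} - 1)\hat f^\delta\|$ and a uniform grid; it is not the paper's Matlab code) and evaluates $d(\varepsilon)$ via the FFT:

```python
import numpy as np

def solve_discrepancy(f_delta, L, delta, tau=1.1):
    """Bisection for the a posteriori rule (3.1):
    find eps with d(eps) = tau*delta + 1/log(log(1/delta))."""
    M = f_delta.size
    dx = 2.0 * L / M
    xi = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)            # frequency grid
    fhat = np.fft.fft(f_delta) * dx / np.sqrt(2 * np.pi)  # approximates (1.2)
    dxi = 2.0 * np.pi / (M * dx)                          # frequency spacing

    def d(eps):
        w = (np.exp(-eps**2 * xi**2 / 4.0) - 1.0) * fhat
        return np.sqrt(np.sum(np.abs(w) ** 2) * dxi)      # discrete L2 norm in xi

    target = tau * delta + 1.0 / np.log(np.log(1.0 / delta))
    lo, hi = 1e-8, 1e4   # d is strictly increasing (Lemma 3.1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if d(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

x = np.linspace(-10.0, 10.0, 256, endpoint=False)
eps_star = solve_discrepancy(np.exp(-x**2), 10.0, 0.01)
print(eps_star)
```

By Lemma 3.1 a solution exists whenever the right-hand side of (3.1) lies strictly between $0$ and $\|\hat f^\delta\|$, which is exactly the condition imposed below.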
To ensure the existence and uniqueness of the solution of equation (3.1), by Lemma 3.1, we can choose the constant $\tau$ such that $0 < \tau\delta + \frac{1}{\log\log(1/\delta)} < \|\hat f^\delta\|$. Denote
$$\Delta(x, y) := f_{\varepsilon,\delta}(x+iy) - f(x+iy). \qquad (3.2)$$
We give the main result of this section as follows; here we only consider the case $y > 0$:
Theorem 3.1. Assume conditions (1.5), (2.1) and (2.2) hold, and let $\tau > 0$ be such that $0 < \tau\delta + \frac{1}{\log\log(1/\delta)} < \|\hat f^\delta\|$. Taking the solution of Eq. (3.1) as the regularization parameter, there holds the error estimate:
$$\|f(\cdot+iy) - f_{\varepsilon,\delta}(\cdot+iy)\| \le (E + o(1))^{\frac{y}{y_0}} \left( 2\tau\delta + \frac{1}{\log\log(1/\delta)} \right)^{\frac{y_0 - y}{y_0}}, \qquad \delta \to 0.$$
Proof. By (3.2), the Parseval formula and the triangle inequality, we have
$$\|\Delta(\cdot, 0)\| = \|f_{\varepsilon,\delta}(x) - f(x)\| \le \|(e^{-\varepsilon^2\xi^2/4} - 1)\, \hat f^\delta\| + \delta. \qquad (3.3)$$
Moreover, equation (3.1) leads to
$$\|\Delta(\cdot, 0)\| \le 2\tau\delta + \frac{1}{\log\log(1/\delta)}. \qquad (3.4)$$
In addition,
$$\|\Delta(\cdot, y_0)\| = \left\| e^{-\varepsilon^2\xi^2/4}\, e^{-\xi y_0}\, (\hat f^\delta - \hat f) + (e^{-\varepsilon^2\xi^2/4} - 1)\, e^{-\xi y_0}\, \hat f \right\| \le \max_{\xi \in \mathbb{R}} e^{-\varepsilon^2\xi^2/4}\, e^{-\xi y_0}\, \delta + E \le e^{y_0^2/\varepsilon^2}\, \delta + E. \qquad (3.5)$$
Due to the choice rule (3.1) and the a priori assumptions (2.1), (2.2), we can see that
$$\tau\delta + \frac{1}{\log\log(1/\delta)} = \|(e^{-\varepsilon^2\xi^2/4} - 1)(\hat f^\delta - \hat f + \hat f)\| \le \|(e^{-\varepsilon^2\xi^2/4} - 1)(\hat f^\delta - \hat f)\| + \|(e^{-\varepsilon^2\xi^2/4} - 1)\, \hat f\|$$
$$\le \delta + \left( \int_{-\infty}^{0} \left| (e^{-\varepsilon^2\xi^2/4} - 1)\, e^{\xi y_0}\, \widehat{f(\cdot+iy_0)}(\xi) \right|^2 d\xi + \int_{0}^{+\infty} \left| (e^{-\varepsilon^2\xi^2/4} - 1)\, e^{-\xi y_0}\, \widehat{f(\cdot-iy_0)}(\xi) \right|^2 d\xi \right)^{1/2}$$
$$\le \delta + \left[ \max_{\xi \le 0} \left| (e^{-\varepsilon^2\xi^2/4} - 1)\, e^{\xi y_0} \right|^2 + \max_{\xi \ge 0} \left| (e^{-\varepsilon^2\xi^2/4} - 1)\, e^{-\xi y_0} \right|^2 \right]^{1/2} E. \qquad (3.6)$$
Since $|(e^{-\varepsilon^2\xi^2/4} - 1)\, e^{\xi y_0}| \le \frac{\varepsilon^2\xi^2}{4}\, e^{\xi y_0} =: \psi(\xi)$ for $\xi \le 0$, and
$$\psi'(\xi) = \frac{\varepsilon^2 \xi}{4}\, e^{\xi y_0}\, (2 + \xi y_0) = 0$$
leads to $\xi = -2/y_0$ or $\xi = 0$, we obtain $|(e^{-\varepsilon^2\xi^2/4} - 1)\, e^{\xi y_0}| \le e^{-2}\, \varepsilon^2 / y_0^2$. For $\xi \ge 0$, the same bound is obtained. By (3.6) (dropping the nonnegative term $(\tau - 1)\delta$, for $\tau \ge 1$), we get
$$\frac{1}{\log\log(1/\delta)} \le \frac{\sqrt{2}\, e^{-2}\, \varepsilon^2}{y_0^2}\, E. \qquad (3.7)$$
Inequality (3.7) also leads to
$$\frac{y_0^2}{\varepsilon^2} \le \frac{\sqrt{2}\, E}{e^{2}}\, \log\log\frac{1}{\delta}. \qquad (3.8)$$
Inserting (3.8) into (3.5), it follows that
$$\|\Delta(\cdot, y_0)\| \le e^{(\sqrt{2} E / e^2) \log\log(1/\delta)}\, \delta + E.$$
Noting that
$$\lim_{\delta \to 0} e^{(\sqrt{2} E / e^2) \log\log(1/\delta)}\, \delta = \lim_{\delta \to 0} \left( \log\frac{1}{\delta} \right)^{\sqrt{2} E / e^2} \delta = 0,$$
we can assume that
$$\|\Delta(\cdot, y_0)\| \le E + o(1), \qquad \delta \to 0. \qquad (3.9)$$
It is easy to see that
$$\|\Delta(\cdot, y)\|^2 = \int_{\mathbb{R}} \left| e^{-\varepsilon^2\xi^2/4}\, e^{-\xi y}\, \hat f^\delta(\xi) - e^{-\xi y}\, \hat f(\xi) \right|^2 d\xi = \int_{\mathbb{R}} \left| e^{-\xi y} \left( e^{-\varepsilon^2\xi^2/4}\, \hat f^\delta(\xi) - \hat f(\xi) \right) \right|^2 d\xi. \qquad (3.10)$$
By using the Hölder inequality, there holds
$$\|\Delta(\cdot, y)\|^2 = \int_{\mathbb{R}} e^{-2\xi y}\, |\Phi(\xi)|^{\frac{2y}{y_0}}\, |\Phi(\xi)|^{2\left(1 - \frac{y}{y_0}\right)}\, d\xi \le \left( \int_{\mathbb{R}} e^{-2\xi y_0}\, |\Phi(\xi)|^2\, d\xi \right)^{\frac{y}{y_0}} \left( \int_{\mathbb{R}} |\Phi(\xi)|^2\, d\xi \right)^{\frac{y_0 - y}{y_0}}, \qquad (3.11)$$
where $\Phi(\xi) := e^{-\varepsilon^2\xi^2/4}\, \hat f^\delta(\xi) - \hat f(\xi)$. By (3.4) and (3.9), it is obtained that
$$\|\Delta(\cdot, y)\|^2 \le \|\Delta(\cdot, y_0)\|^{\frac{2y}{y_0}}\, \|\Delta(\cdot, 0)\|^{\frac{2(y_0 - y)}{y_0}} \le (E + o(1))^{\frac{2y}{y_0}} \left( 2\tau\delta + \frac{1}{\log\log(1/\delta)} \right)^{\frac{2(y_0 - y)}{y_0}}, \qquad \delta \to 0, \qquad (3.12)$$
i.e.,
$$\|\Delta(\cdot, y)\| \le (E + o(1))^{\frac{y}{y_0}} \left( 2\tau\delta + \frac{1}{\log\log(1/\delta)} \right)^{1 - \frac{y}{y_0}}. \qquad (3.13)$$
This completes the proof.
Certainly, the choice rule (3.1) can be modified into the more general form
$$\|(e^{-\varepsilon^2\xi^2/4} - 1)\, \hat f^\delta\| = \tau\delta + \phi(\delta), \qquad (3.14)$$
where the function $\phi(\delta)$ satisfies $\lim_{\delta \to 0} \phi(\delta) = 0$. According to the proof of Theorem 3.1, if there holds
$$\lim_{\delta \to 0} e^{\sqrt{2} E / (e^2 \phi(\delta))}\, \delta = C, \qquad (3.15)$$
then there must be
$$\|f(\cdot+iy) - f_{\varepsilon,\delta}(\cdot+iy)\| \le (C + E)^{\frac{y}{y_0}} \left[ 2\tau\delta + \phi(\delta) \right]^{\frac{y_0 - y}{y_0}}, \qquad \delta \to 0, \qquad (3.16)$$
where $C$ is a positive constant. For the case $y < 0$, the results are analogous.
Remark 3.1.
(1) The condition $\|f^\delta\| > \delta$ certainly makes sense; otherwise $f_{\varepsilon,\delta}(x+iy) = 0$ would be an acceptable approximation to $f(x+iy)$, see page 51 in [14].
(2) In practical computations, we can in general take $\tau = 1.1$ in (3.1).
4. Numerical examples
In this section, to verify the validity of the regularization method discussed in Sections 2 and 3, the same two numerical examples as in [7,8] are implemented. In the last two sections, the error estimates are mainly based on the Fourier transform; therefore, the method described in (1.11) has the advantage that it can be easily implemented by fast Fourier transform techniques. We implement the proposed algorithm in a Matlab program. In these numerical experiments, we always take $y_0 = 1$ and fix the domain
$$\{z = x+iy \in \mathbb{C} \mid |x| \le 10,\ 0 < y < 1\}.$$
Suppose the vector $F$ represents samples of the function $f(x)$. We then add a perturbation to the input data $F$ and obtain the perturbed data
$$F^\delta = F + \mathrm{randn}(\mathrm{size}(F)), \qquad (4.1)$$
Table 1
Example 4.1: Errors for the MD with M = 201, δ = 0.01.

y    | a priori: ε, e_r, e_i        | a posteriori: ε, e_r, e_i
0.1  | 0.4660, 0.3118, 0.0629       | 0.1648, 0.0884, 0.0172
0.3  | 0.3805, 0.2505, 0.1430       | 0.1622, 0.0658, 0.0519
where the function randn(·) generates arrays of random numbers whose elements are normally distributed with mean 0 and variance $\sigma^2 = 1$. We subdivide the interval $[-10, 10]$ into $M$ equal parts. The noise level can then be calculated by
$$\delta = \|F^\delta - F\|_{l^2} := \sqrt{ \frac{1}{M+1} \sum_{n=1}^{M+1} |F^\delta(n) - F(n)|^2 }. \qquad (4.2)$$
In the present paper, we use $e_r$ and $e_i$, computed by means of (4.2), to denote the absolute errors of the real and imaginary parts, respectively.
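The discrete error measure (4.2) is a root-mean-square over the grid points. The sketch below illustrates it; the 0.01 scale factor on the perturbation is an assumption (the text of (4.1) leaves the noise magnitude implicit):

```python
import numpy as np

def rms_error(F_delta, F):
    # discrete l2 error (4.2): sqrt( (1/(M+1)) * sum_n |F_delta(n) - F(n)|^2 )
    return np.sqrt(np.mean(np.abs(F_delta - F) ** 2))

x = np.linspace(-10, 10, 201)                       # M = 200 parts, M+1 points
F = np.exp(-x**2)
rng = np.random.default_rng(0)
F_delta = F + 0.01 * rng.standard_normal(F.size)    # assumed noise scale
print(rms_error(F_delta, F))                        # close to the nominal 0.01
```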
Firstly, we would like to compare the a posteriori parameter choice rule (3.1) with the a priori parameter choice rule (2.13). Newton's dichotomy (bisection) is used to solve equation (3.1), where we choose $\tau = 1.1$.
Secondly, we would like to compare the numerical performance of the mollification regularization method (MD) of the present paper with two other regularization methods, i.e., the Fourier regularization method (FD) in [8] and the modified Tikhonov regularization method (TD) in [7]. For the reader's convenience, we list the computational formulas of the FD and TD as follows:
$$\text{FD:} \quad f_{\xi_{\max},\delta}(z) := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{i\xi x}\, e^{-\xi y}\, \hat f^\delta(\xi)\, \chi_{\max}(\xi)\, d\xi, \qquad (4.3)$$
where
$$\chi_{\max}(\xi) = \begin{cases} 1, & |\xi| \le \xi_{\max}, \\ 0, & \text{otherwise}, \end{cases}$$
and $\xi_{\max}$ is the regularization parameter to be determined.
$$\text{TD:} \quad f_{\alpha,\delta} := \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{0} e^{i\xi x}\, \frac{e^{-\xi y}}{1 + \alpha e^{-2\xi y_0}}\, \hat f^\delta(\xi)\, d\xi + \frac{1}{\sqrt{2\pi}} \int_{0}^{+\infty} e^{i\xi x}\, e^{-\xi y}\, \hat f^\delta(\xi)\, d\xi. \qquad (4.4)$$
Example 4.1. The function
$$f(z) = e^{-z^2} = e^{-(x+iy)^2} = e^{y^2 - x^2} (\cos 2xy - i \sin 2xy)$$
is analytic in the domain
$$\Omega = \{z = x+iy \in \mathbb{C} \mid x \in \mathbb{R},\ |y| \le 1\}$$
with
$$f(z)|_{y=0} = e^{-x^2} \in L^2(\mathbb{R}),$$
and
$$\operatorname{Re} f(z) = e^{y^2 - x^2} \cos 2xy, \qquad \operatorname{Im} f(z) = -e^{y^2 - x^2} \sin 2xy.$$
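A quick, illustrative spot-check of this closed form at a sample point:

```python
import numpy as np

# verify e^{-z^2} = e^{y^2 - x^2} (cos 2xy - i sin 2xy) at one point
x, y = 1.3, 0.4
z = x + 1j * y
lhs = np.exp(-z**2)
rhs = np.exp(y**2 - x**2) * (np.cos(2 * x * y) - 1j * np.sin(2 * x * y))
print(abs(lhs - rhs))  # ~0 (up to rounding)
```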
In Figs. 1 and 2 and Table 1, we compare the numerical effectiveness of the a priori and a posteriori parameter choice rules. It is not difficult to see that both rules achieve satisfactory results; however, one can see that the a posteriori rule performs better than the a priori rule.
[Figure omitted: exact solution vs. a priori and a posteriori approximations.]
Fig. 1. Example 4.1: Real parts (a) at y = 0.1 with δ = 0.01, (b) at y = 0.3 with δ = 0.001 for the MD.
Table 2
Example 4.1: Errors e_r with M = 101, δ = 0.01 for the three methods.

y    | FD     | MD     | TD
0.5  | 0.0676 | 0.3529 | 0.0730
0.7  | 0.1279 | 0.5688 | 0.1561
0.9  | 0.2982 | 0.9981 | 0.2901

Table 3
Example 4.1: Errors e_i with M = 101, δ = 0.01 for the three methods.

y    | FD     | MD     | TD
0.5  | 0.0449 | 0.2794 | 0.0521
0.7  | 0.1191 | 0.5255 | 0.1484
0.9  | 0.2945 | 0.9737 | 0.2859
[Figure omitted: exact solution vs. a priori and a posteriori approximations.]
Fig. 2. Example 4.1: Imaginary parts (a) at y = 0.1 with δ = 0.01, (b) at y = 0.3 with δ = 0.001 for the MD.
Figs. 3 and 4 and Tables 2 and 3 give comparisons of the numerical results of the MD, FD and TD, from which we can see that although the FD and TD perform better than the MD, the MD is still a satisfactory method.
In the following example, we only compare the numerical performance of the a priori and the a posteriori rules for the MD.
Example 4.2. The highly oscillatory function
$$f(z) = \cos z = \cos(x+iy) = \cosh y \cos x - i \sinh y \sin x$$
is also analytic in the domain
$$\Omega = \{z = x+iy \in \mathbb{C} \mid x \in \mathbb{R},\ 0 < y < 1\}$$
with
$$f(z)|_{y=0} = \cos x,$$
[Figure omitted: exact solution vs. FD, MD and TD approximations.]
Fig. 3. Example 4.1: Comparisons of real parts for the FD, MD and TD at (a) y = 0.5, (b) y = 0.7, (c) y = 0.9 with δ = 0.01, respectively.
[Figure omitted: exact solution vs. FD, MD and TD approximations.]
Fig. 4. Example 4.1: Comparisons of imaginary parts for the FD, MD and TD at (a) y = 0.5, (b) y = 0.7, (c) y = 0.9 with δ = 0.01, respectively.
[Figure omitted: exact solution vs. a priori and a posteriori approximations.]
Fig. 5. Example 4.2: Comparisons of real parts at (a) y = 0.1, (b) y = 0.3 for the MD with δ = 0.01.
and
$$\operatorname{Re} f(z) = \cosh y \cos x, \qquad \operatorname{Im} f(z) = -\sinh y \sin x.$$
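Again, a quick, illustrative spot-check of the closed form at a sample point:

```python
import numpy as np

# verify cos(x+iy) = cosh(y) cos(x) - i sinh(y) sin(x) at one point
x, y = 2.0, 0.7
z = x + 1j * y
lhs = np.cos(z)
rhs = np.cosh(y) * np.cos(x) - 1j * np.sinh(y) * np.sin(x)
print(abs(lhs - rhs))  # ~0 (up to rounding)
```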
Although $\cos x \notin L^2(\mathbb{R})$, Figs. 5 and 6 show that the computed approximation is still good. Table 4 presents the comparison results for Example 4.2 and further reflects that the a posteriori rule has numerical advantages over the a priori rule. In addition, from Examples 4.1 and 4.2 we can see that the smaller $|y|$ is, the better the approximation of $f(x+iy)$.
Table 4
Example 4.2: Errors with M = 101, δ = 0.01 for the MD.

y    | a priori: ε, e_r, e_i        | a posteriori: ε, e_r, e_i
0.1  | 0.4660, 0.3809, 0.0836       | 0.1891, 0.0831, 0.0449
0.3  | 0.4660, 0.4086, 0.2378       | 0.1884, 0.2513, 0.1419
[Figure omitted: exact solution vs. a priori and a posteriori approximations.]
Fig. 6. Example 4.2: Comparisons of imaginary parts at (a) y = 0.1, (b) y = 0.3 for the MD with δ = 0.01.
5. Conclusion
In this paper, a mollification regularization method with the Gauss kernel is used to give a stable analytic continuation of an analytic function on a strip domain. The method can be numerically implemented by the fast Fourier transform. For a priori and a posteriori choices of the regularization parameter, convergence error estimates for the approximations are obtained. Numerical verification using two examples shows that the proposed method is effective and competitive with other regularization methods. However, there are also some limitations of the mollification method with the Gauss kernel used in this paper: the method only works for special domains such as the strip domain considered here; it is very difficult to obtain order-optimal estimates; and it does not work for ill-posed problems whose ill-posedness is equal to or more severe than that of the backward heat equation.
Acknowledgments
The authors thank the referees for useful comments.
References
[1] R.G. Airapetyan, A.G. Ramm, Numerical inversion of the Laplace transform from the real axis, J. Math. Anal. Appl. 248 (2000) 572–587.
[2] I. Ali, V.K. Tuan, Application of basic hypergeometric series to stable analytic continuation, J. Comput. Appl. Math. 118 (2000) 193–202.
[4] H.W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Boston, 1996.
[5] C.L. Epstein, Introduction to the Mathematics of Medical Imaging, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2008.
[6] J. Franklin, Analytic continuation by the fast Fourier transform, SIAM J. Sci. Stat. Comput. 11 (1990) 112–122.
[7] C.L. Fu, Z.L. Deng, X.L. Feng, F.F. Dou, A modified Tikhonov regularization for stable analytic continuation, SIAM J. Numer. Anal. 47 (4) (2009) 2982–3000.
[8] C.L. Fu, F.F. Dou, X.L. Feng, Z. Qian, A simple regularization method for stable analytic continuation, Inverse Problems 24 (2008) 065003 (15 pp).
[9] D.N. Hào, H. Sahli, Stable analytic continuation by mollification and the fast Fourier transform, in: Method of Complex and Clifford Analysis (Proceedings of ICAM Hanoi 2004), 2004, pp. 143–152.
[10] D.N. Hào, A mollification method for ill-posed problems, Numer. Math. 68 (1994) 469–506.
[11] D.N. Hào, Methods for Inverse Heat Conduction Equation Problems, Peter Lang, Frankfurt am Main/Bern/New York/Paris, 1998.
[12] P. Henrici, Applied and Computational Complex Analysis, vol. 1, John Wiley & Sons, Inc., New York, 1988.
[13] P. Jonas, A.K. Louis, Approximate inverse for a one-dimensional inverse heat conduction problem, Inverse Problems 16 (2000) 175–185.
[14] A. Kirsch, An Introduction to the Mathematical Theory of Inverse Problems, Springer, Berlin, 1996.
[15] M.M. Lavrent'ev, V.G. Romanov, S.P. Shishatskii, Ill-posed Problems of Mathematical Physics and Analysis, in: Translations of Mathematical Monographs, vol. 64, American Mathematical Society, Providence, RI, 1986.
[16] A.K. Louis, Approximate inverse for linear and some nonlinear problems, Inverse Problems 12 (1996) 175–190.
[17] A.K. Louis, A unified approach to regularization methods for linear ill-posed problems, Inverse Problems 15 (1999) 489–498.
[18] D.A. Murio, The Mollification Method and the Numerical Solution of Ill-posed Problems, Wiley-Interscience, New York, 1993.
[19] A.G. Ramm, The ground-penetrating radar problem, III, J. Inverse Ill-posed Problems 8 (2000) 23–30.
[20] A. Rieder, T. Schuster, The approximate inverse in action. II: Convergence and stability, Math. Comput. 72 (2003) 1399–1415.
[21] T. Schuster, The Method of Approximate Inverse: Theory and Applications, Lecture Notes in Mathematics, vol. 1906, Springer, Berlin/Heidelberg, 2007.
[22] I. Sabba Stefanescu, On the stable analytic continuation with a condition of uniform boundedness, J. Math. Phys. 27 (1986) 2657–2686.
[23] U. Tautenhahn, Optimality for ill-posed problems under general source conditions, Numer. Funct. Anal. Optim. 19 (1998) 377–398.
[24] V.K. Tuan, Stable analytic continuation using hypergeometric summation, Inverse Problems 16 (2000) 75–78.
