This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIP.2015.2401430, IEEE Transactions on Image Processing
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. ?, NO. ?, ? 2015
I. INTRODUCTION

In many image processing applications in medicine and engineering, image restoration and reconstruction play an important role for processing and analysis; see for instance [21], [22]. In this paper, we consider a basic forward image restoration model

g = Hf + n,  (1)

the regularized objective

J(f) = ‖Hf − g‖₂² + β Φ(f),  (2)

with

Φ(f) = Σ_{i∈I} φ(‖D_i f‖₂),  (3)

and the relaxed objective

J(f, u) = ‖Hf − g‖₂² + β Ψ(f) + μ‖Df − u‖₂² + βθα Σ_{i∈I} ‖u_i‖₂,  (4)

where

Ψ(f) = Σ_{i∈I} ψ(‖D_i f‖₂)

with

φ(t) = ψ(t) + α|t|,  α = φ′(0⁺),
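The splitting of the potential into a smooth part plus a weighted absolute value can be checked numerically. The sketch below is illustrative and not from the paper; it assumes the Table I potential φ(t) = αt/(1 + αt), for which φ′(0⁺) = α, with the parameter value α = 0.5 taken from the experimental section.

```python
# Illustrative check (assumed parameter value) of the splitting
# phi(t) = psi(t) + alpha*|t| with alpha = phi'(0+),
# for the Table I potential phi(t) = alpha*|t| / (1 + alpha*|t|).
ALPHA = 0.5

def phi(t):
    # nonconvex potential, nonsmooth at t = 0
    return ALPHA * abs(t) / (1.0 + ALPHA * abs(t))

def psi(t):
    # smooth remainder psi(t) = phi(t) - alpha*|t|
    return phi(t) - ALPHA * abs(t)

h = 1e-6
right_slope_phi = (phi(h) - phi(0.0)) / h   # approaches alpha
right_slope_psi = (psi(h) - psi(0.0)) / h   # approaches 0
```

The right-hand slope of φ at zero is α while ψ has zero slope there, which is exactly why ψ can be treated as the smooth part of the objective.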
1057-7149 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
TABLE I
REGULARIZATION FUNCTIONS.

φ(t)           φ′(t)           φ″(t)            ψ(t) = φ(t) − αt     ψ′(t)                     ψ″(t)
αt/(1 + αt)    α/(1 + αt)²     −2α²/(1 + αt)³   αt/(1 + αt) − αt     −α²t(2 + αt)/(1 + αt)²    −2α²/(1 + αt)³
log(αt + 1)    α/(1 + αt)      −α²/(1 + αt)²    log(αt + 1) − αt     −α²t/(1 + αt)             −α²/(1 + αt)²
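As a sanity check, not part of the paper, the derivative columns of Table I for the first potential can be compared against finite differences; the parameter name `alpha` and its value are illustrative.

```python
# Finite-difference sanity check (illustrative) of the derivative columns of
# Table I for phi(t) = alpha*t/(1 + alpha*t) on t >= 0.
alpha = 0.5

def phi(t):
    return alpha * t / (1.0 + alpha * t)

def dphi(t):
    # claimed phi'(t) = alpha / (1 + alpha*t)^2
    return alpha / (1.0 + alpha * t) ** 2

def dpsi(t):
    # claimed psi'(t) = phi'(t) - alpha
    #                 = -alpha^2 * t * (2 + alpha*t) / (1 + alpha*t)^2
    return -alpha ** 2 * t * (2.0 + alpha * t) / (1.0 + alpha * t) ** 2

t, h = 1.3, 1e-6
fd = (phi(t + h) - phi(t - h)) / (2.0 * h)  # central difference for phi'
gap = abs((dphi(t) - alpha) - dpsi(t))      # identity psi'(t) = phi'(t) - alpha
```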
Σ_{i∈I} DᵢᵀDᵢ is greater than a positive constant, then the sequence generated by this algorithm converges to a critical point of (4). The proof of this theoretical result is based on the Kurdyka–Łojasiewicz (KL) inequality studied in [2], [3]. On the other hand, we extend the analysis to (4) with box constraints and obtain similar convergence results for the related minimization algorithm.
Their approach in [2], [3] relies on assuming that the objective function to be minimized satisfies the so-called Kurdyka–Łojasiewicz (KL) property [25], [28], which was developed for nonsmooth functions by Bolte et al. [8], [9]. In both of these works, the suggested approach gains its strength from the fact that the class of functions satisfying the KL property is considerably large and covers a wealth of nonconvex nonsmooth functions recently arising in many fundamental applications. Methods analyzed in this framework include the proximal-point algorithm for minimizing a proper and lower semicontinuous (lsc) function and the forward-backward scheme for minimizing the sum of a proper lsc function and a differentiable (nonconvex) function, see [1], [11], [12], [16], [36]; these methods have been proved to possess good convergence properties in the nonconvex case. In [26], Le Thi et al. considered and studied the difference of two proper lsc convex functions (a so-called DC function) as an objective function. By using the KL property, they presented a convergence analysis of the DC optimization algorithm when one of the lsc functions is strongly convex.
The paper is organized as follows. In Section 2, we present the alternating minimization algorithm, fix some notation, and collect a few preliminary basic facts on nonsmooth analysis. In Section 3, we study the convergence analysis of the algorithms in [34]. In Section 4, we consider the model with box constraints and analyze the related algorithm. In Section 5, numerical examples are given to demonstrate the theoretical results. Finally, some concluding remarks are given in Section 6.
II. ALGORITHM AND SOME PRELIMINARIES

A. The Alternating Minimization Algorithm

To solve J of the form given by (4), a nonsmooth graduated nonconvexity scheme is employed in [34]. The main idea is to consider a sequence

0 = θ₀ < θ₁ < ⋯ < θₖ < ⋯ < θₙ = 1,  (5)
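The continuation over the sequence θₖ can be sketched as the following loop. This is a hypothetical skeleton, not the authors' code: `inner_solve` stands in for one alternating-minimization pass at a fixed θ, and the toy usage line is made up.

```python
# Hypothetical skeleton of the graduated nonconvexity continuation in (5):
# theta_k = k/n increases from 0 (more convex relaxation) to 1 (full
# nonconvex nonsmooth objective); each stage is warm-started from the last.
def gnc_continuation(f0, n, inner_solve):
    f = f0
    for k in range(n + 1):
        theta = k / n              # 0 = theta_0 < ... < theta_n = 1
        f = inner_solve(f, theta)  # one alternating-minimization pass
    return f

# toy usage: each stage halves the distance of the iterate to 1.0
result = gnc_continuation(0.0, 10, lambda f, th: f + 0.5 * (1.0 - f))
```

Warm-starting each stage from the previous solution is the point of the scheme: early, nearly convex stages supply a good initial guess for the later nonconvex ones.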
J(f, u) = ‖Hf − g‖₂² + β Ψ(f) + μ‖Df − u‖₂² + βθₖα Σ_{i∈I} ‖u_i‖₂,  (6)

where Ψ(f) = Σ_{i∈I} ψ(‖D_i f‖₂), φ(t) = ψ(t) + α|t|, and α = φ′(0⁺).
From the properties of φ and ψ introduced in Appendix A, we have Algorithm 1:
U Step:

u_i^{(j)} = max( ‖D_i f^{(j−1)}‖₂ − βθₖα/(2μ), 0 ) D_i f^{(j−1)} / ‖D_i f^{(j−1)}‖₂  for all i ∈ I;

F Step:

f^{(j)} = f^{(j−1)} + τ δf^{(j−1)}, where δf^{(j−1)} solves

A δf = (2HᵀH + 2μ Σ_{i∈I} DᵢᵀDᵢ) δf = −∇_f J(f^{(j−1)}, u^{(j)}).  (7)

(Here the iteration index is the superscript j.)

In Algorithm 1, we note that the positive definite part (2HᵀH + 2μ Σ_{i∈I} DᵢᵀDᵢ) of the Hessian matrix of J(f, u^{(j−1)}) is used in the optimization procedure. This ensures a descent direction and the coerciveness of J(f, u). Numerical examples in [34] have shown that Algorithm 1 with the step size τ = 1 can provide good restored images with neat edges.

The Kurdyka–Łojasiewicz property plays a central role in our analysis. Below, we recall the essential elements. First, we introduce some notation. For any subset S ⊆ ℝⁿ and any point x ∈ ℝⁿ, the distance from x to S is defined and denoted by dist(x, S) := inf{‖y − x‖, y ∈ S}; when S = ∅, we have that dist(x, S) = +∞ for all x.

Definition 1. (Subdifferential) The limiting subdifferential, or simply the subdifferential for short, of f at x ∈ dom f, written ∂f(x), is defined as follows: ∂f(x) = {x* ∈ ℝⁿ | ∃ xₙ → x, f(xₙ) → f(x), xₙ* ∈ ∂̂f(xₙ), xₙ* → x*}.

Definition 2. (Sublevel sets) Being given real numbers α ≤ β, we set [α ≤ f ≤ β] = {x ∈ ℝⁿ | α ≤ f(x) ≤ β}. We define similarly [α < f < β].

Definition 3. A function f : ℝⁿ × ℝᵐ → ℝ ∪ {+∞} with values f(x, u) is level-bounded in x locally uniformly in u if, for each ū ∈ ℝᵐ and α ∈ ℝ, there is a neighborhood V ∈ N(ū) such that the set {(x, u) | u ∈ V, f(x, u) ≤ α} is bounded in ℝⁿ × ℝᵐ.

Definition 4. A necessary (but not sufficient) condition for z ∈ ℝⁿ to be a minimizer of a function f : ℝⁿ → ℝ is 0 ∈ ∂f(z). A point that satisfies this requirement is called a critical point.

Definition 5. Let f : ℝⁿ → ℝ ∪ {+∞} be a proper lower semicontinuous function. The function f is said to have the Kurdyka–Łojasiewicz property at z̄ ∈ dom(∂f) (:= {z ∈ ℝⁿ | ∂f(z) ≠ ∅}) if there exist η ∈ (0, +∞], a neighborhood U of z̄, and a continuous concave function ϕ : [0, η) → ℝ₊ such that
  ϕ(0) = 0;
  ϕ is continuously differentiable on (0, η);
  ϕ′(s) > 0 for all s ∈ (0, η); and
  for all z ∈ U ∩ [f(z̄) < f < f(z̄) + η], the Kurdyka–Łojasiewicz inequality ϕ′(f(z) − f(z̄)) dist(0, ∂f(z)) ≥ 1 holds.

Let us recall a useful result from [39, Proposition 10.5 and Exercise 8.8].

Proposition 1. Let T(x, y) = f(x) + g(y) + H(x, y), where f : ℝⁿ → (−∞, +∞] is a proper lsc convex function and g : ℝᵐ → (−∞, +∞] is a proper continuously differentiable function.

The potential functions φ in Table I are semi-convex (pseudo-convex) functions and also KL functions, see [10]. According to the definition of semi-convex functions, h(t) = t² + ψ(t) is a Morse function, i.e., for each critical point t̄ of h, its Hessian ∇²h(t̄) is a nondegenerate endomorphism, and also there exist c₁, c₂ ≥ 0 such that

|h(t) − h(t̄)| ≤ c₁ ‖t − t̄‖²,  ‖∇h(t)‖ ≤ c₂ ‖t − t̄‖.

Therefore, the corresponding concave function (in Definition 5) of h(t) is given by ϕ(s) = c√s, where c is a positive constant. On the other hand, the functions φ are definitely semi-algebraic, and thus are KL functions with the form ϕ(s) = cs^{1−κ}, c > 0. As a summary, J(f, u) is a KL function, and its corresponding concave function is given by ϕ(s) = cs^{1−κ} with κ ∈ [1/2, 1), c > 0.
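The alternating U Step / F Step structure of Algorithm 1 above can be sketched on a 1-D toy problem. This is a simplified illustration, not the authors' code: it assumes H = I (pure denoising), D = forward differences, drops the smooth term βΨ from the F Step so that the step reduces to a single linear solve, and uses made-up values `thresh` and `mu` in place of the model parameters.

```python
# Minimal 1-D sketch of the alternating scheme of Algorithm 1 under the
# simplifying assumptions stated above.
def diff(f):                       # D f: forward differences
    return [f[i + 1] - f[i] for i in range(len(f) - 1)]

def u_step(f, thresh):
    # soft shrinkage of D f, mirroring u_i = max(|D_i f| - thresh, 0) * sign
    out = []
    for d in diff(f):
        mag = max(abs(d) - thresh, 0.0)
        out.append(mag if d >= 0 else -mag)
    return out

def f_step(g, u, mu):
    # solve (I + mu * D^T D) f = g + mu * D^T u by dense Gaussian elimination
    n = len(g)
    A = [[0.0] * n for _ in range(n)]
    b = list(g)
    for i in range(n):
        A[i][i] = 1.0
    for i in range(n - 1):         # accumulate mu * D^T D and mu * D^T u
        A[i][i] += mu; A[i + 1][i + 1] += mu
        A[i][i + 1] -= mu; A[i + 1][i] -= mu
        b[i] -= mu * u[i]; b[i + 1] += mu * u[i]
    for c in range(n):             # naive elimination (fine for a toy size)
        p = A[c][c]
        for r in range(c + 1, n):
            m = A[r][c] / p
            for k in range(c, n):
                A[r][k] -= m * A[c][k]
            b[r] -= m * b[c]
    f = [0.0] * n
    for r in range(n - 1, -1, -1):
        f[r] = (b[r] - sum(A[r][k] * f[k] for k in range(r + 1, n))) / A[r][r]
    return f

g = [0.0, 0.1, -0.05, 1.0, 0.95, 1.1, 0.0, 0.02]   # noisy step edge
f = list(g)
for _ in range(20):                 # alternate U and F Steps
    u = u_step(f, thresh=0.1)
    f = f_step(g, u, mu=2.0)
```

The shrinkage in the U Step keeps large differences (edges) while the quadratic F Step smooths small oscillations, which is the edge-preserving behavior the text attributes to the method.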
III. CONVERGENCE ANALYSIS

Let {(f^{(j)}, u^{(j)})} be generated by Algorithm 1. In the light of [1], [3], [11], [36], the following three conditions give a general methodology describing the main steps to establish convergence. In particular, we put in evidence how and when the KL property enters into action.

Theorem 2. Suppose that Assumptions H1, H2, H3 hold. Let {z^{(j)}} be a sequence generated by Algorithm 1. Then there are two positive numbers a and b such that the following hold.

(C1) (Sufficient decrease condition) For each j ∈ ℕ,

J(z^{(j)}) + a ‖z^{(j)} − z^{(j−1)}‖₂² ≤ J(z^{(j−1)}),  (10)

and lim_{j→∞} ( ‖f^{(j)} − f^{(j−1)}‖₂² + ‖u^{(j)} − u^{(j−1)}‖₂² ) = 0.

(iv) Define

ω_f^{(j)} = (1 − 1/τ) A (f^{(j)} − f^{(j−1)}) + β( ∇Ψ(f^{(j)}) − ∇Ψ(f^{(j−1)}) ),  j ≥ 1,  (13)

so that ω^{(j)} = (ω_f^{(j)}, ω_u^{(j)}) ∈ ∂J(f^{(j)}, u^{(j)}). Moreover, for every bounded subsequence {(f^{(jᵢ)}, u^{(jᵢ)})} of {(f^{(j)}, u^{(j)})}, we obtain ω^{(jᵢ)} → 0 as i → ∞.  (14)

Theorem 3. Suppose that Assumptions H1, H2, H3 hold and that J has the Kurdyka–Łojasiewicz property at z̄. If the initial point z^{(0)} satisfies

‖z^{(0)} − z̄‖ + 2 √( (J(z^{(0)}) − J(z̄))/a ) + (b/a) ϕ( J(z^{(0)}) − J(z̄) ) < ρ,  (15)

(a and b are the two positive numbers stated in Theorem 2, and ϕ refers to the concave function required in Definition 5), then (i) z^{(j)} ∈ B(z̄, ρ); (ii) Σ_{j=0}^{+∞} ‖z^{(j+1)} − z^{(j)}‖ < +∞; and (iii) J(z^{(j)}) → J(z̄) as j → ∞, and {z^{(j)}} converges to a critical point z̄.

The proof of Theorem 3 is given in Appendix D.
By using [2, Theorem 3.3], we can further obtain the local convergence of {z^{(j)}} to a global minimum.

Theorem 4. Let {z^{(j)}} be a sequence generated by Algorithm 1 with an initial guess z^{(0)} satisfying (C1), (C2) and (C3), and let the assumptions H1, H2, H3 hold. Suppose J has the Kurdyka–Łojasiewicz property at z̄ (a global minimum point of J). Then there exist ε > 0 and η > 0 such that the conditions ‖z^{(0)} − z̄‖ < ε and min J < J(z^{(0)}) < min J + η imply that (i) z^{(j)} converges to some z̃ with Σ_{j=1}^{+∞} ‖z^{(j+1)} − z^{(j)}‖ < +∞; and (ii) J(z̃) = min J.
Moreover, under the assumptions H1 and H2, we know that J is bounded below and coercive; due to the non-increasingness of J along the sequence {z^{(j)}}, together with [2, Theorem 3.2], we get the following convergence theorem with fewer required conditions.

Theorem 5. Under the assumptions H1, H2, H3, if J is a KL function, then any bounded sequence {z^{(j)}} generated by Algorithm 1 converges to some critical point of J. Moreover, the sequence {z^{(j)}} has a finite length, i.e. Σ_{j=0}^{+∞} ‖z^{(j+1)} − z^{(j)}‖ < +∞.
IV. THE MODEL WITH BOX CONSTRAINTS

min_{f∈ℝᵖ}  ‖Hf − g‖₂² + β Σ_{i∈I} φ(‖D_i f‖₂)
s.t.  l ≤ f ≤ r,  (17)

where l and r are fixed vectors and the inequalities are taken componentwise. According to the analysis in Section 2, we can consider the following optimization problem for image restoration:

min_{f∈ℝᵖ}  ‖Hf − g‖₂² + β Ψ(f) + βθα Σ_{i∈I} ‖D_i f‖₂
s.t.  l ≤ f ≤ r,  (18)

where Ψ(f) = Σ_{i∈I} ψ(‖D_i f‖₂). Similar to Algorithm 1, we have the following algorithm.
Algorithm 2

U Step:

u_i^{(j)} = max( ‖D_i f^{(j−1)}‖₂ − βθα/(2μ), 0 ) D_i f^{(j−1)} / ‖D_i f^{(j−1)}‖₂  for all i ∈ I;

F Step:

f^{(j)} = arg min_{f∈S} { ‖Hf − g‖₂² + β Ψ(f) + μ‖Df − u^{(j)}‖₂² },  (19)

where S = {f | l ≤ f ≤ r}.
(Here the iteration index is the superscript j.)
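The box constraint set S appears throughout Algorithm 2 via the projection P_S and the active set; both can be sketched in a few lines. This is illustrative code, not the authors' implementation, and the sample vectors are made up.

```python
# Hedged sketch of the box projection P_S and the active set for
# S = {f : l <= f <= r}, taken componentwise.
def project_box(f, l, r):
    # (P_S(f))_i = l_i if f_i < l_i;  f_i if l_i <= f_i <= r_i;  r_i otherwise
    return [min(max(fi, li), ri) for fi, li, ri in zip(f, l, r)]

def active_set(f, l, r):
    # A_s = {i : f_i = l_i} union {i : f_i = r_i}
    return {i for i, (fi, li, ri) in enumerate(zip(f, l, r))
            if fi == li or fi == ri}

l = [0.0] * 4
r = [1.0] * 4
f = project_box([-0.2, 0.4, 1.3, 1.0], l, r)   # -> [0.0, 0.4, 1.0, 1.0]
A = active_set(f, l, r)                         # -> {0, 2, 3}
```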
Next, the F Step with box constraints is solved by a reduced Newton method [23]. In order to simplify the notation, we drop the outer iteration index j where possible.

The projection onto S is given componentwise by

(P_S(f))_i = (l)_i if (f)_i < (l)_i;  (f)_i if (l)_i ≤ (f)_i ≤ (r)_i;  (r)_i if (f)_i > (r)_i,

so that (l)_i ≤ (P_S(f))_i ≤ (r)_i. The active set at f^{(s)} is

A_s := {i : f_i^{(s)} = l_i} ∪ {i : f_i^{(s)} = r_i}.

The reduced Newton Hessian R_s is formed by means of the diagonal masking matrix

(E_Λ)_{i,i} = 1 if i ∈ Λ, and 0 if i ∉ Λ,  (20)

and the search direction d^{(s)} is computed for i ∈ {1, ⋯, p} \ B_s.  (21)

The stepsize in the F Step is computed by

λ_s = arg min_{λ>0} J( P_S(f^{(s)} + λ R_s^{−1} d^{(s)}), u^{(j)} ),  (22)

initialized with

λ_s⁰ = ⟨d^{(s)}, d^{(s)}⟩ / ⟨A d^{(s)}, d^{(s)}⟩.  (24)

Step length reduction can be accomplished by taking λ_m^s = γ^m λ_0^s, m = 1, 2, ⋯, for some γ ∈ (0, 1). We stop at the first m for which the sufficient decrease condition

J( f^{(s)}(λ_m^s), u^{(j)} ) ≤ J( f^{(s)}, u^{(j)} ) − (μ/λ_m^s) ‖f^{(s)} − f^{(s)}(λ_m^s)‖₂²  (23)

holds. The inner iteration of the F Step of Algorithm 2 then reads:

Initialize f^{(0)} = P_S(f^{(j−1)});
While s ≤ s_max and (24) and (25) do not hold, Do
  1. Search direction calculation: d^{(s)} by (21);
  2. Reduced Newton Hessian: R_s by (20);
  3. Armijo line search: stepsize λ_m^s by (23);
  4. Update: f^{(s+1)} = P_S( f^{(s)} + λ_m^s R_s^{−1} d^{(s)} ), s := s + 1;
End Do
f^{(j)} = f^{(s)}.
(Here the iteration index is the superscript s.)
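The step length reduction described above can be sketched as a generic backtracking (Armijo-type) line search. The decrease test and the quadratic objective below are illustrative stand-ins, not the paper's exact condition (23).

```python
# Generic backtracking line search: lambda_m = gamma^m * lambda_0, stopping
# at the first m that achieves sufficient decrease of the objective J.
def armijo(J, f, direction, lam0, gamma=0.5, c=1e-4, max_m=50):
    base = J(f)
    lam = lam0
    for _ in range(max_m):
        trial = [fi + lam * di for fi, di in zip(f, direction)]
        # standard Armijo-style sufficient decrease test (a stand-in)
        if J(trial) <= base - c * lam * sum(d * d for d in direction):
            return lam, trial
        lam *= gamma              # reduce the step: lambda <- gamma * lambda
    return lam, f

J = lambda x: sum(xi * xi for xi in x)   # toy smooth objective
f0 = [2.0, -1.0]
d = [-4.0, 2.0]                          # descent direction (-grad J(f0))
lam, f1 = armijo(J, f0, d, lam0=1.0)
```

Starting from the Cauchy-type initial step and shrinking geometrically guarantees termination after finitely many trials whenever the direction is a descent direction.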
Fig. 1. The four testing images: (a) TwoCircles; (b) Modified Shepp-Logan; (c) Text; (d) Liftingbody.
V. EXPERIMENTAL RESULTS

In this section, we test the performance of Algorithms 1 and 2 and observe their convergence results. All the numerical examples are run under Windows 7 and MATLAB R2010a on a DELL laptop with an Intel Core i5-2430M CPU at 1.8 GHz and 2.92 GB of memory. The peak signal-to-noise ratio (PSNR), defined as 20 log₁₀( √q / ‖g − f‖_F ), where g is the observed image, f is the original image, q is the size of the image, and ‖·‖_F is the Frobenius norm, is used to measure the restoration results. CPU time is used to measure the efficiency of the restoration method. The stopping criterion is that the relative change of the successive iterates must be less than 10⁻⁴. The initial value of μ is set to be 1.1, and its value is updated at each iteration by the formula μ ← 1.8μ, as suggested in [34]. The value of β is set to be 0.015. The potential function φ(t) = αt/(1 + αt) in Table I is employed in the following experiments. The value of α is set to be 0.5 in the used potential function.
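The PSNR measure described above can be written out in plain Python for images stored as flat lists of q pixel values; the sample arrays below are made up for illustration.

```python
import math

# PSNR as described in the text: 20 * log10( sqrt(q) / ||g - f||_F ).
def psnr(g, f):
    q = len(f)
    err = math.sqrt(sum((gi - fi) ** 2 for gi, fi in zip(g, f)))
    return 20.0 * math.log10(math.sqrt(q) / err)

original = [0.0, 0.5, 1.0, 0.5]
restored = [0.1, 0.5, 0.9, 0.5]
value = psnr(restored, original)   # 10 * log10(4 / 0.02), about 23.01 dB
```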
The four testing images are shown in Figure 1: (a) TwoCircles.tiff of size 64 × 64; (b) Modified Shepp-Logan.tif of size 256 × 256; (c) Text.png of size 256 × 256; and (d) Liftingbody.png of size 512 × 512. To generate the observed images, the two-dimensional truncated Gaussian function exp(−(s² + t²)/(2σ²)) for −3 ≤ s, t ≤ 3 with σ = 1.5 is used. The support of the blurring function is 7 × 7.

A. Experiment 1

We first test the continuation approach by using θₖ = k/n in (5). It is clear that θ₀ = 0 and θₙ = 1. We would like to test the performance of Algorithm 1 for different values of n.
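The blur described in the setup above can be constructed directly: a truncated two-dimensional Gaussian sampled on the integer grid −3 ≤ s, t ≤ 3 with σ = 1.5, giving a 7 × 7 kernel. Normalizing the kernel to sum to one is an assumption on our part, not stated in the text.

```python
import math

# Truncated 2-D Gaussian exp(-(s^2 + t^2) / (2*sigma^2)) on a 7 x 7 support.
sigma = 1.5
kernel = [[math.exp(-(s * s + t * t) / (2.0 * sigma * sigma))
           for t in range(-3, 4)] for s in range(-3, 4)]
total = sum(sum(row) for row in kernel)
kernel = [[v / total for v in row] for row in kernel]   # assumed normalization
```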
Fig. 2. Two circles image: Gaussian noise with standard deviation of (a) 0.05; (b) 0.055; (c) 0.06; (d) 0.07. The maximum value of PSNR is given by max(psnr) and attained at the optimum index n.
B. Experiment 2

In this experiment, we test the restoration model with box constraints. Here we consider l_i = 0 and r_i = 1 for all pixel values i in the testing images. In view of the results in Experiment 1, for each image and noise level we simply adopt the optimum index n reported in Table II, which is attained together with the maximum PSNR of the corresponding restored image.
TABLE II
THE MAXIMUM VALUE OF PSNR (max(psnr)) IS OBTAINED AT THE OPTIMUM n (index) FOR DIFFERENT IMAGES AND NOISE STANDARD DEVIATIONS WHEN n GOES FROM 1 TO 70.

Tested   0.05               0.055              0.06               0.07
image    max(psnr)  index   max(psnr)  index   max(psnr)  index   max(psnr)  index
(a)      19.38      5       18.85      11      18.52      20      18.05      65
(b)      26.60      7       25.72      10      24.97      67      23.47      68
(c)      19.33      12      19.12      14      19.01      13      18.25      69
(d)      29.93      68      28.89      68      27.80      69      25.74      66
Fig. 3. Modified Shepp-Logan image: Gaussian noise with standard deviation of (a) 0.05; (b) 0.055; (c) 0.06; (d) 0.07. The maximum value of PSNR is given by max(psnr) and attained at the optimum index n.
Fig. 4. Text image: Gaussian noise with standard deviation of (a) 0.05; (b) 0.055; (c) 0.06; (d) 0.07. The maximum value of PSNR is given by max(psnr) and attained at the optimum index n.
TABLE III
NUMERICAL COMPARISON OF ALGORITHMS 1 AND 2 FOR IMAGES (a)-(d) IN FIGURE 1.
we may study the convergence analysis of other nonconvex nonsmooth image restoration problems. For instance, min_f { ‖f − g‖₁ + β‖Df‖_q^q }, where 0 < q < 1.

APPENDIX A
PROPERTIES OF φ AND ψ FROM [34]

The properties of φ:
TABLE IV
PERCENTAGE OF PIXEL VALUES OF 0 OR 1 FOR IMAGES (a)-(d) IN FIGURE 1.

Tested image   min/max pixel   percentage
(a)            0               73.90%
               255             16.11%
(b)            0               58.18%
               255             4.34%
(c)            0               92.72%
               1               7.28%
(d)            0               0.0092%
               255             0.0259%
α = φ′(0⁺) > 0 is finite, θ ∈ [0, 1];
φ″(t) < 0 for all t ∈ ℝ₊, and φ″(0⁺) is finite and less than zero, θ ∈ (0, 1];
The properties of ψ:
APPENDIX B
PROOF OF THEOREM 1:
(i) The inequality of (9) can be obtained by direct computation, see for instance [27]. The proof for the inequality of (10) can be considered as follows. Under the assumptions of the theorem and [3, Lemma 3.1], the following inequality can be established:

J(f^{(j)}, u^{(j)}) ≤ J(f^{(j−1)}, u^{(j)}) + ⟨∇_f J(f^{(j−1)}, u^{(j)}), f^{(j)} − f^{(j−1)}⟩ + (C/2) ‖f^{(j)} − f^{(j−1)}‖₂².

Because A is positive definite, we obtain

J(f^{(j)}, u^{(j)}) + ( λ_min(A) − C/2 ) ‖f^{(j)} − f^{(j−1)}‖₂² ≤ J(f^{(j−1)}, u^{(j)}).

In accordance with the assumptions of the theorem, i.e. λ_min(A) > C/2, the inequality of (10) is satisfied.

For (ii) and (iii), we can establish the results by using (i) and [2, Lemma 3.1].

For (iv), according to Algorithm 1, we first obtain

0 ∈ ∂_u J(f^{(j−1)}, u^{(j)}) = Σ_i ( ∂(βθα‖u_i^{(j)}‖₂) + 2μ(u_i^{(j)} − D_i f^{(j−1)}) ).  (27)
Also,

∇_f J(f^{(j−1)}, u^{(j)}) = 2Hᵀ(Hf^{(j−1)} − g) + β∇Ψ(f^{(j−1)}),

∂_u J(f^{(j)}, u^{(j)}) = Σ_i ( ∂(βθα‖u_i^{(j)}‖₂) + 2μ(u_i^{(j)} − D_i f^{(j)}) ),

and

∇_f J(f^{(j)}, u^{(j)}) = 2Hᵀ(Hf^{(j)} − g) + β∇Ψ(f^{(j)}) + 2μDᵀ(Df^{(j)} − u^{(j)}).  (28)

With (27) and (28), we obtain

2μ(Df^{(j−1)} − Df^{(j)}) ∈ ∂_u J(f^{(j)}, u^{(j)})

and

(1 − 1/τ) A (f^{(j)} − f^{(j−1)}) + β( ∇Ψ(f^{(j)}) − ∇Ψ(f^{(j−1)}) ) ∈ ∂_f J(f^{(j)}, u^{(j)}).

As ∇Ψ is an L-Lipschitz function, we have

‖(1 − 1/τ) A (f^{(j)} − f^{(j−1)}) + β( ∇Ψ(f^{(j)}) − ∇Ψ(f^{(j−1)}) )‖ ≤ ( (1 − 1/τ) λ_max(A) + L ) ‖f^{(j)} − f^{(j−1)}‖

and

‖2μ(Df^{(j−1)} − Df^{(j)})‖ ≤ 2μ λ_max(D) ‖f^{(j)} − f^{(j−1)}‖.

Therefore, for j ∈ ℕ, we obtain

‖ω^{(j)}‖ = ‖ω(f^{(j)}, u^{(j)})‖ ≤ ( (1 − 1/τ) λ_max(A) + L + 2μ λ_max(D) ) ‖f^{(j)} − f^{(j−1)}‖ ≤ ( (1 − 1/τ) λ_max(A) + L + 2μ λ_max(D) ) ‖z^{(j)} − z^{(j−1)}‖,

which entails that ω^{(j)} = ω(f^{(j)}, u^{(j)}) satisfies (C2) with b = (1 − 1/τ) λ_max(A) + L + 2μ λ_max(D).

Suppose {(f^{(jᵢ)}, u^{(jᵢ)})} is a bounded sequence; then {(f^{(jᵢ−1)}, u^{(jᵢ)})} is also a bounded sequence. According to the properties of ∇Ψ(·), it is uniformly continuous on bounded subsets, and thus the last point of (iv) is yielded. For (v), we obtain the results by using [2, Proposition 3.1].

APPENDIX D
PROOF OF THEOREM 3:

Because J(f, u) is proper and lsc and H1 and H2 hold, we have that {z^{(j)}} is contained in the level set {z ∈ ℝᵖ × ℝ^{sp} | J(z) ≤ J(z^{(0)})}. Since J is coercive and bounded below, J(z) is level-bounded, and J(z) is continuous (by the continuity of βΨ(f) and α = φ′(0⁺) with respect to θ). Since the sequence satisfies (C1) and (C2), we obtain (i) and (ii). For (iii), by using (C3), (i) and (ii), we know z^{(j)} → z̄ and J(z^{(j)}) → J(z̄) as j → ∞. It remains to show that z̄ is a critical point of J. The sequence (z^{(j)}, ω^{(j)}) belongs to Graph(∂J) := {(z, ω) | ω ∈ ∂J(z)}. By using (C1), we also see that the sequence {J(z^{(jᵢ)})} is decreasing and lim_{i→∞} ‖z^{(jᵢ+1)} − z^{(jᵢ)}‖ = 0. Since (C2) holds, we have ω^{(jᵢ)} → 0 as i → ∞. According to the structure of J, let J(f, u) = G(f, u) + g(f), where G(f, u) = ‖Hf − g‖₂² + μ‖Df − u‖₂² + βθα Σᵢ ‖u_i‖₂ and g(f) = βΨ(f). Because of the lower semi-continuity of G(f, u), we have

G(z̄) ≤ lim inf_{i→+∞} G(z^{(jᵢ)}),

and the convexity of G(z) implies

lim sup_{i→+∞} ( G(z^{(jᵢ)}) + ⟨ω^{(jᵢ)}, z̄ − z^{(jᵢ)}⟩ ) ≤ G(z̄).

Additionally, lim_{i→∞} ⟨ω^{(jᵢ)}, z̄ − z^{(jᵢ)}⟩ = 0. Thus lim_{i→+∞} G(z^{(jᵢ)}) = G(z̄).
APPENDIX F
PROOF OF THEOREM 7:

Because the matrix A is positive definite, there exist d_l and d_r such that 0 < d_l < d_r < +∞ and

d_l ‖f‖₂² ≤ fᵀ R_s f ≤ d_r ‖f‖₂².  (29)
APPENDIX G
PROOF OF THEOREM 8:

For λ > 0,

δ_S(f^{(j)}) ≤ ⟨f^{(j)} − f^{(j−1)}, (1/λ) R_{j−1} d^{(j−1)}⟩ + (1/(2λ)) ‖f^{(j)} − f^{(j−1)}‖₂² + δ_S(f^{(j−1)}).

By using (7), (20), (21) and (29), we get

J(f^{(j)}, u^{(j)}) + δ_S(f^{(j)}) + ( λ_min(A)/d_r + 1/2 ) ‖f^{(j)} − f^{(j−1)}‖₂² ≤ J(f^{(j−1)}, u^{(j)}) + δ_S(f^{(j−1)}),

i.e.,

‖Hf^{(j)} − g‖₂² + βΨ(f^{(j)}) + μ‖Df^{(j)} − u^{(j)}‖₂² + δ_S(f^{(j)}) + ( λ_min(A)/d_r + 1/2 ) ‖f^{(j)} − f^{(j−1)}‖₂² ≤ ‖Hf^{(j−1)} − g‖₂² + βΨ(f^{(j−1)}) + μ‖Df^{(j−1)} − u^{(j)}‖₂² + δ_S(f^{(j−1)}).  (30)

Combined with (9) and Theorem 1(i) and (ii), we know J(f, u) + δ_S(f) satisfies (C1) defined in Theorem 2. Because of the compactness and convexity of S and the shrinkage properties of f and u in [42], Theorem 5 as well as Theorem 1(iii), we confirm that the sequence {z^{(j)} = (f^{(j)}, u^{(j)})} generated by Algorithm 2 is bounded, and that ω^{(j)} = ω(f^{(j)}, u^{(j)}) ∈ ∂J(f^{(j)}, u^{(j)}) satisfies (12) and (13) for f ∈ S, i.e. (C2) in Theorem 2 is met. Similar to the proof of Theorem 2, we confirm that condition (C3) is also satisfied. According to Theorem 3, the conclusion of this theorem is completed.
ACKNOWLEDGMENT
The authors would like to thank Dr. M. Nikolova and Dr.
Liwei Zhang for their constructive suggestions.
REFERENCES

[28] S. Łojasiewicz, Une propriété topologique des sous-ensembles analytiques réels, in Les Équations aux Dérivées Partielles, Éditions du Centre National de la Recherche Scientifique, Paris, pp. 87–89, 1963.
[29] J. J. Moré and G. Toraldo, On the solution of large quadratic programming problems with bound constraints, SIAM J. Optim., vol. 1, no. 1, pp. 93–113, 1991.
[30] B. Morini, M. Porcelli, and R. Chan, A reduced Newton method for constrained linear least-squares problems, J. Comput. Appl. Math., vol. 233, no. 9, pp. 2200–2212, 2010.
[31] M. Nikolova, Minimizers of cost-functions involving non-smooth data-fidelity terms. Application to the processing of outliers, SIAM J. Numer. Anal., vol. 40, no. 3, pp. 965–994, 2002.
[32] M. Nikolova, A variational approach to remove outliers and impulse noise, J. Math. Imaging Vision, vol. 20, no. 1-2, pp. 99–120, 2004.
[33] M. Nikolova, M. Ng, S. Zhang, and W. Ching, Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization, SIAM J. Imaging Sci., vol. 1, no. 1, pp. 2–25, 2008.
[34] M. Nikolova, M. Ng, and C. Tam, Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction, IEEE Trans. Image Processing, vol. 19, no. 12, pp. 3073–3088, 2010.
[35] M. Nikolova, M. Ng, and C. Tam, On ℓ₁ Data Fitting and Concave Regularization for Image Recovery, SIAM J. Sci. Comput., vol. 35, no. 1, pp. 397–430, 2013.
[36] P. Ochs, Y. Chen, T. Brox, and T. Pock, iPiano: inertial proximal algorithm for nonconvex optimization, SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 1388–1419, 2014.
[37] R. Pytlak, An efficient algorithm for large-scale nonlinear programming problems with simple bounds on the variables, SIAM J. Optim., vol. 8, no. 2, pp. 532–560, 1998.
[38] R. Pytlak and T. Tarnawski, Preconditioned conjugate gradient algorithms for nonconvex problems with box constraints, Numerische Mathematik, vol. 116, no. 1, pp. 149–175, 2010.
[39] R. T. Rockafellar and R. Wets, Variational Analysis, Grundlehren der Mathematischen Wissenschaften, vol. 317. Springer, Berlin, 1998.
[40] L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, vol. 60, no. 1, pp. 259–268, 1992.
[41] A. Tikhonov and V. Arsenin, Solutions of Ill-Posed Problems. Winston, Washington, DC, 1977.
[42] Y. Wang, J. Yang, W. Yin, and Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sci., vol. 1, no. 3, pp. 248–272, 2008.
Jin Xiao received the B.Sc. degree from Hunan University of Science and Technology and the M.Sc. degree from Dalian University of Technology, in 2004 and 2006, respectively. He is currently pursuing the Ph.D. degree with the College of Mathematics and Econometrics, Hunan University, China. Since 2006, he has been an assistant and lecturer with the School of Mathematics and Computation Science, Lingnan Normal University, Guangdong, China. His main research interests include image processing and numerical optimization.