Applied Mathematical Modelling 54 (2018) 82–111

A class of upper and lower triangular splitting iteration methods for image restoration

Hong-Tao Fan a,b, Mehdi Bastani c, Bing Zheng a,∗, Xin-Yun Zhu d

a School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, PR China
b Institute of Applied Mathematics, College of Science, Northwest A & F University, Yangling, Shaanxi 712100, PR China
c Department of Mathematics, University of Mohaghegh Ardabili, 56199-11367 Ardabil, Iran
d Department of Mathematics, University of Texas of the Permian Basin, Odessa, TX 79762, USA

Article history: Received 7 April 2016; Revised 12 August 2017; Accepted 19 September 2017; Available online 22 September 2017

Keywords: Augmented linear system; Upper and lower triangular splitting; Iteration method; Convergence analysis; Image restoration

Abstract: Based on the augmented linear system, a class of upper and lower triangular (ULT) splitting iteration methods is established for solving the linear systems arising from the image restoration problem. The convergence analysis of the ULT methods is presented for the image restoration problem. Moreover, the optimal iteration parameters which minimize the spectral radius of the iteration matrix of these ULT methods, and the corresponding convergence factors for some special cases, are given. In addition, numerical examples from image restoration are employed to validate the theoretical analysis and examine the effectiveness and competitiveness of the proposed methods. Experimental results show that these ULT methods considerably outperform newly developed methods such as the SHSS and RGHSS methods in terms of numerical performance and image recovering quality. Finally, the SOR acceleration scheme for the ULT iteration method is discussed.

© 2017 Elsevier Inc. All rights reserved.

1. Introduction

Images are usually degraded by blur and noise during image acquisition and transmission. Image restoration is a widely
studied problem in several applied scientific areas, such as removing the noise from magnetic resonance images (MRIs),
chest X-rays, and digital angiographic images in medical imaging [1,2], restoration of aging and deteriorated films in engi-
neering [3], restoring degraded images obtained by telescopes or satellites in astronomy [4], restoration of degraded images
in optical systems [5], and many other areas (see [6,7]). Restoration is a process that involves reconstructing or recovering a
degraded image using a priori knowledge related to the degradation phenomenon. The input–output relationship of image
restoration can be written as follows [8]:
    g(x, y) = H[f(x, y)] + n(x, y),    (1.1)
where H is a degradation operator, f(x, y) is the original image, g(x, y) is the degraded image (recorded image), and n(x, y)
is additive noise. It can be shown that if H is a linear and space-invariant operator, then Eq. (1.1) can be written as the Fredholm integral equation of the first kind
    g(x, y) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(x − ξ, y − η) f(ξ, η) dξ dη + n(x, y),    (1.2)


∗ Corresponding author.
E-mail addresses: fanht14@lzu.edu.cn (H.-T. Fan), bastani.mehdi@yahoo.com (M. Bastani), bzheng@lzu.edu.cn (B. Zheng), zhu_x@utpb.edu (X.-Y. Zhu).

https://doi.org/10.1016/j.apm.2017.09.033

where h(x, y) is usually known as the point spread function (PSF) and n(x, y) is independent of the spatial coordinates. In
this paper, the PSF is assumed to be known. The discretization of (1.2) leads to the following formulation:
    g(x, y) = Σ_{k=−∞}^{+∞} Σ_{l=−∞}^{+∞} h(x − k, y − l) f(k, l) + n(x, y).    (1.3)

It has been shown that Eq. (1.3) can be expressed in matrix–vector form as [8]

    g = A f + η,    (1.4)

where A is a blurring matrix of size n² × n² and f, η and g are n²-dimensional vectors representing the original image, the noise, and the blurred and noisy (degraded) image, respectively. Assumptions about the values outside the field of view (FOV) are known as boundary conditions (BCs). The structure of the blurring matrix A depends on the BCs used. Zero, periodic, reflexive, antireflective and mean BCs are five well-known choices that have been widely used in the literature. In the zero BCs, the outside of the FOV is assumed to be zero (black); this assumption leads to a block Toeplitz with Toeplitz blocks (BTTB) blurring matrix A. The periodic BCs are implemented by considering the periodic extension of the data outside the FOV, and the matrix A then has block circulant with circulant blocks (BCCB) structure. It has been shown that the corresponding matrix–vector multiplications can be computed efficiently by fast Fourier transforms (FFTs) under zero and periodic BCs [7]. Reflecting the FOV data to the outside leads to the reflexive BCs, for which the matrix A has block Toeplitz-plus-Hankel with Toeplitz-plus-Hankel blocks (BTHTHB) structure; for a symmetric PSF, the two-dimensional discrete cosine transform (DCT-III) can be applied to diagonalize the blurring matrix A [7]. The antireflective BCs are constructed by the antireflection of the FOV data to the outside; the blurring matrix is then block Toeplitz-plus-Hankel with a rank-2 correction, and the discrete sine transform (DST-I) can be used to diagonalize the coefficient matrix A for a symmetric PSF [9]. The mean BCs can be viewed as an adaptive antireflection; they reduce ringing effects and preserve C¹ continuity. The Kronecker product approximation method has been presented to implement the image restoration process with different BCs such as whole-sample symmetric, reflective, antireflective and mean BCs [10–13]. Since this approximation does not require the PSF to be symmetric, we apply it to implement the computations in the mean BCs [13].
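To make the first four BCs concrete, the following small sketch (ours, not from the paper) shows how each choice extends a toy image outside the FOV. numpy.pad provides the zero, periodic and reflexive extensions directly; the antireflective extension 2f(edge) − f(mirror) is written out by hand along one axis.

```python
import numpy as np

f = np.arange(16.0).reshape(4, 4)   # a toy 4x4 "image"
p = 2                               # pad width

zero     = np.pad(f, p, mode="constant")   # zero (black) BCs
periodic = np.pad(f, p, mode="wrap")       # periodic BCs
reflex   = np.pad(f, p, mode="symmetric")  # reflexive (whole-sample) BCs

# Antireflective BCs along the row axis: the extension at distance j from
# the edge is 2*f(edge) - f(j), i.e. the reflection of f through its edge value.
top    = 2.0 * f[0]  - f[1:p + 1][::-1]
bottom = 2.0 * f[-1] - f[-p - 1:-1][::-1]
antireflective_rows = np.vstack([top, f, bottom])
```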
In general, the linear system (1.4) is ill-conditioned, so the standard approach to preconditioning cannot be used to reduce the number of iterations of iterative methods. One approach to preconditioning such ill-conditioned problems is to construct a matrix P that clusters the large singular values around one but leaves the small singular values alone; see [14] for more details. In this paper, the Tikhonov regularization method [14–16] is used to solve this linear system, by transforming it into the following equivalent problem:

    min_f ‖A f − g‖₂² + μ² ‖L f‖₂²,

where 0 < μ < 1 is a penalty parameter and L is an auxiliary operator, chosen here as the identity matrix. For other effective techniques for solving image restoration problems, one may refer to [17–20]. To attain the minimum, we turn to solving the following normal equations

    (Aᵀ A + μ² I) f = Aᵀ g,    (1.5)
which is equivalent to the 2n²-by-2n² linear system

    [ I      A   ] [ e ]   [ g ]
    [ −Aᵀ   μ²I ] [ f ] = [ 0 ],    (1.6)

which we denote by K̄ x̄ = b̄, with e = g − A f. It is worth noting that the matrix A arising from the image restoration problem (1.4) is highly structured and severely ill-conditioned, its singular values decaying to, and clustering at, zero. Thus, to accelerate the rate of convergence in the case L = I, we can use two BCCB matrices P_A and P_L, which are approximations of the matrices A and L, respectively, to precondition the corresponding normal system (AᵀA + μ²LᵀL) f = Aᵀg; here P_A = F* Λ_A F and P_L = F* Λ_L F, where F denotes the Fourier transform matrix and the diagonal matrices Λ_A and Λ_L consist of the eigenvalues of P_A and P_L, respectively. Additionally, by recasting the original system (1.4), regularized by the Tikhonov method, into the equivalent 2n²-by-2n² linear system (1.6), the distribution of singular values (i.e., the ill-conditioning) of the latter system is greatly improved; this preconditioning viewpoint is therefore adopted in the present paper. A detailed theoretical analysis of the singular values of the coefficient matrix K̄ is given in the Appendix. The linear system (1.6) can be regarded as a special case of augmented systems, for which many efficient
iteration methods have been presented in the literature. Examples of such methods include Uzawa-type methods [21–29],
Hermitian and skew-Hermitian splitting (HSS) methods [30–34], matrix splitting iterative methods [35–37], relaxation
iterative methods [38], restrictive preconditioners for conjugate gradient (RPCG) methods [39,40], Krylov subspace iterative
methods combined with block-diagonal, block-tridiagonal, constraint, SOR and HSS preconditioners [41–44], iterative null
space methods [45,46] and the references therein.
Recently, based on the HSS iteration method [48], Lv et al. [49] established a special HSS (SHSS) iteration method, different from the HSS iteration method, for solving the linear system (1.6) in the image restoration problem. Subsequently, Aghazadeh et al. [50] split the Hermitian part H = (1/2)(A + Aᵀ) of A as the sum of two matrices G and P, where G is a Hermitian positive definite matrix and P is a Hermitian positive semidefinite matrix, and proposed a restricted version of the generalized HSS (RGHSS) iteration method to solve the image restoration problem (1.6). The convergence of the RGHSS method was investigated with a detailed theoretical analysis, and the optimal parameter was found for the restricted version. Experimental results demonstrated that the RGHSS method is more effective and accurate than the SHSS method. Moreover, Bai et al. [34] introduced a block triangular and skew-Hermitian splitting method for positive-definite linear systems. In [51], Krukier et al. proposed a generalized skew-Hermitian triangular splitting method for saddle-point linear systems. More recently, Zheng and Ma [47] proposed a triangular splitting method for augmented systems. A class of regularized HSS iteration methods for the solution of large, sparse linear systems in saddle-point form was presented by Bai and Benzi [52].
In this paper, a class of upper and lower triangular (ULT) splitting iteration methods is presented to solve the linear systems arising from image restoration. The convergence rate of each ULT iteration method is investigated, and the optimal iteration parameters and corresponding convergence factors are obtained for some special cases of the ULT method. Numerical examples are given to illustrate the performance of the proposed methods, indicating that the ULT methods are more competitive and effective than two recently published methods, the SHSS method [49] and the RGHSS method [50], and can be efficiently applied to restore images. Although the ULT iteration method reduces to the method of [47] when the (2,2)-block equals zero, we emphasize that the (2,2)-block of the coefficient matrix in the linear system (1.6) for the image restoration problem is the scalar matrix μ²I, which makes the theoretical results obtained here quite different from those in [47]; for example, the eigenvalue distribution of the preconditioned matrices (see Theorems 3.3 and 5.3), the singular values of the 2n²-by-2n² linear system (1.6) (see Appendix), and the SOR acceleration scheme for the ULT-type iteration (see Theorem 7.1).
The arrangement of this paper is as follows. In Section 2, we study the first type of ULT (ULT-I) splitting iteration method for solving augmented systems. The application of the ULT-I method to the linear systems arising from the image restoration problem is described in Section 3, where its convergence analysis is also investigated. In Section 4, the second type of ULT (ULT-II) splitting iteration method is presented for solving augmented systems. Similarly, we apply the ULT-II method to the image restoration problem and investigate its convergence properties in Section 5. To demonstrate the efficiency of the proposed methods, some numerical tests on the image restoration problem are provided in Section 6. The SOR acceleration scheme is discussed in Section 7. Finally, we end this paper with some brief conclusions.

2. The first type of ULT splitting method

As mentioned before, the linear system (1.6) can be viewed as a special case of the augmented systems described as follows:

    [ E      B ] [ e ]   [ g ]
    [ −Bᵀ    C ] [ f ] = [ h ],    (2.1)

which we denote by K x = b. Without loss of generality, we assume that E ∈ R^{n×n} is nonsingular, C ∈ R^{m×m} is symmetric positive semi-definite, B ∈ R^{n×m} is a rectangular matrix with n ≥ m, and the coefficient matrix K is nonsingular with null(B) ∩ null(C) = {0}, where null(·) stands for the null space of a matrix.
Based on the triangular splitting method [47], we naturally arrive at two types of ULT methods: ULT-I and ULT-II. It is worth noting that both methods can be applied to ill-posed problems. Numerical experiments in Section 6 will show that these methods have their strengths in recovering different blurred images and are superior to the other methods considered in terms of numerical performance and image recovering quality. Next, we discuss the first type of ULT (ULT-I) splitting iteration method for solving the augmented system (2.1) with nonzero (2, 2)-block and then use it to solve the linear system (1.6) from image restoration.

For a given symmetric positive definite matrix Q ∈ R^{m×m}, we first present the following two splittings of the coefficient matrix K of the augmented system (2.1):

    K = [ E       0   ] − [ 0   −B ] = K₁ − K₂,
        [ −Bᵀ    C+Q ]    [ 0    Q ]

    K = [ E    B   ] − [ 0    0 ] = K₃ − K₄.    (2.2)
        [ 0    C+Q ]   [ Bᵀ   Q ]

Note that when C = 0, the ULT-I splittings in (2.2) reduce to the triangular splittings in [47], and the induced method can therefore be perceived as a generalization of the one there. However, since the regularization parameter μ is not equal to zero in the context of image restoration, the theoretical results obtained for the ULT-type iteration method differ greatly from those in [47]. In fact, in addition to discussing the convergence and optimal parameters for the case Q = τ₁I, similar to [47] (see Theorems 3.2 and 3.3), we study the convergence and optimal parameters of the ULT-type method for the new case Q = τ₁I + AᵀA (see Corollaries 3.2 and 5.2, Theorems 3.4–3.5 and 5.4–5.5). Additionally, the eigenvalue distributions of the preconditioned matrices P⁻¹_{ULT-I} K̄ and P⁻¹_{ULT-II} K̄ (see Theorems 3.3 and 5.3), the singular values of the 2n²-by-2n² linear system (1.6) (see Appendix) and the SOR acceleration scheme for the ULT-type iteration (see Theorem 7.1) will also be investigated.
The ULT-I splittings of the matrix K in (2.2) lead to equivalent reformulations of the augmented system (2.1) as two systems of fixed-point equations:

    K₁ x^(k+1/2) = K₂ x^(k) + b,
    K₃ x^(k+1)   = K₄ x^(k+1/2) + b,      k = 0, 1, . . . .    (2.3)

By iterating alternately between these two fixed-point systems (2.3) in their blockwise forms, the ULT-I iteration method for solving the augmented system (2.1) is obtained as follows:

    [ E       0   ] x^(k+1/2) = [ 0   −B ] x^(k) + b,
    [ −Bᵀ    C+Q ]              [ 0    Q ]
                                                              k = 0, 1, . . . .    (2.4)
    [ E    B   ] x^(k+1) = [ 0    0 ] x^(k+1/2) + b,
    [ 0    C+Q ]           [ Bᵀ   Q ]

It is not difficult to see that each iteration of the ULT-I method alternates between the lower triangular matrix K₁ and the upper triangular matrix K₃; hence classical methods can be applied to solve the two subsystems. Furthermore, note that Eq. (2.4) involves a parameter matrix Q in each iteration scheme, and one may expect to choose a suitable parameter matrix Q to accelerate the convergence rate of the ULT-I iteration method. This motivates us to choose different (optimal) matrices Q so as to enhance the efficiency of the ULT-I method in solving the augmented system (2.1).
Using (2.4), we can rewrite the ULT-I iteration method as the standard stationary iteration scheme

    x^(k+1) = T₁ x^(k) + c₁,      k = 0, 1, 2, . . . ,

where

    T₁ = [ E    B   ]⁻¹ [ 0    0 ] [ E       0   ]⁻¹ [ 0   −B ],    (2.5)
         [ 0    C+Q ]    [ Bᵀ   Q ] [ −Bᵀ    C+Q ]    [ 0    Q ]

and

    c₁ = [ E    B   ]⁻¹ ( I_{n+m} + [ 0    0 ] [ E       0   ]⁻¹ ) b ≜ P⁻¹_{ULT-I} b,
         [ 0    C+Q ]                [ Bᵀ   Q ] [ −Bᵀ    C+Q ]

where P⁻¹_{ULT-I} = K₃⁻¹(I_{n+m} + K₄K₁⁻¹); this construction can also be found in [48]. Note that T₁ and P⁻¹_{ULT-I} K are the iteration matrix and the preconditioned matrix of the ULT-I iteration method, respectively. We call the matrix P_{ULT-I} the preconditioner of the ULT-I iteration method.
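As an illustration of the scheme above, the following sketch (our own, with arbitrary sizes and an s chosen in the spirit of the optimal parameters derived later) runs the ULT-I iteration (2.4) on a small random augmented system and checks that ρ(T₁) < 1 and that the iterates converge to the solution of (2.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5
E = np.eye(n)                              # E nonsingular
B = rng.standard_normal((n, m))
C = 0.01 * np.eye(m)                       # symmetric positive semi-definite
tau = np.linalg.eigvalsh(B.T @ B)
Q = (tau.min() + tau.max()) * np.eye(m)    # SPD parameter matrix Q = sI

K  = np.block([[E, B], [-B.T, C]])
b  = rng.standard_normal(n + m)
K1 = np.block([[E, np.zeros((n, m))], [-B.T, C + Q]])
K2 = np.block([[np.zeros((n, n)), -B], [np.zeros((m, n)), Q]])
K3 = np.block([[E, B], [np.zeros((m, n)), C + Q]])
K4 = np.block([[np.zeros((n, n)), np.zeros((n, m))], [B.T, Q]])

x = np.zeros(n + m)
for _ in range(200):                       # the two half-steps of (2.4)
    x = np.linalg.solve(K1, K2 @ x + b)
    x = np.linalg.solve(K3, K4 @ x + b)

T1 = np.linalg.solve(K3, K4) @ np.linalg.solve(K1, K2)
print("rho(T1) :", max(abs(np.linalg.eigvals(T1))))   # < 1
print("residual:", np.linalg.norm(K @ x - b))          # small
```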

3. Convergence analysis of the ULT-I method

In this section we study the convergence properties of the new method, which fits into the two-step splitting iteration framework. The following lemma describes a general convergence criterion for two-step splitting iteration methods [48].

Lemma 3.1. Let M ∈ R^{ℓ×ℓ}, M = Mᵢ − Nᵢ (i = 1, 2) be two splittings of the matrix M, and let x^(0) be a given initial vector. If {x^(k)} is a two-step iteration sequence defined by

    M₁ x^(k+1/2) = N₁ x^(k) + u,
    M₂ x^(k+1)   = N₂ x^(k+1/2) + u,      k = 0, 1, 2, . . . ,    (3.1)

then

    x^(k+1) = M₂⁻¹N₂M₁⁻¹N₁ x^(k) + M₂⁻¹(I + N₂M₁⁻¹)u,      k = 0, 1, 2, . . . ,    (3.2)

where I denotes the identity matrix of order ℓ. Furthermore, if the spectral radius ρ(M₂⁻¹N₂M₁⁻¹N₁) of the iteration matrix T = M₂⁻¹N₂M₁⁻¹N₁ is less than one, then the iteration sequence {x^(k)} converges to the unique solution x* ∈ R^ℓ of the system of linear equations M x = u for all initial vectors x^(0) ∈ R^ℓ.

In the next theorem, the convergence property of the ULT-I method is presented.

Theorem 3.1. For the augmented system (2.1), let E ∈ R^{n×n} be nonsingular, C ∈ R^{m×m} be symmetric positive semi-definite, and B ∈ R^{n×m} be a rectangular matrix with null(B) ∩ null(C) = {0}. If the parameter matrix Q ∈ R^{m×m} is symmetric positive definite, then the iteration matrix T₁ of the ULT-I iteration is given by

    T₁ = [ 0   −E⁻¹B{I_m − (C+Q)⁻¹[I_m + Q(C+Q)⁻¹][C + BᵀE⁻¹B]} ]    (3.3)
         [ 0    I_m − (C+Q)⁻¹[I_m + Q(C+Q)⁻¹][C + BᵀE⁻¹B]        ]

Moreover, λ = 0 is an eigenvalue of the iteration matrix T₁ with multiplicity n, and the other m eigenvalues of T₁ satisfy λᵢ = 1 − ξᵢ, where ξᵢ (i = 1, 2, . . . , m) are the eigenvalues of the matrix (C + Q)⁻¹[I_m + Q(C + Q)⁻¹][C + BᵀE⁻¹B].

Proof. By setting M₁ = K₁, N₁ = K₂, M₂ = K₃, N₂ = K₄ in Lemma 3.1, a direct calculation gives

    T₁ = [ E    B   ]⁻¹ [ 0    0 ] [ E       0   ]⁻¹ [ 0   −B ]
         [ 0    C+Q ]    [ Bᵀ   Q ] [ −Bᵀ    C+Q ]    [ 0    Q ]

       = [ E⁻¹   −E⁻¹B(C+Q)⁻¹ ] [ 0    0 ] [ E⁻¹             0        ] [ 0   −B ]
         [ 0       (C+Q)⁻¹     ] [ Bᵀ   Q ] [ (C+Q)⁻¹BᵀE⁻¹   (C+Q)⁻¹ ] [ 0    Q ]

       = [ −E⁻¹B(C+Q)⁻¹Bᵀ   −E⁻¹B(C+Q)⁻¹Q ] [ 0   −E⁻¹B                            ]
         [ (C+Q)⁻¹Bᵀ          (C+Q)⁻¹Q      ] [ 0   −(C+Q)⁻¹BᵀE⁻¹B + (C+Q)⁻¹Q ]

       = [ 0   −E⁻¹B{(C+Q)⁻¹Q(C+Q)⁻¹Q − (C+Q)⁻¹[I_m + Q(C+Q)⁻¹]BᵀE⁻¹B} ]
         [ 0    (C+Q)⁻¹Q(C+Q)⁻¹Q − (C+Q)⁻¹[I_m + Q(C+Q)⁻¹]BᵀE⁻¹B        ]

       = [ 0   −E⁻¹B{I_m − (C+Q)⁻¹[I_m + Q(C+Q)⁻¹][C + BᵀE⁻¹B]} ]
         [ 0    I_m − (C+Q)⁻¹[I_m + Q(C+Q)⁻¹][C + BᵀE⁻¹B]        ].    (3.4)

If λ is an eigenvalue of the iteration matrix T₁, then from (3.4) we have

    |λI_{n+m} − T₁| = λⁿ |(λ − 1)I_m + (C+Q)⁻¹[I_m + Q(C+Q)⁻¹][C + BᵀE⁻¹B]|.

So λ = 0 is an eigenvalue of T₁ with multiplicity n, and the other m eigenvalues of T₁ satisfy λᵢ = 1 − ξᵢ, where ξᵢ (i = 1, 2, . . . , m) are the eigenvalues of (C+Q)⁻¹[I_m + Q(C+Q)⁻¹][C + BᵀE⁻¹B]. This completes the proof. □

In particular, for K̄ ∈ R^{2n²×2n²}, if we set E = I, B = A, and C = μ²I (here I is the identity matrix of order n²), then the convergence property of the ULT-I iteration method reduces to the following corollary.

Corollary 3.1. For K̄ ∈ R^{2n²×2n²}, suppose that the parameter matrix Q ∈ R^{n²×n²} is symmetric positive definite. Then the corresponding iteration matrix T₁ of the ULT-I method is given by

    T₁ = [ 0   −A{(μ²I+Q)⁻¹Q(μ²I+Q)⁻¹Q − (μ²I+Q)⁻¹[I + Q(μ²I+Q)⁻¹]AᵀA} ]    (3.5)
         [ 0    I − (μ²I+Q)⁻¹[I + Q(μ²I+Q)⁻¹][μ²I + AᵀA]                   ]

Furthermore, if λ is an eigenvalue of the iteration matrix T₁ of the ULT-I method, then λ = 0 is an eigenvalue of T₁ with multiplicity n², and the other n² eigenvalues of T₁ satisfy λᵢ = 1 − ξᵢ, where ξᵢ (i = 1, 2, . . . , n²) are the eigenvalues of the matrix (μ²I + Q)⁻¹[I + Q(μ²I + Q)⁻¹][μ²I + AᵀA].

The following theorem gives necessary and sufficient conditions guaranteeing the convergence of the ULT-I method when applied to the image restoration problem (1.6).

Theorem 3.2. For K̄ ∈ R^{2n²×2n²}, suppose that the parameter matrix Q ∈ R^{n²×n²} is symmetric positive definite. Let T₁₁ = (μ²I + Q)⁻¹[I + Q(μ²I + Q)⁻¹][μ²I + AᵀA]. Then the ULT-I method is convergent if and only if

    ξ_min(T₁₁) > 0  and  ξ_max(T₁₁) < 2,

where ξ_max and ξ_min are the largest and smallest eigenvalues of the matrix T₁₁, respectively.

Proof. From Corollary 3.1, it is easily seen that ρ(T₁) < 1 if and only if |1 − ξᵢ| < 1, i = 1, 2, . . . , n², where ξᵢ are the eigenvalues of the matrix T₁₁; this completes the proof. □


In particular, if we choose the parameter matrix as Q = sI (s > 0), then the above convergence result simplifies as follows.

Corollary 3.2. Let s be a positive constant and let the parameter matrix be Q = sI. For the linear system (1.6), the ULT-I method is convergent if and only if

    τ_max(AᵀA) < [(μ² + s)² + s²] / (μ² + 2s),    (3.6)

where τ_max(AᵀA) is the largest eigenvalue of the matrix AᵀA.

Proof. From Theorem 3.2, the ULT-I iteration method is convergent if and only if

    ξ_min( ((μ² + 2s)/(μ² + s)²)(μ²I + AᵀA) ) > 0  and  ξ_max( ((μ² + 2s)/(μ² + s)²)(μ²I + AᵀA) ) < 2,

which is equivalent to

    ((μ² + 2s)/(μ² + s)²)(μ² + τ_min(AᵀA)) > 0  and  ((μ² + 2s)/(μ² + s)²)(μ² + τ_max(AᵀA)) < 2,    (3.7)

where τ_min(AᵀA) is the smallest eigenvalue of the matrix AᵀA. Due to the (at least) positive semi-definiteness of the matrix AᵀA, the first inequality in (3.7) always holds. On the other hand, the second inequality in (3.7) can be equivalently written as

    τ_max(AᵀA) < [(μ² + s)² + s²] / (μ² + 2s),

which completes the proof. □

The spectral distribution of the preconditioned matrix is closely related to the convergence rate of Krylov subspace methods; a tightly clustered or positive real spectrum of the preconditioned matrix is desirable. We now derive some properties of the ULT-I preconditioned matrix P⁻¹_{ULT-I} K̄. In the sequel, we use sp(·) to denote the spectrum of a matrix. The following theorem describes the eigenvalue distribution of the preconditioned matrix P⁻¹_{ULT-I} K̄.

Theorem 3.3. Assume that Q ∈ R^{n²×n²} is a symmetric positive definite matrix. Let the preconditioner P_{ULT-I} be defined as in Section 2. Moreover, suppose that sp(Q) ⊆ [σ₁, σ_{n²}] and sp(AᵀA) ⊆ [τ₁, τ_{n²}]. Then the preconditioned matrix P⁻¹_{ULT-I} K̄ has an eigenvalue 1 with multiplicity at least n², and the remaining eigenvalues are real and located in the positive interval

    [ (μ² + 2σ₁)(μ² + τ₁) / ((μ² + σ₁)(μ² + σ_{n²})) ,  (μ² + 2σ_{n²})(μ² + τ_{n²}) / ((μ² + σ₁)(μ² + σ_{n²})) ].

Proof. From the relation between the iteration matrix (3.5) and the preconditioned matrix P⁻¹_{ULT-I} K̄, we have

    P⁻¹_{ULT-I} K̄ = I_{2n²} − T₁
                   = [ I   A{(μ²I+Q)⁻¹Q(μ²I+Q)⁻¹Q − (μ²I+Q)⁻¹[I + Q(μ²I+Q)⁻¹]AᵀA} ]    (3.8)
                     [ 0   (μ²I+Q)⁻¹[I + Q(μ²I+Q)⁻¹][μ²I + AᵀA]                      ]

It then follows from (3.8) that the preconditioned matrix P⁻¹_{ULT-I} K̄ has an eigenvalue 1 with multiplicity at least n². The remaining eigenvalues are the same as those of the matrix (μ²I + Q)⁻¹[I + Q(μ²I + Q)⁻¹][μ²I + AᵀA].

Since the matrices Q and AᵀA are, respectively, symmetric positive definite and symmetric positive semi-definite, we know that σ_{n²} ≥ σ₁ > 0, τ_{n²} ≥ τ₁ ≥ 0,

    sp(μ²I + Q) ⊆ [μ² + σ₁, μ² + σ_{n²}],    (3.9)

and

    sp(μ²I + AᵀA) ⊆ [μ² + τ₁, μ² + τ_{n²}].    (3.10)

As the matrix I + Q(μ²I + Q)⁻¹ is also symmetric positive definite, we have

    sp((μ²I + Q)⁻¹) ⊆ [ 1/(μ² + σ_{n²}) ,  1/(μ² + σ₁) ].    (3.11)

Hence, it can be seen that

    sp(I + Q(μ²I + Q)⁻¹) ⊆ [ 1 + σ₁/(μ² + σ₁) ,  1 + σ_{n²}/(μ² + σ_{n²}) ],    (3.12)

since σ_{n²}/(μ² + σ_{n²}) − σ₁/(μ² + σ₁) ≥ 0. Then, from Eqs. (3.9)–(3.12), the remaining eigenvalues of the preconditioned matrix P⁻¹_{ULT-I} K̄ are real and located in the positive interval

    [ (μ² + 2σ₁)(μ² + τ₁) / ((μ² + σ₁)(μ² + σ_{n²})) ,  (μ² + 2σ_{n²})(μ² + τ_{n²}) / ((μ² + σ₁)(μ² + σ_{n²})) ]. □
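The interval of Theorem 3.3 is easy to check numerically. The sketch below (with hypothetical sizes, μ and Q of our own choosing) builds the lower-right block of (3.8), whose eigenvalues are exactly the non-unit eigenvalues of P⁻¹_{ULT-I} K̄, and compares them with the bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
n2, mu = 6, 0.1                             # "n^2" pixels, regularization mu
A = 0.3 * rng.standard_normal((n2, n2))
sig1, sign2 = 0.5, 2.0
Q = np.diag(rng.uniform(sig1, sign2, n2))   # SPD with sp(Q) in [sig1, sign2]
I = np.eye(n2)

S = np.linalg.inv(mu**2 * I + Q)
M = S @ (I + Q @ S) @ (mu**2 * I + A.T @ A)   # lower-right block of (3.8)
eigs = np.sort(np.linalg.eigvals(M).real)     # real, by Theorem 3.3

tau = np.linalg.eigvalsh(A.T @ A)
lo = (mu**2 + 2*sig1) * (mu**2 + tau.min()) / ((mu**2 + sig1) * (mu**2 + sign2))
hi = (mu**2 + 2*sign2) * (mu**2 + tau.max()) / ((mu**2 + sig1) * (mu**2 + sign2))
print(lo <= eigs[0], eigs[-1] <= hi)          # both True
```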

The following theorem presents the optimal parameter s_opt of the ULT-I method for solving the linear system (1.6) when the parameter matrix is Q = sI ∈ R^{n²×n²} (s > 0).

Theorem 3.4. Let s be a positive constant and Q = sI ∈ R^{n²×n²}. Moreover, let τ_min and τ_max be the minimum and maximum eigenvalues of AᵀA, respectively. Then for the ULT-I iteration method, the optimal parameter s_opt which minimizes the spectral radius ρ(T₁) is

    s_opt = (1/2) [ τ_min + τ_max + √((τ_min + τ_max)² + 2μ²(τ_min + τ_max)) ].

Moreover, the corresponding optimal convergence factor is

    ρ_opt(T₁) = [ (μ² + 2s_opt)(μ² + τ_max) − (μ² + s_opt)² ] / (μ² + s_opt)².    (3.13)
Proof. Let τᵢ, i = 1, . . . , n², be the eigenvalues of AᵀA. To ensure the convergence of the ULT-I method, from the expression (3.5) we have to show that ρ(T₁) < 1 under the condition provided by the theorem. Notice that

    ρ(T₁) = max_{τᵢ ∈ σ(AᵀA)} |1 − (μ² + 2s)(μ² + τᵢ)/(μ² + s)²|
          = max{ |1 − (μ² + 2s)(μ² + τ_min)/(μ² + s)²| ,  |1 − (μ² + 2s)(μ² + τ_max)/(μ² + s)²| }.

Therefore, there exists a constant s̄ such that

    ρ(T₁) = 1 − (μ² + 2s)(μ² + τ_min)/(μ² + s)²,   if 0 < s ≤ s̄,
    ρ(T₁) = (μ² + 2s)(μ² + τ_max)/(μ² + s)² − 1,   if s ≥ s̄.

If 0 < s ≤ s̄, then it is easily seen that ρ(T₁) < 1. Otherwise, if s ≥ s̄, then by setting (μ² + 2s)(μ² + τ_max)/(μ² + s)² − 1 < 1, it can be shown that ρ(T₁) < 1 whenever τ_max < μ². Furthermore, for τ_max ≥ μ², the spectral radius of the iteration matrix is less than one if

    s > max{ s̄ ,  (τ_max − μ² + √(τ_max² − μ⁴))/2 }.

To find the optimal parameter of the ULT-I method, we minimize the spectral radius of the iteration matrix; that is,

    s_opt = arg min_s ρ(T₁).

From the above discussion, we see that

    ρ(T₁) = max{ 1 − (μ² + 2s)(μ² + τ_min)/(μ² + s)² ,  (μ² + 2s)(μ² + τ_max)/(μ² + s)² − 1 }.

If s_opt is the optimal point, it must satisfy the equation

    1 − (μ² + 2s)(μ² + τ_min)/(μ² + s)² = (μ² + 2s)(μ² + τ_max)/(μ² + s)² − 1.

The solution s_opt of this equation is

    s_opt = (1/2) [ τ_min + τ_max + √((τ_min + τ_max)² + 2μ²(τ_min + τ_max)) ].

It follows from τ_max ≥ τ_min ≥ 0 that the optimal parameter s_opt > 0 satisfies the convergence condition mentioned above. Thus, the optimal convergence factor is

    ρ_opt(T₁) = [ (μ² + 2s_opt)(μ² + τ_max) − (μ² + s_opt)² ] / (μ² + s_opt)²,

and the proof is completed. □
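The closed form of Theorem 3.4 can be checked against a brute-force search; in the sketch below μ, τ_min and τ_max are made-up values, and rho_T1 evaluates the spectral radius expression used in the proof.

```python
import numpy as np

mu, tmin, tmax = 0.05, 0.0, 3.0       # hypothetical mu and spectrum of A^T A

def rho_T1(s):
    # spectral radius for Q = sI, from the proof of Theorem 3.4
    r = lambda t: abs(1 - (mu**2 + 2*s) * (mu**2 + t) / (mu**2 + s)**2)
    return max(r(tmin), r(tmax))

t = tmin + tmax
s_opt = 0.5 * (t + np.sqrt(t**2 + 2 * mu**2 * t))       # Theorem 3.4
grid = np.linspace(0.01, 10, 20001)
s_num = grid[np.argmin([rho_T1(s) for s in grid])]      # brute force
print(s_opt, s_num, rho_T1(s_opt))    # the two parameters agree closely
```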

As in Theorem 3.4, the optimal parameter s_opt of the ULT-I method for solving the image restoration problem (1.6) when the parameter matrix is Q = sI + AᵀA (s > 0) is given by the following theorem.

Theorem 3.5. Let s be a positive constant and Q = sI + AᵀA. Moreover, let τ_min and τ_max be the minimum and maximum eigenvalues of AᵀA, respectively. Then for the ULT-I iteration method, the optimal parameter s_opt which minimizes the spectral radius ρ(T₁) is the solution s_opt of the equation

    (a + s)²[(b + s)² − b(d + 2s)] + [(a + s)² − a(c + 2s)](b + s)² = 0,

where

    a = μ² + τ_max,  b = μ² + τ_min,  c = μ² + 2τ_max,  d = μ² + 2τ_min.

As a consequence, the optimal convergence factor is

    ρ_opt(T₁) = [ (μ² + 2s_opt + 2τ_max)(μ² + τ_max) − (μ² + s_opt + τ_max)² ] / (μ² + s_opt + τ_max)².
Proof. Let τᵢ, i = 1, . . . , n², be the eigenvalues of AᵀA. As in Theorem 3.4,

    ρ(T₁) = max_{τᵢ ∈ σ(AᵀA)} |1 − (μ² + 2s + 2τᵢ)(μ² + τᵢ)/(μ² + s + τᵢ)²|
          = max{ |1 − (μ² + 2s + 2τ_min)(μ² + τ_min)/(μ² + s + τ_min)²| ,
                 |1 − (μ² + 2s + 2τ_max)(μ² + τ_max)/(μ² + s + τ_max)²| }.

Therefore, there exists a constant s̄ such that

    ρ(T₁) = 1 − (μ² + 2s + 2τ_min)(μ² + τ_min)/(μ² + s + τ_min)²,   if s ≤ s̄,
    ρ(T₁) = (μ² + 2s + 2τ_max)(μ² + τ_max)/(μ² + s + τ_max)² − 1,   if s ≥ s̄.    (3.14)

It can easily be shown that the spectral radius of the iteration matrix T₁ is less than one for 0 < s ≤ s̄. Now, suppose that s ≥ s̄. In this case, setting ρ(T₁) < 1 leads to the following equivalent condition:

    s² + s(μ² + τ_max) + (1 − (μ² + 2τ_max)/2)(μ² + τ_max) > 0.

If μ² + 2τ_max < 2, then the above condition is fulfilled and ρ(T₁) < 1. Furthermore, for μ² + 2τ_max > 2, the spectral radius of the iteration matrix is less than one if

    s > max{ s̄ ,  [ −(τ_max + μ²) + √((τ_max + μ²)² + 2(μ² + τ_max)(μ² + 2τ_max − 2)) ] / 2 }.

To minimize the spectral radius ρ(T₁), the optimal parameter s must satisfy the equation

    1 − (μ² + 2s + 2τ_min)(μ² + τ_min)/(μ² + s + τ_min)² = (μ² + 2s + 2τ_max)(μ² + τ_max)/(μ² + s + τ_max)² − 1.

Then the optimal parameter is the solution s_opt of the equation

    (a + s)²[(b + s)² − b(d + 2s)] + [(a + s)² − a(c + 2s)](b + s)² = 0,

where

    a = μ² + τ_max,  b = μ² + τ_min,  c = μ² + 2τ_max,  d = μ² + 2τ_min.

If a root of the above equation with s > 0 exists, then the optimal convergence factor is found by substituting s_opt directly into (3.14), yielding

    ρ_opt(T₁) = [ (μ² + 2s_opt + 2τ_max)(μ² + τ_max) − (μ² + s_opt + τ_max)² ] / (μ² + s_opt + τ_max)².

This completes the proof. □
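Since the optimality condition of Theorem 3.5 is a polynomial equation in s, it can be solved directly, e.g. with NumPy's Polynomial class. In the sketch below, μ, τ_min and τ_max are illustrative values, and selecting the smallest positive real root as s_opt is our assumption; the choice can be verified by evaluating (3.14).

```python
import numpy as np
from numpy.polynomial import Polynomial

mu, tmin, tmax = 0.05, 0.0, 3.0               # illustrative spectrum of A^T A
a, b = mu**2 + tmax, mu**2 + tmin
c, d = mu**2 + 2 * tmax, mu**2 + 2 * tmin

s = Polynomial([0, 1])                        # the indeterminate s
eq = (a + s)**2 * ((b + s)**2 - b * (d + 2 * s)) \
     + ((a + s)**2 - a * (c + 2 * s)) * (b + s)**2
roots = eq.roots()
pos = [r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0]
s_opt = min(pos)                              # assumed choice of root
rho_opt = ((mu**2 + 2*s_opt + 2*tmax) * (mu**2 + tmax)
           / (mu**2 + s_opt + tmax)**2) - 1
print("s_opt =", s_opt, " rho_opt =", rho_opt)
```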

4. The second type of ULT splitting method

In this section, we discuss the second type of ULT (ULT-II) splitting iteration method for solving the augmented system (2.1) with nonzero (2, 2)-block and consider using this method to solve the linear system (1.6) originating from image restoration. Assuming that Q ∈ R^{m×m} is a given symmetric positive definite matrix, two splittings of the coefficient matrix K are given as follows:

    K = [ E       0 ] − [ 0   −B   ] = L₁ − L₂
        [ −Bᵀ     Q ]   [ 0   Q−C ]

      = [ E    B   ] − [ 0    0 ] = L₃ − L₄.    (4.1)
        [ 0    C+Q ]   [ Bᵀ   Q ]

For C = 0, the ULT-II splittings (4.1) reduce to the triangular splittings presented in [47]. Using the ULT-II splittings of the matrix K in (4.1), an equivalent form of the augmented system (2.1) can be written as two systems of fixed-point equations:

    L₁ x^(k+1/2) = L₂ x^(k) + b,
    L₃ x^(k+1)   = L₄ x^(k+1/2) + b,      k = 0, 1, . . . .    (4.2)

By iterating alternately between these two fixed-point systems in their blockwise forms, the ULT-II splitting iteration method for the augmented system (2.1) can be derived as follows:

    [ E       0 ] x^(k+1/2) = [ 0   −B   ] x^(k) + b,
    [ −Bᵀ     Q ]             [ 0   Q−C ]
                                                              k = 0, 1, . . . .    (4.3)
    [ E    B   ] x^(k+1) = [ 0    0 ] x^(k+1/2) + b,
    [ 0    C+Q ]           [ Bᵀ   Q ]

Since the ULT-II iteration alternates between the lower triangular matrix L₁ and the upper triangular matrix L₃ at each iterate, it is convenient to solve these two subsystems by iterative methods.

It can be seen that a parameter matrix Q is incorporated in each iteration scheme of Eq. (4.3). In actual implementations, we may choose an appropriate parameter matrix Q to enhance the convergence rate of the ULT-II iteration method. This motivates us to choose an ideal (or optimal) matrix Q so as to improve the efficiency of the ULT-II method in solving the augmented system (2.1).
From (4.3), we can rewrite the ULT-II iteration method as the standard stationary iteration scheme

    x^(k+1) = T₂ x^(k) + c₂,      k = 0, 1, 2, . . . ,

where

    T₂ = [ E    B   ]⁻¹ [ 0    0 ] [ E       0 ]⁻¹ [ 0   −B   ],    (4.4)
         [ 0    C+Q ]    [ Bᵀ   Q ] [ −Bᵀ     Q ]    [ 0   Q−C ]

and

    c₂ = [ E    B   ]⁻¹ ( I_{n+m} + [ 0    0 ] [ E       0 ]⁻¹ ) b ≜ P⁻¹_{ULT-II} b,
         [ 0    C+Q ]                [ Bᵀ   Q ] [ −Bᵀ     Q ]

where P⁻¹_{ULT-II} = L₃⁻¹(I_{n+m} + L₄L₁⁻¹); see also [48]. Here, the matrices T₂ and P⁻¹_{ULT-II} K are, respectively, the iteration matrix and the preconditioned matrix of the ULT-II iteration method. The matrix P_{ULT-II} is called the preconditioner of the ULT-II iteration method.

5. Convergence analysis of the ULT-II method

In this section we investigate the convergence properties of the proposed method. Based on the convergence criterion
for the two-step splitting iteration framework from Lemma 3.1, we can obtain the following convergence results.

Theorem 5.1. For the augmented system (2.1), let E ∈ R^{n×n} be nonsingular, C ∈ R^{m×m} be symmetric positive semi-definite, and B ∈ R^{n×m} be a rectangular matrix with null(B) ∩ null(C) = {0}. If the parameter matrix Q ∈ R^{m×m} is symmetric positive definite, then the iteration matrix T₂ of the ULT-II iteration is given by

    T₂ = [ 0   −E⁻¹B[(C+Q)⁻¹(−C + Q) − 2(C+Q)⁻¹BᵀE⁻¹B] ]    (5.1)
         [ 0    I_m − 2(C+Q)⁻¹(C + BᵀE⁻¹B)               ]

Moreover, λ = 0 is an eigenvalue of the iteration matrix T₂ with multiplicity n, and the other m eigenvalues of T₂ satisfy λᵢ = 1 − 2ηᵢ, where ηᵢ (i = 1, 2, . . . , m) are the eigenvalues of the matrix (C + Q)⁻¹(C + BᵀE⁻¹B).

Proof. By setting M₁ = L₁, N₁ = L₂, M₂ = L₃, N₂ = L₄ in Lemma 3.1, and by employing algebraic manipulations, we obtain

    T₂ = [ E    B   ]⁻¹ [ 0    0 ] [ E       0 ]⁻¹ [ 0   −B   ]
         [ 0    C+Q ]    [ Bᵀ   Q ] [ −Bᵀ     Q ]    [ 0   Q−C ]

       = [ E⁻¹   −E⁻¹B(C+Q)⁻¹ ] [ 0    0 ] [ E⁻¹          0    ] [ 0   −B   ]
         [ 0       (C+Q)⁻¹     ] [ Bᵀ   Q ] [ Q⁻¹BᵀE⁻¹   Q⁻¹ ] [ 0   Q−C ]

       = [ −E⁻¹B(C+Q)⁻¹Bᵀ   −E⁻¹B(C+Q)⁻¹Q ] [ 0   −E⁻¹B                      ]
         [ (C+Q)⁻¹Bᵀ          (C+Q)⁻¹Q      ] [ 0   −Q⁻¹BᵀE⁻¹B + Q⁻¹(Q−C) ]

       = [ 0   −E⁻¹B[(C+Q)⁻¹(−C + Q) − 2(C+Q)⁻¹BᵀE⁻¹B] ]
         [ 0    (C+Q)⁻¹(−C + Q) − 2(C+Q)⁻¹BᵀE⁻¹B        ]

       = [ 0   −E⁻¹B[(C+Q)⁻¹(−C + Q) − 2(C+Q)⁻¹BᵀE⁻¹B] ]
         [ 0    I_m − 2(C+Q)⁻¹(C + BᵀE⁻¹B)               ].    (5.2)

If λ is an eigenvalue of the iteration matrix T₂, then we have

    |λI_{n+m} − T₂| = λⁿ |(λ − 1)I_m + 2(C+Q)⁻¹(C + BᵀE⁻¹B)|.

So λ = 0 is an eigenvalue of T₂ with multiplicity n, and the other m eigenvalues of T₂ satisfy λᵢ = 1 − 2ηᵢ, where ηᵢ (i = 1, 2, . . . , m) are the eigenvalues of (C + Q)⁻¹(C + BᵀE⁻¹B). This completes the proof. □
In particular, for the image restoration problem (1.6), if we set E = I, B = A, and C = μ²I (here I is the identity matrix of order n²), then the convergence property of the ULT-II iteration method simplifies as in the following corollary.

Corollary 5.1. For K̄ ∈ R^{2n²×2n²}, suppose that the parameter matrix Q ∈ R^{n²×n²} is symmetric positive definite. Then the corresponding iteration matrix T₂ of the ULT-II method is given by

    T₂ = [ 0   −A[(μ²I+Q)⁻¹(−μ²I + Q) − 2(μ²I+Q)⁻¹AᵀA] ]    (5.3)
         [ 0    I − 2(μ²I+Q)⁻¹(μ²I + AᵀA)                ]

Moreover, if λ is an eigenvalue of the iteration matrix T₂ of the ULT-II method, then λ = 0 is an eigenvalue of T₂ with multiplicity n², and the other n² eigenvalues of T₂ satisfy λᵢ = 1 − 2ηᵢ, where ηᵢ (i = 1, 2, . . . , n²) are the eigenvalues of the matrix (μ²I + Q)⁻¹(μ²I + AᵀA).

In the next theorem, necessary and sufficient conditions are given for guaranteeing the convergence of the ULT-II method in solving the image restoration problem.

Theorem 5.2. For K̄ ∈ R^{2n²×2n²}, suppose that the parameter matrix Q ∈ R^{n²×n²} is symmetric positive definite. Let T₂₂ = (μ²I + Q)⁻¹(μ²I + AᵀA). Then the ULT-II method is convergent if and only if

    η_min(T₂₂) > 0  and  η_max(T₂₂) < 1,

where η_max and η_min are the largest and smallest eigenvalues of the matrix T₂₂, respectively.

Proof. From Corollary 5.1, it can be shown that ρ(T₂) < 1 if and only if |1 − 2ηᵢ| < 1, i = 1, 2, . . . , n², where ηᵢ are the eigenvalues of the matrix T₂₂; this completes the proof. □


Now, we consider a special case of Theorem 5.2. To do so, suppose that the parameter matrix is defined as Q = sI (s > 0),
then the above convergence theory simplifies to the following result.

Corollary 5.2. Let s be a positive constant and let Q = sI. Then the ULT-II method is convergent to the solution of linear system
(1.6) if and only if
τmax (AT A ) < s, (5.4)
where τ max (AT A) denotes the largest eigenvalue of the matrix AT A.

Proof. From Theorem 5.2 and by setting Q = sI, it can be shown that the ULT-II iteration method is convergent if and only
if
 1
  1

ηmin ( μ2 I + A T A ) > 0 and ηmax ( μ2 I + A T A ) < 1 ,
μ2 + s μ2 + s
or equivalently,
1 1
(μ2 + τmin (AT A )) > 0 and (μ2 + τmax (AT A )) < 1, (5.5)
μ2 + s μ2 + s
where τ min (AT A) denotes the smallest eigenvalue of the matrix AT A. Due to the semi-definiteness of the matrix AT A, the first
equation in (5.5) is always true. Then by simplifying the second equation in (5.5), we can derive that
τmax (AT A ) < s.
This completes the proof. 

Now, consider the coefficient matrix K̄ in the image restoration problem (1.6) and the preconditioner P_{ULT-II} in Eq. (4.4). The following theorem describes the eigenvalue distribution of the preconditioned matrix P⁻¹_{ULT-II} K̄.

Theorem 5.3. Assume that Q ∈ R^{n²×n²} is a symmetric positive definite matrix. Let the preconditioner P_{ULT-II} be defined as in (4.4). Furthermore, let sp(Q) ⊆ [σ₁, σ_{n²}] and sp(AᵀA) ⊆ [τ₁, τ_{n²}]. Then the preconditioned matrix P⁻¹_{ULT-II} K̄ has an eigenvalue 1 with multiplicity at least n². The remaining eigenvalues are real and located in the positive interval

    [ 2(μ² + τ₁)/(μ² + σ_{n²}) ,  2(μ² + τ_{n²})/(μ² + σ₁) ].

Proof. It is easy to see that the preconditioned matrix P⁻¹_{ULT-II} K̄ can be obtained from the iteration matrix T₂ as follows:

    P⁻¹_{ULT-II} K̄ = I_{2n²} − T₂ = [ I   A[(μ²I+Q)⁻¹(−μ²I + Q) − 2(μ²I+Q)⁻¹AᵀA] ]    (5.6)
                                      [ 0   2(μ²I+Q)⁻¹(μ²I + AᵀA)                     ]

It follows from (5.6) that the preconditioned matrix P⁻¹_{ULT-II} K̄ has an eigenvalue 1 with multiplicity at least n², and the remaining eigenvalues are the same as those of the matrix 2(μ²I + Q)⁻¹(μ²I + AᵀA).

Since the matrices Q and AᵀA are, respectively, symmetric positive definite and symmetric positive semi-definite, we know that σ_{n²} ≥ σ₁ > 0, τ_{n²} ≥ τ₁ ≥ 0,

    sp(μ²I + Q) ⊆ [μ² + σ₁, μ² + σ_{n²}]    (5.7)

and

    sp(μ²I + AᵀA) ⊆ [μ² + τ₁, μ² + τ_{n²}].    (5.8)

Then, from Eq. (5.7), we get

    sp((μ²I + Q)⁻¹) ⊆ [ 1/(μ² + σ_{n²}) ,  1/(μ² + σ₁) ].    (5.9)

Therefore, Eqs. (5.8) and (5.9) imply that the remaining eigenvalues of the preconditioned matrix P⁻¹_{ULT-II} K̄ are real and located in the positive interval

    [ 2(μ² + τ₁)/(μ² + σ_{n²}) ,  2(μ² + τ_{n²})/(μ² + σ₁) ].

This completes the proof. □

In the next theorem, the optimal parameter s_opt of the ULT-II method is given for solving the image restoration problem (1.6) when the parameter matrix is Q = sI ∈ R^{n²×n²} (s > 0).

Theorem 5.4. Let s be a positive constant and Q = sI ∈ R^{n²×n²}. Furthermore, suppose that τ_min and τ_max are the minimum and maximum eigenvalues of AᵀA, respectively. Then the optimal parameter s_opt of the ULT-II iteration method which minimizes the spectral radius ρ(T₂) is

    s_opt = μ² + τ_min + τ_max.

In addition, the optimal spectral radius is

    ρ_opt(T₂) = (τ_max − τ_min) / (τ_max + τ_min + 2μ²).    (5.10)

Proof. Let τᵢ, i = 1, . . . , n², be the eigenvalues of AᵀA. To guarantee the convergence of the ULT-II method, from the expression (5.3) we have to show that ρ(T₂) < 1 under the condition provided by the theorem. We have

    ρ(T₂) = max_{τᵢ ∈ σ(AᵀA)} |1 − 2(μ² + τᵢ)/(μ² + s)|
          = max{ |1 − 2(μ² + τ_min)/(μ² + s)| ,  |1 − 2(μ² + τ_max)/(μ² + s)| }.

Therefore, there exists a constant s̄ such that

    ρ(T₂) = 1 − 2(μ² + τ_min)/(μ² + s),   if 0 < s ≤ s̄,
    ρ(T₂) = 2(μ² + τ_max)/(μ² + s) − 1,   if s ≥ s̄.    (5.11)

If μ² + 2τ_min < s ≤ s̄, then from (5.11) we have ρ(T₂) < 1. Otherwise, if s ≥ max{τ_max, s̄} and s̄ < s < μ² + 2τ_max, then it can be shown that ρ(T₂) < 1.

To find the optimal parameter of the ULT-II method, we minimize the spectral radius of the iteration matrix; that is,

    s_opt = arg min_s ρ(T₂).

Thus, if s_opt is the optimal point, it must satisfy the equation

    1 − 2(μ² + τ_min)/(μ² + s_opt) = 2(μ² + τ_max)/(μ² + s_opt) − 1,

which yields

    s_opt = μ² + τ_min + τ_max > 0.

After verifying that the convergence conditions μ² + 2τ_min < s_opt and τ_max < s_opt < μ² + 2τ_max hold, and substituting s_opt into Eq. (5.11), the desired result is obtained. □

Next, we present the optimal parameter s_opt of the ULT-II method for solving the linear system (1.6) arising from image restoration when the parameter matrix is Q = sI + AᵀA (s > 0).

Theorem 5.5. Let s be a positive constant and Q = sI + AᵀA. Furthermore, suppose that τ_min and τ_max are the minimum and maximum eigenvalues of AᵀA, respectively. Then for the ULT-II iteration method, the optimal parameter s_opt which minimizes the spectral radius ρ(T₂) is

    s_opt = √((μ² + τ_min)(μ² + τ_max)).

Correspondingly, the optimal spectral radius is

    ρ_opt(T₂) = [ μ² + τ_max − √((μ² + τ_min)(μ² + τ_max)) ] / [ μ² + τ_max + √((μ² + τ_min)(μ² + τ_max)) ].    (5.12)

Proof. Let τᵢ, i = 1, . . . , n², be the eigenvalues of AᵀA. As in the proof of Theorem 5.4,

    ρ(T₂) = max_{τᵢ ∈ σ(AᵀA)} |1 − 2(μ² + τᵢ)/(μ² + s + τᵢ)|
          = max{ |1 − 2(μ² + τ_min)/(μ² + s + τ_min)| ,  |1 − 2(μ² + τ_max)/(μ² + s + τ_max)| }.

Therefore, there exists a constant s̄ such that

    ρ(T₂) = 1 − 2(μ² + τ_min)/(μ² + s + τ_min),   if s ≤ s̄,
    ρ(T₂) = 2(μ² + τ_max)/(μ² + s + τ_max) − 1,   if s ≥ s̄.    (5.13)

From Eq. (5.13), it can be seen that ρ(T₂) < 1 whenever s > 0. To minimize the spectral radius ρ(T₂), the optimal parameter s must satisfy the equation

    1 − 2(μ² + τ_min)/(μ² + s + τ_min) = 2(μ² + τ_max)/(μ² + s + τ_max) − 1.

Then the optimal parameter is

    s_opt = √((μ² + τ_min)(μ² + τ_max)) > 0.

It is not difficult to verify that the optimal parameter s_opt > 0 satisfies the convergence condition. Moreover, by substituting the optimal parameter s_opt into 1 − 2(μ² + τ_min)/(μ² + s + τ_min) or 2(μ² + τ_max)/(μ² + s + τ_max) − 1, the corresponding convergence factor is

    ρ_opt(T₂) = [ μ² + τ_max − √((μ² + τ_min)(μ² + τ_max)) ] / [ μ² + τ_max + √((μ² + τ_min)(μ² + τ_max)) ].

This completes the proof. □
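Both closed forms for the ULT-II method can be verified in the same brute-force manner as before (μ, τ_min and τ_max below are again illustrative values of ours):

```python
import numpy as np

mu, tmin, tmax = 0.05, 0.1, 3.0
grid = np.linspace(1e-3, 20, 40001)

# Q = sI (Theorem 5.4): spectral radius from (5.11)
rho_q1 = lambda s: max(abs(1 - 2*(mu**2 + tmin)/(mu**2 + s)),
                       abs(1 - 2*(mu**2 + tmax)/(mu**2 + s)))
print(mu**2 + tmin + tmax,
      grid[np.argmin([rho_q1(s) for s in grid])])        # nearly equal

# Q = sI + A^T A (Theorem 5.5): spectral radius from (5.13)
rho_q2 = lambda s: max(abs(1 - 2*(mu**2 + tmin)/(mu**2 + s + tmin)),
                       abs(1 - 2*(mu**2 + tmax)/(mu**2 + s + tmax)))
print(np.sqrt((mu**2 + tmin) * (mu**2 + tmax)),
      grid[np.argmin([rho_q2(s) for s in grid])])        # nearly equal
```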

Two concrete algorithms for image restoration

In this subsection, two choices of the matrix Q are considered and the resulting schemes are summarized in two concrete algorithms. Specifically, we take Q = sI and Q = sI + AᵀA and implement the ULT-type methods with these parameter matrices.

(i) Q = sI: In this case, we take Q₁ = sI and apply the ULT-I and ULT-II methods to solve the image restoration problem (1.6). For simplicity, we denote the new methods by ULT-I Q1 and ULT-II Q1 when this parameter matrix is used. To implement the ULT-I Q1 method (2.4), it can be seen that

    [ I       0       ]⁻¹   [ I                0           ]
    [ −Aᵀ   (μ²+s)I ]     = [ (1/(μ²+s))Aᵀ   (1/(μ²+s))I ],    (5.14)

    [ I    A        ]⁻¹   [ I   −(1/(μ²+s))A ]
    [ 0    (μ²+s)I ]     = [ 0    (1/(μ²+s))I  ].    (5.15)

Hence, the ULT-I Q1 method can be written as Algorithm 1 below.

To implement the ULT-II Q1 method for the image restoration problem, note that

    [ I      0  ]⁻¹   [ I          0      ]
    [ −Aᵀ   sI ]     = [ (1/s)Aᵀ   (1/s)I ],

and, from Eq. (5.15), the ULT-II Q1 method (4.3) is obtained by substituting the following expression into step 6 of Algorithm 1:

    f^(k+1/2) := (1/s) (−AᵀA f^(k) + Aᵀ g + (s − μ²) f^(k)).

Algorithm 1 ULT-I Q1 method.

1:  Let f^(0) = g and e^(0) = g − A f^(0) be the initial guesses
2:  Choose the maximum number M of outer iterations and a very small positive number ε
3:  r^(0) := b̄ − K̄ x̄^(0)
4:  while ‖r^(k)‖₂ / ‖r^(0)‖₂ > ε and k < M do
5:      e^(k+1/2) := −A f^(k) + g
6:      f^(k+1/2) := (1/(μ²+s)) (−AᵀA f^(k) + Aᵀ g + s f^(k))
7:      e^(k+1) := g − (1/(μ²+s)) (AAᵀ e^(k+1/2) + s A f^(k+1/2))
8:      f^(k+1) := (1/(μ²+s)) (Aᵀ e^(k+1/2) + s f^(k+1/2))
9:      r^(k+1) := b̄ − K̄ x̄^(k+1)
10:     k := k + 1
11: end while
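A compact NumPy transcription of Algorithm 1 might look as follows. Here A is a dense stand-in for the blurring matrix (in practice the products with A and Aᵀ would be done with FFTs, as discussed at the end of this section), and forming the residual for the augmented system (1.6) is our interpretation of the r := b̄ − K̄x̄ lines.

```python
import numpy as np

def ult_i_q1(A, g, mu, s, M=200, eps=1e-3):
    """Sketch of Algorithm 1 (ULT-I Q1) for the linear system (1.6)."""
    f = g.copy()
    e = g - A @ f
    # residual of (1.6): rows e + A f = g and -A^T e + mu^2 f = 0
    res = lambda e, f: np.linalg.norm(
        np.concatenate([g - e - A @ f, A.T @ e - mu**2 * f]))
    r0 = res(e, f)
    for _ in range(M):
        e_half = -A @ f + g
        f_half = (-A.T @ (A @ f) + A.T @ g + s * f) / (mu**2 + s)
        e = g - (A @ (A.T @ e_half) + s * (A @ f_half)) / (mu**2 + s)
        f = (A.T @ e_half + s * f_half) / (mu**2 + s)
        if res(e, f) / r0 <= eps:          # stopping criterion of step 4
            break
    return e, f
```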

Algorithm 2 ULT-I Q2 method.

1:  Let f^(0) = g and e^(0) = g − A f^(0) be the initial guesses
2:  Choose the maximum number M of outer iterations and a very small positive number ε
3:  r^(0) := b̄ − K̄ x̄^(0)
4:  while ‖r^(k)‖₂ / ‖r^(0)‖₂ > ε and k < M do
5:      e^(k+1/2) := −A f^(k) + g
6:      Solve ((μ² + s)I + AᵀA) f^(k+1/2) = Aᵀ(−A f^(k) + g) + (sI + AᵀA) f^(k)
7:      Solve ((μ² + s)I + AᵀA) f^(k+1) = Aᵀ e^(k+1/2) + (sI + AᵀA) f^(k+1/2), and set e^(k+1) := g − A f^(k+1)
8:      r^(k+1) := b̄ − K̄ x̄^(k+1)
9:      k := k + 1
10: end while

(ii) Q = sI + AᵀA: In this case, the ULT-I and ULT-II methods are implemented to solve the image restoration problem (1.6) with Q₂ = sI + AᵀA. The new methods are denoted by ULT-I Q2 and ULT-II Q2 for the parameter matrix Q₂. Similarly to the previous case, the ULT-I Q2 method is summarized in Algorithm 2.

In step 6 of Algorithm 2, it is easily seen that the coefficient matrix is symmetric positive definite; hence the conjugate gradient (CG) method can be efficiently applied to solve this system. The linear system in step 7 can be solved by classical iterative methods; due to the structure of the matrix A, a Krylov subspace method (for instance, the generalized minimal residual (GMRES) method) can also be efficiently applied [54].

An algorithm for the ULT-II Q2 method is obtained from Algorithm 2 by substituting the following expression into step 6:

    (sI + AᵀA) f^(k+1/2) = Aᵀ(−A f^(k) + g) + ((s − μ²)I + AᵀA) f^(k).
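For the inner solves of Algorithm 2, SciPy's matrix-free solvers are a natural fit. The sketch below (a sketch of ours, using the rtol keyword of recent SciPy releases) applies CG to the SPD system of step 6 and restarted GMRES to step 7, with the tolerances reported in Section 6.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg, gmres

def inner_solves(A, g, f_k, mu, s):
    """One pass over steps 5-7 of Algorithm 2 (ULT-I Q2), as a sketch."""
    n2 = f_k.size
    # matrix-free operator for (mu^2 + s) I + A^T A
    op = LinearOperator((n2, n2),
                        matvec=lambda v: (mu**2 + s) * v + A.T @ (A @ v))

    e_half = -A @ f_k + g                                    # step 5
    rhs1 = A.T @ e_half + (s * f_k + A.T @ (A @ f_k))
    f_half, _ = cg(op, rhs1, rtol=1e-6)                      # step 6 (SPD: CG)

    rhs2 = A.T @ e_half + (s * f_half + A.T @ (A @ f_half))
    f_next, _ = gmres(op, rhs2, rtol=1e-6, restart=15)       # step 7 (GMRES)
    return g - A @ f_next, f_next
```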
In the following, we consider the operation cost of solving Eq. (1.6) with the ULT-I (ULT-II) iterative methods. Because of the simplicity of Algorithm 1, we take Algorithm 2 as an example; a similar analysis also applies to Algorithm 1. In step 5 and the first part of step 7 of the ULT-II method, we have to compute the matrix–vector products A f^(k) and A f^(k+1). In step 6 and the second part of step 7, in addition to scalar–vector multiplications, the main cost is the matrix–matrix–vector products AᵀA f^(k+1/2), AᵀA f^(k) and AᵀA f^(k+1), and the matrix–vector products Aᵀ e^(k+1/2) and Aᵀ g. Since the matrices that arise in image restoration are highly structured, involving block circulant, block Toeplitz and block Toeplitz-plus-Hankel matrices, for an n × n image, matrix–vector multiplications with the blurring matrix A ∈ R^{n²×n²} can be done with O(n² log n) arithmetic operations using fast Fourier transforms (FFTs). Since the transpose Aᵀ of a block Toeplitz (circulant) matrix A is again block Toeplitz (circulant), the matrix–matrix–vector products can be obtained by applying FFTs twice.
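For periodic BCs, for example, the BCCB matrix A is diagonalized by the two-dimensional FFT, so the products A f and Aᵀ f never require forming A. A sketch (assuming the PSF has been zero-padded to the image size and centred) is:

```python
import numpy as np

def bccb_matvec(psf_padded, f_img):
    # Eigenvalues of the BCCB matrix A are the 2-D FFT of the circularly
    # shifted PSF; A f is a componentwise product in the Fourier domain.
    lam = np.fft.fft2(np.fft.ifftshift(psf_padded))
    return np.real(np.fft.ifft2(lam * np.fft.fft2(f_img)))            # A f

def bccb_rmatvec(psf_padded, f_img):
    # For real A, A^T corresponds to the conjugate eigenvalues.
    lam = np.fft.fft2(np.fft.ifftshift(psf_padded))
    return np.real(np.fft.ifft2(np.conj(lam) * np.fft.fft2(f_img)))   # A^T f
```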

Fig. 1. True image, PSF and degraded image in Example 1.

6. Illustrative examples

In this section, we use the ULT-type methods to solve the linear systems arising from the image restoration problem. All methods have been implemented in Matlab 8.2 on a PC with a Core i7 2.67 GHz CPU and 4.00 GB RAM.

As noted, the regularization parameter must be determined before applying the iterative methods. Several methods have been proposed to approximate the optimal value of the regularization parameter, such as the L-curve criterion [55], the discrepancy principle [56], and the generalized cross validation (GCV) method [57]. The GCV method is independent of a priori knowledge about the noise variance, which makes it very practical for approximating the regularization parameter; it is therefore the method used in this study. The GCV function is defined as follows:

    G(μ) = ‖A(AᵀA + μ²I)⁻¹Aᵀ g − g‖₂² / ( trace(I − A(AᵀA + μ²I)⁻¹Aᵀ) )².    (6.1)

The regularization parameter is given by the value which minimizes the GCV function. Since finding the exact minimizer is a demanding task, the Kronecker product approximation can be used effectively to approximate the best value of μ [12].
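For a small test problem, the GCV function (6.1) can be evaluated directly through the SVD of A, as in the sketch below (which assumes a square blurring matrix); for full-size images the paper instead relies on the Kronecker product approximation, so this is only a conceptual stand-in.

```python
import numpy as np

def gcv(mu, A, g):
    """Evaluate (6.1) via the SVD of a small, square blurring matrix A."""
    U, sv, _ = np.linalg.svd(A)
    beta = U.T @ g
    w = mu**2 / (sv**2 + mu**2)          # 1 - Tikhonov filter factors
    return np.sum((w * beta)**2) / np.sum(w)**2

def choose_mu(A, g, grid=np.logspace(-4, 0, 200)):
    # simple grid minimization of G(mu)
    return min(grid, key=lambda mu: gcv(mu, A, g))
```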
Two quantities are applied to measure the effectiveness of these methods: the peak signal-to-noise ratio (PSNR) and the relative error (R.E), both widely used in the literature to assess the performance of image restoration methods. They are defined as follows:

    PSNR = 10 log₁₀( 255² × n² / ‖f_res − f_true‖₂² ),      R.E = ‖f_res − f_true‖₂ / ‖f_true‖₂,

where the size of the image is n × n and f_true, f_res are the original and restored images, respectively.
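In code, the two measures are one-liners (images taken here as vectorized arrays with intensities in [0, 255]):

```python
import numpy as np

def psnr(f_res, f_true):
    n2 = f_true.size                         # n^2 pixels for an n-by-n image
    return 10 * np.log10(255.0**2 * n2 / np.linalg.norm(f_res - f_true)**2)

def rel_err(f_res, f_true):
    return np.linalg.norm(f_res - f_true) / np.linalg.norm(f_true)
```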
For comparison purposes, the SHSS and RGHSS methods are also used to restore the images. The optimal values of the unknown parameters for the SHSS and RGHSS methods are given in [49,50]. Note that the singular values of the matrix A should be determined to obtain the optimal parameters of the SHSS, RGHSS and ULT methods. The singular values of the matrix A are approximated using matrices B_k and C_k which minimize ‖A − Σ_k B_k ⊗ C_k‖. In all the examples, these approximated singular values are used to obtain the optimal parameters in Theorems 3.4, 3.5, 5.4 and 5.5. Furthermore, the maximum number of outer iterations M and the stopping tolerance ε in Examples 1 and 2 are set to 200 and 10⁻³, respectively.

Two linear systems have to be solved in each iteration of Algorithm 2. The CG method is applied to solve the linear system in step 6 with stopping tolerance ζ = 10⁻⁶; the linear system in step 7 is solved by the restarted GMRES method in Matlab, with restart = 15 and ζ = 10⁻⁶.

Example 1. In this example, we use a 256 × 256 cameraman grayscale image as the true image. A non-symmetric PSF, defined as follows, is applied to blur the true image:

    h_ij = c/(i + j + 1),   if 0 ≤ i, j ≤ 9,
    h_ij = 0,               otherwise,

where c is the normalization constant such that Σ_{i,j} h_{i,j} = 1. The degraded image is obtained by adding 1% Gaussian white noise to the blurred image; the PSNR of the degraded image is 15.47. The observed image domain is indicated by white lines and is drawn along with the PSF and the degraded image in Fig. 1. The ULT methods are implemented to solve the image restoration problem with the values of the unknown parameter s given in Table 1. These values follow the optimal values presented in the theorems, and our numerical tests confirm that the theoretical optimal parameters are suitable choices in the numerical experiments; for instance, the behaviour of the spectral radius of the iteration matrix for the various methods is shown in Fig. 2. The PSNR and relative error of the various versions of the ULT method are given in Tables 2 and 3.

Table 1
Value of parameter s for various methods in Example 1.

Methods \ BCs    Zero    Periodic    Reflexive    Antireflective    Mean
ULT-I Q1         1.04    1.05        1.05         1.04              13.38
ULT-II Q1        1.05    1.06        1.04         1.04              13.38
ULT-I Q2         0.25    0.25        0.20         0.21              0.97
ULT-II Q2        0.26    0.26        0.20         0.21              0.98
Fig. 2. Spectral radius of iteration matrix with reflexive BCs for various values of s in Example 1.

Table 2
PSNR values of various methods in Example 1.

Methods \ BCs    Zero     Periodic    Reflexive    Antireflective    Mean
ULT-I Q1         14.43    17.05       25.00        26.89             25.42
ULT-II Q1        14.44    17.05       25.00        26.89             25.42
ULT-I Q2         14.43    17.05       25.00        26.89             25.42
ULT-II Q2        14.43    17.05       25.00        26.89             25.42

Table 3
Relative error of various methods in Example 1.

Methods \ BCs    Zero      Periodic    Reflexive    Antireflective    Mean
ULT-I Q1         0.3862    0.2859      0.1145       0.0921            0.1091
ULT-II Q1        0.3861    0.2859      0.1145       0.0921            0.1091
ULT-I Q2         0.3863    0.2860      0.1145       0.0921            0.1091
ULT-II Q2        0.3863    0.2860      0.1145       0.0921            0.1091

Because of the specified stopping criteria of the algorithms and the convergence of the ULT methods, the iterations stop before the maximum number of outer iterations M is reached, and hence the PSNR (and relative error) at the final step of all methods should be approximately the same; see Table 2. Furthermore, the restored images are drawn for zero, periodic, reflexive, antireflective and mean BCs in Figs. 3–6. As the numerical results show, the presented versions of the ULT method can be effectively applied to solve the image restoration problem. For comparison purposes, the SHSS and RGHSS methods are also applied to restore the degraded image. The CPU time (in seconds) of the computations is reported in Table 4; the number of outer iterations (Its, for short) and total number of inner iterations (Total, for short) are given in Table 5.

The plots of PSNR with respect to the outer iteration number k are drawn in Figs. 7 and 8 to illustrate the convergence behaviour of our methods for reflexive and antireflective BCs. A part of each plot is zoomed in to show the similar behaviour of related methods. These plots show that the PSNR values of the restored images improve with increasing k, and hence the ULT methods can be confidently applied to the image restoration problem. The images restored using the various methods and the related PSNR are given in Fig. 9 for reflexive BCs and M = 10. Note that large values of the maximum number of outer iterations M lead to approximately equal PSNR and relative errors, and thus we have decreased the maximum number of outer iterations to check the performance of the methods. As we can see from the previous results, the convergence

Fig. 3. Restored images with ULT-I Q1 method for various BCs in Example 1.

Fig. 4. Restored images with ULT-II Q1 method for various BCs in Example 1.

Table 4
CPU time (in seconds) of various methods in Example 1.

Methods \ BCs    Zero     Periodic    Reflexive    Antireflective    Mean
SHSS             28.87    29.56       20.76        44.28             24.28
RGHSS            19.14    22.10       19.45        33.42             10.13
ULT-I Q1         4.63     6.02        6.12         7.85              2.68
ULT-II Q1        4.78     6.14        6.04         7.96              2.65
ULT-I Q2         11.48    12.32       10.80        16.55             5.79
ULT-II Q2        11.42    12.77       11.06        16.98             5.98

Fig. 5. Restored images with ULT-I Q2 method for various BCs in Example 1.

Fig. 6. Restored images with ULT-II Q2 method for various BCs in Example 1.

Table 5
Iteration numbers of various methods in Example 1.

                 Zero          Periodic      Reflexive     Antireflective    Mean
Methods \ BCs    Its.  Total   Its.  Total   Its.  Total   Its.  Total       Its.  Total
SHSS             34    510     50    747     56    584     80    1120        56    168
RGHSS            25    172     19    190     35    140     34    291         16    160
ULT-I Q1         43    –       52    –       96    –       98    –           127   –
ULT-II Q1        42    –       51    –       96    –       98    –           127   –
ULT-I Q2         12    178     13    193     19    299     20    311         10    166
ULT-II Q2        12    178     13    193     19    300     20    311         10    166

Fig. 7. PSNR versus the iteration number k for restored images with reflexive BCs in Example 1.

Fig. 8. PSNR versus the iteration number k for restored images with antireflective BCs in Example 1.

rate of the ULT-I Q1 and ULT-II Q1 methods is lower than that of the other methods, which leads to a large number of outer iterations. However, since no linear system has to be solved in these methods, their CPU times are substantially lower than those of the other methods. Therefore, the ULT-I Q1 and ULT-II Q1 methods can be efficiently used for large values of M without losing their advantages, and hence the images restored with the ULT-I Q1 and ULT-II Q1 methods are also drawn for M = 30 in Fig. 9. As these results show, the proposed versions of the ULT method are more effective than the SHSS and RGHSS methods.

Example 2. In this example, we consider a 256 × 256 grayscale image and degrade it with 2% additive Gaussian white noise and a Moffat PSF. The true image is blurred with the PSF defined as follows:

    h_ij = c (1 + i²/s₁² + j²/s₂²)^{−β},

where c is the normalization constant, s₁ = s₂ = 3, β = 5 and i, j = 1, 2, . . . , 40. The PSNR of the degraded image is 23.39. The observed image domain, indicated by white lines, the PSF and the degraded image are shown in Fig. 10. The proposed versions of the ULT method have been applied to solve the image restoration problem with the values of the unknown parameters given in Table 6. These parameters follow the optimal values presented in the theorems, and the validity of the proposed values has been investigated in several tests; for instance, the behaviour of the spectral radius of the iteration matrix for the various methods is given in Fig. 11, which shows that the theoretical optimal parameters are suitable choices in the numerical experiments. The PSNR and relative error of these methods are given in Tables 7 and 8.

Fig. 9. Restored images with various methods for reflexive BCs in Example 1.

Fig. 10. True image, PSF and degraded image in Example 2.

Table 6
Values of the parameter s for various methods in Example 2.

Methods \ BCs    Zero    Periodic   Reflexive   Antireflective   Mean
ULT-I Q1         1.02    1.03       1.05        1.06             2.15
ULT-II Q1        1.02    1.03       1.06        1.07             2.18
ULT-I Q2         0.16    0.20       0.23        0.23             0.27
ULT-II Q2        0.16    0.19       0.25        0.25             0.30

The restored images obtained via the ULT methods have also been drawn for the various BCs in Figs. 12–15. As the numerical results show, the ULT method is a reliable and effective method for image restoration. For further investigation, the SHSS and RGHSS methods have also been applied to restore the degraded image. The CPU times, numbers of outer iterations and total numbers of inner iterations are reported in Tables 9 and 10. It can be seen that the CPU times of the ULT methods are lower than those of the SHSS and RGHSS methods.

Note that the numbers of outer iterations of the ULT-I Q1 and ULT-II Q1 methods are larger than those of the SHSS, RGHSS, ULT-I Q2 and ULT-II Q2 methods, while their CPU times are lower. This is due to the structure of Algorithm 1: no linear system needs to be solved in the ULT methods when Q1 = sI (Algorithm 1), which results in a smaller computational cost per iteration.
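The following schematic Python fragment illustrates this point (the vector v and the tridiagonal matrix Q2 are hypothetical stand-ins; only the relative cost of the two inner steps matters):

    import numpy as np

    s, n = 1.03, 1000
    v = np.random.rand(n)

    # Q1 = s*I: the inner "solve" Q1 z = v is a scalar division -- no linear system.
    z1 = v / s

    # A general Q2 (here a generic tridiagonal stand-in): every outer sweep requires
    # an inner solve, which is where the "Total" inner-iteration counts come from.
    Q2 = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
          + np.diag(-np.ones(n - 1), -1))
    z2 = np.linalg.solve(Q2, v)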
As in the previous example, the PSNR with respect to the outer iteration number k is shown for reflexive and antireflective BCs in Figs. 16 and 17.

Fig. 11. Spectral radius of iteration matrix with reflexive BCs for various values of s in Example 2.

Table 7
PSNR values of various methods in Example 2.

Methods \ BCs    Zero     Periodic   Reflexive   Antireflective   Mean
ULT-I Q1         23.09    26.02      28.40       28.49            28.11
ULT-II Q1        23.09    26.02      28.40       28.49            28.13
ULT-I Q2         23.08    26.02      28.40       28.49            28.12
ULT-II Q2        23.08    26.01      28.40       28.49            28.15

Table 8
Relative error of various methods in Example 2.

Methods \ BCs    Zero      Periodic   Reflexive   Antireflective   Mean
ULT-I Q1         0.2012    0.1436     0.1092      0.1080           0.1128
ULT-II Q1        0.2012    0.1436     0.1092      0.1080           0.1126
ULT-I Q2         0.2012    0.1436     0.1091      0.1080           0.1128
ULT-II Q2        0.2013    0.1438     0.1091      0.1080           0.1124

Fig. 12. Restored images with ULT-I Q1 method for various BCs in Example 2.

Fig. 13. Restored images with ULT-II Q1 method for various BCs in Example 2.

Fig. 14. Restored images with ULT-I Q2 method for various BCs in Example 2.

Table 9
CPU time of various methods in Example 2.

Methods \ BCs    Zero     Periodic   Reflexive   Antireflective   Mean
SHSS             27.52    31.39      50.63       31.11            30.86
RGHSS            22.32    19.69      26.55       24.32            13.51
ULT-I Q1          7.34     5.45       6.13        5.14             4.07
ULT-II Q1         7.23     5.41       6.37        5.20             4.05
ULT-I Q2         11.16     9.40      10.55       10.81             7.14
ULT-II Q2        11.01     9.57      10.89       10.71             7.13

Fig. 15. Restored images with ULT-II Q2 method for various BCs in Example 2.

Table 10
Iteration numbers of various methods in Example 2.

Methods \ BCs    Zero          Periodic      Reflexive     Antireflective   Mean
                 Its.  Total   Its.  Total   Its.  Total   Its.   Total     Its.  Total
SHSS             72    1080    91    1274    77    1078    38     532       72    216
RGHSS            53    787     32    321     26    289     23     184       26    208
ULT-I Q1         124   –       88    –       67    –       68     –         80    –
ULT-II Q1        123   –       87    –       66    –       68     –         80    –
ULT-I Q2         20    334     18    263     16    211     16     211       10    188
ULT-II Q2        20    332     18    266     16    209     16     210       10    180

Fig. 16. PSNR versus the iteration number k for restored images with reflexive BCs in Example 2.

Fig. 17. PSNR versus the iteration number k for restored images with antireflective BCs in Example 2.

Fig. 18. Restored images with various methods for reflexive BCs in Example 2.

From these figures, it can be seen that the convergence behaviour of the ULT-I Q1 and ULT-II Q1 methods is approximately the same as that of the ULT-I Q2 and ULT-II Q2 methods. For further investigation, the restored images obtained with the various methods and the related PSNR values are given in Fig. 18 for reflexive BCs with M = 10. Also, as discussed in the previous example, the maximum number of outer iterations has been increased to M = 20, and the restored images obtained with the ULT-I Q1 and ULT-II Q1 methods are also given in Fig. 18. Although the value of M has been increased to implement the ULT-I Q1 and ULT-II Q1 methods, several tests show that their CPU times remain lower than those of the SHSS, RGHSS, ULT-I Q2 and ULT-II Q2 methods with M = 10. Comparison between these methods shows that the new methods are more reliable and effective than the SHSS and RGHSS methods in solving image restoration problems.

Example 3. In this example, we increase the field of view and use a 256 × 256 grayscale image as the true image. The true
image is degraded with the out-of-focus PSF and 1% additive white Gaussian noise. To implement this PSF, we use the
psfDefocus function [7] with dim = 10 and R = 4. The true and degraded images are shown in Fig. 19. The PSNR of the
degraded image is 21.13.
To fairly assess the benefits of the proposed method, besides the methods used in the previous examples, we apply four classical methods (i.e., MINRES [61], PMINRES, CG and GMRES) to solve the image restoration problem.

Fig. 19. True and degraded images in Example 3.

Table 11
Comparison between various methods for periodic BCs in Example 3.

Methods              Parameters              PSNR    Relative error   CPU time
GMRES                –                       24.11   0.1180           2.90
MINRES               –                       23.66   0.1248           2.88
PMINRES              –                       24.01   0.1198           2.85
PCG                  –                       24.35   0.1152           4.43
SHSS                 α = 0.3443              24.87   0.1085           16.97
RGHSS                (α, β) = (0.37, 1.2)    24.94   0.1076           15.22
ULT-I Q1             s = 1.03                24.39   0.1147           2.79
ULT-I Q1 (M = 30)    s = 1.03                25.43   0.1018           7.64
ULT-II Q1            s = 1.03                24.39   0.1147           2.40
ULT-II Q1 (M = 30)   s = 1.03                25.44   0.1017           6.87
ULT-I Q2             s = 0.15                26.01   0.0951           30.77
ULT-II Q2            s = 0.15                26.02   0.0950           27.90

As is well known, the classical MINRES method is suitable for solving symmetric indefinite linear systems (see also [60]).
However, the linear system (1.6) to be solved here is non-symmetric. Likewise, the classical conjugate gradient (CG) method
is widely applied to symmetric positive definite linear systems, while the classical generalized minimal residual (GMRES)
method is mostly applied to non-symmetric linear systems. In light of this, the following three aspects are considered in
the numerical tests:

• Since the system (1.6) is non-symmetric, it can be transformed into the form

\begin{bmatrix} I & A \\ A^T & -\mu^2 I \end{bmatrix} \begin{bmatrix} e \\ f \end{bmatrix} = \begin{bmatrix} g \\ 0 \end{bmatrix}, \quad (6.2)

which is a symmetric indefinite linear system; the MINRES and PMINRES methods can thus be employed to solve this new system (see the sketch after this list). Here, the PMINRES method has been implemented with the block-diagonal preconditioner P_{MINRES} = \begin{bmatrix} I & 0 \\ 0 & \frac{1}{\eta}I + M \end{bmatrix}, where η = 0.01 and the matrix M is composed of the tridiagonal part of the matrix S = μ^2 I + A^T A.
• The linear system (A^T A + μ^2 I)f = A^T g is a symmetric positive definite linear system (where μ is small). Nagy et al. [59] presented a BCCB preconditioner for applying the CG method to image deblurring. To compare our methods with this classical approach, similar to the work in [49], the resulting preconditioned CG (PCG) method is applied to the image restoration problem (1.5).
• For a fair comparison, we also consider the full GMRES method, since the linear system (1.6) is non-symmetric.
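As a minimal sketch of the first approach, the augmented system (6.2) can be assembled and passed to SciPy's MINRES as follows; the matrix A and data g below are random stand-ins, not the test images of this example:

    import numpy as np
    from scipy.sparse import bmat, identity, random as sprandom
    from scipy.sparse.linalg import minres

    n = 200
    A = sprandom(n, n, density=0.05, format='csr') + identity(n)  # toy blur matrix
    g = np.random.rand(n)                                         # toy observed image
    mu = 0.05                                                     # regularization parameter

    # Symmetric indefinite augmented system (6.2): [[I, A], [A^T, -mu^2 I]][e; f] = [g; 0].
    K = bmat([[identity(n), A], [A.T, -mu ** 2 * identity(n)]], format='csr')
    rhs = np.concatenate([g, np.zeros(n)])

    sol, info = minres(K, rhs, maxiter=500)  # info == 0 signals convergence
    f = sol[n:]  # the second block is the (vectorized) restored image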

Numerical comparisons among the ULT-type methods, the SHSS method, the RGHSS method, the PCG method, the full
GMRES method, the MINRES method and the PMINRES method, with M = 10, are reported in Tables 11 and 12 for periodic
and reflexive BCs, respectively. Note that the results of the ULT-I Q1 and ULT-II Q1 methods are also given for M = 30
in these tables. Furthermore, the restored images obtained with the various methods and M = 10 are given in Figs. 20
and 21 for periodic and reflexive BCs, respectively. For further investigation, the convergence curves of the methods are
shown for periodic and reflexive BCs in Fig. 22; these were obtained by plotting ‖r^{(k)}‖_2/‖r^{(0)}‖_2 versus the outer
iteration number k. As can be seen from these figures and the previous results, the convergence speed of our method
depends on the choice of the parameter matrix Q. Indeed, the convergence speed of the ULT-I Q1 and ULT-II Q1 methods
with Q1 = sI is slower than that of the other methods. However, since there is no linear system to be solved in the ULT-I Q1
and ULT-II Q1 methods, their CPU time is smaller than that of the other methods.

Table 12
Comparison between various methods for reflexive BCs in Example 3.

Methods              Parameters              PSNR    Relative error   CPU time
GMRES                –                       23.61   0.1254           2.82
MINRES               –                       23.42   0.1281           2.72
PMINRES              –                       23.62   0.1253           2.67
PCG                  –                       23.68   0.1244           4.15
SHSS                 α = 0.3383              23.82   0.1224           30.22
RGHSS                (α, β) = (0.39, 1.34)   23.86   0.1218           20.22
ULT-I Q1             s = 1.04                23.72   0.1239           2.55
ULT-I Q1 (M = 30)    s = 1.04                23.96   0.1205           7.57
ULT-II Q1            s = 1.04                23.72   0.1239           2.26
ULT-II Q1 (M = 30)   s = 1.04                23.96   0.1205           6.78
ULT-I Q2             s = 0.21                23.92   0.1211           24.11
ULT-II Q2            s = 0.22                23.92   0.1211           24.75

It is worth mentioning that the CPU times of the MINRES, PMINRES, PCG and GMRES methods are lower than those of the
SHSS, RGHSS, ULT-I Q2 and ULT-II Q2 methods; however, their PSNR and relative errors are not as good as those of the
ULT-type methods. This is also the main reason why we use the proposed iteration methods instead of these classical
methods to solve the image restoration problem. To prevent redundancy, the results of the MINRES, PMINRES, PCG and
GMRES methods are reported only for Example 3.

From the above discussion, it can be deduced that the presented ULT methods are more reliable than the other methods
considered and can be efficiently applied to restore images.

7. SOR acceleration

In this section, we will establish the SOR acceleration scheme for the ULT-type iteration method. Letting K_1, K_2, K_3,
K_4 and L_1, L_2, L_3, L_4 be as defined in Sections 2 and 4, respectively, we first consider the following system:

\underbrace{\begin{bmatrix} K_1 & -K_2 \\ -K_4 & K_3 \end{bmatrix}}_{D} \begin{bmatrix} \bar{x} \\ \bar{y} \end{bmatrix} = \begin{bmatrix} b \\ b \end{bmatrix}. \quad (7.1)

This block system is equivalent to the system (1.6); see [53]. We have the following theorem.

Theorem 7.1. Suppose the spectral radius of K_3^{-1}K_4K_1^{-1}K_2 is smaller than 1, and suppose E and C + 2Q are non-singular. If x^* is the exact solution of (1.6), then the vector [x^*; x^*] is the exact solution of (7.1). Conversely, if [x^*; y^*] is the exact solution of (7.1), then x^* = y^* and x^* is the exact solution of (1.6).

Proof. It suffices to show that the coefficient matrix in (7.1) is nonsingular. The matrix D admits the block factorization

D = \begin{bmatrix} K_1 & -K_2 \\ -K_4 & K_3 \end{bmatrix} = \begin{bmatrix} I & 0 \\ -K_4K_1^{-1} & I \end{bmatrix} \begin{bmatrix} K_1 & -K_2 \\ 0 & K_3 - K_4K_1^{-1}K_2 \end{bmatrix}.

Since K_3 - K_4K_1^{-1}K_2 = K_3(I - K_3^{-1}K_4K_1^{-1}K_2) and ρ(K_3^{-1}K_4K_1^{-1}K_2) < 1, the Schur complement K_3 - K_4K_1^{-1}K_2 is nonsingular; because K_1 and K_3 are also nonsingular, D is nonsingular. □
In order to solve the block two-by-two linear system, we may first consider the block Jacobi iteration, which is defined as follows:

\begin{bmatrix} K_1 & 0 \\ 0 & K_3 \end{bmatrix} \begin{bmatrix} \bar{x}^{(k+1)} \\ \bar{y}^{(k+1)} \end{bmatrix} = \begin{bmatrix} 0 & K_2 \\ K_4 & 0 \end{bmatrix} \begin{bmatrix} \bar{x}^{(k)} \\ \bar{y}^{(k)} \end{bmatrix} + \begin{bmatrix} b \\ b \end{bmatrix},

or equivalently,

z^{(k+1)} = Jz^{(k)} + d,

where z^{(k)} = [\bar{x}^{(k)}; \bar{y}^{(k)}], d = [K_1^{-1}b; K_3^{-1}b] and

J = \begin{bmatrix} 0 & K_1^{-1}K_2 \\ K_3^{-1}K_4 & 0 \end{bmatrix}.
To accelerate the convergence of the block Jacobi iteration, we consider the block SOR iteration with relaxation parameter ω, which is defined as follows:

\begin{cases} \bar{x}^{(k+1)} = (1 - \omega)\bar{x}^{(k)} + \omega K_1^{-1}[K_2\bar{y}^{(k)} + b], \\ \bar{y}^{(k+1)} = (1 - \omega)\bar{y}^{(k)} + \omega K_3^{-1}[K_4\bar{x}^{(k+1)} + b], \end{cases}

or equivalently,

z^{(k+1)} = J_\omega z^{(k)} + d_\omega,

Fig. 20. Restored images with various methods for periodic BCs in Example 3.

where

J_\omega = \begin{bmatrix} (1 - \omega)I & \omega K_1^{-1}K_2 \\ \omega(1 - \omega)K_3^{-1}K_4 & (1 - \omega)I + \omega^2 K_3^{-1}K_4K_1^{-1}K_2 \end{bmatrix}.


It is worth mentioning that a similar conclusion can be drawn for the ULT-II iteration method, and is thus
omitted here. It is well known that the choice ω = 1 in the SOR method results in the block Gauss–Seidel iteration for
solving the block two-by-two linear system. We can obtain ρ(J_1) = (ρ(J))^2, that is, the asymptotic convergence rates of the
block Gauss–Seidel iteration and the ULT-type iteration are the same, and are twice that of the block Jacobi iteration. There
exists a functional relationship between the eigenvalues of the block Jacobi matrix J and the block SOR matrix J_ω. It has
been proved in [58] that, if ω ≠ 0, λ is a nonzero eigenvalue of the matrix J_ω and ν satisfies

(λ + ω − 1)^2 = λω^2ν^2,

then ν is an eigenvalue of the block Jacobi matrix J. Conversely, if ν is an eigenvalue of the block Jacobi matrix J and λ
satisfies the above equality, then λ is a nonzero eigenvalue of the matrix J_ω. When all of the eigenvalues of the block Jacobi
matrix are real, the block SOR method is convergent if and only if 0 < ω < 2. When some of the eigenvalues of the block
Jacobi matrix J are complex, the block SOR method is convergent if, for some positive number ε ∈ (0, 1), each eigenvalue
ν = δ + iβ is such that the point (δ, β) lies in the interior of the ellipse δ^2 + β^2/ε^2 = 1 and ω satisfies 0 < ω < 2/(1 + ε).
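A minimal Python sketch of the block SOR sweep above, with dense random blocks as hypothetical stand-ins for K1, K2, K3 and K4 (in practice these blocks carry structure that should be exploited); it also verifies ρ(J_1) = (ρ(J))^2 numerically:

    import numpy as np

    def block_sor(K1, K2, K3, K4, b, omega, iters=200):
        # Block SOR for [[K1, -K2], [-K4, K3]][x; y] = [b; b].
        x, y = np.zeros(b.size), np.zeros(b.size)
        for _ in range(iters):
            x = (1 - omega) * x + omega * np.linalg.solve(K1, K2 @ y + b)
            y = (1 - omega) * y + omega * np.linalg.solve(K3, K4 @ x + b)
        return x, y

    rng = np.random.default_rng(0)
    n = 20
    K1 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    K3 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    K2, K4 = 0.1 * rng.standard_normal((n, n)), 0.1 * rng.standard_normal((n, n))

    B1, B2 = np.linalg.solve(K1, K2), np.linalg.solve(K3, K4)
    J = np.block([[np.zeros((n, n)), B1], [B2, np.zeros((n, n))]])
    J1 = np.block([[np.zeros((n, n)), B1], [np.zeros((n, n)), B2 @ B1]])  # omega = 1
    rho = lambda M: np.abs(np.linalg.eigvals(M)).max()
    print(np.isclose(rho(J1), rho(J) ** 2))  # True: Gauss-Seidel doubles the Jacobi rate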

Fig. 21. Restored images with various methods for reflexive BCs in Example 3.

Fig. 22. Comparison of the residual errors using periodic (left) and reflexive (right) BCs for Example 3.

8. Conclusion

In this study, a class of upper and lower triangular (ULT) splitting iteration methods has been proposed to solve the linear
systems arising from the image restoration problem. Four different versions of the proposed method have been presented
to restore degraded images. Convergence properties of the ULT-type methods have also been investigated, and the
effectiveness of the proposed methods has been examined on three examples. The numerical examples have shown that
the new methods are more reliable and robust than two recently proposed methods for solving the image restoration
problem. Finally, we discussed the SOR acceleration scheme; its practical implications will be the subject of a follow-up
paper.

Acknowledgments

The authors are very much indebted to anonymous referees and editor for providing very useful comments and sugges-
tions, which greatly improved the original manuscript of this paper. This work is supported by the National Natural Science
Foundation of China (Nos. 11571004, 11701456).

Appendix

Let σ_1 ≥ σ_2 ≥ ··· ≥ σ_{n^2} be the singular values of the matrix A. Since the system (1.4) is typically ill-posed, the singular
values of A decay to zero. Thus, the standard approach to preconditioning cannot be used when solving ill-posed problems.
However, by recasting the original system (1.4) equivalently, via the Tikhonov regularization method, into the 2n^2-by-2n^2
linear system (1.6), the magnitude of these singular values can be greatly enhanced, as illustrated by employing the singular
value decomposition A = UΣV^T (Σ = diag(σ_1, ..., σ_{n^2})) as follows.
First, we have

\bar{K} = \begin{bmatrix} I & A \\ -A^T & \mu^2 I \end{bmatrix} = \begin{bmatrix} I & U\Sigma V^T \\ -V\Sigma U^T & \mu^2 I \end{bmatrix} = \begin{bmatrix} U & 0 \\ 0 & V \end{bmatrix} \underbrace{\begin{bmatrix} I & \Sigma \\ -\Sigma & \mu^2 I \end{bmatrix}}_{W} \begin{bmatrix} U^T & 0 \\ 0 & V^T \end{bmatrix},

from which we obtain

W^T W = \begin{bmatrix} I & \Sigma \\ -\Sigma & \mu^2 I \end{bmatrix}^T \begin{bmatrix} I & \Sigma \\ -\Sigma & \mu^2 I \end{bmatrix} = \begin{bmatrix} I + \Sigma^2 & (1 - \mu^2)\Sigma \\ (1 - \mu^2)\Sigma & \mu^4 I + \Sigma^2 \end{bmatrix},

a matrix whose blocks are the diagonal matrices diag(1 + σ_i^2), diag((1 − μ^2)σ_i) and diag(μ^4 + σ_i^2), i = 1, ..., n^2.

Furthermore, it follows from the characteristic polynomial of the matrix W^T W that

\left|\lambda I_{2n^2} - W^T W\right| = \begin{vmatrix} \lambda I - (I + \Sigma^2) & (\mu^2 - 1)\Sigma \\ (\mu^2 - 1)\Sigma & \lambda I - (\mu^4 I + \Sigma^2) \end{vmatrix} = 0. \quad (A.1)

Case 1: When λ ≠ 1 + σ_i^2 (i = 1, ..., n^2), eliminating the entries (μ^2 − 1)σ_i below the diagonal in (A.1) replaces each
diagonal entry λ − μ^4 − σ_i^2 by its Schur complement λ − μ^4 − σ_i^2 − (1 − μ^2)^2σ_i^2/(λ − (1 + σ_i^2)), and setting these
to zero yields

λ^2 − (1 + 2σ_i^2 + μ^4)λ + (μ^4 + σ_i^2)(1 + σ_i^2) − (μ^2 − 1)^2σ_i^2 = 0, \quad i = 1, ..., n^2. \quad (A.2)
Since the discriminant of the above quadratic equations equals Δ = (1 − μ^2)^2[(1 + μ^2)^2 + 4σ_i^2] > 0 (0 < μ < 1), the roots
of (A.2) are

λ_{1,2} = \frac{1 + 2σ_i^2 + μ^4 ± (1 − μ^2)\sqrt{(1 + μ^2)^2 + 4σ_i^2}}{2} = σ_i^2 + \frac{1 + μ^4 ± (1 − μ^2)\sqrt{(1 + μ^2)^2 + 4σ_i^2}}{2}, \quad i = 1, ..., n^2.

Hence the squares of the singular values of W differ from those of A by

\frac{1 + μ^4 ± (1 − μ^2)\sqrt{(1 + μ^2)^2 + 4σ_i^2}}{2}, \quad i = 1, ..., n^2.

We see that when the singular values of A decay to, and cluster at, 0, the above expression tends to 1 and μ^4, respectively;
that is, most of the singular values of W (or \bar{K}) will cluster around 1.
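A quick numerical check of these closed-form roots against the eigenvalues of a single 2 × 2 block of W^T W (the values of μ and σ_i below are arbitrary illustrative choices):

    import numpy as np

    mu, s = 0.1, 1e-3
    B = np.array([[1 + s ** 2, (1 - mu ** 2) * s],
                  [(1 - mu ** 2) * s, mu ** 4 + s ** 2]])
    lam = np.sort(np.linalg.eigvalsh(B))
    pred = np.sort([s ** 2 + (1 + mu ** 4
                    + sgn * (1 - mu ** 2) * np.sqrt((1 + mu ** 2) ** 2 + 4 * s ** 2)) / 2
                    for sgn in (+1.0, -1.0)])
    print(np.allclose(lam, pred))  # True: the roots match the closed form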
Case 2: When λ = 1 + σ_i^2 (i = 1, ..., n^2), the corresponding singular values of W (or \bar{K}) are \sqrt{1 + σ_i^2} ≥ 1, which
implies that at least half of the singular values of W are far away from zero.

In conclusion, the conditioning of the ill-posed linear system is greatly improved, as the above analysis shows. Thus, the
standard preconditioning approaches for general systems can be applied to solve the ill-posed problems arising in image
restoration.
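This clustering is easy to verify numerically; in the following Python sketch, A is a random stand-in with rapidly decaying singular values (an ill-posed surrogate, not an actual blur matrix):

    import numpy as np

    n = 50
    U, _ = np.linalg.qr(np.random.randn(n, n))
    V, _ = np.linalg.qr(np.random.randn(n, n))
    sigma = 10.0 ** np.linspace(0, -8, n)      # singular values decaying to zero
    A = U @ np.diag(sigma) @ V.T

    mu = 0.1
    K = np.block([[np.eye(n), A], [-A.T, mu ** 2 * np.eye(n)]])
    sv = np.linalg.svd(K, compute_uv=False)
    # For sigma_i ~ 0 the squared singular values approach 1 and mu^4, i.e. the
    # singular values of K cluster near 1 and mu^2 -- far better conditioned than A.
    print(sv.min(), sv.max())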

References

[1] F.A. Gonzalez, E. Romero, Biomedical image analysis and machine learning technologies: Applications and techniques, 1st ed., Information Science
Reference-Imprint of IGI Publishing, Hershey, PA, 2009.
[2] Y.-S. Han, D.M. Herrington, W.E. Snyder, Quantitative angiography using mean field annealing, Proceedings of the Computers in Cardiology, IEEE, 1992,
pp. 119–122.
[3] B. Fisher, Digital restoration of Snow White: 120,000 famous frames are back, Adv. Imaging 137 (1993) 32–36.
[4] J.L. Starck, F. Murtagh, Astronomical Image and Data Analysis, second ed., Springer, 2006.
[5] L.R. Berriel, J. Bescos, A. Santisteban, Image restoration for a defocused optical system, Appl. Opt. 22 (1983) 2772–2780.
[6] T.F. Chan, J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, SIAM, Philadelphia, 2005.
[7] P.C. Hansen, J.G. Nagy, D.P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, 2006.
[8] R.C. Gonzalez, R.E. Woods, Digital Image Processing, second ed., Prentice Hall, New Jersey, 2002.
[9] S. Serra-Capizzano, A note on antireflective boundary conditions and fast deblurring models, SIAM J. Sci. Comput. 25 (2003) 1307–1325.
[10] X.-G. Lv, T.-Z. Huang, Z.-B. Xu, X.-L. Zhao, Kronecker product approximations for image restoration with whole-sample symmetric boundary conditions,
Inf. Sci. 186 (2012) 150–163.
[11] J.G. Nagy, M.K. Ng, L. Perrone, Kronecker product approximations for image restoration with reflexive boundary conditions, SIAM J. Matrix Anal. Appl.
25 (2004) 829–841.
[12] L. Perrone, Kronecker product approximations for image restoration with anti-reflective boundary conditions, Numer. Linear Algebra Appl. 13 (2006)
1–22.
[13] X.-L. Zhao, T.-Z. Huang, X.-G. Lv, Z.-B. Xu, J. Huang, Kronecker product approximations for image restoration with new mean boundary conditions,
Appl. Math. Model. 36 (2012) 225–237.
[14] J.G. Nagy, K.M. Palmer, Steepest descent, CG, and iterative regularization of ill-posed problems, BIT Numer. Math. 43 (2003) 1003–1017.
[15] B. Kaltenbacher, A. Neubauer, O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, Walter de Gruyter, Berlin, 2008.
[16] M.K. Ng, R.H. Chan, W.C. Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput. 21 (1999) 851–866.
[17] N. Zheng, K. Hayami, J.-F. Yin, Modulus-type inner outer iteration methods for nonnegative constrained least squares problems, SIAM J. Matrix Anal.
Appl. 37 (2016) 1250–1278.
[18] J. Liu, T.-Z. Huang, I.W. Selesnick, X.-G. Lv, P.-Y. Chen, Image restoration using total variation with overlapping group sparsity, Inf. Sci. 295 (2015)
232–246.
[19] Y.-T. Cai, M. Donatelli, D. Bianchi, T.-Z. Huang, Regularization preconditioners for frame-based image deblurring with reduced boundary artifacts, SIAM
J. Sci. Comput. 38 (2016) B164–B189.
[20] G. Liu, T.-Z. Huang, J. Liu, High-order TVL1-based images restoration and spatially adapted regularization parameter selection, Comput. Math. Appl. 67
(2014) 2015–2026.
[21] Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1–38.
[22] Z.-Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428 (2008) 2900–2932.
[23] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34 (1997) 1072–1092.
[24] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Uzawa type algorithms for nonsymmetric saddle point problems, Math. Comp. 69 (2000) 667–689.

[25] H.C. Elman, G.H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31 (1994) 1645–1661.
[26] K.J. Arrow, L. Hurwicz, H. Uzawa, Studies in Linear and Non-Linear Programming, first ed., Stanford University Press, 1958.
[27] J.-F. Lu, Z.-Y. Zhang, A modified nonlinear inexact Uzawa algorithm with a variable relaxation parameter for the stabilized saddle point problem, SIAM
J. Matrix Anal. Appl. 31 (2010) 1934–1957.
[28] Q.-Y. Hu, J. Zou, Two new variants of nonlinear inexact Uzawa algorithms for saddle-point problems, Numer. Math. 93 (2002) 333–359.
[29] S.-L. Wu, T.-Z. Huang, X.-L. Zhao, A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math. 228 (2009) 424–433.
[30] L. Li, T.-Z. Huang, X.-P. Liu, Modified Hermitian and skew–Hermitian splitting methods for non-Hermitian positive-definite linear systems, Numer.
Linear Algebra Appl. 14 (2007) 217–235.
[31] Z.-Z. Bai, Optimal parameters in the HSS-like methods for saddle-point problems, Numer. Linear Algebra Appl. 16 (2009) 447–479.
[32] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew–Hermitian splitting methods for non-Hermitian positive semidefinite linear systems,
Numer. Math. 98 (2004) 1–32.
[33] Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew–Hermitian splitting iteration methods for saddle-point problems, IMA J. Numer. Anal. 27 (2007)
1–23.
[34] Z.-Z. Bai, G.H. Golub, L.-Z. Lu, J.-F. Yin, Block triangular and skew–Hermitian splitting methods for positive-definite linear systems, SIAM J. Sci. Comput.
26 (2005) 844–863.
[35] X.-F. Peng, W. Li, The alternating-direction iterative method for saddle point problems, Appl. Math. Comput. 216 (2010) 1845–1858.
[36] H.-T. Fan, X.-Y. Zhu, A modified relaxed splitting preconditioner for generalized saddle point problems from the incompressible Navier–Stokes equa-
tions, Appl. Math. Lett. 55 (2016) 18–26.
[37] D.M. Young, Iterative Solution of Large Linear Systems, Dover Publications, 2003.
[38] G.H. Golub, X. Wu, J.-Y. Yuan, SOR-like methods for augmented systems, BIT Numer. Math. 41 (2001) 71–85.
[39] Z.-Z. Bai, G.-Q. Li, Restrictively preconditioned conjugate gradient methods for systems of linear equations, IMA J. Numer. Anal. 23 (2003) 561–580.
[40] Z.-Z. Bai, Z.-Q. Wang, Restrictive preconditioners for conjugate gradient methods for symmetric positive definite linear systems, J. Comput. Appl. Math.
187 (2006) 202–226.
[41] E. Sturler, J. Liesen, Block-diagonal and constraint preconditioners for nonsymmetric indefinite linear systems. part i: theory, SIAM J. Sci. Comput. 26
(2005) 1598–1619.
[42] J.-Y. Pan, M.K. Ng, Z.-Z. Bai, New preconditioners for saddle point problems, Appl. Math. Comput. 172 (2006) 762–771.
[43] H.C. Elman, Preconditioning for the steady-state Navier–Stokes equations with low viscosity, SIAM J. Sci. Comput. 20 (1999) 1299–1316.
[44] C. Keller, N.I.M. Gould, A.J. Wathen, Constraint preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl. 21 (2000) 1300–1317.
[45] N.I.M. Gould, M.E. Hribar, J. Nocedal, On the solution of equality constrained quadratic programming problems arising in optimization, SIAM J. Sci.
Comput. 23 (2001) 1376–1395.
[46] V. Sarin, A. Sameh, An efficient iterative method for the generalized stokes problem, SIAM J. Sci. Comput. 19 (1998) 206–226.
[47] Q.-Q. Zheng, C.-F. Ma, A class of triangular splitting methods for saddle point problems, J. Comput. Appl. Math. 298 (2016) 13–23.
[48] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew–Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix
Anal. Appl. 24 (2003) 603–626.
[49] X.-G. Lv, T.-Z. Huang, Z.-B. Xu, X.-L. Zhao, A special Hermitian and skew–Hermitian splitting method for image restoration, Appl. Math. Model. 37
(2013) 1069–1082.
[50] N. Aghazadeh, M. Bastani, D.K. Salkuyeh, Generalized Hermitian and skew–Hermitian splitting iterative method for image restoration, Appl. Math.
Model. 39 (2015) 6126–6138.
[51] L.A. Krukier, B.L. Krukier, Z.-R. Ren, Generalized skew–Hermitian triangular splitting iteration methods for saddle-point linear systems, Numer. Linear
Algebra Appl. 21 (2014) 152–170.
[52] Z.-Z. Bai, M. Benzi, Regularized HSS iteration methods for saddle-point linear systems, BIT Numer. Math. (2016), doi:10.1007/s10543-016-0636-7.
[53] Z.-Z. Bai, G.H. Golub, M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew–Hermitian splitting iterations, Numer. Linear
Algebra Appl. 14 (2007) 319–335.
[54] Y. Saad, Iterative Methods for Sparse Linear Systems, second ed., SIAM, Philadelphia, 2002.
[55] P.C. Hansen, Analysis of discrete ill-posed problems by means of the L-curve, SIAM Rev. 34 (1992) 561–580.
[56] V. Morozov, On the solution of functional equations by the method of regularization, Soviet Math. Dokl. 7 (1966) 414–417.
[57] G.H. Golub, M. Heath, G. Wahba, Generalized cross-validation as a method for choosing a good ridge parameter, Technometrics 21 (1979) 215–223.
[58] R.S. Varga, Matrix Iterative Analysis, second ed., Springer, New York and London, 2000.
[59] J.G. Nagy, K. Palmer, L. Perrone, Iterative methods for image deblurring: a matlab object-oriented approach, Numer. Algor. 36 (2004) 73–93.
[60] C.C. Paige, M.A. Saunders, Solution of sparse indefinite systems of linear equations, SIAM J. Numer. Anal. 12 (1975) 617–629.
[61] B. Fischer, A. Ramage, D.J. Silvester, A.J. Wathen, Minimum residual methods for augmented systems, BIT Numer. Math. 38 (1998) 527–543.
