INSTITUTE OF PHYSICS PUBLISHING INVERSE PROBLEMS

Inverse Problems 21 (2005) 1655–1665 doi:10.1088/0266-5611/21/5/009

A note on the CQ algorithm for the split feasibility problem

Biao Qu 1,2 and Naihua Xiu 1


1 Department of Applied Mathematics, Beijing Jiaotong University, Beijing, 100044,

People’s Republic of China


2 Institute of Operations Research, Qufu Normal University, Shandong, 276826,

People’s Republic of China

E-mail: qubiao001@163.com and nhxiu@center.njtu.edu.cn

Received 14 January 2005, in final form 23 June 2005


Published 16 September 2005
Online at stacks.iop.org/IP/21/1655

Abstract
Let $C$ and $Q$ be nonempty closed convex sets in $\mathbb{R}^N$ and $\mathbb{R}^M$, respectively, and $A$ an $M \times N$ real matrix. The split feasibility problem (SFP) is to find $x \in C$ with $Ax \in Q$, if such $x$ exists. Byrne (2002 Inverse Problems 18 441–53) proposed a CQ algorithm with the following iterative scheme:
$$x^{k+1} = P_C\big(x^k + \gamma A^T(P_Q - I)Ax^k\big), \qquad k = 0, 1, \ldots,$$
where $\gamma \in (0, 2/L)$, $L$ denotes the largest eigenvalue of the matrix $A^TA$, and $P_C$ and $P_Q$ denote the orthogonal projections onto $C$ and $Q$, respectively. In his algorithm, Byrne assumed that the projections $P_C$ and $P_Q$ are easily calculated. However, in some cases computing the orthogonal projection exactly is impossible or too expensive. Recently, Yang (2004 Inverse Problems 20 1261–6) presented a relaxed CQ algorithm, in which he replaced $P_C$ and $P_Q$ by $P_{C_k}$ and $P_{Q_k}$, the orthogonal projections onto two halfspaces $C_k$ and $Q_k$, respectively; the latter projections are easy to implement. One common advantage of the CQ algorithm and the relaxed CQ algorithm is that no matrix inverses need to be computed. However, both use a fixed stepsize related to the largest eigenvalue of the matrix $A^TA$, which sometimes affects convergence of the algorithms. In this paper, we present modifications of the CQ algorithm and the relaxed CQ algorithm that adopt Armijo-like searches. The modified algorithms need compute neither matrix inverses nor the largest eigenvalue of $A^TA$, and they achieve a sufficient decrease of the objective function at each iteration. We also show convergence of the modified algorithms under mild conditions.




1. Introduction

Let $C$ and $Q$ be nonempty closed convex sets in $\mathbb{R}^N$ and $\mathbb{R}^M$, respectively, and $A$ an $M \times N$ real matrix. The problem of finding $x \in C$ with $Ax \in Q$, if such $x$ exists, was called the split feasibility problem (SFP) by Censor and Elfving [5]. This problem appears in signal processing, image reconstruction and elsewhere, and many well-known iterative algorithms have been established for solving it; see the survey papers [1, 4].
In [5], the authors used their multidistance idea to obtain iterative algorithms for solving
the SFP. Their algorithms as well as others obtained later (see [2]) involve matrix inverses
at each iteration. In [3], Byrne presented a projection method called the CQ algorithm for
solving the SFP that does not involve matrix inverses.
Denote by $P_C$ and $P_Q$ the orthogonal projections onto $C$ and $Q$, respectively; that is, $P_C(x)$ minimizes $\|c - x\|$ over all $c \in C$, where $\|\cdot\|$ denotes the 2-norm. The CQ algorithm proposed in [3] is as follows.

The CQ algorithm. Let $x^0$ be arbitrary. For $k = 0, 1, \ldots$, calculate
$$x^{k+1} = P_C\big(x^k + \gamma A^T(P_Q - I)Ax^k\big),$$
where $\gamma \in (0, 2/L)$, $L$ denotes the largest eigenvalue of the matrix $A^TA$ and $I$ is the identity operator.
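As a concrete illustration, the CQ iteration can be sketched in a few lines of Python. This sketch is not from the paper: the instance below takes $C$ and $Q$ to be boxes so that both projections reduce to componentwise clipping, and all names (`cq_algorithm`, `proj_C`, `proj_Q`) and parameter values are our own illustrative choices.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, iters=2000):
    """CQ iteration: x^{k+1} = P_C(x^k + gamma * A^T (P_Q - I) A x^k)."""
    if gamma is None:
        L = np.linalg.norm(A, 2) ** 2   # largest eigenvalue of A^T A
        gamma = 1.0 / L                 # any value in (0, 2/L) works
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x + gamma * A.T @ (proj_Q(Ax) - Ax))
    return x

# Toy instance: C = [0,1]^2 and Q = [0,2]^2, so projections are clipping.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 0.0, 2.0)
x = cq_algorithm(A, proj_C, proj_Q, np.array([5.0, -3.0]))
```

Since the final step is a projection onto $C$, the returned iterate lies in $C$ exactly; feasibility of $Ax$ with respect to $Q$ is approached in the limit.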
It may be seen that if the orthogonal projections onto $C$ and $Q$ are easily calculated, the total computational cost is modest. In [3], Byrne assumed that both projections in the CQ algorithm are easily calculated. However, in some cases computing the orthogonal projection exactly is impossible or too expensive, and then the efficiency of projection-type methods, including the CQ algorithm, suffers seriously. Inexact techniques play an important role in designing efficient, easily implemented algorithms for optimization problems, variational inequality problems and so on (see, e.g., [6, 8, 9, 11, 12]). The relaxed projection method may be viewed as one such inexact projection-type method. In [17], using the relaxed projection technique, Yang presented a relaxed CQ algorithm for solving the SFP, in which two halfspaces $C_k$ and $Q_k$ replace $C$ and $Q$, respectively, at the $k$th iteration; the orthogonal projections onto $C_k$ and $Q_k$ are easily executed.
We note that both the CQ algorithm and the relaxed CQ algorithm use a fixed stepsize related to the largest eigenvalue of the matrix $A^TA$, which sometimes affects convergence of the algorithms. In this paper, we present modifications of the CQ algorithm and the relaxed CQ algorithm that adopt Armijo-like searches, which are popular in iterative algorithms for nonlinear programming problems, variational inequality problems and so on (see, e.g., [7, 16]). The modified algorithms need compute neither matrix inverses nor the largest eigenvalue of $A^TA$, and they achieve a sufficient decrease of the objective function at each iteration. We also show convergence of the modified algorithms under mild conditions.
The rest of this paper is organized as follows. Section 2 reviews some concepts and
existing results. Section 3 gives a modification of the CQ algorithm and shows its convergence.
Section 4 presents a modification of the relaxed CQ algorithm and proves its convergence.
Finally, section 5 gives some concluding remarks.

2. Preliminaries

In this section, we review some definitions and basic results which will be used in this paper.

Definition 2.1. Let $F$ be a mapping from a set $X \subset \mathbb{R}^n$ into $\mathbb{R}^n$. Then



(a) $F$ is said to be monotone on $X$ if
$$\langle F(x) - F(y), x - y \rangle \geq 0, \qquad \forall x, y \in X;$$
(b) $F$ is said to be co-coercive on $X$ with modulus $\alpha > 0$ if
$$\langle F(x) - F(y), x - y \rangle \geq \alpha \|F(x) - F(y)\|^2, \qquad \forall x, y \in X;$$
(c) $F$ is said to be Lipschitz continuous on $X$ with constant $\lambda > 0$ if
$$\|F(x) - F(y)\| \leq \lambda \|x - y\|, \qquad \forall x, y \in X.$$

For a given nonempty closed convex set $\Omega$ in $\mathbb{R}^n$, the orthogonal projection from $\mathbb{R}^n$ onto $\Omega$ is defined by
$$P_\Omega(x) = \operatorname{argmin}\{\|x - y\| \mid y \in \Omega\}, \qquad x \in \mathbb{R}^n.$$
Sometimes we write $P_\Omega(x)$ as $P_\Omega x$. The projection has the following well-known properties:

Lemma 2.1 ([18]). Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^n$. Then for any $x, y \in \mathbb{R}^n$ and $z \in \Omega$,

(1) $\langle P_\Omega(x) - x, z - P_\Omega(x) \rangle \geq 0$;
(2) $\|P_\Omega(x) - P_\Omega(y)\|^2 \leq \langle P_\Omega(x) - P_\Omega(y), x - y \rangle$;
(3) $\|P_\Omega(x) - z\|^2 \leq \|x - z\|^2 - \|P_\Omega(x) - x\|^2$.
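The three properties are easy to sanity-check numerically. The following sketch is our own illustration, not part of the paper: it verifies them for the Euclidean projection onto the unit ball at randomly drawn points.

```python
import numpy as np

rng = np.random.default_rng(0)
proj = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball

for _ in range(100):
    x = rng.normal(size=3) * 3
    y = rng.normal(size=3) * 3
    z = proj(rng.normal(size=3) * 3)              # an arbitrary point of the ball
    px, py = proj(x), proj(y)
    assert (px - x) @ (z - px) >= -1e-9                                  # part (1)
    assert np.linalg.norm(px - py) ** 2 <= (px - py) @ (x - y) + 1e-9    # part (2)
    assert (np.linalg.norm(px - z) ** 2
            <= np.linalg.norm(x - z) ** 2
            - np.linalg.norm(px - x) ** 2 + 1e-9)                        # part (3)
```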

Remark 2.1. From part (2) of lemma 2.1, we know that $P_\Omega$ is a monotone, co-coercive (with modulus 1) and nonexpansive (i.e., $\|P_\Omega(x) - P_\Omega(y)\| \leq \|x - y\|$) operator. Moreover, the operator $I - P_\Omega$ is also co-coercive with modulus 1, where $I$ denotes the identity operator; i.e., for all $x, y \in \mathbb{R}^n$,
$$\langle (I - P_\Omega)x - (I - P_\Omega)y, x - y \rangle \geq \|(I - P_\Omega)x - (I - P_\Omega)y\|^2. \qquad (1)$$
In fact, this follows easily from part (2) of lemma 2.1 and the identity
$$\langle (I - P_\Omega)x - (I - P_\Omega)y, x - y \rangle - \|(I - P_\Omega)x - (I - P_\Omega)y\|^2 = \langle P_\Omega(x) - P_\Omega(y), x - y \rangle - \|P_\Omega(x) - P_\Omega(y)\|^2.$$

Let $F$ be a mapping from $\mathbb{R}^n$ into $\mathbb{R}^n$. For any $x \in \mathbb{R}^n$ and $\alpha > 0$, define
$$x(\alpha) = P_\Omega(x - \alpha F(x)), \qquad e(x, \alpha) = x - x(\alpha).$$
From the nondecreasing property of $\|e(x, \alpha)\|$ in $\alpha > 0$ due to Toint [14] and the nonincreasing property of $\|e(x, \alpha)\|/\alpha$ in $\alpha > 0$ due to Gafni and Bertsekas [10], we immediately obtain a useful lemma.

Lemma 2.2. Let $F$ be a mapping from $\mathbb{R}^n$ into $\mathbb{R}^n$. For any $x \in \mathbb{R}^n$ and $\alpha > 0$, we have
$$\min\{1, \alpha\}\|e(x, 1)\| \leq \|e(x, \alpha)\| \leq \max\{1, \alpha\}\|e(x, 1)\|.$$
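Lemma 2.2 can likewise be checked numerically. In the sketch below (our own illustration, not the paper's), $\Omega$ is the unit ball and $F$ an arbitrary linear map; the two bounds hold for randomly drawn points and stepsizes.

```python
import numpy as np

rng = np.random.default_rng(1)
proj = lambda x: x / max(1.0, np.linalg.norm(x))   # P_Omega: unit ball
M = rng.normal(size=(3, 3))
F = lambda x: M @ x                                # an arbitrary mapping

def e(x, alpha):
    """||e(x, alpha)|| = ||x - P_Omega(x - alpha * F(x))||."""
    return np.linalg.norm(x - proj(x - alpha * F(x)))

for _ in range(100):
    x = rng.normal(size=3) * 2
    alpha = rng.uniform(0.01, 5.0)
    lo = min(1.0, alpha) * e(x, 1.0)
    hi = max(1.0, alpha) * e(x, 1.0)
    assert lo - 1e-9 <= e(x, alpha) <= hi + 1e-9   # lemma 2.2 bounds
```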



3. A modified CQ algorithm and its convergence

In this section, assuming that the projections $P_C$ and $P_Q$ are easily calculated, we establish a corresponding algorithm for the SFP.
Define
$$f(x) = \tfrac{1}{2}\|Ax - P_QAx\|^2.$$
Then the function $f(x)$ is convex and continuously differentiable on $\mathbb{R}^N$, and its gradient is
$$\nabla f(x) = A^T(I - P_Q)Ax;$$
see [4].
Consider the following constrained minimization problem:
$$\min\{f(x) : x \in C\}. \qquad (2)$$
We say that a point $x^* \in C$ is a stationary point of problem (2) if it satisfies
$$\langle \nabla f(x^*), x - x^* \rangle \geq 0, \qquad \forall x \in C.$$
Because $f$ is convex, a point $x^* \in C$ is a stationary point of problem (2) if and only if it is a global minimizer of problem (2). Obviously, a stationary point is not necessarily a solution of the SFP.
Proposition 3.1. Suppose that the solution set of the SFP is nonempty. Then the following statements are equivalent:
(i) $x^*$ is a solution of the SFP;
(ii) $x^* \in C$ and $f(x^*) = 0$;
(iii) $x^* \in C$ and $\nabla f(x^*) = 0$.

Proof. (i) ⇒ (ii). This is obvious.
(ii) ⇒ (iii). Suppose that $x^* \in C$ and $f(x^*) = 0$. Then we have
$$Ax^* - P_QAx^* = 0.$$
Premultiplying this equality by $A^T$, we obtain
$$A^T(I - P_Q)Ax^* = 0,$$
that is, $\nabla f(x^*) = 0$.
(iii) ⇒ (i). Since $f$ is convex, $\nabla f(x^*) = 0$ implies that $x^*$ is a global minimizer of $f$ over $\mathbb{R}^N$; since the solution set of the SFP is nonempty, this minimum value is 0, so $f(x^*) = 0$, that is, $Ax^* = P_QAx^*$, which implies $Ax^* \in Q$. This completes the proof. □
Algorithm 3.1. Given constants $\beta > 0$, $\sigma \in (0, 1)$ and $\gamma \in (0, 1)$, let $x^0$ be arbitrary. For $k = 0, 1, \ldots$, calculate
$$x^{k+1} = P_C\big(x^k - \alpha_k A^T(I - P_Q)Ax^k\big),$$
where $\alpha_k = \beta\gamma^{m_k}$ and $m_k$ is the smallest nonnegative integer $m$ such that
$$f\big(P_C(x^k - \beta\gamma^m A^T(I - P_Q)Ax^k)\big) \leq f(x^k) - \sigma\big\langle A^T(I - P_Q)Ax^k,\; x^k - P_C(x^k - \beta\gamma^m A^T(I - P_Q)Ax^k)\big\rangle.$$
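A direct Python transcription of algorithm 3.1 is short. The sketch below is illustrative only: the box sets, the function name `modified_cq` and all parameter values are our own choices, not the paper's.

```python
import numpy as np

def modified_cq(A, proj_C, proj_Q, x0, beta=1.0, sigma=0.3, gamma=0.5, iters=500):
    """Algorithm 3.1: gradient projection for f(x) = 0.5*||Ax - P_Q(Ax)||^2
    over C, with the Armijo-like backtracking search for alpha_k."""
    def f(x):
        Ax = A @ x
        return 0.5 * np.linalg.norm(Ax - proj_Q(Ax)) ** 2
    def grad(x):                      # grad f(x) = A^T (I - P_Q) A x
        Ax = A @ x
        return A.T @ (Ax - proj_Q(Ax))
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        alpha = beta
        while True:                   # smallest m satisfying the decrease condition
            x_new = proj_C(x - alpha * g)
            if f(x_new) <= f(x) - sigma * (g @ (x - x_new)):
                break
            alpha *= gamma
        x = x_new
    return x

# Same toy instance as before: C = [0,1]^2, Q = [0,2]^2.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 0.0, 2.0)
x = modified_cq(A, proj_C, proj_Q, np.array([5.0, -3.0]))
```

Notice that no eigenvalue of $A^TA$ is needed: the backtracking loop finds a workable stepsize automatically.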
Algorithm 3.1 is in fact a special case of the standard gradient projection method with the Armijo-like search rule for solving the convexly constrained optimization problem
$$\min\{g(x) : x \in \Omega\}, \qquad (3)$$
where $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set and the function $g(x)$ is continuously differentiable on $\Omega$, written $g \in C^1$. For this well-known method, the following convergence result is given in [15]:

Lemma 3.1. Let $g \in C^1$ be pseudo-convex and let $\{x^k\}$ be an infinite sequence generated by the gradient projection method with Armijo-like searches. Then the following conclusions hold:
(a) $\lim_{k\to\infty} g(x^k) = \inf\{g(x) : x \in \Omega\}$.
(b) $\Omega^* \neq \emptyset$, where $\Omega^*$ denotes the set of optimal solutions to (3), if and only if there exists at least one limit point of $\{x^k\}$. In this case, $\{x^k\}$ converges to a solution of (3).

By using lemma 3.1, we immediately obtain the following convergence result for algorithm 3.1:

Theorem 3.1. Let $\{x^k\}$ be a sequence generated by algorithm 3.1. Then the following conclusions hold:
(a) $\{x^k\}$ is bounded if and only if the solution set of (2) is nonempty. In this case, $\{x^k\}$ converges to a solution of (2).
(b) $\{x^k\}$ is bounded and $\lim_{k\to\infty} f(x^k) = 0$ if and only if the SFP is solvable. In this case, $\{x^k\}$ converges to a solution of the SFP.

Remark 3.1. In contrast to the CQ algorithm proposed by Byrne, algorithm 3.1 has three advantages. First, it need not determine or estimate the largest eigenvalue of the matrix $A^TA$. Second, the stepsize $\alpha_k$ is chosen judiciously so that the function value $f(x^{k+1})$ decreases sufficiently. Third, the behaviour of the iterative sequence itself reveals whether the underlying problem has a solution.

4. A modified relaxed CQ algorithm and its convergence

In this section, assuming that the projections PC and PQ are not easily calculated, we present
a modification of the relaxed CQ algorithm. Carefully speaking, the convex sets C and Q
satisfy the following assumptions:
(H1) The set C is given by
C = {x ∈ N |c(x)  0},
where c : N →  is a convex (not necessarily differentiable) function and C is nonempty.
The set Q is given by
Q = {y ∈ M |q(y)  0},
where q : M →  is a convex (not necessarily differentiable) function and Q is nonempty.
(H2) For any x ∈ N , at least one subgradient ξ ∈ ∂c(x) can be calculated, where ∂c(x)
is a generalized gradient of c(x) at x and is defined as follows:
∂c(x) = {ξ ∈ N |c(z)  c(x) + ξ, z − x for all z ∈ N }.
For any y ∈ M , at least one subgradient η ∈ ∂q(y) can be calculated, where
∂q(y) = {η ∈ M |q(u)  q(y) + η, u − y for all u ∈ M }.
The following lemma provides an important boundedness property of the subdifferential; see, e.g., [13]:

Lemma 4.1. Suppose $h : \mathbb{R}^n \to \mathbb{R}$ is a convex function. Then it is subdifferentiable everywhere and its subdifferentials are uniformly bounded on any bounded subset of $\mathbb{R}^n$.

We first review the relaxed CQ algorithm presented by Yang [17].

The relaxed CQ algorithm. Let $x^0$ be arbitrary. For $k = 0, 1, \ldots$, calculate
$$x^{k+1} = P_{C_k}\big(x^k + \gamma A^T\big(P_{Q_k} - I\big)Ax^k\big),$$
where $\gamma \in (0, 2/L)$, $L$ denotes the largest eigenvalue of the matrix $A^TA$,
$$C_k = \{x \in \mathbb{R}^N \mid c(x^k) + \langle \xi^k, x - x^k \rangle \leq 0\},$$
where $\xi^k$ is an element of $\partial c(x^k)$, and
$$Q_k = \{y \in \mathbb{R}^M \mid q(Ax^k) + \langle \eta^k, y - Ax^k \rangle \leq 0\},$$
where $\eta^k$ is an element of $\partial q(Ax^k)$.

Remark 4.1. By the definition of the subgradient, it is clear that the halfspaces $C_k$ and $Q_k$ contain $C$ and $Q$, respectively. From the expressions for $C_k$ and $Q_k$, the orthogonal projections onto $C_k$ and $Q_k$ can be calculated directly (see [9]).
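Indeed, the projection onto a halfspace has a simple closed form: for $H = \{z : \langle a, z\rangle \leq b\}$, $P_H(x) = x - \max\{0, (\langle a, x\rangle - b)/\|a\|^2\}\,a$. A minimal sketch (the function name is ours):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto {z : <a, z> <= b}."""
    viol = a @ x - b
    if viol <= 0 or not np.any(a):   # already inside (or the halfspace is all of R^n)
        return np.asarray(x, dtype=float)
    return x - (viol / (a @ a)) * a
```

For $C_k = \{x : c(x^k) + \langle \xi^k, x - x^k\rangle \leq 0\}$ one takes $a = \xi^k$ and $b = \langle \xi^k, x^k\rangle - c(x^k)$, and similarly for $Q_k$.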
For every $k$, using $Q_k$ we define the function $F_k : \mathbb{R}^N \to \mathbb{R}^N$ by
$$F_k(x) = A^T\big(I - P_{Q_k}\big)Ax.$$
We now formally state our modified relaxed CQ algorithm.

Algorithm 4.1. Given constants $\gamma > 0$, $l \in (0, 1)$ and $\mu \in (0, 1)$, let $x^0$ be arbitrary. For $k = 0, 1, \ldots$, let
$$\bar{x}^k = P_{C_k}\big(x^k - \alpha_k F_k(x^k)\big),$$
where $\alpha_k = \gamma l^{m_k}$ and $m_k$ is the smallest nonnegative integer $m$ such that
$$\|F_k(x^k) - F_k(\bar{x}^k)\| \leq \mu \frac{\|x^k - \bar{x}^k\|}{\alpha_k}. \qquad (4)$$
Set
$$x^{k+1} = P_{C_k}\big(x^k - \alpha_k F_k(\bar{x}^k)\big).$$
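For intuition, algorithm 4.1 can be sketched in Python as follows. Everything instance-specific here is an illustrative assumption, not the paper's: the halfspace-projection helper, the affine test functions c and q (for which the halfspaces $C_k$ and $Q_k$ coincide with $C$ and $Q$ exactly), and the parameter values.

```python
import numpy as np

def proj_half(x, a, b):               # projection onto {z : <a, z> <= b}
    v = a @ x - b
    if v <= 0 or not np.any(a):
        return np.asarray(x, dtype=float)
    return x - (v / (a @ a)) * a

def modified_relaxed_cq(A, c, sub_c, q, sub_q, x0, gamma=1.0, l=0.5, mu=0.5, iters=100):
    """Algorithm 4.1 sketch: at each k, project onto halfspaces C_k, Q_k built
    from subgradients of c and q, with the Armijo-like rule (4) for alpha_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        xi = sub_c(x)                                  # C_k = {z : <xi, z> <= bC}
        bC = xi @ x - c(x)
        Ax = A @ x
        eta = sub_q(Ax)                                # Q_k = {y : <eta, y> <= bQ}
        bQ = eta @ Ax - q(Ax)
        Fk = lambda z: A.T @ ((A @ z) - proj_half(A @ z, eta, bQ))
        Fx = Fk(x)
        alpha = gamma
        while True:                                    # search rule (4)
            xbar = proj_half(x - alpha * Fx, xi, bC)
            if alpha * np.linalg.norm(Fx - Fk(xbar)) <= mu * np.linalg.norm(x - xbar):
                break
            alpha *= l
        x = proj_half(x - alpha * Fk(xbar), xi, bC)
    return x

# Affine toy instance: C = {x : x1 + x2 <= 1}, Q = {y : y1 <= 2}, A = 2I.
A = np.diag([2.0, 2.0])
c = lambda x: x[0] + x[1] - 1.0
sub_c = lambda x: np.array([1.0, 1.0])
q = lambda y: y[0] - 2.0
sub_q = lambda y: np.array([1.0, 0.0])
x = modified_relaxed_cq(A, c, sub_c, q, sub_q, np.array([5.0, 5.0]))
```

Because c and q are affine in this toy instance, the subgradient halfspaces are exact and the iterates settle on a feasible point after a couple of steps; for general convex c and q the halfspaces only contain C and Q, and convergence is the asymptotic statement of theorem 4.1.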

Although $C_k$, $Q_k$ and $F_k$ depend on $k$, we have the following two useful lemmas:

Lemma 4.2. For all $k = 0, 1, 2, \ldots$, $F_k$ is Lipschitz continuous on $\mathbb{R}^N$ with constant $L$ and co-coercive on $\mathbb{R}^N$ with modulus $1/L$, where $L$ is the largest eigenvalue of the matrix $A^TA$. Therefore, the Armijo-like search rule (4) is well defined.

Proof. By the definition of $F_k$, we have for any $x, y \in \mathbb{R}^N$
$$\|F_k(x) - F_k(y)\|^2 = \big\|A^T\big(I - P_{Q_k}\big)Ax - A^T\big(I - P_{Q_k}\big)Ay\big\|^2 \leq L\big\|\big(I - P_{Q_k}\big)Ax - \big(I - P_{Q_k}\big)Ay\big\|^2. \qquad (5)$$
Also, from part (2) of lemma 2.1, we obtain
$$\begin{aligned}
\big\|\big(I - P_{Q_k}\big)Ax - \big(I - P_{Q_k}\big)Ay\big\|^2 &= \|Ax - Ay\|^2 + \big\|P_{Q_k}(Ax) - P_{Q_k}(Ay)\big\|^2 - 2\big\langle P_{Q_k}(Ax) - P_{Q_k}(Ay),\; Ax - Ay\big\rangle \\
&\leq \|Ax - Ay\|^2 - \big\|P_{Q_k}(Ax) - P_{Q_k}(Ay)\big\|^2.
\end{aligned}$$
Therefore,
$$\|F_k(x) - F_k(y)\|^2 \leq L\big(\|Ax - Ay\|^2 - \big\|P_{Q_k}(Ax) - P_{Q_k}(Ay)\big\|^2\big) \leq L\|Ax - Ay\|^2 \leq L^2\|x - y\|^2,$$
that is,
$$\|F_k(x) - F_k(y)\| \leq L\|x - y\|, \qquad \forall x, y \in \mathbb{R}^N.$$
From remark 2.1, we know that for any $x, y \in \mathbb{R}^N$
$$\begin{aligned}
\langle F_k(x) - F_k(y), x - y \rangle &= \big\langle A^T\big(I - P_{Q_k}\big)Ax - A^T\big(I - P_{Q_k}\big)Ay,\; x - y\big\rangle \\
&= \big\langle \big(I - P_{Q_k}\big)Ax - \big(I - P_{Q_k}\big)Ay,\; Ax - Ay\big\rangle \\
&\geq \big\|\big(I - P_{Q_k}\big)Ax - \big(I - P_{Q_k}\big)Ay\big\|^2.
\end{aligned}$$
On the other hand, from (5) we have
$$\big\|\big(I - P_{Q_k}\big)Ax - \big(I - P_{Q_k}\big)Ay\big\|^2 \geq (1/L)\|F_k(x) - F_k(y)\|^2.$$
Therefore,
$$\langle F_k(x) - F_k(y), x - y \rangle \geq (1/L)\|F_k(x) - F_k(y)\|^2, \qquad \forall x, y \in \mathbb{R}^N.$$
The proof is completed. □
Lemma 4.3. For all $k = 0, 1, \ldots$,
$$\frac{\mu l}{L} < \alpha_k \leq \gamma.$$

Proof. Obviously, from the search rule (4), $\alpha_k \leq \gamma$ for all $k = 0, 1, \ldots$.
If $\alpha_k = \gamma$, the lemma holds trivially.
If $\alpha_k < \gamma$, then by the search rule (4) the stepsize $\alpha_k/l$ must violate inequality (4), i.e.,
$$\Big\|F_k(x^k) - F_k\Big(P_{C_k}\Big(x^k - \frac{\alpha_k}{l}F_k(x^k)\Big)\Big)\Big\| > \mu\,\frac{\big\|x^k - P_{C_k}\big(x^k - \frac{\alpha_k}{l}F_k(x^k)\big)\big\|}{\alpha_k/l},$$
which together with lemma 4.2 yields
$$\alpha_k > \frac{\mu l}{L}.$$
This completes the proof. □
Now we establish global convergence of algorithm 4.1.

Theorem 4.1. Let $\{x^k\}$ be a sequence generated by algorithm 4.1. If the solution set of the SFP is nonempty, then $\{x^k\}$ converges to a solution of the SFP.

Proof. Let $x^*$ be a solution of the SFP. Since $C \subseteq C_k$ and $Q \subseteq Q_k$, we have $x^* = P_C(x^*) = P_{C_k}(x^*)$ and $Ax^* = P_Q(Ax^*) = P_{Q_k}(Ax^*)$, which implies that $x^* \in C_k$ and $F_k(x^*) = 0$ for all $k = 0, 1, 2, \ldots$.
Using the monotonicity of $F_k$ (by lemma 4.2), we have for all $k = 0, 1, 2, \ldots$,
$$\langle F_k(\bar{x}^k) - F_k(x^*), \bar{x}^k - x^* \rangle \geq 0,$$
which implies
$$\langle F_k(\bar{x}^k), \bar{x}^k - x^* \rangle \geq 0.$$

Therefore, we have
$$\langle F_k(\bar{x}^k), x^k - x^* \rangle \geq \langle F_k(\bar{x}^k), x^k - \bar{x}^k \rangle. \qquad (6)$$
Thus, using part (3) of lemma 2.1 and (6), we have
$$\begin{aligned}
\|x^{k+1} - x^*\|^2 &= \big\|P_{C_k}(x^k - \alpha_k F_k(\bar{x}^k)) - x^*\big\|^2 \\
&\leq \|x^k - \alpha_k F_k(\bar{x}^k) - x^*\|^2 - \|x^{k+1} - x^k + \alpha_k F_k(\bar{x}^k)\|^2 \\
&= \|x^k - x^*\|^2 - 2\alpha_k\langle F_k(\bar{x}^k), x^k - x^* \rangle - \|x^{k+1} - x^k\|^2 - 2\alpha_k\langle F_k(\bar{x}^k), x^{k+1} - x^k \rangle \\
&\leq \|x^k - x^*\|^2 - 2\alpha_k\langle F_k(\bar{x}^k), x^k - \bar{x}^k \rangle - \|x^{k+1} - x^k\|^2 - 2\alpha_k\langle F_k(\bar{x}^k), x^{k+1} - x^k \rangle \\
&= \|x^k - x^*\|^2 - 2\alpha_k\langle F_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle - \|x^{k+1} - \bar{x}^k + \bar{x}^k - x^k\|^2 \\
&= \|x^k - x^*\|^2 - 2\alpha_k\langle F_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle - \|x^{k+1} - \bar{x}^k\|^2 - \|\bar{x}^k - x^k\|^2 + 2\langle x^{k+1} - \bar{x}^k, x^k - \bar{x}^k \rangle \\
&= \|x^k - x^*\|^2 - \|\bar{x}^k - x^k\|^2 - \|x^{k+1} - \bar{x}^k\|^2 + 2\langle x^k - \bar{x}^k - \alpha_k F_k(\bar{x}^k),\; x^{k+1} - \bar{x}^k \rangle.
\end{aligned}$$
Also, by part (1) of lemma 2.1 and (4), we have
$$\begin{aligned}
2\langle x^k - \bar{x}^k - \alpha_k F_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle &\leq 2\langle x^k - \bar{x}^k - \alpha_k F_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle + 2\langle \bar{x}^k - x^k + \alpha_k F_k(x^k), x^{k+1} - \bar{x}^k \rangle \\
&= 2\alpha_k\langle F_k(x^k) - F_k(\bar{x}^k), x^{k+1} - \bar{x}^k \rangle \\
&\leq 2\alpha_k\|F_k(x^k) - F_k(\bar{x}^k)\|\,\|x^{k+1} - \bar{x}^k\| \\
&\leq \alpha_k^2\|F_k(x^k) - F_k(\bar{x}^k)\|^2 + \|x^{k+1} - \bar{x}^k\|^2 \\
&\leq \mu^2\|x^k - \bar{x}^k\|^2 + \|x^{k+1} - \bar{x}^k\|^2.
\end{aligned}$$
Thus, we have for all $k = 0, 1, 2, \ldots$,
$$\|x^{k+1} - x^*\|^2 \leq \|x^k - x^*\|^2 - \|x^k - \bar{x}^k\|^2 + \mu^2\|x^k - \bar{x}^k\|^2 = \|x^k - x^*\|^2 - (1 - \mu^2)\|x^k - \bar{x}^k\|^2, \qquad (7)$$
which implies that the sequence $\{\|x^k - x^*\|\}$ is monotonically decreasing and hence $\{x^k\}$ is bounded. Consequently, we get from (7)
$$\lim_{k\to\infty}\|x^k - \bar{x}^k\| = 0. \qquad (8)$$
Moreover, using remark 2.1 and (4), we obtain for all $k = 0, 1, 2, \ldots$,
$$\begin{aligned}
\|x^{k+1} - x^k\| &\leq \|x^{k+1} - \bar{x}^k\| + \|x^k - \bar{x}^k\| \\
&\leq \|x^k - \alpha_k F_k(\bar{x}^k) - x^k + \alpha_k F_k(x^k)\| + \|x^k - \bar{x}^k\| \\
&= \alpha_k\|F_k(\bar{x}^k) - F_k(x^k)\| + \|x^k - \bar{x}^k\| \\
&\leq (\mu + 1)\|x^k - \bar{x}^k\|,
\end{aligned}$$
which, together with (8), implies that
$$\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0. \qquad (9)$$

Assume that $\bar{x}$ is an accumulation point of $\{x^k\}$ and $x^{k_i} \to \bar{x}$, where $\{x^{k_i}\}_{i=1}^\infty$ is a subsequence of $\{x^k\}$. We are ready to show that $\bar{x}$ is a solution of the SFP.
First, we show that $\bar{x} \in C$. Since $x^{k_i+1} \in C_{k_i}$, by the definition of $C_{k_i}$ we have
$$c(x^{k_i}) + \langle \xi^{k_i}, x^{k_i+1} - x^{k_i} \rangle \leq 0, \qquad \forall i = 1, 2, \ldots.$$

Passing to the limit in this inequality and taking into account (9) and lemma 4.1, we obtain
$$c(\bar{x}) \leq 0.$$
Hence, we conclude $\bar{x} \in C$.
Next, we need to show $A\bar{x} \in Q$. Define
$$e_k(x, \alpha) = x - P_{C_k}(x - \alpha F_k(x)), \qquad k = 0, 1, 2, \ldots.$$
Then from lemmas 2.2 and 4.3 and equation (8), we have
$$\lim_{i\to\infty}\big\|e_{k_i}(x^{k_i}, 1)\big\| \leq \lim_{i\to\infty}\frac{\big\|x^{k_i} - \bar{x}^{k_i}\big\|}{\min\{1, \alpha_{k_i}\}} \leq \lim_{i\to\infty}\frac{\big\|x^{k_i} - \bar{x}^{k_i}\big\|}{\min\{1, \underline{\alpha}\}} = 0, \qquad (10)$$
where $\underline{\alpha} = \mu l/L$.
Using part (1) of lemma 2.1 and $x^* \in C_{k_i}$, we have for all $i = 1, 2, \ldots$,
$$\big\langle x^{k_i} - F_{k_i}(x^{k_i}) - P_{C_{k_i}}\big(x^{k_i} - F_{k_i}(x^{k_i})\big),\; x^* - P_{C_{k_i}}\big(x^{k_i} - F_{k_i}(x^{k_i})\big)\big\rangle \leq 0,$$
that is,
$$\big\langle e_{k_i}(x^{k_i}, 1) - F_{k_i}(x^{k_i}),\; x^{k_i} - x^* - e_{k_i}(x^{k_i}, 1)\big\rangle \geq 0.$$
From this inequality and (1) we know that for all $i = 1, 2, \ldots$,
$$\begin{aligned}
\big\langle x^{k_i} - x^*, e_{k_i}(x^{k_i}, 1)\big\rangle &\geq \big\|e_{k_i}(x^{k_i}, 1)\big\|^2 - \big\langle F_{k_i}(x^{k_i}), e_{k_i}(x^{k_i}, 1)\big\rangle + \big\langle F_{k_i}(x^{k_i}), x^{k_i} - x^*\big\rangle \\
&= \big\|e_{k_i}(x^{k_i}, 1)\big\|^2 - \big\langle F_{k_i}(x^{k_i}), e_{k_i}(x^{k_i}, 1)\big\rangle + \big\langle F_{k_i}(x^{k_i}) - F_{k_i}(x^*), x^{k_i} - x^*\big\rangle \\
&= \big\|e_{k_i}(x^{k_i}, 1)\big\|^2 - \big\langle F_{k_i}(x^{k_i}), e_{k_i}(x^{k_i}, 1)\big\rangle + \big\langle \big(I - P_{Q_{k_i}}\big)Ax^{k_i} - \big(I - P_{Q_{k_i}}\big)Ax^*,\; Ax^{k_i} - Ax^*\big\rangle \\
&\geq \big\|e_{k_i}(x^{k_i}, 1)\big\|^2 - \big\langle F_{k_i}(x^{k_i}), e_{k_i}(x^{k_i}, 1)\big\rangle + \big\|\big(I - P_{Q_{k_i}}\big)Ax^{k_i} - \big(I - P_{Q_{k_i}}\big)Ax^*\big\|^2 \\
&= \big\|e_{k_i}(x^{k_i}, 1)\big\|^2 - \big\langle F_{k_i}(x^{k_i}), e_{k_i}(x^{k_i}, 1)\big\rangle + \big\|\big(I - P_{Q_{k_i}}\big)Ax^{k_i}\big\|^2. \qquad (11)
\end{aligned}$$
Since
$$\|F_{k_i}(x^{k_i})\| = \|F_{k_i}(x^{k_i}) - F_{k_i}(x^*)\| \leq L\|x^{k_i} - x^*\|, \qquad \forall i = 1, 2, \ldots,$$
and $\{x^{k_i}\}$ is bounded, the sequence $\{F_{k_i}(x^{k_i})\}$ is also bounded. Therefore, from (10) and (11) we get
$$\lim_{i\to\infty}\big\|\big(I - P_{Q_{k_i}}\big)Ax^{k_i}\big\| = 0,$$
that is,
$$\lim_{i\to\infty}\big\|P_{Q_{k_i}}(Ax^{k_i}) - Ax^{k_i}\big\| = 0. \qquad (12)$$

Since $P_{Q_{k_i}}(Ax^{k_i}) \in Q_{k_i}$, we have
$$q(Ax^{k_i}) + \big\langle \eta^{k_i}, P_{Q_{k_i}}(Ax^{k_i}) - Ax^{k_i}\big\rangle \leq 0.$$

Letting $i \to \infty$ and taking into account lemma 4.1 and (12), we deduce that
$$q(A\bar{x}) \leq 0,$$
that is, $A\bar{x} \in Q$. Therefore, $\bar{x}$ is a solution of the SFP.
Thus, we may use $\bar{x}$ in place of $x^*$ in (7) and conclude that $\{\|x^k - \bar{x}\|\}$ is convergent. Because there is a subsequence $\{\|x^{k_i} - \bar{x}\|\}$ converging to 0, it follows that $x^k \to \bar{x}$ as $k \to \infty$. This completes the proof. □

5. Concluding remarks

In this paper, two modified CQ algorithms with Armijo-like searches for solving the split feasibility problem have been presented. They compute neither matrix inverses nor the largest matrix eigenvalue, and they force a sufficient decrease of the function value at each iteration. The corresponding convergence properties have been established. Theorem 3.1 shows that algorithm 3.1 applies to both the feasible and the infeasible case of the SFP. However, theorem 4.1 only shows that algorithm 4.1 applies to the feasible case of the SFP. Whether algorithm 4.1 also applies to the infeasible case is a topic deserving further research.
As pointed out in [3] and [17], both the CQ algorithm and the relaxed CQ algorithm can be extended to block-iterative versions. Analogously, the algorithms presented in this paper can also be extended to block-iterative versions.

Acknowledgments

The authors would like to thank the associate editor and the referees for their useful comments
and suggestions. This research was partly supported by the National Natural Science
Foundation of China (10271002, 70471002, 10171055), the Key Project of Chinese Ministry of
Education (104048) and the Natural Science Foundation of Shandong Province (Y2003A02).

References

[1] Bauschke H H and Borwein J M 1996 On projection algorithms for solving convex feasibility problems
SIAM Rev. 38 367–426
[2] Byrne C L 2001 Bregman–Legendre multidistance projection algorithms for convex feasibility and optimization
Inherently Parallel Algorithms in Feasibility and Optimization and their Applications ed D Butnariu, Y Censor
and S Reich (Amsterdam: Elsevier) pp 87–100
[3] Byrne C L 2002 Iterative oblique projection onto convex sets and the split feasibility problem Inverse Problems
18 441–53
[4] Byrne C L 2004 A unified treatment of some iterative algorithms in signal processing and image reconstruction
Inverse Problems 20 103–20
[5] Censor Y and Elfving T 1994 A multiprojection algorithm using Bregman projections in a product space Numer.
Algorithms 8 221–39
[6] Dembo R S, Eisenstat S C and Steihaug T 1982 Inexact Newton methods SIAM J. Numer. Anal. 19 400–8
[7] Facchinei F and Pang J S 2003 Finite-Dimensional Variational Inequalities and Complementarity Problems
(New York: Springer)
[8] Fukushima M 1984 On the convergence of a class of outer approximation algorithms for convex programs
J. Comput. Appl. Math. 10 147–56
[9] Fukushima M 1986 A relaxed projection method for variational inequalities Math. Program. 35 58–70

[10] Gafni E M and Bertsekas D P 1984 Two-metric projection problems and descent methods for asymmetric
variational inequality problems Math. Program. 53 99–110
[11] He B 1999 Inexact implicit methods for monotone general variational inequalities Math. Program. 35 199–217
[12] Pang J S 1986 Inexact Newton methods for the nonlinear complementarity problem Math. Program. 36 54–71
[13] Rockafellar R T 1970 Convex Analysis (Princeton, NJ: Princeton University Press)
[14] Toint Ph L 1988 Global convergence of a class of trust region methods for nonconvex minimization in Hilbert
space IMA J. Numer. Anal. 8 231–52
[15] Wang C Y and Xiu N H 2000 Convergence of the gradient projection method for generalized convex minimization
Comput. Optim. Appl. 16 111–20
[16] Xiu N H and Zhang J Z 2003 Some recent advances in projection-type methods for variational inequalities
J. Comput. Appl. Math. 152 559–85
[17] Yang Q Z 2004 The relaxed CQ algorithm solving the split feasibility problem Inverse Problems 20 1261–6
[18] Zarantonello E H 1971 Projections on convex sets in Hilbert space and spectral theory Contributions to Nonlinear
Functional Analysis ed E H Zarantonello (New York: Academic)
