Discrete Tomography based on DC Programming and DCA

LE THI Hoai An
Laboratory of Theoretical and Applied Computer Science
Paul Verlaine - Metz University, Ile du Saulcy, 57045, Metz, France.
Email: lethi@univ-metz.fr

NGUYEN Trong Phuc
Ecole Supérieure des Communications et Transports, Hanoi, Vietnam
Email: phucsenh@gmail.com

PHAM Dinh Tao
LMI, INSA de Rouen, 76801 Saint-Etienne-du-Rouvray, France
Email: pham@insa-rouen.fr
Abstract—In this article, we present a new continuous approach based on DC (Difference of Convex functions) programming and DC algorithms (DCA) to Discrete Tomography. We are concerned with the reconstruction of binary images from their projections in a small number of directions. We treat this problem as DC programs. DC programming and DCA, by now classic, were introduced by PHAM DINH T. in 1985 and have been extensively developed by LE THI H. A. and PHAM DINH T. since 1994. DCA has been successfully applied to many large-scale differentiable and nondifferentiable nonconvex programs, for which it quite often provides global solutions (see [1], [2] and references therein). Preliminary numerical experiments show the efficiency of the proposed algorithms.
Keywords: X-ray tomography, Image reconstruction, Optimization methods, Binary quadratic programming, Mixed zero-one linear programming, DC (Difference of Convex functions) programming, DCA
I. INTRODUCTION
Tomography concerns recovering images from a number of projections. Discrete Tomography is a relatively new research direction concerned with the reconstruction of binary images from their discretely sampled projections in a small number of directions. The name Discrete Tomography (DT) is due to Larry Shepp, organizer of the first meeting devoted to the topic in 1994. Many problems of DT were introduced as combinatorial problems around 1960. Today, DT has applications in many fields such as medical imaging, data security, game theory, material sciences and image compression ([3]).
The aim of DT is to reconstruct certain discrete density functions from their discrete X-rays in certain directions. The original reconstruction problem of DT is represented by a system of linear equations. However, this system is highly underdetermined and in general admits a large number of solutions. Two strategies can be used to reduce the ambiguity and retrieve some information:
i) The first strategy is based on increasing the number of directions. Unfortunately, the reconstruction problem becomes much harder when the number of directions is greater than two; if this number is equal to or greater than three, the problem is NP-hard ([4]).
Fig. 1. Multi-solutions with the same orthogonal projections.
ii) The second strategy is based on adding geometrical properties that can be known a priori, for example the distribution, convexity, connectivity or periodicity of the image. This information allows reconstruction algorithms to discard incorrect approximate solutions ([4]).
In this article, we use both strategies i) and ii). By adding the standard smoothness prior enforcing spatial coherency, we consider two formulations of this problem:
• The first formulation corresponds to a binary convex quadratic programming problem. Using a new result on exact penalty in DC programming ([5]), we reformulate the problem as a polyhedral DC program and then apply DCA.
• The second formulation is based on the 0-1 linear programming problem formulated by P. Gritzmann ([6]). Thanks to the exact penalty, we obtain an equivalent polyhedral DC program and then apply DCA to the resulting problem.
The remainder of the paper is organized as follows. The problem of reconstruction of binary images and its mathematical formulations are presented in Section 2. Section 3 is devoted to the description of DC programming and DCA for solving the DT problem via the two optimization models formulated in the previous section. The computational results are reported in Section 4.
II. PROBLEM DESCRIPTION AND ITS MATHEMATICAL
FORMULATIONS
The reconstruction of binary images is equivalent to the reconstruction of a binary matrix if we consider a pixel of the image as an element of the matrix. Figure 2 illustrates a binary image and the equivalent binary matrix. Therefore, the
Fig. 2. The binary image and the equivalent binary matrix.
reconstruction of a binary image is represented by a system of linear equations Px = b, where x ∈ {0, 1}^N, P is an M × N matrix and b is a vector in R^M. If we consider a binary image of size n_1 × n_2, then M = n_1 + n_2 and N = n_1 n_2. This problem is equivalent to the following convex quadratic program with binary variables:

    0 = min { ‖Px − b‖² : x ∈ {0, 1}^N },    (1)
or the following 0-1 linear program:

    min { −⟨e, x⟩ : Px = b, x ∈ {0, 1}^N },    (2)
P. Gritzmann et al. [6] have proposed two 0-1 linear programming problems, BIF (Best Inner Fit) and BOF (Best Outer Fit), that approximate problem (1):

    (BIF)  min { −⟨e, x⟩ : Px ≤ b, x ∈ {0, 1}^N },    (3)

    (BOF)  max { ⟨e, x⟩ : Px ≥ b, x ∈ {0, 1}^N }.    (4)
The authors developed some simple algorithms to solve
these problems.
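For two orthogonal projection directions, the system Px = b described above stacks the row sums and column sums of the image. The sketch below, in Python with NumPy, builds P and b for a small example; the function name `projection_system` and the 2 × 3 test image are illustrative choices of ours, not from the paper.

```python
import numpy as np

def projection_system(image):
    """Build P (M x N) and b (M,) for horizontal/vertical projections.

    Each row of P selects one image row or one image column, so the
    flattened binary image x = image.reshape(-1) satisfies P x = b,
    with b holding the row sums followed by the column sums.
    """
    n1, n2 = image.shape
    N, M = n1 * n2, n1 + n2
    P = np.zeros((M, N))
    for i in range(n1):                  # row-sum equations
        P[i, i * n2:(i + 1) * n2] = 1
    for j in range(n2):                  # column-sum equations
        P[n1 + j, j::n2] = 1
    b = P @ image.reshape(-1)
    return P, b

img = np.array([[1, 0, 1],
                [0, 1, 0]])
P, b = projection_system(img)
# b lists the row sums (2, 1) followed by the column sums (1, 1, 1)
```

Any binary x with the same row and column sums is a solution of Px = b, which illustrates the ambiguity discussed in strategy i).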
In this paper, aiming to remove incorrect approximate solutions, we add to the objective function of problem (2) the standard smoothness prior enforcing spatial coherency, Σ_{(i,j)} (x_i − x_j)², where the sum runs over all pairs of 4-nearest-neighbor pixels. In fact, this relation is one of the important characteristics of an image, because nearest neighbors often have similar values, and the probability that they belong to the same partition is very high. Figure 3 shows a pixel and its 4 nearest neighbors when we consider only orthogonal projections.
A. The first model
The first mathematical model is based on (3). By adding the standard smoothness prior enforcing spatial coherency, we consider the following problem:

    (IQP)  min  −⟨e, x⟩ + α Σ_{i=1}^{N} Σ_{j∈V_i} (x_i − x_j)²
           s.t. Px ≤ b,
                x ∈ {0, 1}^N,    (5)

Fig. 3. The pixel and its 4 nearest neighbors.

where e is the vector of all ones and V_i denotes the set of 4 nearest neighbors of pixel i. This is a binary quadratic programming problem.
B. The second model
By introducing the auxiliary variables {z_{i,j}}, where z_{i,j} = (x_i − x_j)², we can express (IQP) as a zero-one linear programming problem:

    (MIP)  min  −⟨e, x⟩ + α Σ_{i=1}^{N} Σ_{j∈V_i} z_{i,j}
           s.t. Px ≤ b,
                z_{i,j} ≥ x_i − x_j,  ∀i = 1, ..., N, j ∈ V_i,
                z_{i,j} ≥ x_j − x_i,  ∀i = 1, ..., N, j ∈ V_i,
                x ∈ {0, 1}^N.    (6)

Clearly, the difficulty arises when the number of binary variables increases.
We develop DCA for both models by reformulating them as continuous minimization problems via exact penalty techniques. There are several versions of DCA for linear/quadratic programs with binary variables, obtained by choosing different penalty functions and/or different DC decompositions. In the next section, we describe two DCA schemes corresponding to two suitable DC decompositions.
III. DC PROGRAMMING AND DCA
DC Programming and DCA constitute the backbone of smooth/nonsmooth nonconvex programming and global optimization. They address the problem of minimizing a function f which is a difference of convex functions on the whole space R^p or on a convex set C ⊂ R^p. Generally speaking, a DC program takes the form

    α = inf { f(x) := g(x) − h(x) : x ∈ R^p },    (P_dc)  (7)

where g and h are lower semicontinuous proper convex functions on R^p. Such a function f is called a DC function, g − h a DC decomposition of f, and g and h the DC components of f. The convex constraint x ∈ C can be incorporated in the objective function of (P_dc) by using the indicator function of C, denoted χ_C, which is defined by χ_C(x) = 0 if x ∈ C, +∞ otherwise. Let

    g*(y) := sup { ⟨x, y⟩ − g(x) : x ∈ R^p }
Algorithm 1 DCA Scheme
Input:
• Let x^0 ∈ R^p be an initial guess, k ← 0.
Repeat:
• Calculate y^k ∈ ∂h(x^k).
• Calculate

    x^{k+1} ∈ arg min { g(x) − h(x^k) − ⟨x − x^k, y^k⟩ : x ∈ R^p }.    (P_k)

• k ← k + 1.
Until convergence of {x^k}.
be the conjugate function of g, where ⟨·, ·⟩ denotes the scalar product. Then the following program is called the dual program of (P_dc):

    α_D = inf { h*(y) − g*(y) : y ∈ R^p }.    (D_dc)  (8)

One can prove that α = α_D, and there is a perfect symmetry between primal and dual DC programs: the dual of (D_dc) is exactly (P_dc).
For a convex function θ, the subdifferential of θ at x^0 ∈ dom θ := {x ∈ R^p : θ(x) < +∞}, denoted ∂θ(x^0), is defined by

    ∂θ(x^0) := { y ∈ R^p : θ(x) ≥ θ(x^0) + ⟨x − x^0, y⟩, ∀x ∈ R^p }.    (9)

The subdifferential ∂θ(x^0) generalizes the derivative in the sense that θ is differentiable at x^0 if and only if ∂θ(x^0) ≡ {θ′(x^0)}.
The idea of DCA is quite simple: each iteration of DCA approximates the concave part −h by one of its affine majorizations, defined by y^k ∈ ∂h(x^k), and minimizes the resulting convex function (i.e., computes x^{k+1} ∈ ∂g*(y^k)). Convergence properties of DCA and its theoretical basis can be found in [1, 2, 7]. It is important to mention that:
• DCA is a descent method (the sequences {g(x^k) − h(x^k)} and {h*(y^k) − g*(y^k)} are decreasing) without line search;
• if the optimal value α of problem (P_dc) is finite and the infinite sequences {x^k} and {y^k} are bounded, then every limit point x* (resp. y*) of the sequence {x^k} (resp. {y^k}) is a critical point of g − h (resp. h* − g*), i.e. ∂h(x*) ∩ ∂g(x*) ≠ ∅ (resp. ∂h*(y*) ∩ ∂g*(y*) ≠ ∅);
• DCA has linear convergence for general DC programs;
• DCA has finite convergence for polyhedral DC programs.
It is interesting to note ([1, 2, 7]) that DCA works with the convex DC components g and h and not with the DC function f itself. Moreover, a DC function f has infinitely many DC decompositions, which have crucial impacts on the qualities (speed of convergence, robustness, efficiency, globality of computed solutions, ...) of DCA.
A. DCA for solving problem (IQP)
Let us consider the problem (IQP) in the following simplified form:

    (GIQP)  min  −⟨e, x⟩ + (α/2)⟨x, Qx⟩
            s.t. Px ≤ b,
                 x ∈ {0, 1}^N,    (10)

where

    Σ_{i=1}^{N} Σ_{j∈V_i} (x_i − x_j)² = (1/2)⟨x, Qx⟩,

with Q being a positive semidefinite N × N matrix.
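Under the assumption that V_i contains the 4 nearest grid neighbors and each neighbor pair is counted in both directions, Q is four times the Laplacian of the grid graph, hence positive semidefinite. A hedged sketch of its construction (the function name `smoothness_matrix` is ours, not from the paper):

```python
import numpy as np

def smoothness_matrix(n1, n2):
    """Q with sum_i sum_{j in V_i} (x_i - x_j)^2 = (1/2) <x, Qx>,
    where V_i are the 4 nearest neighbours on an n1 x n2 grid.

    Counting each neighbour pair in both directions, the double sum
    equals 2 x^T L x for the grid-graph Laplacian L = D - A, so
    Q = 4 (D - A), which is symmetric positive semidefinite.
    """
    N = n1 * n2
    A = np.zeros((N, N))              # adjacency of the 4-neighbour grid
    for r in range(n1):
        for c in range(n2):
            i = r * n2 + c
            if c + 1 < n2:            # right neighbour
                A[i, i + 1] = A[i + 1, i] = 1
            if r + 1 < n1:            # lower neighbour
                A[i, i + n2] = A[i + n2, i] = 1
    D = np.diag(A.sum(axis=1))        # degree matrix
    return 4 * (D - A)
```

The identity can be checked numerically by comparing the explicit double sum with (1/2)⟨x, Qx⟩ on random vectors.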
Let p be the penalty function defined by

    p(x) = Σ_{i=1}^{N} min(x_i, 1 − x_i).
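This penalty is straightforward to evaluate; a small sketch (the name `penalty` is ours):

```python
import numpy as np

def penalty(x):
    """Exact penalty p(x) = sum_i min(x_i, 1 - x_i): nonnegative on
    [0, 1]^N and zero exactly at the binary points."""
    x = np.asarray(x, dtype=float)
    return float(np.minimum(x, 1.0 - x).sum())
```

For example, p vanishes on any binary vector and equals N/2 at the "most fractional" point (0.5, ..., 0.5).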
Let D_1 and K_1 be the sets defined by

    D_1 := { x ∈ R^N : Px ≤ b },   K_1 := { x ∈ D_1 : x ∈ [0, 1]^N }.

Clearly, the function p is concave and finite on K_1, and p(x) ≥ 0 for all x ∈ K_1. Moreover,

    { x ∈ D_1 : x ∈ {0, 1}^N } = { x ∈ K_1 : p(x) ≤ 0 }.
Thanks to the exact penalty in DC programming ([8]), we can reformulate problem (GIQP) as the following polyhedral DC program, for a sufficiently large positive number t (t ≥ t_0):

    min { −⟨e, x⟩ + (α/2)⟨x, Qx⟩ + t Σ_{i=1}^{N} min(x_i, 1 − x_i) : x ∈ K_1 }.    (11)
Problem (11) is a DC program with the following natural DC decomposition:

    g(x) = (α/2)⟨x, Qx⟩ − ⟨e, x⟩,    (12)

    h(x) = −t Σ_{i=1}^{N} min(x_i, 1 − x_i).    (13)
According to the description of DCA in [2], solving problem (11) by DCA consists of the determination of two sequences {x^k} and {y^k} such that

    y^k ∈ ∂h(x^k)  and  x^{k+1} ∈ ∂g*(y^k).
The function h is subdifferentiable, and a subgradient at x can be chosen in the following way:

    y = (y_i) ∈ ∂h(x)  with  y_i = −t if x_i ≤ 0.5, y_i = t otherwise.    (14)
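The subgradient choice (14) amounts to an elementwise test against 0.5; a minimal sketch (the name `subgrad_h` is ours):

```python
import numpy as np

def subgrad_h(x, t):
    """A subgradient of h(x) = -t * sum_i min(x_i, 1 - x_i), as in (14):
    y_i = -t where x_i <= 0.5, and y_i = t elsewhere."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.5, -t, t)
```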
The next iterate x^{k+1} ∈ ∂g*(y^k) is a solution of the convex quadratic program:

    min { (α/2)⟨x, Qx⟩ − ⟨e + y^k, x⟩ : x ∈ K_1 }.    (15)
Algorithm 2 Algorithm DCA-1
Input:
• Let x^0 ∈ R^N and k = 0.
• Let α and t be positive numbers and ε_1 and ε_2 be sufficiently small positive numbers.
Repeat:
• Compute y^k ∈ ∂h(x^k) via (14).
• Compute x^{k+1} ∈ ∂g*(y^k) as a solution of the convex quadratic program (15).
• k ← k + 1.
Until ‖x^{k+1} − x^k‖ ≤ ε_1 or |f(x^{k+1}) − f(x^k)| ≤ ε_2.
Reconstruction of a binary image: Let x* be the solution computed by DCA. To reconstruct a binary image, we round the variables x*_i that are not integer. More precisely, we set x*_i = 1 if x*_i ≥ 0.5 and x*_i = 0 otherwise.
The convergence of Algorithm DCA-1 follows from the convergence theorem for polyhedral DC programs ([1, 2, 7]).
B. DCA for solving problem (MIP)
In this section, we formulate (MIP) as a (continuous) concave minimization problem.
Let us consider the penalty function p defined by

    p(x, z) = θ(x) = Σ_{i=1}^{N} min(x_i, 1 − x_i).
Let

    D_2 := { (x, z) : Px ≤ b, z_{i,j} ≥ x_i − x_j, z_{i,j} ≥ x_j − x_i },

and K_2 := { (x, z) ∈ D_2 : x ∈ [0, 1]^N }. As above, the function p is finite and concave on K_2, and p(x, z) ≥ 0 for all (x, z) ∈ K_2. Moreover,

    { (x, z) ∈ D_2 : x ∈ {0, 1}^N } = { (x, z) ∈ K_2 : p(x, z) ≤ 0 }.
Consequently, (MIP) can be rewritten as

    min { −⟨e, x⟩ + α Σ_{i=1}^{N} Σ_{j∈V_i} z_{i,j} : (x, z) ∈ D_2, p(x, z) ≤ 0 }.    (16)
Thanks to the exact penalty result in [8], (MIP) is equivalent to the following concave minimization problem, for a sufficiently large positive number t (t ≥ t_0):

    min { −⟨e, x⟩ + α Σ_{i=1}^{N} Σ_{j∈V_i} z_{i,j} + t Σ_{i=1}^{N} min(x_i, 1 − x_i) : (x, z) ∈ K_2 }.    (17)
We now show that (17) is a DC program and then present DCA applied to the resulting DC program. Let

    f(x, z) = −⟨e, x⟩ + α Σ_{i=1}^{N} Σ_{j∈V_i} z_{i,j} + t Σ_{i=1}^{N} min(x_i, 1 − x_i).
Algorithm 3 Algorithm DCA-2
Input:
• Let (x^0, z^0) ∈ R^N × R^M and k = 0.
• Let α and t be positive numbers and let ε_1 and ε_2 be sufficiently small positive numbers.
Repeat:
• Compute (u^k, v^k) ∈ ∂h(x^k, z^k) via (19).
• Compute (x^{k+1}, z^{k+1}) ∈ ∂g*(u^k, v^k) as a solution of the linear program (20).
• k ← k + 1.
Until ‖(x^{k+1}, z^{k+1}) − (x^k, z^k)‖ ≤ ε_1 or |f(x^{k+1}, z^{k+1}) − f(x^k, z^k)| ≤ ε_2.
Denote by χ_{K_2} the indicator function of K_2: χ_{K_2}(x, z) = 0 if (x, z) ∈ K_2, +∞ otherwise. Let g and h be the functions defined by

    g(x, z) = χ_{K_2}(x, z),

    h(x, z) = ⟨e, x⟩ − α Σ_{i=1}^{N} Σ_{j∈V_i} z_{i,j} − t Σ_{i=1}^{N} min(x_i, 1 − x_i).

Hence g and h are convex functions, and problem (17) is a DC program of the form

    min { g(x, z) − h(x, z) : (x, z) ∈ R^N × R^M }.    (18)
Solving problem (17) by DCA consists of the determination of two sequences {(u^k, v^k)} and {(x^k, z^k)} such that

    (u^k, v^k) ∈ ∂h(x^k, z^k)  and  (x^{k+1}, z^{k+1}) ∈ ∂g*(u^k, v^k).
The function h is subdifferentiable, and a subgradient at the point (x, z) can be computed in the following way:

    (u, v) ∈ ∂h(x, z)  with  u = (u_i), u_i = 1 − t if x_i ≤ 0.5, u_i = 1 + t otherwise;  v = −αe.    (19)
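As with (14), the subgradient (19) is an elementwise formula; a minimal sketch (the name `subgrad_h2` is ours):

```python
import numpy as np

def subgrad_h2(x, z, alpha, t):
    """A subgradient (u, v) of h(x, z), as in (19):
    u_i = 1 - t if x_i <= 0.5, u_i = 1 + t otherwise; v = -alpha * e."""
    x = np.asarray(x, dtype=float)
    u = np.where(x <= 0.5, 1.0 - t, 1.0 + t)
    v = -alpha * np.ones(len(np.asarray(z).ravel()))
    return u, v
```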
The next iterate (x^{k+1}, z^{k+1}) ∈ ∂g*(u^k, v^k) is a solution of the linear program:

    min { −⟨(u^k, v^k), (x, z)⟩ : (x, z) ∈ K_2 }.    (20)
The convergence of DCA-2 is based on the convergence
property of polyhedral DC programs (see [1]).
IV. IMPLEMENTATIONS AND RESULTS
The algorithms were implemented in C with double precision and run on a 1 GHz Dell computer with 512 MB of RAM. To solve the linear and convex quadratic programs, we used CPLEX version 9.1. In order to evaluate the performance of the proposed algorithms, we used images with special characteristics: the first image is elliptic and rectangular, and the second is elliptic. The penalization parameter α for the relation between a pixel and its neighbors was selected in the interval [0.1, 0.5].
Our experiments are composed of two parts. In the first experiment we are interested in the effect of the parameters α and t in DCA, and in the efficiency of DCA compared to the classical BIF algorithm, in both cases with and without the standard smoothness prior enforcing spatial coherency. The results are presented in Figure 4. Here we consider the first image with size 128 × 128.
In the second experiment we compare the performance of DCA with two different choices of the initial point: the first is a random initial point, and the second is the best point among a number of random points. Figure 5 shows the results of Algorithm DCA-1 for the two images with these choices of initial points in the case α = 0.5, t = 0.25.
Finally, the performance of DCA (the CPU time in seconds and the number of iterations) for the three sizes of the first image is summarized in Table I and Table II.
Comments on the numerical results: from the computational results, we observe that
◦ The DCA algorithms produce images of higher quality than those obtained by the BIF algorithm, in both cases with and without the standard smoothness prior enforcing spatial coherency.
◦ The DCA algorithms depend on the penalty parameters. DCA can provide a very good result with a good choice of these parameters. For example, in Figure 4 the reconstructed image is identical to the original with t = 0.25 and α = 0.125.
◦ The sequence of images obtained by Algorithm DCA-1, presented in Figure 5, shows that with a suitable initial point DCA gives a good result.
Size        64 × 64   128 × 128   256 × 256
DCA-1 (s)   5.12      43.72       404.14
DCA-2 (s)   7.38      23.37       456.37
TABLE I
THE AVERAGE TIME (IN SECONDS) OF EACH ITERATION OF DCA
Conclusion: In this paper, we considered two combinatorial optimization models for the reconstruction of binary images. Using an exact penalty technique, we reformulated the problems
Fig. 4. Results of DCA algorithms with selected parameters
Fig. 5. Algorithm DCA-1 with different choices of initial point
Size      64 × 64           128 × 128          256 × 256
          N° Iter   Time    N° Iter   Time     N° Iter   Time
DCA-1     9         46.08   8         425.38   8         3896.47
DCA-2     11        85.75   12        249.01   11        4587.38
TABLE II
THE CPU TIME (IN SECONDS) AND THE NUMBER OF ITERATIONS OF DCA
in the form of DC programs and proposed two DCA schemes for solving them. The results show that the reconstructed images given by DCA are quite similar to the original images. They suggest that this approach is interesting for reconstructing binary images.
REFERENCES
[1] H. A. Le Thi and T. Pham Dinh, "A continuous approach for globally solving linearly constrained quadratic zero-one programming problems," Optimization, vol. 50, pp. 93–120, 2001.
[2] T. Pham Dinh and H. A. Le Thi, "Convex analysis approach to DC programming: theory, algorithms and applications," Acta Mathematica Vietnamica, vol. 22, no. 1, pp. 289–355, 1997.
[3] G. Herman and A. Kuba, Discrete Tomography: Foundations, Algorithms and Applications, Birkhäuser, Boston, 1999.
[4] R. J. Gardner, P. Gritzmann, and D. Prangenberg, "On the computational complexity of reconstructing lattice sets from their X-rays," Discrete Mathematics, vol. 202, pp. 45–71, 1999.
[5] H. A. Le Thi, T. Pham Dinh, and H. Van Ngai, "Exact penalty techniques in DC programming," Research Report, INSA de Rouen, 2004.
[6] P. Gritzmann, S. de Vries, and M. Wiegelmann, "Approximating binary images from discrete X-rays," SIAM J. Optimization, vol. 11, no. 2, pp. 522–546, 2000.
[7] T. Pham Dinh and H. A. Le Thi, "DC optimization algorithms for solving the trust region subproblem," SIAM Journal on Optimization, vol. 8, no. 2, pp. 476–505, 1998.
[8] H. A. Le Thi, T. Pham Dinh, and M. Le Dung, "Exact penalty in DC programming," Vietnam Journal of Mathematics, vol. 27, no. 2, pp. 169–178, 1999.
