Xue-Cheng Tai, Egil Bae, Tony F. Chan, Marius Lysaker (Eds.)

Energy Minimization Methods in Computer Vision and Pattern Recognition

10th International Conference, EMMCVPR 2015
Hong Kong, China, January 13–16, 2015
Proceedings

LNCS 8932
Lecture Notes in Computer Science 8932
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Volume Editors
Xue-Cheng Tai
University of Bergen, Department of Mathematics
Bergen, Norway
E-mail: tai@math.uib.no
Egil Bae
University of California, Department of Mathematics
Los Angeles, CA, USA
E-mail: ebae@math.ucla.edu
Tony F. Chan
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong, S.A.R.
E-mail: tonyfchan@ust.hk
Marius Lysaker
Telemark University College
Porsgrunn, Norway
E-mail: marius.lysaker@hit.no
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication
or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location,
in its current version, and permission for use must always be obtained from Springer. Permissions for use
may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution
under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the
material contained herein.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Energy minimization has become an important paradigm for solving many chal-
lenging problems within computer vision and pattern recognition over the past
few decades. Mathematical models that describe the desired solution as the min-
imizer of an energy potential arise through different schools of thought, includ-
ing statistical approaches in the form of Markov random fields and geometrical
approaches in the form of variational models or equivalent partial differential
equations. Besides the challenge of formulating appropriate energy minimiza-
tion models, a significant research topic is the design of computational methods
for reliably and efficiently obtaining solutions of minimal energy.
This book contains 36 original research articles that cover the whole spectrum
of energy minimization in computer vision and pattern recognition, including
design and analysis of mathematical models and design of discrete and con-
tinuous optimization algorithms. Application areas include image segmentation
and tracking, image restoration and inpainting, multiview reconstruction, shape
optimization, and texture and color analysis. The articles have been carefully
selected through a thorough double-blind peer-review process.
Furthermore, we were delighted that three internationally recognized ex-
perts in the fields of computer vision, pattern recognition, and optimization,
namely, Andrea Bertozzi (UCLA), Ron Kimmel (Technion-IIT), and Long Quan
(HKUST), agreed to further enrich the conference with inspiring keynote
lectures.
We would like to express our gratitude to those who made this event possible
and contributed to its success. In particular, our Program Committee of top
international experts in the field provided excellent reviews. The administrative
and financial support from the Hong Kong University of Science and Technology
(HKUST), especially from HKUST Jockey Club Institute for Advanced Study
(IAS), was crucial for the success of this event. We are grateful to Linus See
(HKUST), Eric Lin (HKUST) and Shing Yu Leung (HKUST) for providing very
helpful local administrative support. It is our belief that this conference helped
to advance the field of energy minimization methods and to further establish the
mathematical foundations of computer vision and pattern recognition.
EMMCVPR 2015 was organized by the HKUST Jockey Club Institute for
Advanced Study (IAS).
Executive Committee
Conference Chair
Xue-Cheng Tai University of Bergen, Norway
Organizers
Egil Bae UCLA, USA
Tony F. Chan HKUST, Hong Kong
Marius Lysaker Telemark University College, Norway
Shing Yu Leung HKUST, Hong Kong
Invited Speakers
Andrea Bertozzi University of California at Los Angeles, USA
Ron Kimmel Technion-IIT, Israel
Yi Ma ShanghaiTech, China
Long Quan HKUST, Hong Kong
Program Committee
J.-F. Aujol, M. Björkman, M. Blaschko, A. Bruhn, R. Chan, X. Chen, J. Clark, D. Cremers, J. Darbon, G. Doretto, P. Favaro, M. Felsberg, M. Figueiredo, A. Fix, B. Flach, D. Geiger, H. Ishikawa, D. Jacobs, F. Kahl, R. Kimmel, I. Kokkinos, A. S. Konushin, S. Li, H. Li, S. Maybank, M. Nikolova, M. Pelillo, T. Pock, C. Schnoerr, C.-B. Schonlieb, A. Schwing, F. Sgallari, A. Shekhovtsov, H. Talbot, W. Tao, O. Veksler, J. Weickert, O. Woodford, X. Wu, C. Wu, J. Yuan, J. Zerubia
Sponsoring Institutions
HKUST Jockey Club Institute for Advanced Study
Table of Contents
Inpainting of Cyclic Data Using First and Second Order Differences . . . 155
Ronny Bergmann and Andreas Weinmann

Segmentation

A Fast Projection Method for Connectivity Constraints in Image Segmentation . . . 183
Jan Stühmer and Daniel Cremers
1 Introduction
The assumption that measurements consist of noisy observations from a low
rank matrix has been proven useful in applications such as non-rigid and artic-
ulated structure from motion [1,2,3], photometric stereo [4] and optical flow [5].
The interpretation of the low rank assumption is that the observed data can
be written as a linear combination of a few basis elements. The factorization
approach, introduced to vision in [6], offers a simple way of determining both
coefficients and basis elements. If the measurement matrix M is complete then
the best approximation, in a least squares sense, can be computed in closed form
[7] using the singular value decomposition (SVD). The main drawback is that
the computation of a factorization requires a complete measurement matrix. In
structure from motion this means that every point has to be visible in every
image, something that rarely occurs in practice due to occlusions and track-
ing failures. In case there are missing entries and/or outliers the optimization
problem is substantially more difficult.
The issue of outliers has received a lot of attention lately. In [8,9] the more robust $L_1$-norm is considered. These methods build on the so-called Wiberg algorithm [10], which jointly optimizes a product $UV^T$ of two fixed-size matrices $U$ and $V$. As a consequence, the quality of the result depends on the initialization.
Another approach [11,3,12] tackles the problem of missing data by replacing the
rank constraint with the weaker but convex nuclear norm penalty and solves
$$\min_X \; \mu \|X\|_* + \|W \odot (X - M)\|_F^2, \qquad (1)$$
¹ This work has been funded by the Swedish Research Council (grant no. 2012-4213) and the Crafoord Foundation.
X.-C. Tai et al. (Eds.): EMMCVPR 2015, LNCS 8932, pp. 1–14, 2015.
© Springer International Publishing Switzerland 2015
2 V. Larsson and C. Olsson
where Wij = 0 if the entry is missing and 1 otherwise. This approach is convex
and therefore independent of initialization. In addition it can be shown that if
the locations of the missing entries are random the approach gives the best low
rank approximation [11]. The typical patterns of missing data in structure from
motion still pose a problem for these approaches.
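As an illustration of formulation (1), a standard way to solve it is proximal gradient descent: a gradient step on the data term followed by singular value soft-thresholding, the proximal operator of the nuclear norm. The sketch below is ours (function names, step size, and iteration count are assumptions, not from the paper):

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_completion(M, W, mu, n_iter=500, step=0.25):
    """Proximal gradient on  mu*||X||_* + ||W .* (X - M)||_F^2  (cf. (1)).
    W is a 0/1 mask; step <= 0.5 since the data-term gradient is 2-Lipschitz."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        grad = 2.0 * W * (X - M)        # gradient of the smooth data term
        X = svt(X - step * grad, step * mu)
    return X
```

The convexity noted in the text shows up here as independence of the starting point: initializing `X` at zero or at `M` leads to the same minimizer.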
The motivation for using the nuclear norm in (1) is that it is the convex
envelope of the rank function on the set {X; σmax (X) ≤ 1}. The constraint
σmax (X) ≤ 1 is however artificial and not present in (1). In [13] it is shown that
the so called localized rank function
$$f(X) = \mu\,\mathrm{rank}(X) + \|X - X_0\|_F^2, \qquad (2)$$
has the convex envelope
$$f^{**}(X) = \sum_{i=1}^n \left( \mu - \left[\sqrt{\mu} - \sigma_i(X)\right]_+^2 \right) + \|X - X_0\|_F^2. \qquad (3)$$
Note that the regularizer in (3) is itself not convex. The second term enables a proportionally smaller penalty for large singular values, without losing convexity, giving a tighter convex envelope in the neighborhood of $X_0$. In fact, in contrast
to the nuclear norm heuristic, minimizing (3) gives the same result as solving
(2) with SVD. The advantage of using (3) is that it is convex and therefore can
be combined with other convex constraints and functions. In [13] the missing
data problem is solved by minimizing (3) on complete sub-blocks and enforcing
agreement on the overlaps via linear constraints.
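The envelope in (3) is cheap to evaluate once the SVD is available: each singular value contributes $\mu - [\sqrt{\mu} - \sigma_i]_+^2$, which is 0 at $\sigma_i = 0$ and saturates at $\mu$ for $\sigma_i \ge \sqrt{\mu}$. A minimal sketch (the helper name is ours):

```python
import numpy as np

def envelope_value(X, X0, mu):
    """Evaluate f**(X) from (3):
    sum_i ( mu - [sqrt(mu) - sigma_i(X)]_+^2 ) + ||X - X0||_F^2."""
    sigma = np.linalg.svd(X, compute_uv=False)
    reg = np.sum(mu - np.maximum(np.sqrt(mu) - sigma, 0.0) ** 2)
    return reg + np.linalg.norm(X - X0, "fro") ** 2
```

For a matrix whose nonzero singular values all exceed $\sqrt{\mu}$, the regularizer equals $\mu \cdot \mathrm{rank}(X)$ exactly, matching the claim that (3) agrees with (2) at its minimizer.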
The formulation in [13] consists of a trade-off between matrix rank and data
fit. In many cases it is of interest to search for a matrix of known fixed rank.
For example for rigid structure from motion the measurement matrix is known
to be of rank 4 (or 3 if the translation can be eliminated) [6]. In such cases
the approach of solving (3) on sub-blocks requires determining an appropriate
weight μ for each sub-block that gives the correct rank. In this paper we show
that we can incorporate such knowledge by replacing (2) with
$$f_g(X) = g(\mathrm{rank}(X)) + \|X - X_0\|_F^2. \qquad (4)$$
In particular we are interested in the case where
$$g(\mathrm{rank}(X)) = \mu \max(r_0, \mathrm{rank}(X)), \qquad (5)$$
but our theory applies to a larger class of problems as well. The only requirement
that we make is that g is a non-decreasing convex function.
The reason for considering (5) is that, in case we know the rank of the sought matrix, we can simply let μ be large, thus avoiding the iteration over the parameters that is done in [13]. Consequently, our approach is essentially parameter free.
The max term also effectively reduces bias towards low rank solutions like the
zero solution that are often uninteresting, giving a tighter convex relaxation.
Our main contribution is the computation of the convex envelope of (4) and its
proximal operator. While the formulation does not admit closed form solutions
we give simple and fast algorithms for evaluations. In addition we present a way
of strengthening the convex envelopes using a trust-region formulation.
Convex Envelopes for Low Rank Approximation 3
Notation. Throughout the paper we use σi (X), i = 1, ..., n to denote the ith
singular value of a matrix X. Here n denotes the number of singular values and
for notational convenience we will also define σ0 (X) = ∞ and σn+1 (X) = 0. The
vector of all singular values is denoted σ(X). With some abuse of notation we
write the SVD of X as U diag(σ(X))V T . For ease of notation we do not explicitly
indicate the dependence of U and V on X. The scalar product is defined as
$\langle X, Y\rangle = \mathrm{tr}(X^T Y)$, where $\mathrm{tr}$ is the trace function, and the Frobenius norm by $\|X\|_F = \sqrt{\langle X, X\rangle} = \sqrt{\sum_{i=1}^n \sigma_i^2(X)}$. Truncation at zero is denoted $[a]_+$, that is, $[a]_+ = 0$ if $a < 0$ and $a$ otherwise.
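These identities can be checked numerically; the snippet below is only a sanity check of the notation on random data:

```python
import numpy as np

# Check <X,Y> = tr(X^T Y) and ||X||_F = sqrt(<X,X>) = sqrt(sum_i sigma_i(X)^2).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((4, 3))

inner = np.trace(X.T @ Y)
assert np.isclose(inner, np.sum(X * Y))          # entrywise form of the trace inner product

sigma = np.linalg.svd(X, compute_uv=False)
assert np.isclose(np.linalg.norm(X, "fro"), np.sqrt(np.sum(sigma ** 2)))
```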
The calculations for the first conjugate roughly follows those of [13] and we only
give the result here. We get that the first conjugate is given by
$$f_g^*(Y) = -\sum_{i=1}^n \min\!\left(g_i,\; \sigma_i^2\!\left(X_0 + \tfrac{Y}{2}\right)\right) - \|X_0\|_F^2 + \left\|X_0 + \tfrac{Y}{2}\right\|_F^2. \qquad (8)$$
where
$$R_g(X) = \max_Z \; \sum_{i=1}^n \min\!\left(g_i, \sigma_i^2(Z)\right) - \|Z - X\|_F^2. \qquad (10)$$
The next step in determining the convex envelope is to find the maximizing Z
in (10). We first note that using von Neumann’s trace theorem we can reduce
the problem to a search over the singular values of Z. The norm term fulfills
$$-\|Z - X\|_F^2 \le -\|Z\|_F^2 + 2\sum_{i=1}^n \sigma_i(Z)\,\sigma_i(X) - \|X\|_F^2, \qquad (11)$$
with equality if Z and X have the same U and V in their singular value decom-
positions. Since the sum in (10) does not depend on U or V the optimal Z has
to be of the form Z = U diag(σ(Z))V T if X = U diag(σ(X))V T . This reduces
the maximization in (10) to
$$\max_{\sigma(Z)} \; \sum_{i=1}^n \min\!\left(g_i, \sigma_i^2(Z)\right) - \sum_{i=1}^n \left(\sigma_i(Z) - \sigma_i(X)\right)^2. \qquad (12)$$
Note that the elements of σ(Z) have to fulfill σ1 (Z) ≥ σ2 (Z) ≥ ... ≥ σn (Z) since
these are singular values.
Properties of the Optimal σ(Z). To limit the search space for maximization
over σ(Z) we will next derive some properties of the maximizer. Considering each
singular value σk (Z) separately they should solve a program of the type
$$\max_s \; \min(g_k, s^2) - (s - \sigma_k(X))^2 \qquad (13)$$
$$\text{s.t.} \quad \sigma_{k+1}(Z) \le s \le \sigma_{k-1}(Z) \qquad (14)$$
Note that for k = 1 there is no upper bound on s and for k = n there is no positive
lower bound since we use the convention that σ0 (Z) = ∞ and σn+1 (Z) = 0. We
first consider the unconstrained objective function. This function is the pointwise minimum of the two concave functions $g_k - (s - \sigma_k(X))^2$ (for $s \ge \sqrt{g_k}$) and $s^2 - (s - \sigma_k(X))^2 = 2s\sigma_k(X) - \sigma_k^2(X)$. The function is concave and attains its optimum at $s = \sigma_k(X)$ if $\sigma_k(X) \ge \sqrt{g_k}$, and at $s = \sqrt{g_k}$ otherwise (see Figure 1).
In case σk (X) = 0 the optimum is not unique. For simplicity we will assume that
σk (X) > 0 in what follows. The solution we create will still be valid if σk (X) = 0
but might not be unique. Let sk be the individual unconstrained optimizers of
(13), i.e.
$$s_k = \max\!\left(\sqrt{g_k},\; \sigma_k(X)\right). \qquad (15)$$
Note that this sequence is decreasing when $\sigma_k(X)$ is larger than $\sqrt{g_k}$. We choose
k0 such that sk0 is the smallest value in the sequence sk .
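The sequence $s_k$ from (15) and the index $k_0$ are straightforward to compute; a small sketch (names ours, $k_0$ taken as the first index attaining the minimum):

```python
import numpy as np

def unconstrained_optimizers(sigma_X, g):
    """s_k = max(sqrt(g_k), sigma_k(X)) from (15), together with the index k0
    of the smallest value in the sequence (first occurrence, 0-based)."""
    s = np.maximum(np.sqrt(g), sigma_X)
    k0 = int(np.argmin(s))
    return s, k0
```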
We now consider the constrained problem (13)-(14). Since σk+1 (Z) ≤ σk−1 (Z)
we see that the optimization over σk (Z) can be limited to three choices
$$\sigma_k(Z) = \begin{cases} s_k & \text{if } \sigma_{k+1}(Z) \le s_k \le \sigma_{k-1}(Z) \\ \sigma_{k-1}(Z) & \text{if } \sigma_{k-1}(Z) < s_k \\ \sigma_{k+1}(Z) & \text{if } s_k < \sigma_{k+1}(Z) \end{cases} \qquad (16)$$
Fig. 1. The objective function in (13) for $\sigma_k(X) \le \sqrt{g_k}$ (left) and $\sigma_k(X) \ge \sqrt{g_k}$ (right)
For i = 1 we see from (16) that s1 is the optimal choice if s1 > σ2 (Z) oth-
erwise σ2 (Z) is optimal. Therefore σ1 (Z) = max(s1 , σ2 (Z)). Next assume that
σi−1 (Z) = max(si−1 , σi (Z)) for some i ≤ k0 . Then
therefore we can ignore the second case in (16), which proves the recursion (19).
To prove the lemma assume σk (Z) = sk for some k ≤ k0 . From (19) it follows
that
σk (Z) = σk+1 (Z) > sk . (21)
But sk is decreasing for k ≤ k0 which implies that σk+1 (Z) > sk+1 . By repeating
the argument it follows that
Proof. Consider σi (Z) for some i ≥ k0 . If σi (Z) > si it must have been bounded
from below in (16), i.e. σi (Z) = σi+1 (Z). If instead σi (Z) ≤ si we have σi+1 (Z) ≤
σi (Z) ≤ si ≤ si+1 . Then similarly σi+1 (Z) is bounded from above in (16) which
implies σi+1 (Z) = σi (Z).
for some k ≤ k0 and s ≤ σk (X). We can find the optimal k and s by considering
the following optimization problem
$$\max_{k \le k_0} \; \max_s \; \sum_{i=1}^k g_i + \sum_{i=k+1}^n \min(s^2, g_i) - \sum_{i=k+1}^n (s - \sigma_i(X))^2. \qquad (25)$$
For a fixed k < k0 it follows from Lemma 1 that s∗ = σk+1 (Z) must satisfy
Thus for each k < k0 we only need to consider s in the interval [σk+1 (X), σk (X)].
Since gi are increasing we can further divide this interval into subintervals. We let
$I_l = [\sqrt{g_{k_l}}, \sqrt{g_{k_l+1}}]$, where $g_{k_l}$, $l = 1, \ldots, m-1$, is the subsequence with terms in the (open) interval $(\sigma_{k+1}(X), \sigma_k(X))$. Furthermore, we let $I_0 = [\sigma_{k+1}(X), \sqrt{g_{k_1}}]$ and $I_m = [\sqrt{g_{k_m}}, \sigma_k(X)]$. Note that on each of these subintervals the objective
function can be written as a concave quadratic function
$$f_l^k(s) = \sum_{g_i \le g_{k_l}} g_i + \sum_{g_i > g_{k_l}} s^2 - \sum_{i=k+1}^n (s - \sigma_i(X))^2, \qquad s \in I_l \qquad (27)$$
The optimum must lie at either a feasible stationary point of $f_l^k$ or at one of the boundaries of $I_l$ for some $l$. To find the optimal $s$ we can simply enumerate all the possibilities and choose the maximizing one. Since each $\sqrt{g_i}$ only lies in one of the intervals $[\sigma_{k+1}(X), \sigma_k(X)]$, we only need to consider each $g_i$ once.
This makes the number of possible solutions depend linearly on the number of
singular values.
The steps of the method are summarized in Algorithm 1.
Algorithm 1: Finding the maximizing Z for (10)

    Data: X, g
    Result: σ(Z*)
    for k = 0 : k0 do
        Compute s* and l* from (28)
        if f_{l*}^k(s*) > f_opt then
            σ_i(Z*) := σ_i(X)  for all i < k
            σ_i(Z*) := s*      for all i ≥ k
            f_opt := f_{l*}^k(s*)
        end
    end
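On small problems, the solution structure that Algorithm 1 exploits, profiles of the form $(s_1, \ldots, s_k, t, \ldots, t)$ with a constant tail, can be checked by brute-force grid search over the tail value. This is not the paper's algorithm, only our verification sketch of the maximization in (12) restricted to that assumed profile family:

```python
import numpy as np

def maximizing_Z_bruteforce(sigma_X, g, grid=400):
    """Grid-search surrogate for Algorithm 1: maximize
    sum_i min(g_i, s_i^2) - (s_i - sigma_i(X))^2  over non-increasing s (cf. (12)),
    restricted to profiles (s_1, ..., s_k, t, ..., t)."""
    n = len(sigma_X)
    sk = np.maximum(np.sqrt(g), sigma_X)           # unconstrained optimizers, (15)

    def obj(s):
        return np.sum(np.minimum(g, s ** 2) - (s - sigma_X) ** 2)

    best_s, best_val = None, -np.inf
    hi = max(sk.max(), 1.0)                        # the scalar optima never exceed this
    for k in range(n + 1):                         # length of the "head" s_1..s_k
        for t in np.linspace(0.0, hi, grid):
            s = np.concatenate([sk[:k], np.full(n - k, t)])
            s = np.minimum.accumulate(s)           # enforce a non-increasing profile
            val = obj(s)
            if val > best_val:
                best_val, best_s = val, s
    return best_s, best_val
```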
In order to optimize the convex envelope fg∗∗ (X) efficiently we need to be able
to compute its proximal operator
The approach we will take is similar to how we evaluate $f_g^{**}(X)$ itself, but will require looping over two variables instead of one. The key observation is that switching the order of the minimization over $X$ with the maximization over $Z$ enables us to characterize optimal solutions similarly to Section 2.2.² We therefore solve
$$\max_Z \; \min_X \; \sum_{i=1}^n \min(g_i, \sigma_i^2(Z)) - \|X - Z\|_F^2 + \|X - X_0\|_F^2 + \rho\|X - M\|_F^2. \qquad (30)$$
$$X = M + \frac{X_0 - Z}{\rho}. \qquad (31)$$
$$\max_Z \; \sum_{i=1}^n \min(g_i, \sigma_i^2(Z)) - \frac{\rho+1}{\rho}\|Z - Y\|_F^2 + C, \qquad (32)$$

where

$$Y = \frac{X_0 + \rho M}{1 + \rho}. \qquad (33)$$
² If ρ > 0 the objective function is closed, proper convex-concave, and continuous, and the optimization can be restricted to a compact set. Switching the optimization order is therefore justified by the existence of a saddle point, see [14].
Therefore we see that the singular value σk (Z) must solve the problem
$$\max_s \; \min(g_k, s^2) - \frac{\rho+1}{\rho}(s - \sigma_k(Y))^2 \qquad (34)$$
$$\text{s.t.} \quad \sigma_{k+1}(Z) \le s \le \sigma_{k-1}(Z). \qquad (35)$$
The objective function (34) is the pointwise minimum of the two quadratic
strictly concave (assuming ρ > 0) functions,
$$q_1(s) = g_k - \frac{\rho+1}{\rho}(s - \sigma_k(Y))^2, \qquad q_2(s) = s^2 - \frac{\rho+1}{\rho}(s - \sigma_k(Y))^2. \qquad (36)$$
The objective function is equal to $q_1(s)$ for $s \ge \sqrt{g_k}$ and to $q_2(s)$ otherwise. The functions $q_1$ and $q_2$ attain their maximum values at $s = \sigma_k(Y)$ and $s = (\rho+1)\sigma_k(Y)$ respectively. Note that since $(\rho+1)\sigma_k(Y) > \sigma_k(Y)$, at most one of these can be feasible. It can also happen that neither is feasible, i.e. $\sigma_k(Y) \le \sqrt{g_k} \le (\rho+1)\sigma_k(Y)$. In this case the optimal $s = \sqrt{g_k}$. Figure 2 illustrates the shape of the objective function in the three possible cases.
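The three cases translate directly into a scalar rule for the unconstrained optimum of (34); the helper below is our own naming, with $g_k$ and $\sigma_k(Y)$ passed as scalars:

```python
import numpy as np

def prox_singular_value(sigma_y, g_k, rho):
    """Unconstrained maximizer of  min(g_k, s^2) - ((rho+1)/rho)(s - sigma_y)^2,
    following the three cases: q2's peak feasible, q1's peak feasible, or neither."""
    root = np.sqrt(g_k)
    if (rho + 1) * sigma_y <= root:
        return (rho + 1) * sigma_y      # peak of q2 lies in s <= sqrt(g_k)
    if sigma_y >= root:
        return sigma_y                  # peak of q1 lies in s >= sqrt(g_k)
    return root                         # neither peak feasible: optimum at the kink
```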
Fig. 2. The objective function in (34) for left: $(\rho+1)\sigma_k(Y) \le \sqrt{g_k}$; middle: $\sigma_k(Y) \le \sqrt{g_k}$ and $(\rho+1)\sigma_k(Y) \ge \sqrt{g_k}$; right: $\sigma_k(Y) \ge \sqrt{g_k}$
Algorithm 2: Finding the maximizing Z for the proximal operator (32)

    Data: X0, ρ, μ, M
    Result: Set of possible solutions S
    S := ∅
    Define p, q as in the proof of Lemma 3
    if s_i is decreasing with i then
        S := {s_i}
        return
    else
        for k1 = 1 : p do
            for k2 = q : n do
                Compute s* from (41) and form σ(Z) as in Lemma 3
                if σ_i(Z) is decreasing with i then
                    S := S ∪ {σ(Z)}
                end
            end
        end
    end
Here $\Lambda_i^t$, $i = 1, \ldots, K$ are the scaled dual variables, whose updates at iteration $t$ are given by $\Lambda_i^{t+1} = \Lambda_i^t + X_i^{t+1} - P_i(X^{t+1})$. The first problem (50) can be
solved using the proximal operator derived in the previous section. The second
subproblem (51) is a separable least squares problem with closed form solution.
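The scaled dual update has a one-line implementation. In the sketch below (names illustrative, not from the paper's code), `P` holds the already evaluated block projections $P_i(X^{t+1})$:

```python
import numpy as np

def admm_dual_update(Lambdas, X_blocks, P):
    """Scaled ADMM dual step  Lambda_i <- Lambda_i + X_i - P_i(X)  for every
    overlapping block i; all three arguments are lists of equally shaped arrays."""
    return [L + Xi - Pi for L, Xi, Pi in zip(Lambdas, X_blocks, P)]
```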
block $P_k(X)$ can be factorized as $P_k(X) = U_k V_k^T$. Then $P_k(U)$³ must lie in the column space of $U_k$, or equivalently it must be orthogonal to the complement, i.e. $(U_k^\perp)^T P_k(U) = 0$. We can also write this as
$$A_k U = \begin{bmatrix} 0 & (U_k^\perp)^T & 0 \end{bmatrix} U = 0. \qquad (52)$$
Collecting these into a matrix gives $AU = 0$, and we can find $U$ by minimizing $\|AU\|$. Since the scale of $U$ is arbitrary, we can consider this a homogeneous least squares problem, which can be solved using the SVD. For known $U$ we can then find $V$ by minimizing $\|W \odot (M - UV^T)\|$.
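The homogeneous system $AU = 0$ is solved by taking the right singular vectors of $A$ associated with the smallest singular values; a sketch (function name ours, with the target rank $r$ assumed known):

```python
import numpy as np

def solve_homogeneous(A, r):
    """Return U (columns = the r right singular vectors of A with smallest
    singular values), the minimizer of ||A U||_F over orthonormal n x r U."""
    _, _, Vt = np.linalg.svd(A)     # rows of Vt are sorted by decreasing sigma
    return Vt[-r:].T
```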
In case of very large noise levels the regularizer Rg may not be strong enough
to enforce low rank of the solution. In this section we present an approach to
strengthen it by restricting the algorithm to a local search close to a current
solution estimate Xk . We consider minimization of
$$(1+\lambda)\, R_{\frac{g}{1+\lambda}}(X) + \|X - X_0\|_F^2 + \lambda\|X - X_k\|_F^2. \qquad (55)$$
It can be shown that the term $(1+\lambda)\, R_{\frac{g}{1+\lambda}}(X) \to g(\mathrm{rank}(X))$ when $\lambda \to \infty$, that is, (55) then tightens the relaxation of
$$\min_X \; (1+\lambda)\, R_{\frac{g}{1+\lambda}}(X) + \|X - X_0\|_F^2. \qquad (56)$$
In practice we make the Xk update at each step in the ADMM algorithm in-
stead of running the ADMM until convergence before updating Xk . This greatly
increases speed of convergence.
³ Here $P_k(U)$ denotes the rows corresponding to block $k$.
Fig. 3. The regularizer $r(\sigma) = 1 - [1 - \sqrt{1+\lambda}\,\sigma]_+^2$ for different $\lambda$
Table 1. The errors $\|W \odot (X - M)\|_F$ after extending the solution beyond the blocks as described in Section 3.1 (which ensures the correct rank)

             [13]      CR        TR
    book     1.2731    1.2733    1.2678
    hand     0.91386   0.9141    0.91508
    banner   3950.2    3373.2    3373.2
Fig. 4. Singular values for a single block in the book, hand and banner sequences. The vertical blue line indicates the desired rank.
Fig. 5. Comparison with non-convex methods (CR, OptSpace, TNNR-ADMMAP, DWiberg-L2; error vs. noise level σ). Left: initial experiment (note that the errors for our approach and DWiberg-L2 are very similar). Right: experiment with adjusted row-mean.
References
1. Bregler, C., Hertzmann, A., Biermann, H.: Recovering non-rigid 3D shape from image streams. In: IEEE Conference on Computer Vision and Pattern Recognition (2000)
1 Introduction
Many problems in image processing and computer vision can be modeled and
formulated by the theory of Markov Random Fields (MRF) over graphs, in terms
of computing a maximum a posteriori probability (MAP) estimate, see [23] for
reference. Graph-cuts and message-passing, e.g. [5,4,30,31,19] are two main cat-
egories of efficient algorithms for the combinatorial optimization problem. How-
ever, graph-based methods suffer from visible grid bias, and reducing such bias
requires either adding more neighbors locally or considering high-order cliques,
which inevitably leads to higher computation and memory costs.
On the other hand, variational methods can be applied to solve the same class
of optimization problems in the spatially continuous setting, while avoiding the
metrication errors generated by combinatorial algorithms. In particular, convex
relaxation methods [21,7,15,34,24,9,2,20] were recently developed by relaxing
the discrete constraint to some convex set, which brings great advantages both in
theory and numerics: the convex optimization theory is well-established, efficient
and reliable solvers are available with provable convergence properties, and also
X.-C. Tai et al. (Eds.): EMMCVPR 2015, LNCS 8932, pp. 15–28, 2015.
© Springer International Publishing Switzerland 2015
16 E. Bae, X.-C. Tai, and J. Yuan
1.1 Contributions
In this work, we propose a series of max-flow dual formulations, to compute
minimum cuts in the continuous setting. In contrast to previous work on contin-
uous max-flow [33,1], we formulate the flow excess constraints in different ways,
which directly lead to new generalized proximal algorithms, where the Bregman
divergence acts as the distance measurement for updating the labeling func-
tion. We propose primal-dual algorithmic schemes which combine both a flow-
maximizing step and message-passing step in one unified numerical framework.
This reveals close connections between the proposed flow-maximization meth-
ods and the classical methods, where ’cuts’ over the graphs can be computed by
maximizing flows or propagating messages. Finally, we compare the proposed
algorithms with state-of-the-art continuous optimization methods: the Split-Bregman
algorithm [15], the primal-dual algorithm [10] and the max-flow algorithm in [33]
through experiments.
It is well known that the min-cut problem (1) is dual to the maximum flow
problem over the same graph. We let ps (v) denote the flow on the edge (s, v)
and Cs (v) denote its capacity C(s, v). Similarly, pt (v) and Ct (v) are the flow and
capacity on (v, t) and p(v, w) the flow on (v, w). The maximum flow problem can
be formulated as follows:
$$\max_{p_s} \; \sum_{v \in V} p_s(v) \qquad (2)$$
$$\text{s.t.} \quad |p(v,w)| \le C(v,w), \quad p_s(v) \le C_s(v), \quad p_t(v) \le C_t(v) \quad \forall v, w \in V \qquad (3)$$
$$\sum_{(w,v):\, w \in V} p(w,v) - p_s(v) + p_t(v) = 0 \quad \forall v \in V \qquad (4)$$
where the objective (2) is to push the maximum amount of flow from the source
to the sink under flow capacity constraints (3). Additionally, the flow conserva-
tion constraint (4) should hold, which states that the total amount of incoming
flow should be balanced by the amount of outgoing flow at each vertex.
The classical Ford-Fulkerson algorithm [13] solves the max-flow problem (2) by successively pushing flow from s to t along non-saturated paths, while maintaining the flow conservation constraint (4) at each iteration. In this paper, we also call (2) subject to (3) and (4) the full-flow representation of max-flow.
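For intuition, the discrete problem (2)-(4) can be solved on a toy graph with BFS augmenting paths (Edmonds-Karp, one concrete realization of the Ford-Fulkerson scheme). The dense-matrix version below is only a sketch:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: push flow along shortest augmenting s-t paths in the
    residual graph until none remains. cap is a dense capacity matrix."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n           # BFS tree for the augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:         # no augmenting path: flow is maximal
            return total
        b, v = float("inf"), t      # bottleneck residual capacity on the path
        while v != s:
            u = parent[v]
            b = min(b, cap[u][v] - flow[u][v])
            v = u
        v = t                       # push b units; store skew-symmetric flow
        while v != s:
            u = parent[v]
            flow[u][v] += b
            flow[v][u] -= b
            v = u
        total += b
```

By max-flow/min-cut duality, the returned value equals the capacity of a minimum s-t cut of the same graph.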
where Cs (x) and Ct (x) are pointwise costs for assigning any x to the foreground
S and background Ω\S respectively. As proposed by [21,7], this problem can be
solved globally and exactly by solving the continuous min-cut as follows
$$\min_{u(x) \in [0,1]} \; E(u) = \int_\Omega (1-u)\, C_s \, dx + \int_\Omega u\, C_t \, dx + \int_\Omega C(x)\, |\nabla u|_2 \, dx, \qquad (6)$$
respectively; define three flow fields around the pixel x: ps (x) ∈ R directed from
the source s to x, pt (x) ∈ R directed from x to the sink t and the spatial flow
field $p(x) \in \mathbb{R}^2$ around $x$ within the image plane.
By the above spatially continuous setting, the continuous max-flow model
tries to maximize the total flow passing from the source s:
$$\max_{p_s, p_t, p} \; \int_\Omega p_s \, dx \qquad (7)$$
subject to the flow capacity constraints (8) was considered. The flow conservation
condition (9) played a central role in constructing the duality between the max-
flow and min-cut models: (7) and (6).
We call (7) the full-flow representation of the continuous max-flow model
in this paper. In the following sections, we will discuss the other two continu-
ous max-flow models which are distinct from the full-flow representation model
(7). We will see that different continuous max-flow models can be constructed
through variants of flow preservation (9), while the full-flow representation model
(7) just corresponds to the balance of in-flow and out-flow.
To compute a solution to (6) or (7), discretization of the domain Ω is neces-
sary. One fundamental difference to the discrete max-flow and min-cut models
is the rotationally invariant 2-norm in (6) and (8), which corresponds to the
Euclidean perimeter in (5). In this paper we assume a general discretized image
domain and differential operators when deriving the duality theory, but we keep
the continuous notation ∇, div, to ease readability. To derive rigorous existence
proofs for infinite dimensional spaces is quite involved and out of the scope of
this conference paper.
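On a discretized grid, a model of the form (6) can be attacked with a first-order primal-dual scheme. The 1-D toy below is our own sketch (forward differences, step sizes, and iteration count are assumptions, not from the paper):

```python
import numpy as np

def continuous_min_cut_1d(Cs, Ct, C, n_iter=3000, tau=0.25, sigma=0.25):
    """Primal-dual iteration for  min_{u in [0,1]} sum (1-u)Cs + u Ct + C|grad u|
    on a 1-D grid: dual ascent on p with projection |p_i| <= C_i, then primal
    descent on u with clipping to [0,1] and over-relaxation on u."""
    n = len(Cs)
    u = np.full(n, 0.5)
    p = np.zeros(n - 1)
    u_bar = u.copy()
    for _ in range(n_iter):
        p += sigma * np.diff(u_bar)                  # ascent in the dual variable
        p = np.clip(p, -C, C)                        # capacity constraint |p| <= C
        # divergence (negative adjoint of forward differences, Neumann boundary)
        div_p = np.concatenate([[p[0]], np.diff(p), [-p[-1]]])
        u_new = np.clip(u - tau * (Ct - Cs - div_p), 0.0, 1.0)
        u_bar = 2 * u_new - u
        u = u_new
    return u
```

With strong, spatially separated data terms the relaxed solution comes out essentially binary, mirroring the exactness of the continuous min-cut relaxation discussed above.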
Proof. We first observe that the max-flow model (7) can be equivalently formu-
lated as
$$\max_{p_t, p} \; \int_\Omega p_t \, dx \qquad (14)$$
$$\text{s.t.} \quad p_s(x) + \operatorname{div} p(x) - p_t(x) = 0, \quad \forall x \in \Omega \qquad (15)$$
$$p_s(x) \le C_s(x), \quad p_t(x) \le C_t(x), \quad |p(x)| \le C(x), \quad \forall x \in \Omega. \qquad (16)$$
This just comes from the fact that the total source flow $\int_\Omega p_s \, dx$ equals the total sink flow $\int_\Omega p_t \, dx$, due to the flow balance condition (9). Changing the positive direction of the flows $p_s$ and $p_t$ in (7), we then have (14).
Therefore, by the same procedures as in [32], optimizing (14) over the con-
straint ps (x) ≤ Cs (x), we see that (14) can be equivalently expressed as
$$\min_{u \ge 0} \; \max_{p_t, p} \; \int_\Omega p_t \, dx + \langle u, \; C_s + \operatorname{div} p - p_t \rangle \qquad (17)$$
$$\text{s.t.} \quad p_t(x) \le C_t(x), \quad |p(x)| \le C(x) \quad \forall x \in \Omega.$$
Obviously, (11) gives another continuous max-flow model which tries to maxi-
mize the total flow streaming out to the sink t while keeping the maximum source
flow ps (x) = Cs (x). We see that the excess of flows at each pixel is no longer
constrained to vanish, but to be non-negative (12), i.e. the flow conservation
condition (9) is not kept.
Moreover, we will show that (11) results in a novel max-flow algorithm, in
the continuous context, which has similar steps as the well-known push-relabel
algorithm proposed in [14]. With this perspective, the constraint (12) recovers
the pre-flow condition. We call (11) the pre-flow representation of the continuous
max-flow model. In view of (17), we have that
Another random document with
no related content on Scribd:
The Project Gutenberg eBook of Dick's
retriever
This ebook is for the use of anyone anywhere in the United States
and most other parts of the world at no cost and with almost no
restrictions whatsoever. You may copy it, give it away or re-use it
under the terms of the Project Gutenberg License included with this
ebook or online at www.gutenberg.org. If you are not located in the
United States, you will have to check the laws of the country where
you are located before using this eBook.
Author: E. M. Stooke
Language: English
BY
E. M. STOOKE
1921
CONTENTS.
I. THE ARRIVAL
II. DICK RUNS AN ERRAND
III. DICK'S ENCOUNTER WITH THE BULLY
DICK'S RETRIEVER.
CHAPTER I.
THE ARRIVAL.
With this, she turned her head towards the door, beneath
which she had stuffed some old matting to keep out the
draught.
The widow looked disturbed. She rose from her chair, raked
the dying embers together in the fireplace, and lit the
candle; for she and Dick had been sitting the last half-hour
by firelight—they always did so to save lamp oil after she
had put away her sewing at nine o'clock on winter evenings.
"I reckon your guess isn't far out, Dick," agreed the widow.
"Here, you poor creature, let me look at you. Why, you're
cold as ice, and one of your paws is bleeding!"
Then, turning her kind face to her little son, who stood
looking down on their visitor with pitying eyes, she went on,—
The mother and her son sat still for a time, silently admiring
the beautiful animal.
"Must we?" The boy stooped over the exhausted animal and
caressed its curly jacket. "Good-night, old man!" he said
softly. "I'm glad we heard you whining. I'm glad we let you
in."
CHAPTER II.
DICK RUNS AN ERRAND.
"What shall we call our dog then?" asked Molly, with quite
an important air of ownership.
"Yes! Yes!" his little sister and the twins agreed in a breath.
"Isn't his coat looking beautiful, mother?" Dick said one day
to Mrs. Wilkins, as the much-dreaded winter drew near.
"Ah, it is, my dear!" was her reply. "It's because he's so well
fed—that's the reason. Do you know, Dick, I almost envy
that dog the bits folks throw to him, sometimes, when you
children are on short rations. But there, I won't complain!
P'raps I shall get some more washing or sewing work to do
before long. I'm sure I don't mind how hard I slave, if only I
can manage to get necessaries for you children."
"To be sure I did," was the reply; and the donor afterwards
told himself that the expression of mingled wonderment and
delight on the little face was worth three times the amount.
"Take it and welcome, my lad," said he. "Now I will bid you
good-day."
"Good-day, sir; and—and thank you ever so!" burst from
Dick's quivering lips; after which he looked at the coin a
second time, and murmured with delight, "Won't mother be
surprised and glad! Fancy a shilling!—a whole shilling! Why,
that's as much as I get at the rectory for cleaning boots in a
week!"
CHAPTER III.
DICK'S ENCOUNTER WITH THE BULLY.
"I wasn't going to call to him, sir. And I wasn't going to run
away either. I ain't a coward," Dick found voice enough to
declare.
"Oh, you are not a coward, eh? Then that's all right. Now
show me that piece of money!" persisted the bully, gripping
Dick's shoulder so tightly that he could have shrieked with
pain, had he been less brave than he was.
"What of that? Let me see it, I tell you, or I'll give you
something to remember me by. Ah!" as Dick's hand went
reluctantly into his pocket. "I thought I should bring you to
reason. So the gentleman gave you this, eh? A shilling!
Well, it's a great deal too much money for a little boy like
you to have. Think of it!—twelve pence, to be sucked away
in candy!"
"Oh, you shan't! You shan't!" cried poor Dick, losing all self-
control, and throwing himself bodily upon the bigger boy.
"'Tis mine," he contended, breaking into a passion of sobs
and tears. "I earned it myself, and I mean to have it. Give it
to me this minute, and take your match-box back. A thing
like that's no good to me and mother. You're a coward and a
thief."
"And I want to buy all sorts of things for mother and the
children," sobbed the miserable and indignant Dick. "Listen
to me, sir!" He ceased crying, took a step towards young
Filmer, and looked fearlessly into his face. "If you don't give
me back my money at once," he said, "I'll go straight to the
farm and tell your father."
"So that's your little game, is it?" exclaimed the bully. "Well,
it's a fortunate thing you mentioned it to me, because now I
can tell you what the result of your doing it would be. I
should make my mother promise me that she would never
have Mrs. Wilkins to do washing or charing for her again."