Lecture Notes in Computer Science 8932
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Xue-Cheng Tai Egil Bae Tony F. Chan
Marius Lysaker (Eds.)

Energy Minimization
Methods
in Computer Vision
and Pattern Recognition
10th International Conference, EMMCVPR 2015
Hong Kong, China, January 13-16, 2015
Proceedings

Volume Editors
Xue-Cheng Tai
University of Bergen, Department of Mathematics
Bergen, Norway
E-mail: tai@math.uib.no
Egil Bae
University of California, Department of Mathematics
Los Angeles, CA, USA
E-mail: ebae@math.ucla.edu
Tony F. Chan
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong, S.A.R.
E-mail: tonyfchan@ust.hk
Marius Lysaker
Telemark University College
Porsgrunn, Norway
E-mail: marius.lysaker@hit.no

ISSN 0302-9743 e-ISSN 1611-3349


ISBN 978-3-319-14611-9 e-ISBN 978-3-319-14612-6
DOI 10.1007/978-3-319-14612-6
Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2014958022

LNCS Sublibrary: SL 6 – Image Processing, Computer Vision,


Pattern Recognition, and Graphics
© Springer International Publishing Switzerland 2015

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication
or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location,
in its current version, and permission for use must always be obtained from Springer. Permissions for use
may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution
under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the
material contained herein.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface

Energy minimization has become an important paradigm for solving many chal-
lenging problems within computer vision and pattern recognition over the past
few decades. Mathematical models that describe the desired solution as the min-
imizer of an energy potential arise through different schools of thought, includ-
ing statistical approaches in the form of Markov random fields and geometrical
approaches in the form of variational models or equivalent partial differential
equations. Besides the challenge of formulating appropriate energy minimiza-
tion models, a significant research topic is the design of computational methods
for reliably and efficiently obtaining solutions of minimal energy.
This book contains 36 original research articles that cover the whole spectrum
of energy minimization in computer vision and pattern recognition, including
design and analysis of mathematical models and design of discrete and con-
tinuous optimization algorithms. Application areas include image segmentation
and tracking, image restoration and inpainting, multiview reconstruction, shape
optimization, and texture and color analysis. The articles have been carefully
selected through a thorough double-blind peer-review process.
Furthermore, we were delighted that three internationally recognized ex-
perts in the fields of computer vision, pattern recognition, and optimization,
namely, Andrea Bertozzi (UCLA), Ron Kimmel (Technion-IIT), and Long Quan
(HKUST), agreed to further enrich the conference with inspiring keynote
lectures.
We would like to express our gratitude to those who made this event possible
and contributed to its success. In particular, our Program Committee of top
international experts in the field provided excellent reviews. The administrative
and financial support from the Hong Kong University of Science and Technology
(HKUST), especially from HKUST Jockey Club Institute for Advanced Study
(IAS), was crucial for the success of this event. We are grateful to Linus See
(HKUST), Eric Lin (HKUST) and Shing Yu Leung (HKUST) for providing very
helpful local administrative support. It is our belief that this conference helped
to advance the field of energy minimization methods and to further establish the
mathematical foundations of computer vision and pattern recognition.

November 2014 Xue-Cheng Tai


Egil Bae
Tony F. Chan
Marius Lysaker
Organization

EMMCVPR 2015 was organized by the HKUST Jockey Club Institute for
Advanced Study (IAS).

Executive Committee
Conference Chair
Xue-Cheng Tai University of Bergen, Norway

Organizers
Egil Bae UCLA, USA
Tony F. Chan HKUST, Hong Kong
Marius Lysaker Telemark University College, Norway
Shing Yu Leung HKUST, Hong Kong
Invited Speakers
Andrea Bertozzi University of California at Los Angeles, USA
Ron Kimmel Technion-IIT, Israel
Yi Ma ShanghaiTech, China
Long Quan HKUST, Hong Kong
Program Committee
J.-F. Aujol B. Flach C. Schnoerr
M. Björkman D. Geiger C.-B. Schonlieb
M. Blaschko H. Ishikawa A. Schwing
A. Bruhn D. Jacobs F. Sgallari
R. Chan F. Kahl A. Shekhovtsov
X. Chen R. Kimmel H. Talbot
J. Clark I. Kokkinos W. Tao
D. Cremers A. S. Konushin O. Veksler
J. Darbon S. Li J. Weickert
G. Doretto H. Li O. Woodford
P. Favaro S. Maybank X. Wu
M. Felsberg M. Nikolova C. Wu
M. Figueiredo M. Pelillo J. Yuan
A. Fix T. Pock J. Zerubia

Sponsoring Institutions
HKUST Jockey Club Institute for Advanced Study
Table of Contents

Discrete and Continuous Optimization


Convex Envelopes for Low Rank Approximation . . . . . . . . . . . . . . . . . . . . . 1
Viktor Larsson and Carl Olsson

Maximizing Flows with Message-Passing: Computing Spatially


Continuous Min-Cuts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Egil Bae, Xue-Cheng Tai, and Jing Yuan

A Compact Linear Programming Relaxation for Binary Sub-modular


MRF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Junyan Wang and Sai-Kit Yeung

On the Link between Gaussian Homotopy Continuation and Convex


Envelopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Hossein Mobahi and John W. Fisher III

How Hard Is the LP Relaxation of the Potts Min-Sum


Labeling Problem? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Daniel Průša and Tomáš Werner

Coarse-to-Fine Minimization of Some Common Nonconvexities . . . . . . . . 71


Hossein Mobahi and John W. Fisher III

Image Restoration and Inpainting


Why Does Non-binary Mask Optimisation Work for Diffusion-Based
Image Compression? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Laurent Hoeltgen and Joachim Weickert

Expected Patch Log Likelihood with a Sparse Prior . . . . . . . . . . . . . . . . . . 99


Jeremias Sulam and Michael Elad

Blind Deconvolution via Lower-Bounded Logarithmic Image Priors . . . . . 112


Daniele Perrone, Remo Diethelm, and Paolo Favaro

Low Rank Priors for Color Image Regularization . . . . . . . . . . . . . . . . . . . . . 126


Thomas Möllenhoff, Evgeny Strekalovskiy, Michael Moeller, and
Daniel Cremers

A Novel Framework for Nonlocal Vectorial Total Variation Based on


ℓp,q,r -norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Joan Duran, Michael Moeller, Catalina Sbert, and Daniel Cremers

Inpainting of Cyclic Data Using First and Second Order Differences . . . . 155
Ronny Bergmann and Andreas Weinmann

Discrete Green’s Functions for Harmonic and Biharmonic Inpainting


with Sparse Atoms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Sebastian Hoffmann, Gerlind Plonka, and Joachim Weickert

Segmentation
A Fast Projection Method for Connectivity Constraints in Image
Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Jan Stühmer and Daniel Cremers

Two-Dimensional Variational Mode Decomposition . . . . . . . . . . . . . . . . . . . 197


Konstantin Dragomiretskiy and Dominique Zosso

Multi-class Graph Mumford-Shah Model for Plume Detection Using


the MBO scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Huiyi Hu, Justin Sunu, and Andrea L. Bertozzi

A Novel Active Contour Model for Texture Segmentation . . . . . . . . . . . . . 223


Aditya Tatu and Sumukh Bansal

Segmentation Using SubMarkov Random Walk . . . . . . . . . . . . . . . . . . . . . . 237


Xingping Dong, Jianbing Shen, and Luc Van Gool

Automatic Shape Constraint Selection Based Object Segmentation . . . . . 249


Kunqian Li, Wenbing Tao, Xiangli Liao, and Liman Liu

PDE and Variational Methods


Justifying Tensor-Driven Diffusion from Structure-Adaptive Statistics
of Natural Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Pascal Peter, Joachim Weickert, Axel Munk, Tatyana Krivobokova,
and Housen Li

Variational Time-Implicit Multiphase Level-Sets: A Fast Convex


Optimization-Based Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Martin Rajchl, John S.H. Baxter, Egil Bae, Xue-Cheng Tai,
Aaron Fenster, Terry M. Peters, and Jing Yuan

An Efficient Curve Evolution Algorithm for Multiphase Image


Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Günay Doğan

A Tensor Variational Formulation of Gradient Energy Total


Variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Freddie Åström, George Baravdish, and Michael Felsberg

Color Image Segmentation by Minimal Surface Smoothing . . . . . . . . . . . . 321


Zhi Li and Tieyong Zeng

Domain Decomposition Methods for Total Variation Minimization . . . . . 335


Huibin Chang, Xue-Cheng Tai, and Danping Yang

Motion, Tracking and Multiview Reconstruction


A Convex Solution to Disparity Estimation from Light Fields via the
Primal-Dual Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Mahdad Hosseini Kamal, Paolo Favaro, and Pierre Vandergheynst

Optical Flow with Geometric Occlusion Estimation and Fusion of


Multiple Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Ryan Kennedy and Camillo J. Taylor

Adaptive Dictionary-Based Spatio-temporal Flow Estimation


for Echo PIV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Ecaterina Bodnariuc, Arati Gurung, Stefania Petra,
and Christoph Schnörr

Point Sets Matching by Feature-Aware Mixture Point Matching


Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Kun Sun, Peiran Li, Wenbing Tao, and Liman Liu

Motion, Tracking and Multiview Reconstruction


Multi-utility Learning: Structured-Output Learning with Multiple
Annotation-Specific Loss Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Roman Shapovalov, Dmitry Vetrov, Anton Osokin,
and Pushmeet Kohli

Mapping the Energy Landscape of Non-convex Optimization


Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
Maira Pavlovskaia, Kewei Tu, and Song-Chun Zhu

Marked Point Process Model for Curvilinear Structures Extraction . . . . . 436


Seong-Gyun Jeong, Yuliya Tarabalka, and Josiane Zerubia

Randomly Walking Can Get You Lost: Graph Segmentation with


Unknown Edge Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Hanno Ackermann, Björn Scheuermann, Tat-Jun Chin,
and Bodo Rosenhahn

Medical Image Analysis


Training of Templates for Object Recognition in Invertible Orientation
Scores: Application to Optic Nerve Head Detection in Retinal Images . . . 464
Erik Bekkers, Remco Duits, and Marco Loog

A Technique for Lung Nodule Candidate Detection in CT Using Global


Minimization Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Nóirı́n Duggan, Egil Bae, Shiwen Shen, William Hsu, Alex Bui,
Edward Jones, Martin Glavin, and Luminita Vese

Hierarchical Planar Correlation Clustering for Cell Segmentation . . . . . . . 492


Julian Yarkony, Chong Zhang, and Charless C. Fowlkes

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505


Convex Envelopes for Low Rank Approximation

Viktor Larsson and Carl Olsson

Centre for Mathematical Sciences, Lund University, Sweden

Abstract. In this paper we consider the classical problem of finding a
low rank approximation of a given matrix. In a least squares sense a
closed form solution is available via factorization. However, with addi-
tional constraints, or in the presence of missing data, the problem be-
comes much more difficult. In this paper we show how to efficiently com-
pute the convex envelopes of a class of rank minimization formulations.
This opens up the possibility of adding additional convex constraints and
functions to the minimization problem resulting in strong convex relax-
ations. We evaluate the framework on both real and synthetic data sets
and demonstrate state-of-the-art performance.¹

1 Introduction
The assumption that measurements consist of noisy observations from a low
rank matrix has been proven useful in applications such as non-rigid and artic-
ulated structure from motion [1,2,3], photometric stereo [4] and optical flow [5].
The interpretation of the low rank assumption is that the observed data can
be written as a linear combination of a few basis elements. The factorization
approach, introduced to vision in [6], offers a simple way of determining both
coefficients and basis elements. If the measurement matrix M is complete then
the best approximation, in a least squares sense, can be computed in closed form
[7] using the singular value decomposition (SVD). The main drawback is that
the computation of a factorization requires a complete measurement matrix. In
structure from motion this means that every point has to be visible in every
image, something that rarely occurs in practice due to occlusions and track-
ing failures. In case there are missing entries and/or outliers the optimization
problem is substantially more difficult.
The issue of outliers has received a lot of attention lately. In [8,9] the more
robust $L^1$-norm is considered. These methods build on the so-called Wiberg
algorithm [10], which jointly optimizes a product $UV^T$ of two fixed-size matrices U and V.
As a consequence the quality of the result is dependent on initialization.
Another approach [11,3,12] tackles the problem of missing data by replacing the
rank constraint with the weaker but convex nuclear norm penalty and solves
$$\min_X \; \mu\|X\|_* + \|W \odot (X - M)\|_F^2, \qquad (1)$$

¹ This work has been funded by the Swedish Research Council (grant no. 2012-4213) and the Crafoord Foundation.


where Wij = 0 if the entry is missing and 1 otherwise. This approach is convex
and therefore independent of initialization. In addition it can be shown that if
the locations of the missing entries are random the approach gives the best low
rank approximation [11]. The typical patterns of missing data in structure from
motion still pose a problem for these approaches.
The motivation for using the nuclear norm in (1) is that it is the convex
envelope of the rank function on the set {X; σmax (X) ≤ 1}. The constraint
σmax (X) ≤ 1 is however artificial and not present in (1). In [13] it is shown that
the so called localized rank function
$$f(X) = \mu\,\operatorname{rank}(X) + \|X - X_0\|_F^2, \qquad (2)$$
has the convex envelope
$$f^{**}(X) = \sum_{i=1}^{n} \Big(\mu - \big[\sqrt{\mu} - \sigma_i(X)\big]_+^2\Big) + \|X - X_0\|_F^2. \qquad (3)$$

Note that the regularizer in (3) itself is not convex. The second term enables a
proportionally smaller penalty for large singular values, without losing convexity,
giving a tighter convex envelope in the neighborhood of X0 . In fact, in contrast
to the nuclear norm heuristic, minimizing (3) gives the same result as solving
(2) with SVD. The advantage of using (3) is that it is convex and therefore can
be combined with other convex constraints and functions. In [13] the missing
data problem is solved by minimizing (3) on complete sub-blocks and enforcing
agreement on the overlaps via linear constraints.
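
As a quick illustration, the envelope value in (3) can be evaluated directly from a singular value decomposition. The following Python sketch (our own naming, not code from [13]) computes $f^{**}(X)$:

import numpy as np

def localized_rank_envelope(X, X0, mu):
    """Evaluate the convex envelope (3) of mu*rank(X) + ||X - X0||_F^2."""
    sigma = np.linalg.svd(X, compute_uv=False)
    # each singular value contributes between 0 (sigma_i = 0) and mu (sigma_i >= sqrt(mu))
    reg = np.sum(mu - np.maximum(np.sqrt(mu) - sigma, 0.0) ** 2)
    return reg + np.linalg.norm(X - X0, 'fro') ** 2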
The formulation in [13] consists of a trade-off between matrix rank and data
fit. In many cases it is of interest to search for a matrix of known fixed rank.
For example for rigid structure from motion the measurement matrix is known
to be of rank 4 (or 3 if the translation can be eliminated) [6]. In such cases
the approach of solving (3) on sub-blocks requires determining an appropriate
weight μ for each sub-block that gives the correct rank. In this paper we show
that we can incorporate such knowledge by replacing (2) with
$$f_g(X) = g(\operatorname{rank}(X)) + \|X - X_0\|_F^2. \qquad (4)$$
In particular we are interested in the case where
$$g(\operatorname{rank}(X)) = \mu \max(r_0, \operatorname{rank}(X)), \qquad (5)$$
but our theory applies to a larger class of problems as well. The only requirement
that we make is that g is a non-decreasing convex function.
The reason for considering (5) is that in case we know the rank of the sought
matrix we can simply let μ be large, thus avoiding the iteration over parameters that is done in [13]. Consequently our approach is essentially parameter free.
The max term also effectively reduces bias towards low rank solutions like the
zero solution that are often uninteresting, giving a tighter convex relaxation.
Our main contribution is the computation of the convex envelope of (4) and its
proximal operator. While the formulation does not admit closed form solutions
we give simple and fast algorithms for evaluations. In addition we present a way
of strengthening the convex envelopes using a trust-region formulation.
Convex Envelopes for Low Rank Approximation 3

Notation. Throughout the paper we use $\sigma_i(X)$, $i = 1, \dots, n$, to denote the i-th singular value of a matrix X. Here n denotes the number of singular values, and for notational convenience we also define $\sigma_0(X) = \infty$ and $\sigma_{n+1}(X) = 0$. The vector of all singular values is denoted $\sigma(X)$. With some abuse of notation we write the SVD of X as $U \operatorname{diag}(\sigma(X)) V^T$; for ease of notation we do not explicitly indicate the dependence of U and V on X. The scalar product is defined as $\langle X, Y \rangle = \operatorname{tr}(X^T Y)$, where tr is the trace function, and the Frobenius norm is $\|X\|_F = \sqrt{\langle X, X \rangle} = \sqrt{\sum_{i=1}^n \sigma_i^2(X)}$. Truncation at zero is denoted $[a]_+$, that is, $[a]_+ = 0$ if $a < 0$ and a otherwise.

2 The Convex Envelope


In this section we compute the envelope of (4). We will assume that the function
g can be written
$$g(k) = \begin{cases} g_0 & \text{if } k = 0 \\ g_0 + \sum_{i=1}^{k} g_i & \text{otherwise,} \end{cases} \qquad (6)$$
where the sequence gi is non-negative and non-decreasing for 1 ≤ i ≤ n. It is easy
to see that this is possible if g is convex and non-decreasing on R. Furthermore,
we will assume that g0 = 0 since subtracting a constant from the objective
function does not affect the minimizers (and only subtracts a constant from the
convex envelope).
We will follow the approach of [13] which computes the bi-conjugate of (2) to
find the convex envelope. In contrast to (2), we will not be able to find a closed
form solution for the convex envelope of (4). Instead our approach will be to
isolate a small set of singular value configurations that can possibly maximize
the conjugate function. By numerically searching this solution set we are able to
efficiently evaluate the convex envelope and compute its proximal operator.

2.1 The Conjugate Function


The convex envelope can be found by computing the second Fenchel conjugate
$f_g^{**} = (f_g^*)^*$, where $f_g^*$ is defined as
$$f_g^*(Y) = \sup_X \; \langle X, Y \rangle - f_g(X). \qquad (7)$$

The calculations for the first conjugate roughly follow those of [13] and we only give the result here. The first conjugate is given by
$$f_g^*(Y) = \sum_{i=1}^{n} -\min\Big(g_i,\; \sigma_i^2\big(X_0 + \tfrac{Y}{2}\big)\Big) - \|X_0\|_F^2 + \Big\|X_0 + \tfrac{Y}{2}\Big\|_F^2. \qquad (8)$$

2.2 Evaluation of the Bi-conjugate


By completing squares and changing variables we get the bi-conjugate

$$f_g^{**}(X) = R_g(X) + \|X - X_0\|_F^2, \qquad (9)$$



where
$$R_g(X) = \max_Z \; \sum_{i=1}^{n} \min\big(g_i, \sigma_i^2(Z)\big) - \|Z - X\|_F^2. \qquad (10)$$
The next step in determining the convex envelope is to find the maximizing Z
in (10). We first note that using von Neumann’s trace theorem we can reduce
the problem to a search over the singular values of Z. The norm term fulfills

$$-\|Z - X\|_F^2 \le -\|Z\|_F^2 + 2\sum_{i=1}^{n} \sigma_i(Z)\sigma_i(X) - \|X\|_F^2, \qquad (11)$$

with equality if Z and X have the same U and V in their singular value decom-
positions. Since the sum in (10) does not depend on U or V the optimal Z has
to be of the form Z = U diag(σ(Z))V T if X = U diag(σ(X))V T . This reduces
the maximization in (10) to

$$\max_{\sigma(Z)} \; \sum_{i=1}^{n} \min\big(g_i, \sigma_i^2(Z)\big) - \sum_{i=1}^{n} \big(\sigma_i(Z) - \sigma_i(X)\big)^2. \qquad (12)$$

Note that the elements of σ(Z) have to fulfill σ1 (Z) ≥ σ2 (Z) ≥ ... ≥ σn (Z) since
these are singular values.

Properties of the Optimal σ(Z). To limit the search space for maximization
over σ(Z) we will next derive some properties of the maximizer. Considering each
singular value σk (Z) separately they should solve a program of the type
$$\max_s \; \min(g_k, s^2) - (s - \sigma_k(X))^2 \qquad (13)$$
$$\text{s.t.} \quad \sigma_{k+1}(Z) \le s \le \sigma_{k-1}(Z) \qquad (14)$$
Note that for k = 1 there is no upper bound on s and for k = n there is no positive
lower bound since we use the convention that σ0 (Z) = ∞ and σn+1 (Z) = 0. We
first consider the unconstrained objective function. This function is the pointwise minimum of the two concave functions $g_k - (s - \sigma_k(X))^2$ (for $s \ge \sqrt{g_k}$) and $s^2 - (s - \sigma_k(X))^2 = 2s\sigma_k(X) - \sigma_k^2(X)$. The function is concave and attains its optimum at $s = \sigma_k(X)$ if $\sigma_k(X) \ge \sqrt{g_k}$, and at $s = \sqrt{g_k}$ otherwise (see Figure 1). In case $\sigma_k(X) = 0$ the optimum is not unique. For simplicity we will assume that $\sigma_k(X) > 0$ in what follows. The solution we create will still be valid if $\sigma_k(X) = 0$ but might not be unique. Let $s_k$ be the individual unconstrained optimizers of (13), i.e.

$$s_k = \max\big(\sqrt{g_k}, \sigma_k(X)\big). \qquad (15)$$

Note that this sequence is decreasing as long as $\sigma_k(X)$ is larger than $\sqrt{g_k}$. We choose $k_0$ such that $s_{k_0}$ is the smallest value in the sequence $s_k$.
We now consider the constrained problem (13)-(14). Since $\sigma_{k+1}(Z) \le \sigma_{k-1}(Z)$ we see that the optimization over $\sigma_k(Z)$ can be limited to three choices
$$\sigma_k(Z) = \begin{cases} s_k & \text{if } \sigma_{k+1}(Z) \le s_k \le \sigma_{k-1}(Z) \\ \sigma_{k-1}(Z) & \text{if } \sigma_{k-1}(Z) < s_k \\ \sigma_{k+1}(Z) & \text{if } s_k < \sigma_{k+1}(Z). \end{cases} \qquad (16)$$

Fig. 1. The objective function in (13) for $\sigma_k(X) \le \sqrt{g_k}$ and $\sigma_k(X) \ge \sqrt{g_k}$.

Lemma 1. If Z is an optimal solution to (12) then there is a $k \le k_0$ such that
$$\sigma_i(Z) = s_i, \quad \text{if } i < k, \qquad (17)$$
$$\sigma_i(Z) = \sigma_k(Z), \quad \text{if } k \le i \le k_0. \qquad (18)$$

Proof. Using induction we first prove the recursion
$$\sigma_i(Z) = \max(s_i, \sigma_{i+1}(Z)) \quad \text{for } i \le k_0. \qquad (19)$$
For i = 1 we see from (16) that $s_1$ is the optimal choice if $s_1 > \sigma_2(Z)$; otherwise $\sigma_2(Z)$ is optimal. Therefore $\sigma_1(Z) = \max(s_1, \sigma_2(Z))$. Next assume that $\sigma_{i-1}(Z) = \max(s_{i-1}, \sigma_i(Z))$ for some $i \le k_0$. Then
$$\sigma_{i-1}(Z) \ge s_{i-1} \ge s_i, \qquad (20)$$
therefore we can ignore the second case in (16), which proves the recursion (19).
To prove the lemma, assume $\sigma_k(Z) \ne s_k$ for some $k \le k_0$. From (19) it follows that
$$\sigma_k(Z) = \sigma_{k+1}(Z) > s_k. \qquad (21)$$
But $s_k$ is decreasing for $k \le k_0$, which implies that $\sigma_{k+1}(Z) > s_{k+1}$. By repeating the argument it follows that
$$\sigma_k(Z) = \sigma_{k+1}(Z) = \sigma_{k+2}(Z) = \dots = \sigma_{k_0}(Z). \qquad (22)$$

Lemma 2. If Z is an optimal solution to (12) then
$$\sigma_i(Z) = \sigma_{i+1}(Z), \quad \text{if } i \ge k_0. \qquad (23)$$

Proof. Consider $\sigma_i(Z)$ for some $i \ge k_0$. If $\sigma_i(Z) > s_i$ it must have been bounded from below in (16), i.e. $\sigma_i(Z) = \sigma_{i+1}(Z)$. If instead $\sigma_i(Z) \le s_i$ we have $\sigma_{i+1}(Z) \le \sigma_i(Z) \le s_i \le s_{i+1}$. Then similarly $\sigma_{i+1}(Z)$ is bounded from above in (16), which implies $\sigma_{i+1}(Z) = \sigma_i(Z)$.

Algorithm. We now summarize the properties derived in the previous section


into an algorithm. Since we do not know which value the k of Lemma 1 will
take the algorithm essentially consists of looping over k and testing the obtained
solutions for feasibility. Furthermore the operations in each iteration are fast so
that in practice the search for k is dominated by other steps such as computation
of the singular value decomposition of X.
From the previous section it follows that the optimal solutions σ(Z) must
have the form

$$\sigma_i(Z) = \begin{cases} \sigma_i(X) & i \le k \\ s & i > k, \end{cases} \qquad (24)$$

for some k ≤ k0 and s ≤ σk (X). We can find the optimal k and s by considering
the following optimization problem


$$\max_{k \le k_0} \; \max_s \; \sum_{i=1}^{k} g_i + \sum_{i=k+1}^{n} \min(s^2, g_i) - \sum_{i=k+1}^{n} \big(s - \sigma_i(X)\big)^2. \qquad (25)$$

For a fixed $k < k_0$ it follows from Lemma 1 that $s^* = \sigma_{k+1}(Z)$ must satisfy
$$\sigma_{k+1}(X) \le \sigma_{k+1}(Z) \le \sigma_k(Z) = \sigma_k(X). \qquad (26)$$
Thus for each $k < k_0$ we only need to consider s in the interval $[\sigma_{k+1}(X), \sigma_k(X)]$. Since the $g_i$ are increasing we can further divide this interval into subintervals. We let $I_l = [\sqrt{g_{k_l}}, \sqrt{g_{k_l+1}}]$, where $\sqrt{g_{k_l}}$, $l = 1, \dots, m-1$, is the subsequence with terms in the (open) interval $(\sigma_{k+1}(X), \sigma_k(X))$. Furthermore, we let $I_0 = [\sigma_{k+1}(X), \sqrt{g_{k_1}}]$ and $I_m = [\sqrt{g_{k_m}}, \sigma_k(X)]$. Note that on each of these subintervals the objective function can be written as a concave quadratic function

  
n
$$f_l^k(s) = \sum_{\{i > k:\, g_i \le g_{k_l}\}} g_i \;+\; \sum_{\{i > k:\, g_i > g_{k_l}\}} s^2 \;-\; \sum_{i=k+1}^{n} \big(s - \sigma_i(X)\big)^2, \qquad s \in I_l. \qquad (27)$$

We can therefore rewrite the inner optimization in (25) as the piecewise smooth problem
$$\max_{0 \le l \le m} \; \max_{s \in I_l} \; f_l^k(s). \qquad (28)$$

The optimum must lie either at a feasible stationary point of $f_l^k$ or at one of the boundaries of $I_l$ for some l. To find the optimal s we can simply enumerate all the possibilities and choose the maximizing one. Since each $g_i$ only lies in
one of the intervals [σk+1 (X), σk (X)] we only need to consider each gi once.
This makes the number of possible solutions depend linearly on the number of
singular values.
The steps of the method are summarized in Algorithm 1.
Convex Envelopes for Low Rank Approximation 7

Data: X, g
Result: σ(Z*)
fopt := −∞;
for k = 0 : k0 do
    Compute s* and l* from (28);
    if f_{l*}^k(s*) > fopt then
        σi(Z*) := σi(X), ∀i < k;
        σi(Z*) := s*, ∀i ≥ k;
        fopt := f_{l*}^k(s*);
    end
end
Algorithm 1: Finding the maximizing Z for (10)
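
To make the search concrete, the following Python sketch (entirely our own naming; a sketch, not the authors' reference implementation) finds a maximizing σ(Z) for (10)/(12) by enumerating, for each k ≤ k0, the breakpoints √gi of the piecewise-quadratic tail objective together with the stationary points of its smooth pieces, and scoring the exact objective (12). For simplicity it searches s over the safe superset [0, σ_{k−1}(X)] of the intervals derived above:

import numpy as np

def _obj12(sz, sx, g):
    # Objective (12): sum_i min(g_i, sigma_i(Z)^2) - (sigma_i(Z) - sigma_i(X))^2
    return float(np.sum(np.minimum(g, sz ** 2) - (sz - sx) ** 2))

def maximize_sigma_z(sx, g):
    """sx: singular values of X (non-increasing); g: increments g_i
    (non-negative, non-decreasing). For g(k) = mu*max(r0, k) the increments
    are g_i = 0 for i <= r0 and g_i = mu afterwards. Returns (sigma_Z, R_g(X))."""
    n = len(sx)
    sqg_all = np.sqrt(np.asarray(g, dtype=float))
    s_unc = np.maximum(sqg_all, sx)              # unconstrained optimizers (15)
    k0 = int(np.argmin(s_unc))
    best_val, best_sz = -np.inf, None
    for k in range(k0 + 1):                      # keep sigma_i(X) for i < k
        head, tail_sx, sqg = sx[:k], sx[k:], sqg_all[k:]
        m = n - k
        hi = sx[k - 1] if k > 0 else max(sx[0], sqg[-1])
        # breakpoints of the piecewise-quadratic tail objective on [0, hi]
        bps = np.unique(np.clip(np.concatenate(([0.0, hi], sqg)), 0.0, hi))
        cands = list(bps)
        for a, b in zip(bps[:-1], bps[1:]):
            B = int(np.sum(sqg > 0.5 * (a + b)))     # terms still on the s^2 branch
            if m - B > 0:
                s_st = float(np.sum(tail_sx)) / (m - B)   # stationary point of the piece
                if a <= s_st <= b:
                    cands.append(s_st)
        for s in cands:
            sz = np.concatenate((head, np.full(m, s)))
            val = _obj12(sz, sx, g)
            if val > best_val:
                best_val, best_sz = val, sz
    return best_sz, best_val

Because every candidate is scored with the exact objective (12), the enlarged search interval only adds harmless candidates; the optimal configuration from Lemma 1 and Lemma 2 is contained in the enumerated set.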

2.3 The Proximal Operator of fg∗∗

In order to optimize the convex envelope fg∗∗ (X) efficiently we need to be able
to compute its proximal operator

$$\operatorname{prox}_{f_g^{**}}(M) = \operatorname*{arg\,min}_X \; f_g^{**}(X) + \rho\|X - M\|_F^2. \qquad (29)$$

The approach we will take is similar to how we evaluate fg∗∗ (X) itself but will
require looping over two variables instead of one. The key observation is that
switching the order of the minimization over X with maximization over Z enables
us to characterize optimal solutions similarly to Section 2.2.² We therefore solve
$$\max_Z \; \min_X \; \sum_{i=1}^{n} \min\big(g_i, \sigma_i^2(Z)\big) - \|X - Z\|_F^2 + \|X - X_0\|_F^2 + \rho\|X - M\|_F^2. \qquad (30)$$

The inner minimization in X is a simple least squares problem. By completing squares one sees that the optimal X is given by
$$X = M + \frac{X_0 - Z}{\rho}. \qquad (31)$$

Inserting into (30) we get after some manipulations
$$\max_Z \; \sum_{i=1}^{n} \min\big(g_i, \sigma_i^2(Z)\big) - \frac{\rho+1}{\rho}\|Z - Y\|_F^2 + C, \qquad (32)$$
where C is a constant that does not depend on Z and
$$Y = \frac{X_0 + \rho M}{1 + \rho}. \qquad (33)$$
² If ρ > 0 the objective function is closed, proper convex-concave, continuous, and the optimization can be restricted to a compact set. Switching the optimization order is therefore justified by the existence of a saddle point, see [14].

Therefore we see that the singular value $\sigma_k(Z)$ must solve the problem
$$\max_s \; \min(g_k, s^2) - \frac{\rho+1}{\rho}(s - \sigma_k(Y))^2 \qquad (34)$$
$$\text{s.t.} \quad \sigma_{k+1}(Z) \le s \le \sigma_{k-1}(Z). \qquad (35)$$
The objective function (34) is the pointwise minimum of the two quadratic, strictly (assuming ρ > 0) concave functions
$$q_1(s) = g_k - \frac{\rho+1}{\rho}(s - \sigma_k(Y))^2, \qquad q_2(s) = s^2 - \frac{\rho+1}{\rho}(s - \sigma_k(Y))^2. \qquad (36)$$

The objective function is equal to $q_1(s)$ for $s \ge \sqrt{g_k}$ and $q_2(s)$ otherwise. The functions $q_1$ and $q_2$ attain their maximum values at $s = \sigma_k(Y)$ and $s = (\rho+1)\sigma_k(Y)$ respectively. Note that since $(\rho+1)\sigma_k(Y) > \sigma_k(Y)$, at most one of these can be feasible. It can also happen that neither is feasible, i.e. $\sigma_k(Y) \le \sqrt{g_k} \le (\rho+1)\sigma_k(Y)$. In this case the optimal $s = \sqrt{g_k}$. Figure 2 illustrates the shape of the objective function in the three possible cases.

Fig. 2. The objective function in (34) for left: $(\rho+1)\sigma_k(Y) \le \sqrt{g_k}$, middle: $\sigma_k(Y) \le \sqrt{g_k}$ and $(\rho+1)\sigma_k(Y) \ge \sqrt{g_k}$, and right: $\sigma_k(Y) \ge \sqrt{g_k}$.

Let $s_k$ be the individual unconstrained maximizers of (34), i.e.
$$s_k = \begin{cases} \sigma_k(Y) & \text{if } \sigma_k(Y) \ge \sqrt{g_k} \\ \sqrt{g_k} & \text{if } \sigma_k(Y) \le \sqrt{g_k} \le (\rho+1)\sigma_k(Y) \\ (\rho+1)\sigma_k(Y) & \text{if } (1+\rho)\sigma_k(Y) \le \sqrt{g_k}. \end{cases} \qquad (37)$$
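
As a small illustration, the case analysis in (37) translates directly into code (a sketch with our own naming, assuming ρ > 0):

import math

def unconstrained_maximizer(sigma_y, g_k, rho):
    """The unconstrained maximizer s_k of (34), following the cases in (37)."""
    t = math.sqrt(g_k)
    if sigma_y >= t:
        return sigma_y                  # peak of q1 lies in its region s >= sqrt(g_k)
    if (rho + 1.0) * sigma_y <= t:
        return (rho + 1.0) * sigma_y    # peak of q2 lies in its region s < sqrt(g_k)
    return t                            # neither peak feasible: optimum at the kink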
Lemma 3. If Z is optimal in (32) then there are $k_1$ and $k_2$ such that
$$\sigma_i(Z) = s_i, \quad \text{if } i < k_1, \qquad (38)$$
$$\sigma_i(Z) = s^*, \quad \text{if } k_1 \le i \le k_2, \qquad (39)$$
$$\sigma_i(Z) = s_i, \quad \text{if } i > k_2, \qquad (40)$$
where $s^*$ solves
$$\max_s \; \sum_{i=k_1}^{k_2} \min(g_i, s^2) - \frac{\rho+1}{\rho}(s - \sigma_i(Y))^2 \qquad (41)$$
$$\text{s.t.} \quad \sigma_{k_2+1}(Z) \le s \le \sigma_{k_1-1}(Z). \qquad (42)$$

Proof. By construction there will exist $p, q \in \mathbb{N}$ with $p \le q$ such that $s_i$ is decreasing for $1 \le i \le p$, increasing for $p \le i \le q$, and decreasing for $q \le i \le n$. For $1 \le i \le q$ we are in the same situation as in Lemma 1 and Lemma 2 with $k_0 = p$.
Consider now instead $i \ge q$. We will show that
$$\sigma_i(Z) = \min(s_i, \sigma_{i-1}(Z)) \quad \text{for } i \ge q. \qquad (43)$$
It is clear from (16) that this holds for $i = n$. We continue using induction by assuming that $\sigma_{i+1}(Z) = \min(s_{i+1}, \sigma_i(Z))$ holds. Then
$$\sigma_{i+1}(Z) \le s_{i+1} \le s_i, \qquad (44)$$
since the $s_i$ are decreasing for $i \ge q$. This means that for $\sigma_i(Z)$ we can ignore the third case in (16). Thus it follows that $\sigma_i(Z) = \min(s_i, \sigma_{i-1}(Z))$. So (43) holds for all $i \ge q$.
Now assume that for some $i \ge q$ we have $\sigma_i(Z) \ne s_i$. By (43) we must have that
$$\sigma_i(Z) = \sigma_{i-1}(Z) < s_i \le s_{i-1}. \qquad (45)$$
By repeating the argument we get
$$\sigma_i(Z) = \sigma_{i-1}(Z) = \sigma_{i-2}(Z) = \dots = \sigma_q(Z), \qquad (46)$$
and the result follows.

Algorithm. The properties listed in Lemma 3 allow us to find the optimal Z by searching over the two parameters $k_1$ and $k_2$. The goal is to find all sequences $\sigma_i(Z)$ of the type given in the lemma and determine which one gives the best objective value. For fixed $k_1$ and $k_2$ the problem in (41) is a piecewise smooth problem similar to (13), which we can solve in the same way by considering the feasible stationary points as well as the boundaries. Note that for feasible solutions we must have $1 \le k_1 \le p$ and $q \le k_2 \le n$. We outline the steps in Algorithm 2.

3 Block Decomposition with ADMM

Next we consider the problem of missing data. The approach we take here follows [13] and we only give a very brief account of it for completeness. The idea is to try to enforce low rank of sub-blocks of the matrix where no measurements are missing, using our convex relaxation. We seek to minimize the non-convex function
$$f(X) = \sum_{i=1}^{K} g\big(\operatorname{rank}(P_i(X))\big) + \|P_i(X) - P_i(M)\|_F^2, \qquad (47)$$
by replacing it with the convex relaxation
$$f_R(X) = \sum_{i=1}^{K} R_g(P_i(X)) + \|P_i(X) - P_i(M)\|_F^2. \qquad (48)$$

Data: X0, ρ, μ, M
Result: Set of possible solutions S
S := ∅;
Define p, q as in the proof of Lemma 3;
if si is decreasing with i then
    S := {si};
    return;
else
    for k1 = 1 : p do
        for k2 = q : n do
            Compute s* from (41) and form σ(Z) as in Lemma 3;
            if σi(Z) is decreasing with i then
                S := S ∪ {σ(Z)};
            end
        end
    end
end
Algorithm 2: Finding the maximizing Z for the proximal operator (32)

Here the operator $P_i$ extracts the elements corresponding to sub-block i. We do not explicitly penalize the rank of X, but instead accomplish this via the rank penalization of the sub-matrices.
To optimize (48) we use ADMM [15]. For each block $P_i(X)$ we introduce a separate set of variables $X_i$ and enforce consistency via the linear constraints $X_i - P_i(X) = 0$. We formulate an augmented Lagrangian of (48) as
$$\sum_{i=1}^{K} R_g(X_i) + \|X_i - P_i(M)\|_F^2 + \rho\|X_i - P_i(X) + \Lambda_i\|_F^2 - \rho\|\Lambda_i\|_F^2. \qquad (49)$$

At each iteration t of ADMM we solve the subproblems
$$X_i^{t+1} = \operatorname*{arg\,min}_{X_i} \; R_g(X_i) + \|X_i - P_i(M)\|_F^2 + \rho\|X_i - P_i(X^t) + \Lambda_i^t\|_F^2, \qquad (50)$$
for $i = 1, \dots, K$, and
$$X^{t+1} = \operatorname*{arg\,min}_X \; \sum_{i=1}^{K} \rho\|X_i^{t+1} - P_i(X) + \Lambda_i^t\|_F^2. \qquad (51)$$
Here $\Lambda_i^t$, $i = 1, \dots, K$, are the scaled dual variables, whose updates at iteration t are given by $\Lambda_i^{t+1} = \Lambda_i^t + X_i^{t+1} - P_i(X^{t+1})$. The first problem (50) can be solved using the proximal operator derived in the previous section. The second subproblem (51) is a separable least squares problem with closed form solution.
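
For concreteness, the iteration (50)-(51) can be sketched as follows; prox stands in for the block proximal operator of Section 2.3 (evaluated via Algorithm 2), blocks are given as index-array pairs, and all names, the initialization and the fixed iteration count are our own choices:

import numpy as np

def admm_blocks(M, blocks, prox, rho=1.0, iters=200):
    """ADMM sketch for (48). `blocks` is a list of (rows, cols) index arrays
    selecting complete sub-blocks; `prox(X0, T, rho)` is assumed to solve
    argmin_Xi  R_g(Xi) + ||Xi - X0||_F^2 + rho*||Xi - T||_F^2, i.e. (50)."""
    X = M.copy()
    Xi = [M[np.ix_(r, c)].copy() for r, c in blocks]
    Lam = [np.zeros_like(b) for b in Xi]
    for _ in range(iters):
        # (50): block-wise proximal steps
        for j, (r, c) in enumerate(blocks):
            Xi[j] = prox(M[np.ix_(r, c)], X[np.ix_(r, c)] - Lam[j], rho)
        # (51): separable least squares -- average the block estimates entrywise
        num = np.zeros_like(X)
        den = np.zeros_like(X)
        for j, (r, c) in enumerate(blocks):
            num[np.ix_(r, c)] += Xi[j] + Lam[j]
            den[np.ix_(r, c)] += 1.0
        covered = den > 0
        X[covered] = num[covered] / den[covered]   # uncovered entries keep their value
        # scaled dual updates
        for j, (r, c) in enumerate(blocks):
            Lam[j] += Xi[j] - X[np.ix_(r, c)]
    return X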

3.1 Extending the Solution

To extend the solution beyond the blocks we employ a nullspace matching scheme which has previously been used in [16] and [17]. The goal is to find a rank-r factorization of the full solution $X = UV^T$ given the solution on the blocks. Each block $P_k(X)$ can be factorized as $P_k(X) = U_k V_k^T$. Then $P_k(U)$ (here $P_k(U)$ denotes the rows of U corresponding to block k) must lie in the column space of $U_k$, or equivalently it must be orthogonal to the complement, i.e. $(U_k^\perp)^T P_k(U) = 0$. We can also write this as
$$A_k U = [\; 0 \;\; (U_k^\perp)^T \;\; 0 \;]\, U = 0. \qquad (52)$$
Collecting these into a matrix, $AU = 0$, we can find U by minimizing $\|AU\|$. Since the scale of U is arbitrary we can consider this as a homogeneous least squares problem, which can be solved using SVD. For known U we can simply find V by minimizing $\|W \odot (M - UV^T)\|$.
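
A minimal sketch of this step: stack the block constraints (52), take U from the right-singular vectors of A with smallest singular values (the homogeneous least squares solution), and recover V column by column from the observed entries. Names and the per-column solve are our own choices; the sketch assumes each column of M has at least r observed entries:

import numpy as np

def extend_solution(M, W, blocks, Uks, r):
    """Nullspace matching as in Section 3.1. blocks: list of (rows, cols);
    Uks: list of U_k factors, one per block; returns U, V with X = U V^T."""
    m, n = M.shape
    A_rows = []
    for (rows, _), Uk in zip(blocks, Uks):
        # (U_k^perp)^T P_k(U) = 0, embedded at the block's row positions (52)
        Uperp = np.linalg.svd(Uk, full_matrices=True)[0][:, Uk.shape[1]:]
        Ak = np.zeros((Uperp.shape[1], m))
        Ak[:, rows] = Uperp.T
        A_rows.append(Ak)
    A = np.vstack(A_rows)
    # homogeneous least squares: right-singular vectors with smallest sigma
    U = np.linalg.svd(A)[2][-r:, :].T                    # m x r
    # fit V on the observed entries only, one column at a time
    V = np.zeros((n, r))
    for j in range(n):
        obs = W[:, j] > 0
        V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
    return U, V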

4 Stronger Relaxations Using a Trust Region Formulation

In case of very large noise levels the regularizer $R_g$ may not be strong enough to enforce low rank of the solution. In this section we present an approach to strengthen it by restricting the algorithm to a local search close to a current solution estimate $X_k$. We consider minimization of
$$g(\operatorname{rank}(X)) + \|X - X_0\|_F^2 + \lambda\|X - X_k\|_F^2. \qquad (53)$$
The third term can be thought of as a restriction of the step-length of X to a region where our convex relaxation is accurate. By completing squares the expression above can be written
$$(1+\lambda)\left(\frac{1}{1+\lambda}\, g(\operatorname{rank}(X)) + \left\|X - \frac{X_0 + \lambda X_k}{1+\lambda}\right\|_F^2 + C\right), \qquad (54)$$
where C is a constant that depends on λ, $X_0$ and $X_k$. Therefore we find that the convex envelope of (53) is
$$(1+\lambda)\, R_{\frac{g}{1+\lambda}}(X) + \|X - X_0\|_F^2 + \lambda\|X - X_k\|_F^2. \qquad (55)$$
It can be shown that $(1+\lambda)R_{\frac{g}{1+\lambda}}(X) \to g(\operatorname{rank}(X))$ when $\lambda \to \infty$, that is, we have pointwise convergence. Figure 3 shows a one-dimensional version of $(1+\lambda)R_{\frac{g}{1+\lambda}}$ with $g(k) = k$ for varying λ.
Our trust region approach consists of two steps. First we minimize (55) with respect to X. Then we update $X_k$ and repeat the process. Note that at any fixed point $X = X_k$ we have a (possibly local) solution to
$$\min_X \; (1+\lambda)\, R_{\frac{g}{1+\lambda}}(X) + \|X - X_0\|_F^2. \qquad (56)$$
In practice we make the $X_k$ update at each step of the ADMM algorithm instead of running the ADMM until convergence before updating $X_k$. This greatly increases the speed of convergence.
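
The outer loop can be sketched as follows, reusing a solver for the convex problem (55); solve_envelope is a stand-in for minimizing (55) (e.g. by the ADMM of Section 3 with the rescaled increments g/(1+λ)), and the names and stopping rule are ours:

import numpy as np

def trust_region(X0, lam, solve_envelope, tol=1e-8, max_outer=50):
    """Trust-region iteration of Section 4: minimize (55) around the current
    estimate Xk, then re-center and repeat; a fixed point solves (56)."""
    Xk = X0.copy()
    for _ in range(max_outer):
        # solve_envelope(X0, Xk, lam) ~ argmin_X (1+lam)*R_{g/(1+lam)}(X)
        #                                + ||X - X0||_F^2 + lam*||X - Xk||_F^2
        X = solve_envelope(X0, Xk, lam)
        if np.linalg.norm(X - Xk) < tol * max(1.0, np.linalg.norm(Xk)):
            break
        Xk = X
    return Xk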

Fig. 3. The regularizer $r(\sigma) = 1 - \big[1 - \sqrt{1+\lambda}\,\sigma\big]_+^2$ for different λ.

5 Implementation and Experiments

In the experiments we focus our attention on the function $g(k) = \mu \max(r_0, k)$. This choice allows us to penalize a rank higher than $r_0$ while not being biased towards lower rank solutions.

5.1 Comparison to [13]

We first compare the performance of the envelope of [13] and our convex relaxation (CR) in the block decomposition approach (48). We consider the same three image sequences (book, hand and banner) that were used in [13]. Since we are looking for fixed rank solutions, we simply choose our weight μ to be sufficiently large. This makes the approach essentially parameter free. In contrast, [13] iterates over weights to find a correct rank solution. The difficulty of finding the optimal parameters depends heavily on the amount of noise in the data. For problems with noisy data and many large blocks (such as the banner sequence) this may be computationally infeasible. We also compare to the trust region based iterative method (TR) described in Section 4.
Figure 4 displays the singular values of a single block in the solutions for the three image sequences. Note the logarithmic scale. The methods perform very similarly for the book and hand sequences. This is due to these sequences having low levels of noise and the problem instances being small enough for it to be feasible to iteratively find a good μ. The reconstruction errors for the three sequences can be seen in Table 1.

Table 1. The errors $\|W \odot (X - M)\|_F$ after extending the solution beyond the blocks as described in Section 3.1 (which ensures the correct rank)

         [13]      CR       TR
book     1.2731    1.2733   1.2678
hand     0.91386   0.9141   0.91508
banner   3950.2    3373.2   3373.2

Fig. 4. Singular values (log scale; curves for [13], CR and TR) for a single block in the book, hand and banner sequences. The vertical blue line indicates the desired rank.

5.2 Comparison to Non-convex Methods

Next we compare the performance of the proposed method to three state-of-the-art non-convex methods: OptSpace [18], Truncated Nuclear Norm Regularization [19] and Damped Wiberg-L2 [20].
The measurement matrix was chosen as $M = UV^T + N$, where $U, V \in \mathbb{R}^{100\times 5}$, $N \in \mathbb{R}^{100\times 100}$, $U_{ij}, V_{ij} \sim \mathcal{N}(0,1)$ and $N_{ij} \sim \mathcal{N}(0,\sigma)$. If σ is small then M will be approximately rank 5. The observation matrix W consisted of overlapping blocks along the diagonal and had 72% missing data. To the left in Figure 5 we can see the average of $\|W \odot (X - M)\|_F$ over 100 instances. The performance of the proposed method and Damped Wiberg-L2 is very similar on this data. To illustrate the benefit of the proposed method we also performed an experiment on another family of instances, generated by replacing the fifth column of V by $10^3 \mathbf{1}$. This essentially makes M have one very dominant singular value, which is common in applications. The averaged results for these instances can be seen to the right in Figure 5.

Fig. 5. Comparison with non-convex methods (CR, OptSpace, TNNR-ADMMAP, DWiberg-L2; error $\|W \odot (X - M)\|_F$ vs. noise level σ). Left: Initial experiment. (Note that the errors for our approach and DWiberg-L2 are very similar.) Right: Experiment with adjusted row-mean.

References
1. Bregler, C., Hertzmann, A., Biermann, H.: Recovering non-rigid 3d shape from
image streams. In: IEEE Conference on Computer Vision and Pattern Recognition
(2000)

2. Yan, J., Pollefeys, M.: A factorization-based approach for articulated nonrigid shape, motion and kinematic chain recovery from video. IEEE Trans. Pattern Anal. Mach. Intell. 30(5), 865–877 (2008)
3. Garg, R., Roussos, A., de Agapito, L.: Dense variational reconstruction of non-
rigid surfaces from monocular video. In: IEEE Conference on Computer Vision
and Pattern Recognition (2013)
4. Basri, R., Jacobs, D., Kemelmacher, I.: Photometric stereo with general, unknown
lighting. Int. J. Comput. Vision 72(3), 239–257 (2007)
5. Garg, R., Roussos, A., Agapito, L.: A variational approach to video registration
with subspace constraints. Int. J. Comput. Vision 104(3), 286–314 (2013)
6. Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography:
a factorization method. Int. Journal on Computer Vision 9(2), 137–154 (1992)
7. Eckart, C., Young, G.: The approximation of one matrix by another of lower rank.
Psychometrika 1(3), 211–218 (1936)
8. Eriksson, A., Hengel, A.: Efficient computation of robust weighted low-rank matrix
approximations using the L1 norm. IEEE Trans. Pattern Anal. Mach. Intell. 34(9),
1681–1690 (2012)
9. Strelow, D.: General and nested Wiberg minimization. In: IEEE Conference on
Computer Vision and Pattern Recognition (2012)
10. Wiberg, T.: Computation of principal components when data are missing. In: Proc.
Symposium of Computational Statistics, pp. 229–236 (1976)
11. Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix
completion. SIAM J. on Optimization 20(4), 1956–1982 (2010)
12. Angst, R., Zach, C., Pollefeys, M.: The generalized trace-norm and its application
to structure-from-motion problems. In: International Conference on Computer Vi-
sion (2011)
13. Larsson, V., Olsson, C., Bylow, E., Kahl, F.: Rank minimization with structured
data patterns. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014,
Part III. LNCS, vol. 8691, pp. 250–265. Springer, Heidelberg (2014)
14. Rockafellar, R.T.: Convex analysis. Princeton Mathematical Series. Princeton Uni-
versity Press, Princeton (1970)
15. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization
and statistical learning via the alternating direction method of multipliers. Found.
Trends Mach. Learn. 3(1), 1–122 (2011)
16. Olsen, S., Bartoli, A.: Implicit non-rigid structure-from-motion with priors. Journal
of Mathematical Imaging and Vision 31(2-3), 233–244 (2008)
17. Jacobs, D.: Linear fitting with missing data: applications to structure-from-motion
and to characterizing intensity images. In: IEEE Conference on Computer Vision
and Pattern Recognition (1997)
18. Keshavan, R.H., Montanari, A., Oh, S.: Matrix completion from a few entries.
IEEE Trans. Inf. Theory 56(6), 2980–2998 (2010)
19. Hu, Y., Zhang, D., Ye, J., Li, X., He, X.: Fast and accurate matrix completion
via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. In-
tell. 35(9), 2117–2130 (2013)
20. Okatani, T., Yoshida, T., Deguchi, K.: Efficient algorithm for low-rank matrix
factorization with missing components and performance comparison of latest algo-
rithms. In: Proceedings of the International Conference on Computer Vision (2011)
Maximizing Flows with Message-Passing:
Computing Spatially Continuous Min-Cuts

Egil Bae¹, Xue-Cheng Tai³, and Jing Yuan²

¹ Department of Mathematics, University of California, Los Angeles, USA
ebae@math.ucla.edu
² Department of Medical Biophysics, Schulich Medical School, Western University, Canada
cn.yuanjing@gmail.com
³ Department of Mathematics, University of Bergen, Norway
tai@math.uib.no

Abstract. In this work, we study the problem of computing spatially continuous cuts, which has many important applications in image processing and computer vision. We focus on the convex relaxed formulations and investigate the corresponding flow-maximization based dual formulations. We propose a series of novel continuous max-flow models based on evaluating different constraints on the flow excess, where the classical pre-flow and pseudo-flow models over graphs are re-discovered in the continuous setting and re-interpreted in a new variational manner. We propose a new generalized proximal method, based on a specific entropic distance function, to compute the maximum flow. This leads to new algorithms exploring flow-maximization and message-passing simultaneously. We show that the proposed algorithms are superior to state-of-the-art methods in terms of efficiency.

1 Introduction

Many problems in image processing and computer vision can be modeled and
formulated by the theory of Markov Random Fields (MRF) over graphs, in terms
of computing a maximum a posteriori probability (MAP) estimate, see [23] for
reference. Graph-cuts and message-passing, e.g. [5,4,30,31,19] are two main cat-
egories of efficient algorithms for the combinatorial optimization problem. How-
ever, graph-based methods suffer from visible grid bias, and reducing such bias
requires either adding more neighbors locally or considering high-order cliques,
which inevitably leads to more intensive computation and memory costs.
On the other hand, variational methods can be applied to solve the same class
of optimization problems in the spatially continuous setting, while avoiding the
metrication errors generated by combinatorial algorithms. In particular, convex
relaxation methods [21,7,15,34,24,9,2,20] were recently developed by relaxing
the discrete constraint to some convex set, which leads to great advantages both in
theory and numerics: the convex optimization theory is well-established, efficient
and reliable solvers are available with provable convergence properties, and also


easy to handle large-scale computation and to speed up by GPUs. In this regard, the proximal method is the central element in building up a wide range of efficient first-order methods, see e.g. [11,10] for references.

1.1 Contributions
In this work, we propose a series of max-flow dual formulations to compute
minimum cuts in the continuous setting. In contrast to previous work on contin-
uous max-flow [33,1], we formulate the flow excess constraints in different ways,
which directly lead to new generalized proximal algorithms, where the Bregman
divergence acts as the distance measurement for updating the labeling func-
tion. We propose primal-dual algorithmic schemes which combine both a flow-
maximizing step and message-passing step in one unified numerical framework.
This reveals close connections between the proposed flow-maximization meth-
ods and the classical methods, where ’cuts’ over the graphs can be computed by
maximizing flows or propagating messages. Finally, we compare the proposed al-
gorithms with state-of-art continuous optimization methods: the Split-Bregman
algorithm [15], the primal-dual algorithm [10] and the max-flow algorithm in [33]
through experiments.

2 Revisit: Max-flow and Full-Flow Representation


Many discrete optimization problems in image processing and computer vision
can be formulated as finding the minimum cut over appropriate graphs, as first
observed by Greig et al. [16]. The two most efficient combinatorial algorithms for
computing the minimum cut solve the dual max-flow problem over the graph,
and are called the Ford Fulkerson algorithm [13] and push-relabel algorithm
[14]. More recently, continuous max-flow algorithms [33] have been proposed
that are able to solve isotropic versions of the min-cut / max flow problem by
convex optimization techniques. Both the continuous max-flow algorithm in [33]
and the Ford Fulkerson algorithm solve a full-flow representation of the max-
flow problem, in contrast to the pseudo-flow representation in the push-relabel
algorithm and the algorithms in this paper.

2.1 Discrete Min-cut and Max-flow Models


A graph G is a pair (V, E) consisting of a vertex set V and an edge set E ⊂ V × V. We let C(v, w) ≥ 0 denote the cost / weight / capacity on edge (v, w) and use the convention C(v, w) = 0 if there is no edge (v, w). In the min-cut and max-flow problems, there are two special vertices in addition to V, a source vertex s and a sink vertex t. The min-cut problem is to find a partition of V ∪ {s} ∪ {t} into two sets $V_s$ and $V_t$, such that $s \in V_s$ and $t \in V_t$, with the smallest cost possible, i.e. to solve
$$\min_{V_s, V_t} \; \sum_{v \in V_s,\, w \in V_t} C(v, w), \quad \text{s.t.} \quad s \in V_s, \; t \in V_t, \; V_s \cup V_t = V \cup \{s\} \cup \{t\}, \; V_s \cap V_t = \emptyset. \qquad (1)$$

It is well known that the min-cut problem (1) is dual to the maximum flow
problem over the same graph. We let ps (v) denote the flow on the edge (s, v)
and Cs (v) denote its capacity C(s, v). Similarly, pt (v) and Ct (v) are the flow and
capacity on (v, t) and p(v, w) the flow on (v, w). The maximum flow problem can
be formulated as follows:
$$\max_{p_s, p_t, p} \; \sum_{v \in V} p_s(v) \qquad (2)$$
$$\text{s.t.} \quad |p(v, w)| \le C(v, w), \quad p_s(v) \le C_s(v), \quad p_t(v) \le C_t(v) \quad \forall v, w \in V, \qquad (3)$$
$$\sum_{(w,v)\,:\, w \in V} p(w, v) - p_s(v) + p_t(v) = 0 \quad \forall v \in V, \qquad (4)$$

where the objective (2) is to push the maximum amount of flow from the source
to the sink under flow capacity constraints (3). Additionally, the flow conserva-
tion constraint (4) should hold, which states that the total amount of incoming
flow should be balanced by the amount of outgoing flow at each vertex.
The classical Ford-Fulkerson algorithm [13] solves the max-flow problem (2)
by successively pushing flow from s to t along non-saturated paths, while main-
taining the flow conservation constraint (4) each iteration. In this paper, we also
call (2) subject to (3) and (4), the full-flow representation of max-flow.
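
To make the augmenting-path idea concrete, here is a minimal Edmonds-Karp style sketch (Ford-Fulkerson with breadth-first path selection) on a dense capacity matrix; an illustration in our own notation, assuming non-negative capacities, not one of the specialized graph-cut solvers typically used in vision:

from collections import deque

def max_flow(C, s, t):
    """Ford-Fulkerson with BFS (Edmonds-Karp). Residual capacities are kept
    in r; each round augments along a shortest non-saturated s-t path, so
    flow conservation (4) holds at every vertex after each augmentation."""
    n = len(C)
    r = [row[:] for row in C]                 # residual capacities
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for a non-saturated path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and r[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                   # no augmenting path: flow is maximal
            return flow
        a, v = float('inf'), t                # bottleneck capacity along the path
        while v != s:
            a = min(a, r[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                         # push flow, update residual graph
            u = parent[v]
            r[u][v] -= a
            r[v][u] += a
            v = u
        flow += a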

2.2 Continuous Min-cut and Max-flow Models


In the spatially continuous setting, the min-cut problem (1), especially for image
segmentation, can be similarly formulated in terms of finding the two segments
S, Ω\S ⊂ Ω such that
  
$$\min_S \; \int_S C_s(x)\, dx + \int_{\Omega \setminus S} C_t\, dx + \int_{\partial S} C(s)\, ds, \qquad (5)$$

where Cs (x) and Ct (x) are pointwise costs for assigning any x to the foreground
S and background Ω\S respectively. As proposed by [21,7], this problem can be
solved globally and exactly by solving the continuous min-cut as follows
  
$$\min_{u(x) \in [0,1]} \; E(u) = \int_\Omega (1 - u)\, C_s\, dx + \int_\Omega u\, C_t\, dx + \int_\Omega C(x)\, |\nabla u|_2\, dx, \qquad (6)$$

which results in a convex optimization problem. Further studies can be found in [22,15], among others.

Continuous Max-flow: Full-Flow Representation. An interesting study of the continuous min-cut model (6) was proposed in [32,33], which built up the duality connection between (6) and the so-called continuous max-flow model. It directly presents the analogue of the well-known duality between max-flow and min-cut [12] discussed above.
As in the discrete graph configuration shown above, given the continuous image domain Ω and two terminals, link the source s and the sink t to each pixel x ∈ Ω
As the discrete graph configuration shown above, given the continuous image
domain Ω and two terminals, link the source s and the sink t to each pixel x ∈ Ω

respectively; define three flow fields around the pixel x: $p_s(x) \in \mathbb{R}$ directed from the source s to x, $p_t(x) \in \mathbb{R}$ directed from x to the sink t, and the spatial flow field $p(x) \in \mathbb{R}^2$ around x within the image plane.
Given this spatially continuous setting, the continuous max-flow model tries to maximize the total flow passing from the source s:
$$\max_{p_s, p_t, p} \; \int_\Omega p_s\, dx \qquad (7)$$
subject to the three flow capacity constraints
$$p_s(x) \le C_s(x), \quad p_t(x) \le C_t(x), \quad |p(x)|_2 \le C(x), \quad \forall x \in \Omega, \qquad (8)$$
and the flow conservation condition
$$p_t(x) - p_s(x) + \operatorname{div} p(x) = 0, \quad \forall x \in \Omega. \qquad (9)$$
The authors of [32,33] proved that the continuous max-flow model (7) is equivalent to the continuous min-cut problem (6) in the sense of primal and dual, where the labeling function u(x) acts as a multiplier for the linear flow conservation condition (9). To see this, the equivalent primal-dual model
$$\min_u \; \max_{p_s, p_t, p} \; \int_\Omega p_s\, dx + \langle u, \; p_t - p_s + \operatorname{div} p \rangle, \qquad (10)$$
subject to the flow capacity constraints (8), was considered. The flow conservation condition (9) played a central role in constructing the duality between the max-flow and min-cut models (7) and (6).
We call (7) the full-flow representation of the continuous max-flow model
in this paper. In the following sections, we will discuss two other continu-
ous max-flow models which are distinct from the full-flow representation model
(7). We will see that different continuous max-flow models can be constructed
through variants of the flow conservation condition (9), while the full-flow representation model
(7) just corresponds to the exact balance of in-flow and out-flow.
To compute a solution to (6) or (7), discretization of the domain Ω is neces-
sary. One fundamental difference from the discrete max-flow and min-cut models
is the rotationally invariant 2-norm in (6) and (8), which corresponds to the
Euclidean perimeter in (5). In this paper we assume a general discretized image
domain and discrete differential operators when deriving the duality theory, but we keep
the continuous notation ∇, div to ease readability. Deriving rigorous existence
proofs in infinite-dimensional function spaces is quite involved and beyond the scope of
this conference paper.
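
As an illustration of such a discretization, the following numpy sketch implements an augmented-Lagrangian iteration for the full-flow model (7)-(9) in the spirit of the algorithms of [32,33]. It is a minimal sketch under stated assumptions, not the authors' implementation: the forward-difference operators, the step sizes c and tau, the iteration count, and the toy data term are all choices made here for concreteness.

import numpy as np

def grad(u):
    # forward differences with homogeneous Neumann boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # discrete divergence, chosen as the negative adjoint of grad above
    dx = px.copy(); dx[:, 1:] -= px[:, :-1]
    dy = py.copy(); dy[1:, :] -= py[:-1, :]
    return dx + dy

def cmf_full_flow(Cs, Ct, C, c=0.3, tau=0.16, iters=500):
    # Augmented-Lagrangian iteration for (7) under (8)-(9); the multiplier
    # u tends to the continuous min-cut labeling of (6).
    ps = np.minimum(Cs, Ct); pt = ps.copy()
    px = np.zeros_like(Cs); py = np.zeros_like(Cs)
    u = (Cs > Ct).astype(float)              # rough initial labeling
    for _ in range(iters):
        # spatial flow p: one projected-gradient step on the augmented term
        gx, gy = grad(div(px, py) - (ps - pt + u / c))
        px += tau * gx; py += tau * gy
        scale = np.maximum(1.0, np.hypot(px, py) / np.maximum(C, 1e-12))
        px /= scale; py /= scale             # enforce |p(x)|_2 <= C(x) of (8)
        # source/sink flows: pointwise closed-form updates, clipped at capacity
        ps = np.minimum(Cs, pt + div(px, py) + (1.0 - u) / c)
        pt = np.minimum(Ct, ps - div(px, py) + u / c)
        # multiplier step on the conservation residual (9)
        u -= c * (pt - ps + div(px, py))
    return u, ps, pt, px, py

# toy data term: separate a bright square from a dark, noisy background
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
img += 0.3 * rng.standard_normal(img.shape)
Cs, Ct, C = np.abs(img - 0.0), np.abs(img - 1.0), 0.5 * np.ones_like(img)
u, ps, pt, px, py = cmf_full_flow(Cs, Ct, C)
segmentation = u > 0.5                       # threshold the relaxed labeling

Thresholding the relaxed labeling u at 1/2 yields a binary cut; by the global-optimality results of [21,7] cited around (6), such a threshold recovers a global minimizer of the binary problem.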
3 Continuous Max-flow Models Represented by Pre-flows and Pseudo-flows
In this section, we propose and study two other continuous max-flow models in
terms of the representations of pre-flows and pseudo-flows. Both models are dual
to the continuous min-cut model (6).
3.1 Continuous Max-flow: Pre-flow Representation
Now we partially optimize the max-flow model (7) by maximizing over the source
flow ps(x) ≤ Cs(x). A simple computation proves the following result.
Proposition 1. The continuous max-flow model (7) is equivalent to the following flow-maximization problem:

$$\max_{p_t,\, p}\; \int_{\Omega} p_t\, dx \tag{11}$$

$$\text{s.t.}\quad C_s(x) - \operatorname{div} p(x) - p_t(x) \ge 0\,,\quad \forall x \in \Omega\,, \tag{12}$$

$$p_t(x) \le C_t(x)\,,\quad |p(x)|_2 \le C(x)\,,\quad \forall x \in \Omega\,. \tag{13}$$
Proof. We first observe that the max-flow model (7) can be equivalently formulated as

$$\max_{p_s,\, p_t,\, p}\; \int_{\Omega} p_t\, dx \tag{14}$$

$$\text{s.t.}\quad p_t(x) - p_s(x) + \operatorname{div} p(x) = 0\,,\quad \forall x \in \Omega\,, \tag{15}$$

$$p_s(x) \le C_s(x)\,,\quad p_t(x) \le C_t(x)\,,\quad |p(x)|_2 \le C(x)\,,\quad \forall x \in \Omega\,. \tag{16}$$
This comes from the fact that the total source flow $\int_\Omega p_s\, dx$ equals the total
sink flow $\int_\Omega p_t\, dx$: integrating the flow conservation condition (9) over Ω, the
divergence term vanishes (assuming vanishing flux across ∂Ω), so the objective
of (7) may be replaced by the total sink flow, which gives (14).
Therefore, following the same procedure as in [32] and optimizing (14) over the
constraint ps(x) ≤ Cs(x), we see that (14) can be equivalently expressed as

$$\min_{u \ge 0}\; \max_{p_t,\, p}\; \int_{\Omega} p_t\, dx \;+\; \big\langle u,\; C_s - \operatorname{div} p - p_t \big\rangle \tag{17}$$

$$\text{s.t.}\quad p_t(x) \le C_t(x)\,,\quad |p(x)|_2 \le C(x)\,,\quad \forall x \in \Omega\,,$$

where u ≥ 0 is the Lagrange multiplier for the constraint Cs − div p − pt ≥ 0
of (12). Clearly, (17) is just the primal-dual formulation of (11). Hence, we have

$$(7) \;\Longleftrightarrow\; (14) \;\Longleftrightarrow\; (17) \;\Longleftrightarrow\; (11)\,,$$

and the equivalence between (7) and (11) is proved.
Clearly, (11) gives another continuous max-flow model, which maximizes
the total flow streaming out to the sink t while keeping the source flow at its
maximum, ps(x) = Cs(x). The excess of flow at each pixel is no longer
constrained to vanish, but only to be non-negative (12); i.e., the flow conservation
condition (9) is relaxed.
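
Numerically, this excess can be inspected directly. Reusing the flows returned by the hypothetical cmf_full_flow sketch given earlier (and hence the same assumptions), one may check:

# the pre-flow excess in (12); by (9) and ps <= Cs it equals
# Cs - ps >= 0 at any feasible, conserving full flow
excess = Cs - div(px, py) - pt
print(excess.min())   # nonnegative up to the solver's convergence error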
Moreover, we will show that (11) results in a novel max-flow algorithm, in
the continuous context, with steps similar to those of the well-known push-relabel
algorithm proposed in [14]. From this perspective, the constraint (12) recovers
the pre-flow condition. We call (11) the pre-flow representation of the continuous
max-flow model. In view of (17), we have that