Advances in Computer Vision and Pattern Recognition

Kenichi Kanatani
Yasuyuki Sugaya
Yasushi Kanazawa

Guide to
3D Vision
Computation
Geometric Analysis and Implementation
Advances in Computer Vision and Pattern
Recognition

Founding editor
Sameer Singh, Rail Vision, Castle Donington, UK

Series editor
Sing Bing Kang, Microsoft Research, Redmond, WA, USA

Advisory Board
Horst Bischof, Graz University of Technology, Austria
Richard Bowden, University of Surrey, Guildford, UK
Sven Dickinson, University of Toronto, ON, Canada
Jiaya Jia, The Chinese University of Hong Kong, Hong Kong
Kyoung Mu Lee, Seoul National University, South Korea
Yoichi Sato, The University of Tokyo, Japan
Bernt Schiele, Max Planck Institute for Informatics, Saarbrücken, Germany
Stan Sclaroff, Boston University, MA, USA
More information about this series at http://www.springer.com/series/4205
Kenichi Kanatani
Yasuyuki Sugaya
Yasushi Kanazawa

Guide to 3D Vision
Computation
Geometric Analysis
and Implementation

Kenichi Kanatani
Okayama University
Okayama, Japan

Yasuyuki Sugaya
Toyohashi University of Technology
Toyohashi, Aichi, Japan

Yasushi Kanazawa
Toyohashi University of Technology
Toyohashi, Aichi, Japan

ISSN 2191-6586    ISSN 2191-6594 (electronic)
Advances in Computer Vision and Pattern Recognition
ISBN 978-3-319-48492-1    ISBN 978-3-319-48493-8 (eBook)
DOI 10.1007/978-3-319-48493-8

Library of Congress Control Number: 2016955063

© Springer International Publishing AG 2016


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

Today, computer vision techniques are used for various purposes, and there exist
many textbooks and references that describe principles of programming and system
organization. At the same time, ever new research is going on all over the world,
and the achievements are offered in the form of open source code on the Web. It
appears, therefore, that sufficient environments already exist for students and
researchers to embark on computer vision research.
However, although executing a public source code may be easy, improving or
modifying it for other applications is rather difficult, because the intent of the code
author is difficult to discern simply by reading the code. On the other hand, many
computer vision textbooks focus on theoretical principles coupled with application
demos. As a result, one is often at a loss as to how to write a program oneself.
Actual implementation of algorithms requires many small details that must be
carefully taken into consideration, which a ready-to-use code does not provide. This
book intends to fill that gap, describing in detail the computational procedures for
programming 3D geometric tasks. The algorithms presented in this book are based
on today’s state of the art, yet arranged in a form simple and easy enough to
understand. The authors also believe that they are the most appropriate form for
practical use in real situations.
In this book, the mathematical background of the presented algorithms is mostly
omitted for ease of reading, but for theoretically minded readers detailed
derivations and justifications are given in the form of Problems in each chapter;
their Solutions are given at the end of the volume. Also, historical notes and related
references are discussed in the Supplemental Note at the end of each chapter. In this
sense, this book can also serve as a theoretical reference for computer vision
research. To help readers implement the algorithms in this book, sample codes of
typical procedures are placed on the publisher's Web page.¹
This book is based on the teaching materials that the authors used for student
projects at Okayama University and Toyohashi University of Technology, Japan.
Every year, new students with little background knowledge come to our labs to do
computer vision work. According to our experience, the most effective way for
them to learn is to let them implement basic algorithms such as those given here.

¹ http://www.springer.com/book/9783319484921


Through this process, they learn the basic know-how of programming and at the
same time gradually come to understand the theoretical background as their interest
deepens. We therefore hope that this book can serve not only as a reference for the
latest computer vision techniques but also as useful material for introductory
courses in computer vision.
The authors thank Takayuki Okatani of Tohoku University, Japan, Mike Brooks
and Wojciech Chojnacki of the University of Adelaide, Australia, Peter Meer of
Rutgers University, the United States, Wolfgang Förstner of the University of
Bonn, Germany, Michael Felsberg of Linköping University, Sweden, Rudolf
Mester of the University of Frankfurt, Germany, Prasanna Rangarajan of Southern
Methodist University, the United States, Ali Al-Sharadqah of California State
University, Northridge, the United States, Alexander Kukush of the University of
Kiev, Ukraine, and Chikara Matsunaga of For-A, Co. Ltd., Japan. Special thanks go
to (the late) Prof. Nikolai Chernov of the University of Alabama at Birmingham, the
United States, without whose inspiration and assistance this work would not have
been possible. Parts of this work are used with permission from the authors’ work
Ellipse Fitting for Computer Vision: Implementation and Applications, © Morgan &
Claypool,² 2016.

Okayama, Japan      Kenichi Kanatani
Toyohashi, Japan    Yasuyuki Sugaya
Toyohashi, Japan    Yasushi Kanazawa
October 2016

² http://www.morganclaypool.com
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Part I Fundamental Algorithms for Computer Vision


2 Ellipse Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Representation of Ellipses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Least-Squares Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Noise and Covariance Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Algebraic Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.1 Iterative Reweight . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.2 Renormalization and the Taubin Method . . . . . . . . . . 16
2.4.3 Hyper-Renormalization and HyperLS . . . . . . . . . . . . 17
2.4.4 Summary of Algebraic Methods . . . . . . . . . . . . . . . . 18
2.5 Geometric Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5.1 Geometric Distance and Sampson Error . . . . . . . . . . 19
2.5.2 FNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5.3 Geometric Distance Minimization . . . . . . . . . . . . . . . 20
2.5.4 Hyper-Accurate Correction . . . . . . . . . . . . . . . . . . . . 22
2.6 Ellipse-Specific Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6.1 Ellipse Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6.2 Method of Fitzgibbon et al. . . . . . . . . . . . . . . . . . . . . 23
2.6.3 Method of Random Sampling . . . . . . . . . . . . . . . . . . 24
2.7 Outlier Removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.8 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.9 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3 Fundamental Matrix Computation. . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1 Fundamental Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Covariance Matrices and Algebraic Methods . . . . . . . . . . . . . . 34


3.3 Geometric Distance and Sampson Error . . . . . . . . . . . . . . . . . . 37


3.4 Rank Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5 A Posteriori Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.6 Hidden Variables Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.7 Extended FNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.8 Geometric Distance Minimization . . . . . . . . . . . . . . . . . . . . . . . 46
3.9 Outlier Removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.10 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.11 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4 Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.1 Perspective Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2 Camera Matrix and Triangulation . . . . . . . . . . . . . . . . . . . . . . . 60
4.3 Triangulation from Noisy Correspondence . . . . . . . . . . . . . . . . 62
4.4 Optimal Correction of Correspondences . . . . . . . . . . . . . . . . . . 63
4.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.6 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5 3D Reconstruction from Two Views . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1 Camera Modeling and Self-calibration . . . . . . . . . . . . . . . . . . . 69
5.2 Expression of the Fundamental Matrix . . . . . . . . . . . . . . . . . . . 72
5.3 Focal Length Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.4 Motion Parameter Computation . . . . . . . . . . . . . . . . . . . . . . . . 75
5.5 3D Shape Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.7 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6 Homography Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.1 Homographies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.2 Noise and Covariance Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.3 Algebraic Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.4 Geometric Distance and Sampson Error . . . . . . . . . . . . . . . . . . 88
6.5 FNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.6 Geometric Distance Minimization . . . . . . . . . . . . . . . . . . . . . . . 90
6.7 Hyperaccurate Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.8 Outlier Removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.9 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.10 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

7 Planar Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7.1 Perspective Projection of a Plane . . . . . . . . . . . . . . . . . . . . . . . 99
7.2 Planar Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7.3 Procedure of Planar Triangulation . . . . . . . . . . . . . . . . . . . . . . . 101
7.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.5 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8 3D Reconstruction of a Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
8.1 Self-calibration with a Plane . . . . . . . . . . . . . . . . . . . . . . . . . 107
8.2 Computation of Surface Parameters
and Motion Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
8.3 Selection of the Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
8.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
8.5 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
9 Ellipse Analysis and 3D Computation of Circles . . . . . . . . . . . . . . . . 117
9.1 Intersections of Ellipses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
9.2 Ellipse Centers, Tangents, and Perpendiculars . . . . . . . . . . . . . 119
9.3 Projection of Circles and 3D Reconstruction . . . . . . . . . . . . . . 120
9.4 Center of Circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
9.5 Front Image of the Circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
9.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
9.7 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Part II Multiview 3D Reconstruction Techniques


10 Multiview Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
10.1 Trilinear Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
10.2 Triangulation from Three Views . . . . . . . . . . . . . . . . . . . . . . . . 134
10.2.1 Optimal Correspondence Correction . . . . . . . . . . . . . 134
10.2.2 Solving Linear Equations . . . . . . . . . . . . . . . . . . . . . . 136
10.2.3 Efficiency of Computation . . . . . . . . . . . . . . . . . . . . . 138
10.2.4 3D Position Computation. . . . . . . . . . . . . . . . . . . . . . 138
10.3 Triangulation from Multiple Views . . . . . . . . . . . . . . . . . . . . . . 140
10.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
10.5 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

11 Bundle Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149


11.1 Principle of Bundle Adjustment . . . . . . . . . . . . . . . . . . . . . . . . 149
11.2 Bundle Adjustment Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 151
11.3 Derivative Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
11.3.1 Gauss-Newton Approximation . . . . . . . . . . . . . . . . . . 153
11.3.2 Derivatives with Respect to 3D Positions . . . . . . . . . 154
11.3.3 Derivatives with Respect to Focal Lengths . . . . . . . . 154
11.3.4 Derivatives with Respect to Principal Points . . . . . . . 154
11.3.5 Derivatives with Respect to Translations . . . . . . . . . . 154
11.3.6 Derivatives with Respect to Rotations . . . . . . . . . . . . 155
11.3.7 Efficient Computation and Memory Use . . . . . . . . . . 155
11.4 Efficient Linear Equation Solving . . . . . . . . . . . . . . . . . . . . . . . 156
11.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
11.6 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
12 Self-calibration of Affine Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
12.1 Affine Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
12.2 Factorization and Affine Reconstruction . . . . . . . . . . . . . . . . . . 164
12.3 Metric Condition for Affine Cameras . . . . . . . . . . . . . . . . . . . . 167
12.4 Description in the Camera Coordinate System . . . . . . . . . . . . . 168
12.5 Symmetric Affine Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
12.6 Self-calibration of Symmetric Affine Cameras . . . . . . . . . . . . . 172
12.7 Self-calibration of Simplified Affine Cameras . . . . . . . . . . . . . . 175
12.7.1 Paraperspective Projection Model . . . . . . . . . . . . . . . 175
12.7.2 Weak Perspective Projection Model. . . . . . . . . . . . . . 177
12.7.3 Orthographic Projection Model . . . . . . . . . . . . . . . . . 178
12.8 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
12.9 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
13 Self-calibration of Perspective Cameras . . . . . . . . . . . . . . . . . . . . . . . 183
13.1 Homogeneous Coordinates and Projective Reconstruction . . . . 183
13.2 Projective Reconstruction by Factorization . . . . . . . . . . . . . . . . 185
13.2.1 Principle of Factorization . . . . . . . . . . . . . . . . . . . . . . 185
13.2.2 Primary Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
13.2.3 Dual Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
13.3 Euclidean Upgrading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
13.3.1 Principle of Euclidean Upgrading . . . . . . . . . . . . . . . 192
13.3.2 Computation of X . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
13.3.3 Modification of Kj . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
13.3.4 Computation of H . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
13.3.5 Procedure for Euclidean Upgrading . . . . . . . . . . . . . . 198

13.4 3D Reconstruction Computation . . . . . . . . . . . . . . . . . . . . . . . . 199


13.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
13.6 Supplemental Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

Part III Mathematical Foundation of Geometric Estimation


14 Accuracy of Geometric Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
14.1 Constraint of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
14.2 Noise and Covariance Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 214
14.3 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
14.4 Covariance and Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
14.5 Bias Elimination and Hyper-Renormalization . . . . . . . . . . . . . . 218
14.6 Derivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
14.7 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
15 Maximum Likelihood of Geometric Estimation . . . . . . . . . . . . . . . . 231
15.1 Maximum Likelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
15.2 Sampson Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
15.3 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
15.4 Bias Analysis and Hyper-Accurate Correction . . . . . . . . . . . . . 235
15.5 Derivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
15.6 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
16 Theoretical Accuracy Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
16.1 Kanatani-Cramer-Rao (KCR) Lower Bound . . . . . . . . . . . . . . . 243
16.2 Structure of Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
16.3 Derivation of the KCR Lower Bound . . . . . . . . . . . . . . . . . . . . 246
16.4 Expression of the KCR Lower Bound . . . . . . . . . . . . . . . . . . . 249
16.5 Supplemental Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
1 Introduction

Abstract
This chapter states the background and organization of this book and describes
distinctive features of the volume.

1.1 Background

The study of computer vision, also known as image understanding, which aims to
extract information contained in 3D scenes by analyzing their images, began in the
United States in the 1960s. Computer vision is a natural extension of pattern recog-
nition, whose study had started earlier for analyzing 2D images, typically hand-
written or printed letters and symbols. For understanding a 3D scene using pattern
recognition techniques, many image processing operations, including edge detec-
tion, thresholding, and thinning, were extensively studied in the 1970s and used for
extracting various features that help classify the images and understand their con-
tent. At first, computer vision researchers thought that it would suffice merely to
combine and apply such image processing operations to 3D scene images. Various
types of research were done in this line in the 1980s, mainly in the United States.
At the time, computer vision was also thought of as a typical artificial intelligence
problem, solving tasks by combining the if-then-else type propositional logic with
the knowledge and common sense of humans about the outside world.
It was soon found, however, that understanding 3D scenes is impossible by merely
combining 2D pattern recognition techniques with a knowledge database and that
mathematical analysis of imaging geometry of the cameras that project a 3D scene
onto a 2D image is indispensable. Around the 1980s, a new mathematical approach
emerged. The resulting mathematical framework was called epipolar geometry,
which allowed one to reconstruct the 3D shape of an object from its images. This
marked a turning point from artificial intelligence research based on
knowledge-based logic to geometric analysis using mathematical means.
In the 1980s, computer vision researchers using mathematical methods were
relatively few worldwide, including one of the authors, Kanatani, who was staying in
the United States at the time. In the 1990s, however, the use of mathematical
disciplines, projective geometry in particular, quickly spread to European countries,
mainly the United Kingdom, France, and Sweden. Today, it is no exaggeration to
say that the mathematical approach is the mainstream of computer vision research,
playing a central role in such applications as intelligent robots, autonomous vehicles,
and virtual reality coupled with 3D computer graphics.
One of the authors, Kanatani, has energetically endorsed the mathematical
approach to computer vision since the 1980s and published such textbooks as
Group-Theoretical Methods in Image Understanding (Springer 1990), Geometric
Computation for Machine Vision (Oxford 1993), and Statistical Optimization for
Geometric Computation (Elsevier 1996). As compared with these, this book
describes the research development thereafter and presents the state-of-the-art
techniques for 3D reconstruction from multiple images with full consideration of
programming and implementation aspects.

1.2 Organization

As mentioned above, this book can be viewed as a supplement to Kanatani's earlier
books. However, two decades have passed, and the advance since then is remarkable.
This is, of course, mostly due to the dramatic performance increase of today's
computers, but significant theoretical progress has also been made. This book consists
of three parts. In Part I (Chaps. 2–9), fundamental algorithms underlying computer
vision and their implementation are described. Most of these topics were already
treated in earlier books, but since then there have been significant performance
improvements due to newly introduced careful statistical error analysis. We present
their up-to-date versions. Part II (Chaps. 10–13) then describes techniques for 3D
reconstruction from multiple images based on the developments of the last two
decades. Finally, we summarize in Part III (Chaps. 14–16) mathematical theories of
statistical error analysis for general geometric estimation problems.
In the following, we briefly give an outline of each chapter. More detailed historical
notes including the references are given in the Supplemental Note at the end of each
chapter.
Chapter 2. Ellipse Fitting
Because circular objects are projected to ellipses in images, ellipse fitting is necessary
for 3D analysis of circular objects. For this reason, the study of ellipse fitting began as
soon as computers came into use for image analysis in the 1970s. The basic principle
was to compute the parameters such that the sum of squares of expressions that
should ideally be zero is minimized, which today is called least squares or algebraic
distance minimization. In the 1990s, the notion of optimal computation based on the
statistical properties of noise was introduced by the authors and other researchers.
The first notable technique was the authors’ renormalization, which was improved
in the 2000s as FNS and HEIVE by researchers in Australia and the United States.
Later, further improvements, called hyperaccurate correction, HyperLS, and
Hyper-renormalization, were presented in the 2000s and 2010s by the authors. Chapter 2
describes the implementation of these techniques.
Chapter 3. Fundamental Matrix Computation
The fundamental matrix is a matrix determined from point correspondences between
two images; from it one can compute the relative positions of the two cameras that
took the images. In the 1980s, camera focal lengths were assumed to be known (they
can be calibrated beforehand), and this matrix was called the essential matrix. In
the 1990s, it was found that the matrix can be computed without the knowledge of
the focal lengths and that the focal lengths can be computed from that matrix. Since
then, it has been called the fundamental matrix.
Computing the fundamental or essential matrix from point correspondences has
the same mathematical structure as ellipse fitting. Thus ellipse fitting methods such
as renormalization, FNS, and hyper-renormalization can also be applied. However,
the fundamental matrix has an additional constraint: it has determinant 0 with rank 2.
This chapter mainly focuses on this rank constraint. A simple method popular since
the 1980s is the a posteriori correction: the fundamental matrix is computed without
regard to the rank constraint, and then the smallest singular value of its singular value
decomposition is replaced by 0. In the 1990s, the optimal correction was found by
the authors: the reliability of the fundamental matrix computation was evaluated
from the statistical properties of the data noise, and the rank was corrected in a
statistically optimal manner. In the 2000s, alternative methods were proposed by the
authors and others, including iterating the fundamental matrix computation such that
it automatically converges to an optimal solution of determinant 0 (extended FNS);
another is to parameterize the fundamental matrix such that its determinant
is identically 0 and to do a search in that parameter space. Chapter 3 describes the
implementation of these methods.
Chapter 4. Triangulation
The triangulation for computing the 3D position of a point using two images is
necessary in the final stage of 3D shape reconstruction. In the 1980s, a simple
least-squares method was used, which is practically sufficient in real situations. In the 1990s and
the 2000s, a new light was shed on this problem from a theoretical point of view. To
be specific, the authors presented an iterative procedure for computing a theoretically
optimal solution by considering the statistical properties of image noise, and Hartley
and Sturm presented an alternative algebraic method for it. The resulting solutions
are identical, but the latter emphasizes global optimality at the cost of a heavier
computational burden. This chapter describes the practically more efficient method
of the authors.
Chapter 5. 3D Reconstruction from Two Images
The mathematical framework for this problem was already established in the 1980s.
However, the camera focal lengths were assumed to be known. The fact that they can
be computed from the fundamental matrix of the two images was found in the 1990s
by many researchers including Bougnoux and the authors of this book. Another
advance was that in the 1980s the essential matrix was computed by least squares,
but today the fundamental matrix is computed by a statistically optimal method, as
described in Chap. 3.
Chapter 6. Homography Computation
The fact that the 3D position of a planar surface can be reconstructed from its two pro-
jection images by computing the homography between them has been known since
the early days before the advent of computers. In this chapter, we discuss optimal
homography computation from point correspondences by extending the optimal
computation of ellipses and fundamental matrices described in Chaps. 2 and 3. Specifically,
we describe the implementation of renormalization, hyper-renormalization,
FNS, geometric distance minimization, and hyperaccurate correction.
Chapter 7. Planar Triangulation
This is the triangulation task described in Chap. 4 with an additional constraint that
the 3D position to be computed should be on a specified plane. An optimal compu-
tation scheme for this problem was first presented by the authors in 1995. This is
a first approximation that omits higher-order terms in image noise but is sufficient
in practical situations. Later, in 2011, the authors showed an iterative scheme that
computes the exact solution. This chapter describes its implementation. Chum et
al., on the other hand, in 1997 presented, from a theoretical motivation, an algebraic
procedure that solves an eighth-degree polynomial. This corresponds to the algebraic
method of Hartley and Sturm in 1997 for the standard triangulation. As in that case,
the resulting solutions are identical, but the procedure described here is much more
efficient than the method of Chum et al.
Chapter 8. 3D Reconstruction of a Planar Surface
This is the task of computing the 3D position from point correspondences between
two images with the knowledge that the object is a planar surface; the camera
positions need not be known. Analytical procedures for this problem were already
obtained in the 1960s, before computer vision research began, in relation to percep-
tual psychology. In the 1980s, elegant mathematical formulations were presented
by many researchers including Longuet-Higgins. The computation consisted of two
stages: computing the homography between the two images and then decomposing
it to the 3D positions of the cameras and the plane. This chapter describes an up-to-
date procedure for this. For the first stage, we optimally compute the homography
as described in Chap. 6; for the second stage, we adopt the procedure of Longuet-Higgins.
Chapter 9. Analysis of Ellipses and 3D Computation of Circles
Circular objects are projected onto images as ellipses, and from the observed ellipses
one can compute their 3D positions and orientations. The analytical procedure was
found in the 1990s: the 3D computation of circular objects was first shown by Forsyth
et al. in 1991 and extended to elliptic objects by Kanatani and Liu in 1993. This
chapter shows various examples of 3D analysis based on ellipse images, combining
the optimal ellipse fitting techniques of Chap. 2 and mathematical facts of projective
geometry.
Chapter 10. Multiview Triangulation
This is an extension of the two-view triangulation of Chap. 4 to multiple views,
computing the 3D position of a point from its multiple images taken by cameras in
known positions. For this task, many papers were published from the late 1990s to the
early 2000s, mainly from a theoretical point of view in relation to global optimization
techniques. This chapter describes an iterative procedure that the authors presented in
2010, extending the two-view triangulation procedure of Chap. 4 to multiple images.
Being computationally very efficient, this is considered to be the best method in
practical applications.
Chapter 11. Bundle Adjustment
This is a technique for computing, from multiple images of a scene, not only its
3D shape but also, simultaneously, the positions and orientations of all the cameras
that take the images as well as their intrinsic parameters. This is done by iteratively
updating all the unknowns, starting from given initial values, such that the computed
3D shape and the observed images better satisfy the assumed perspective projection
relationship. The principle was well known in photogrammetry before the advent
of computers, but the computation requires a vast amount of time and memory for
a huge number of unknowns. Only in the 2000s were various computational tools
made available due to the dramatic progress in hardware. The main focus is on how
to store the vast number of unknowns efficiently and speed up the computation of
large-scale matrices, the majority of whose elements are 0. This chapter presents a
specific computational procedure based on the work of the authors.
Chapter 12. Self-calibration of Affine Cameras
Bundle adjustment requires initial values. One way of computing an approximate 3D
shape is to ignore the foreshortening effects of camera imaging. This is called affine
camera modeling. In the 1990s, Kanade and his group introduced the orthographic, weak
perspective, and paraperspective camera models. Because the computation involves
factorizing a matrix into the product of two, the method came to be generally known
as factorization. However, the orthographic, weak perspective, and paraperspective
models are mutually unrelated; for example, none is a special case of another. In
2007, the authors pointed out that all these are special cases of the symmetric affine
camera and showed that 3D reconstruction is possible only using symmetric affine
camera modeling coupled with factorization. This chapter describes the algorithm
in detail and shows how it reduces to the paraperspective, weak perspective, and
orthographic models in that order by restricting the parameters.
Chapter 13. Self-calibration of Perspective Cameras
Self-calibration is a process of computing 3D from images without knowledge of
the camera parameters, the cameras calibrating themselves. In the 1990s, efforts
were made to extend the self-calibration process from affine cameras to perspec-
tive cameras. The resulting techniques are built on highly mathematical theories of
projective geometry and are regarded as one of the most significant achievements
of computer vision study. The basic principle is to assume the parameters of fore-
shortening, called the projective depths, and regard the cameras as affine. Then the
factorization technique is applied to update projective depths iteratively such that
the resulting solution better satisfies the perspective projection relationship. The
obtained 3D shape is deformed from the true shape by an unknown projective trans-
formation. The process of rectifying this projective reconstruction into the correct
shape is called Euclidean upgrading. These computations require a large amount of
computation for many unknowns; computational efficiency is therefore the main
focus. Many different formulations have been proposed for both projective recon-
struction and Euclidean upgrading. This chapter describes their combination in the
way that the authors think is the best.
Chapter 14. Accuracy of Geometric Estimation
This chapter generalizes the algebraic methods for ellipse fitting, fundamental matrix
computation, and homography computation described in Chaps. 2, 3, and 6 in a
unified mathematical framework. Then we give a detailed error analysis in general
terms and derive explicit expressions for the covariance and bias of the solution. The
hyper-renormalization procedure of the authors is obtained in this framework.
Chapter 15. Maximum Likelihood of Geometric Estimation
This chapter discusses maximum likelihood estimation and Sampson error minimiza-
tion in the general mathematical framework of Chap. 14. We present here higher-order
error analysis for deriving explicit expressions of the covariance and bias of the
solution. The hyperaccurate correction procedure of the authors is derived in this
framework.
Chapter 16. Theoretical Accuracy Limit
This chapter presents the derivation of a theoretical accuracy limit of the geometric
estimation problems of Chaps. 2, 3, and 6 in the general mathematical framework
of Chaps. 14 and 15. It is given in the form of a bound, called the KCR (Kanatani-
Cramer-Rao) lower bound, on the covariance matrix of the solution. The resulting
form indicates that all iterative algebraic and geometric methods achieve this bound
up to higher-order noise terms, meaning that these are all optimal with respect to
covariance. The mathematical relationship of the KCR lower bound with the Cramer-
Rao lower bound, well known in statistics, is also explained.
1.3 Features

The uniqueness of this book is the order of description. Most textbooks on computer
vision begin with mathematical fundamentals followed by resulting computational
procedures. This is naturally a logical order, but it can give the impression that
one is reading a sophisticated book on mathematics. This book, in contrast, immediately
describes actual computational procedures after a brief statement of the purpose
and the basic principle. This is to emphasize that one can obtain the desired result
by simply computing as instructed without knowing its derivation. All the compu-
tational procedures described in this book are based on the state of the art of the
domain and constructed in a manner that is faithful to the principle and yet is easy
to understand. The authors believe that they are in the most appropriate form for
practical applications. After the procedure is described, its theoretical background is
briefly explained in the Comments. Thus the reader need not worry about mathematical
details, which in most computer vision textbooks are almost always a hindrance to
actually building a computer vision system. This omission of mathematical details
is the biggest feature of this book.
Yet, there may certainly be some who want to know the details of the derivation
and justification of the procedure. For them, all the derivations and justifications are
given in the section, Problems, at the end of each chapter and in Solutions at the end
of the volume. This means that theorems and propositions and their proofs in other
computer vision textbooks are replaced by the problems and their solutions in this
book. Also, the background mathematical theories of statistical optimization that
underlie this book are deferred to the final Part III (Chaps. 14–16). Thus, this book
can serve as both a practical programming guidebook and a mathematics reference
for computer vision, satisfying practitioners who want to implement computer vision
algorithms without regard to mathematical details and also theoreticians who want
to know the underlying mathematical details.
In the Examples section of each chapter, a few experimental results are shown to
give one a feeling of what can be done using the procedures in that chapter; they are
not intended as comparative evidence for justification of the computation. At the end
of each chapter is the Supplemental Note section, explaining the historical background
behind the subject of that chapter along with the reference literature. Also, related
topics and mathematical knowledge not directly discussed in that chapter are briefly
introduced. In this respect, the description order of this book is opposite to most
computer vision textbooks, where general backgrounds are given first followed by
particular topics. This is to give priority to the desire of those who want to implement
the algorithms as quickly as possible.
The description of the computational procedures in this book consists of explicit
lists of mathematical expressions to be evaluated. They can be immediately imple-
mented in any computer language such as C, C++, and MATLAB®. Today, various
packages are provided on the Web for basic mathematical operations including vec-
tor and matrix manipulation and eigenvalue computation; any of them can be used.
For the convenience of the readers, however, sample codes that implement typical
procedures of each chapter have been placed on the publisher’s website.1

1 http://www.springer.com/book/9783319484921.
Part I
Fundamental Algorithms for Computer Vision

2 Ellipse Fitting
Abstract
Extracting elliptic edges from images and fitting ellipse equations to them is
one of the most fundamental tasks of computer vision. This is because circular
objects, which are very common in daily scenes, are projected as ellipses in
images. We can even compute their 3D positions from the fitted ellipse equations,
which along with other applications are discussed in Chap. 9. In this chapter, we
present computational procedures for accurately fitting an ellipse to extracted
edge points by considering the statistical properties of image noise. The approach
is classified into algebraic and geometric. The algebraic approach includes least
squares, iterative reweight, the Taubin method, renormalization, HyperLS, and
hyper-renormalization; the geometric approach includes FNS, geometric distance
minimization, and hyper-accurate correction. We then describe the ellipse-specific
method of Fitzgibbon et al. and the random sampling technique for avoiding
hyperbolas, which may occur when the input information is insufficient. The
RANSAC procedure is also discussed for removing nonelliptic arcs from the
extracted edge point sequence.

2.1 Representation of Ellipses

An ellipse observed in an image is described in terms of a quadratic polynomial equation in the form

$$Ax^2 + 2Bxy + Cy^2 + 2f_0(Dx + Ey) + f_0^2 F = 0, \tag{2.1}$$
where $f_0$ is a constant for adjusting the scale. Theoretically, we can let it be 1, but for finite-length numerical computation it should be chosen so that $x/f_0$ and $y/f_0$ have approximately the order of 1; this increases the numerical accuracy, avoiding the loss of significant digits. In view of this, we take the origin of the image $xy$ coordinate system at the center of the image, rather than the upper-left corner as is customarily done, and take $f_0$ to be the length of the side of a square that we assume to contain the ellipse to be extracted. For example, if we assume there is an ellipse in a 600 × 600 pixel region, we let $f_0 = 600$. Because Eq. (2.1) has scale indeterminacy (i.e., the same ellipse is represented if $A$, $B$, $C$, $D$, $E$, and $F$ were multiplied by a common nonzero constant), we normalize them to

$$A^2 + B^2 + C^2 + D^2 + E^2 + F^2 = 1. \tag{2.2}$$

(© Springer International Publishing AG 2016. K. Kanatani et al., Guide to 3D Vision Computation, Advances in Computer Vision and Pattern Recognition, DOI 10.1007/978-3-319-48493-8_2)
If we define the 6D vectors

$$\xi = \begin{pmatrix} x^2 \\ 2xy \\ y^2 \\ 2f_0 x \\ 2f_0 y \\ f_0^2 \end{pmatrix}, \qquad \theta = \begin{pmatrix} A \\ B \\ C \\ D \\ E \\ F \end{pmatrix}, \tag{2.3}$$

Eq. (2.1) can be written as

$$(\xi, \theta) = 0, \tag{2.4}$$

where and hereafter we denote the inner product of vectors $a$ and $b$ by $(a, b)$. The vector $\theta$ in Eq. (2.4) has scale indeterminacy, and the normalization of Eq. (2.2) is equivalent to normalizing the vector $\theta$ to unit norm, $\|\theta\| = 1$.
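This representation is easy to verify numerically. The following sketch (a Python/NumPy illustration of ours, not from the book; the helper name `xi_vector` and the sample circle are assumptions) builds $\xi$ for a point on a known circle and checks Eq. (2.4):

```python
import numpy as np

def xi_vector(x, y, f0):
    """6D vector xi of Eq. (2.3) for an image point (x, y)."""
    return np.array([x**2, 2*x*y, y**2, 2*f0*x, 2*f0*y, f0**2])

# A circle of radius 100 centered at the origin:
# x^2 + y^2 - 100^2 = 0, i.e. A = C = 1 and F = -100^2 / f0^2.
f0 = 600.0
theta = np.array([1.0, 0.0, 1.0, 0.0, 0.0, -100.0**2 / f0**2])
theta /= np.linalg.norm(theta)   # unit norm, as in Eq. (2.2)

xi = xi_vector(100.0, 0.0, f0)   # a point on the circle
print(np.dot(xi, theta))         # essentially 0: the point satisfies (xi, theta) = 0
```

Note how $f_0 = 600$ keeps all six components of $\xi$ on a comparable numerical scale, which is exactly the point of the scale constant above.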

2.2 Least-Squares Approach

Fitting an ellipse in the form of Eq. (2.1) to a sequence of points $(x_1, y_1), \dots, (x_N, y_N)$ in the presence of noise (Fig. 2.1) is to find $A$, $B$, $C$, $D$, $E$, and $F$ such that

$$Ax_\alpha^2 + 2Bx_\alpha y_\alpha + Cy_\alpha^2 + 2f_0(Dx_\alpha + Ey_\alpha) + f_0^2 F \approx 0, \quad \alpha = 1, \dots, N. \tag{2.5}$$

If we write $\xi_\alpha$ for the value of $\xi$ of Eq. (2.3) for $x = x_\alpha$ and $y = y_\alpha$, Eq. (2.5) can be equivalently written as

$$(\xi_\alpha, \theta) \approx 0, \quad \alpha = 1, \dots, N. \tag{2.6}$$

Our task is to compute such a unit vector $\theta$. The simplest and most naive method is the following least squares (LS).

Procedure 2.1 (Least squares)

1. Compute the 6 × 6 matrix
$$M = \frac{1}{N}\sum_{\alpha=1}^{N} \xi_\alpha \xi_\alpha^\top. \tag{2.7}$$
Fig. 2.1 Fitting an ellipse to a noisy point sequence $(x_\alpha, y_\alpha)$

2. Solve the eigenvalue problem
$$M\theta = \lambda\theta, \tag{2.8}$$
and return the unit eigenvector $\theta$ for the smallest eigenvalue $\lambda$.

Comments This is a straightforward generalization of line fitting to a point sequence (→ Problem 2.1); we minimize the sum of squares

$$J = \frac{1}{N}\sum_{\alpha=1}^{N}(\xi_\alpha, \theta)^2 = \frac{1}{N}\sum_{\alpha=1}^{N}\theta^\top \xi_\alpha \xi_\alpha^\top \theta = (\theta, M\theta) \tag{2.9}$$

subject to $\|\theta\| = 1$. As is well known in linear algebra, the minimum of this quadratic form in $\theta$ is given by the unit eigenvector $\theta$ of $M$ for the smallest eigenvalue. Equation (2.9) is often called the algebraic distance (although it does not have the dimension of square length), and Procedure 2.1 is also known as algebraic distance minimization. It is sometimes called DLT (direct linear transformation). Inasmuch as the computation is very easy and the solution is immediately obtained, this method has been widely used. However, when the input point sequence covers only a small part of the ellipse circumference, it often produces a small and flat ellipse very different from the true shape (we show such examples in Sect. 2.8). Still, this is a prototype of all existing ellipse-fitting algorithms. How this can be improved has been a major motivation of many researchers.
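Procedure 2.1 maps directly onto a standard symmetric eigensolver. Below is a minimal Python/NumPy sketch of ours (the function name `fit_ellipse_ls` is an assumption, not from the book):

```python
import numpy as np

def fit_ellipse_ls(points, f0):
    """Procedure 2.1 (least squares): fit theta = (A, B, C, D, E, F).

    points: (N, 2) array of image coordinates (x, y).
    Returns the unit 6D vector theta minimizing Eq. (2.9)."""
    x, y = points[:, 0], points[:, 1]
    xi = np.column_stack([x**2, 2*x*y, y**2,
                          2*f0*x, 2*f0*y, np.full_like(x, f0**2)])
    M = xi.T @ xi / len(points)        # Eq. (2.7)
    evals, evecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return evecs[:, 0]                 # eigenvector for the smallest eigenvalue

# Noiseless points on a circle of radius 100 are fitted exactly:
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
pts = np.column_stack([100.0 * np.cos(t), 100.0 * np.sin(t)])
theta = fit_ellipse_ls(pts, f0=600.0)
```

With noiseless data $M$ is rank deficient and the fit is exact; with a real edge sequence covering a short arc, this estimate shows the bias discussed above.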

2.3 Noise and Covariance Matrices

The reason for the poor accuracy of Procedure 2.1 is that the properties of image noise are not considered; for accurate fitting, we need to take the statistical properties of noise into consideration. Suppose the data $x_\alpha$ and $y_\alpha$ are disturbed from their true values $\bar{x}_\alpha$ and $\bar{y}_\alpha$ by $\Delta x_\alpha$ and $\Delta y_\alpha$, and write

$$x_\alpha = \bar{x}_\alpha + \Delta x_\alpha, \quad y_\alpha = \bar{y}_\alpha + \Delta y_\alpha. \tag{2.10}$$

Substituting this into $\xi_\alpha$, we obtain

$$\xi_\alpha = \bar{\xi}_\alpha + \Delta_1\xi_\alpha + \Delta_2\xi_\alpha, \tag{2.11}$$
where $\bar{\xi}_\alpha$ is the value of $\xi_\alpha$ for $x_\alpha = \bar{x}_\alpha$ and $y_\alpha = \bar{y}_\alpha$, and $\Delta_1\xi_\alpha$ and $\Delta_2\xi_\alpha$ are, respectively, the first-order noise terms (i.e., linear expressions in $\Delta x_\alpha$ and $\Delta y_\alpha$) and the second-order noise terms (i.e., quadratic expressions in $\Delta x_\alpha$ and $\Delta y_\alpha$). Specifically, they are

$$\Delta_1\xi_\alpha = \begin{pmatrix} 2\bar{x}_\alpha \Delta x_\alpha \\ 2\Delta x_\alpha \bar{y}_\alpha + 2\bar{x}_\alpha \Delta y_\alpha \\ 2\bar{y}_\alpha \Delta y_\alpha \\ 2f_0 \Delta x_\alpha \\ 2f_0 \Delta y_\alpha \\ 0 \end{pmatrix}, \qquad \Delta_2\xi_\alpha = \begin{pmatrix} \Delta x_\alpha^2 \\ 2\Delta x_\alpha \Delta y_\alpha \\ \Delta y_\alpha^2 \\ 0 \\ 0 \\ 0 \end{pmatrix}. \tag{2.12}$$
Regarding the noise terms $\Delta x_\alpha$ and $\Delta y_\alpha$ as random variables, we define the covariance matrix of $\xi_\alpha$ by

$$V[\xi_\alpha] = E[\Delta_1\xi_\alpha \Delta_1\xi_\alpha^\top], \tag{2.13}$$

where $E[\,\cdot\,]$ denotes the expectation over the noise distribution, and $\top$ denotes the vector transpose. We assume that $\Delta x_\alpha$ and $\Delta y_\alpha$ are subject to an independent Gaussian distribution of mean 0 and standard deviation $\sigma$. Thus

$$E[\Delta x_\alpha] = E[\Delta y_\alpha] = 0, \quad E[\Delta x_\alpha^2] = E[\Delta y_\alpha^2] = \sigma^2, \quad E[\Delta x_\alpha \Delta y_\alpha] = 0. \tag{2.14}$$

Substituting Eq. (2.12) and using this relationship, we obtain the covariance matrix of Eq. (2.13) in the form

$$V[\xi_\alpha] = \sigma^2 V_0[\xi_\alpha], \quad V_0[\xi_\alpha] = 4\begin{pmatrix} \bar{x}_\alpha^2 & \bar{x}_\alpha\bar{y}_\alpha & 0 & f_0\bar{x}_\alpha & 0 & 0 \\ \bar{x}_\alpha\bar{y}_\alpha & \bar{x}_\alpha^2+\bar{y}_\alpha^2 & \bar{x}_\alpha\bar{y}_\alpha & f_0\bar{y}_\alpha & f_0\bar{x}_\alpha & 0 \\ 0 & \bar{x}_\alpha\bar{y}_\alpha & \bar{y}_\alpha^2 & 0 & f_0\bar{y}_\alpha & 0 \\ f_0\bar{x}_\alpha & f_0\bar{y}_\alpha & 0 & f_0^2 & 0 & 0 \\ 0 & f_0\bar{x}_\alpha & f_0\bar{y}_\alpha & 0 & f_0^2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}. \tag{2.15}$$

All the elements of $V[\xi_\alpha]$ have the multiple $\sigma^2$, therefore we factor it out and call $V_0[\xi_\alpha]$ the normalized covariance matrix. We also call the standard deviation $\sigma$ the noise level. The diagonal elements of the covariance matrix $V[\xi_\alpha]$ indicate the noise susceptibility of each component of the vector $\xi_\alpha$, and the off-diagonal elements measure the correlation between its components.

The covariance matrix of Eq. (2.13) is defined in terms of the first-order noise term $\Delta_1\xi_\alpha$ alone. It is known that incorporation of the second-order term $\Delta_2\xi_\alpha$ has little influence on the final results, because $\Delta_2\xi_\alpha$ is very small compared with $\Delta_1\xi_\alpha$. Note that the elements of $V_0[\xi_\alpha]$ in Eq. (2.15) contain the true values $\bar{x}_\alpha$ and $\bar{y}_\alpha$; they are replaced by their observed values $x_\alpha$ and $y_\alpha$ in actual computation. It is known that this replacement has practically no effect on the final results.
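The matrix of Eq. (2.15) can be tabulated directly. The sketch below (our own illustration; the helper names are assumptions) also cross-checks it against the first-order term of Eq. (2.12): writing $\Delta_1\xi_\alpha = J\,(\Delta x_\alpha, \Delta y_\alpha)^\top$ for the Jacobian $J$ of $\xi$ with respect to $(x, y)$, the relations of Eq. (2.14) give $V_0[\xi_\alpha] = J J^\top$.

```python
import numpy as np

def V0_xi(x, y, f0):
    """Normalized covariance matrix V0[xi] of Eq. (2.15),
    evaluated at the observed point (x, y)."""
    return 4.0 * np.array([
        [x*x,  x*y,      0.0,  f0*x,  0.0,   0.0],
        [x*y,  x*x+y*y,  x*y,  f0*y,  f0*x,  0.0],
        [0.0,  x*y,      y*y,  0.0,   f0*y,  0.0],
        [f0*x, f0*y,     0.0,  f0*f0, 0.0,   0.0],
        [0.0,  f0*x,     f0*y, 0.0,   f0*f0, 0.0],
        [0.0,  0.0,      0.0,  0.0,   0.0,   0.0]])

def jacobian_xi(x, y, f0):
    """d(xi)/d(x, y): the coefficient matrix of the first-order
    noise term Delta_1 xi in Eq. (2.12)."""
    return np.array([[2*x,  0.0],
                     [2*y,  2*x],
                     [0.0,  2*y],
                     [2*f0, 0.0],
                     [0.0,  2*f0],
                     [0.0,  0.0]])

J = jacobian_xi(120.0, -80.0, 600.0)
V0 = V0_xi(120.0, -80.0, 600.0)
# For independent noise of variance sigma^2 in x and y, Eq. (2.13) gives
# V[xi] = sigma^2 J J^T, so V0 must equal J J^T:
print(np.allclose(V0, J @ J.T))   # True
```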
2.4 Algebraic Methods

2.4.1 Iterative Reweight

The following iterative reweight is an old and well-known method.

Procedure 2.2 (Iterative reweight)

1. Let $\theta_0 = 0$ and $W_\alpha = 1$, $\alpha = 1, \dots, N$.
2. Compute the 6 × 6 matrix
$$M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha \xi_\alpha \xi_\alpha^\top. \tag{2.16}$$
3. Solve the eigenvalue problem
$$M\theta = \lambda\theta, \tag{2.17}$$
and compute the unit eigenvector $\theta$ for the smallest eigenvalue $\lambda$.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update
$$W_\alpha \leftarrow \frac{1}{(\theta, V_0[\xi_\alpha]\theta)}, \quad \theta_0 \leftarrow \theta, \tag{2.18}$$
and go back to Step 2.

Comments As is well known in linear algebra, computing the unit eigenvector for the smallest eigenvalue of a symmetric matrix $M$ is equivalent to computing the unit vector $\theta$ that minimizes the quadratic form $(\theta, M\theta)$. Because

$$(\theta, M\theta) = \frac{1}{N}\sum_{\alpha=1}^{N}(\theta, W_\alpha\xi_\alpha\xi_\alpha^\top\theta) = \frac{1}{N}\sum_{\alpha=1}^{N}W_\alpha(\theta, \xi_\alpha\xi_\alpha^\top\theta) = \frac{1}{N}\sum_{\alpha=1}^{N}W_\alpha(\xi_\alpha, \theta)^2, \tag{2.19}$$

we minimize the sum of squares $(\xi_\alpha, \theta)^2$ weighted by $W_\alpha$. This is commonly known as weighted least squares. According to statistics, the weights $W_\alpha$ are optimal if they are inversely proportional to the variance of each term, being small for uncertain terms and large for certain terms. Inasmuch as $(\xi_\alpha, \theta) = (\bar{\xi}_\alpha, \theta) + (\Delta_1\xi_\alpha, \theta) + (\Delta_2\xi_\alpha, \theta)$ and $(\bar{\xi}_\alpha, \theta) = 0$, the variance is given from Eqs. (2.13) and (2.15) by

$$E[(\xi_\alpha, \theta)^2] = E[(\theta, \Delta_1\xi_\alpha\Delta_1\xi_\alpha^\top\theta)] = (\theta, E[\Delta_1\xi_\alpha\Delta_1\xi_\alpha^\top]\theta) = \sigma^2(\theta, V_0[\xi_\alpha]\theta), \tag{2.20}$$

omitting higher-order noise terms. Thus we should let $W_\alpha = 1/(\theta, V_0[\xi_\alpha]\theta)$, but $\theta$ is not known yet. Therefore we instead use the weights $W_\alpha$ determined in the preceding iteration to compute $\theta$ and update the weights as in Eq. (2.18). Let us call the solution $\theta$ computed in the initial iteration the initial solution. Initially, we set $W_\alpha = 1$, therefore Eq. (2.19) implies that we are starting from the least-squares solution. The phrase "up to sign" in Step 4 reflects the fact that eigenvectors have sign indeterminacy; we align $\theta$ and $\theta_0$ by reversing the sign $\theta \leftarrow -\theta$ when $(\theta, \theta_0) < 0$.
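Procedure 2.2 can be sketched as follows (again a Python/NumPy illustration of ours, self-contained, with $V_0[\xi_\alpha]$ computed from the Jacobian of Eq. (2.12) at the observed points). With $W_\alpha = 1$ the first pass is the least-squares solution, after which the weights of Eq. (2.18) are refined:

```python
import numpy as np

def fit_ellipse_reweight(points, f0, max_iter=50, tol=1e-10):
    """Procedure 2.2 (iterative reweight) for ellipse fitting."""
    x, y = points[:, 0], points[:, 1]
    xi = np.column_stack([x**2, 2*x*y, y**2,
                          2*f0*x, 2*f0*y, np.full_like(x, f0**2)])
    # V0[xi_alpha] of Eq. (2.15) via the Jacobian of Eq. (2.12): V0 = J J^T.
    J = np.stack([[[2*a, 0.0], [2*b, 2*a], [0.0, 2*b],
                   [2*f0, 0.0], [0.0, 2*f0], [0.0, 0.0]] for a, b in points])
    V0 = np.einsum('aik,ajk->aij', J, J)
    W = np.ones(len(points))                                   # Step 1
    theta0 = np.zeros(6)
    for _ in range(max_iter):
        M = np.einsum('a,ai,aj->ij', W, xi, xi) / len(points)  # Eq. (2.16)
        theta = np.linalg.eigh(M)[1][:, 0]                     # Eq. (2.17)
        if theta @ theta0 < 0:
            theta = -theta                                     # align up to sign
        if np.linalg.norm(theta - theta0) < tol:
            break                                              # Step 4: converged
        W = 1.0 / np.einsum('i,aij,j->a', theta, V0, theta)    # Eq. (2.18)
        theta0 = theta
    return theta

# Noiseless points on an ellipse centered at (300, 200), axes 100 and 50:
t = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
pts = np.column_stack([300.0 + 100.0*np.cos(t), 200.0 + 50.0*np.sin(t)])
theta = fit_ellipse_reweight(pts, f0=600.0)
```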
2.4.2 Renormalization and the Taubin Method

It has been well known that the accuracy of least squares and iterative reweight is
rather low with large bias when the input elliptic arc is short; they tend to fit a smaller
ellipse than expected. The following renormalization was introduced to reduce the
bias.

Procedure 2.3 (Renormalization)

1. Let $\theta_0 = 0$ and $W_\alpha = 1$, $\alpha = 1, \dots, N$.
2. Compute the 6 × 6 matrices
$$M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha \xi_\alpha \xi_\alpha^\top, \qquad N = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha V_0[\xi_\alpha]. \tag{2.21}$$
3. Solve the generalized eigenvalue problem
$$M\theta = \lambda N\theta, \tag{2.22}$$
and compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $\lambda$ of the smallest absolute value.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update
$$W_\alpha \leftarrow \frac{1}{(\theta, V_0[\xi_\alpha]\theta)}, \quad \theta_0 \leftarrow \theta, \tag{2.23}$$
and go back to Step 2.
and go back to Step 2.

Comments As is well known in linear algebra, solving the generalized eigenvalue problem of Eq. (2.22) for symmetric matrices $M$ and $N$ is equivalent to computing the unit vector $\theta$ that minimizes the quadratic form $(\theta, M\theta)$ subject to the constraint $(\theta, N\theta) = \text{constant}$. Initially $W_\alpha = 1$, therefore the first iteration minimizes the sum of squares $\sum_{\alpha=1}^{N}(\xi_\alpha, \theta)^2$ subject to $(\theta, \sum_{\alpha=1}^{N} V_0[\xi_\alpha]\,\theta) = \text{constant}$. This is known as the Taubin method for ellipse fitting (→ Problem 2.2). Standard numerical tools for solving the generalized eigenvalue problem in the form of Eq. (2.22) assume that $N$ is positive definite, but Eq. (2.15) implies that the sixth column and the sixth row of the matrix $V_0[\xi_\alpha]$ consist entirely of zeros, therefore $N$ is not positive definite. However, Eq. (2.22) is equivalently written as

$$N\theta = \frac{1}{\lambda}M\theta. \tag{2.24}$$

If the data contain noise, the matrix $M$ is positive definite, therefore we can apply a standard numerical tool to compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $1/\lambda$ of the largest absolute value. The matrix $M$ is not positive definite only when there is no noise. We need not consider that case in practice, but if $M$ happens to have eigenvalue 0, which implies that the data are exact, the corresponding unit eigenvector $\theta$ gives the true solution.
2.4.3 Hyper-Renormalization and HyperLS

According to experiments, the accuracy of the Taubin method is higher than that of
least squares and iterative reweight, and renormalization has even higher accuracy. The
accuracy can be further improved by the following hyper-renormalization.

Procedure 2.4 (Hyper-renormalization)

1. Let $\theta_0 = 0$ and $W_\alpha = 1$, $\alpha = 1, \dots, N$.
2. Compute the 6 × 6 matrices
$$M = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha \xi_\alpha \xi_\alpha^\top, \tag{2.25}$$
$$N = \frac{1}{N}\sum_{\alpha=1}^{N} W_\alpha\bigl(V_0[\xi_\alpha] + 2\mathcal{S}[\xi_\alpha e^\top]\bigr) - \frac{1}{N^2}\sum_{\alpha=1}^{N} W_\alpha^2\bigl((\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2\mathcal{S}[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]\bigr), \tag{2.26}$$
where $\mathcal{S}[\,\cdot\,]$ is the symmetrization ($\mathcal{S}[A] = (A + A^\top)/2$), and $e$ is the vector
$$e = (1, 0, 1, 0, 0, 0)^\top. \tag{2.27}$$
The matrix $M_5^-$ is the pseudoinverse of $M$ of truncated rank 5.
3. Solve the generalized eigenvalue problem
$$M\theta = \lambda N\theta, \tag{2.28}$$
and compute the unit generalized eigenvector $\theta$ for the generalized eigenvalue $\lambda$ of the smallest absolute value.
4. If $\theta \approx \theta_0$ up to sign, return $\theta$ and stop. Else, update
$$W_\alpha \leftarrow \frac{1}{(\theta, V_0[\xi_\alpha]\theta)}, \quad \theta_0 \leftarrow \theta, \tag{2.29}$$
and go back to Step 2.
and go back to Step 2.

Comments The vector $e$ is defined in such a way that $E[\Delta_2\xi_\alpha] = \sigma^2 e$ for the second-order noise term $\Delta_2\xi_\alpha$ in Eq. (2.12). The pseudoinverse $M_5^-$ of truncated rank is computed by

$$M_5^- = \frac{1}{\mu_1}\theta_1\theta_1^\top + \cdots + \frac{1}{\mu_5}\theta_5\theta_5^\top, \tag{2.30}$$

where $\mu_1 \ge \cdots \ge \mu_6$ are the eigenvalues of $M$, and $\theta_1, \dots, \theta_6$ are the corresponding eigenvectors (note that $\mu_6$ and $\theta_6$ are not used). The derivation of Eq. (2.26) is given in Chap. 14.

The matrix $N$ in Eq. (2.26) is not positive definite, but we can use a standard numerical tool by rewriting Eq. (2.28) in the form of Eq. (2.24) and computing the unit generalized eigenvector for the generalized eigenvalue $1/\lambda$ of the largest absolute value. The initial solution minimizes the sum of squares $\sum_{\alpha=1}^{N}(\xi_\alpha, \theta)^2$ subject to the constraint $(\theta, N\theta) = \text{constant}$ for the matrix $N$ obtained by letting $W_\alpha = 1$ in Eq. (2.26). This corresponds to the method called HyperLS (→ Problem 2.3).
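The truncated pseudoinverse of Eq. (2.30) is a one-liner once the eigendecomposition is available. A sketch of ours (the helper name is an assumption); note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so the pair to discard comes first:

```python
import numpy as np

def truncated_pinv5(M):
    """Pseudoinverse of truncated rank 5, Eq. (2.30): discard the
    smallest eigenvalue/eigenvector pair of the symmetric 6x6 matrix M."""
    mu, vecs = np.linalg.eigh(M)   # ascending order: mu[0] is the smallest
    # Sum 1/mu_i theta_i theta_i^T over the five largest eigenpairs:
    return sum(np.outer(vecs[:, i], vecs[:, i]) / mu[i] for i in range(1, 6))

# A positive-definite 6x6 test matrix (deterministic random example):
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
M = A @ A.T + np.eye(6)
M5 = truncated_pinv5(M)
```

By construction $M_5^- M$ acts as the identity on the span of $\theta_1, \dots, \theta_5$ and annihilates $\theta_6$.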
2.4.4 Summary of Algebraic Methods

We have seen that all the above methods compute the $\theta$ that satisfies

$$M\theta = \lambda N\theta, \tag{2.31}$$

where the matrices $M$ and $N$ are defined from the data and contain the unknown $\theta$. Different choices of them lead to different methods:

$$M = \begin{cases} \dfrac{1}{N}\displaystyle\sum_{\alpha=1}^{N} \xi_\alpha\xi_\alpha^\top, & \text{(least squares, Taubin, HyperLS)} \\[3mm] \dfrac{1}{N}\displaystyle\sum_{\alpha=1}^{N} \dfrac{\xi_\alpha\xi_\alpha^\top}{(\theta, V_0[\xi_\alpha]\theta)}, & \text{(iterative reweight, renormalization, hyper-renormalization)} \end{cases} \tag{2.32}$$

$$N = \begin{cases} I \ \text{(identity)}, & \text{(least squares, iterative reweight)} \\[3mm] \dfrac{1}{N}\displaystyle\sum_{\alpha=1}^{N} V_0[\xi_\alpha], & \text{(Taubin)} \\[3mm] \dfrac{1}{N}\displaystyle\sum_{\alpha=1}^{N} \dfrac{V_0[\xi_\alpha]}{(\theta, V_0[\xi_\alpha]\theta)}, & \text{(renormalization)} \\[3mm] \dfrac{1}{N}\displaystyle\sum_{\alpha=1}^{N} \bigl(V_0[\xi_\alpha] + 2\mathcal{S}[\xi_\alpha e^\top]\bigr) - \dfrac{1}{N^2}\displaystyle\sum_{\alpha=1}^{N} \bigl((\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2\mathcal{S}[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]\bigr), & \text{(HyperLS)} \\[3mm] \dfrac{1}{N}\displaystyle\sum_{\alpha=1}^{N} \dfrac{V_0[\xi_\alpha] + 2\mathcal{S}[\xi_\alpha e^\top]}{(\theta, V_0[\xi_\alpha]\theta)} - \dfrac{1}{N^2}\displaystyle\sum_{\alpha=1}^{N} \dfrac{(\xi_\alpha, M_5^-\xi_\alpha)V_0[\xi_\alpha] + 2\mathcal{S}[V_0[\xi_\alpha]M_5^-\xi_\alpha\xi_\alpha^\top]}{(\theta, V_0[\xi_\alpha]\theta)^2}. & \text{(hyper-renormalization)} \end{cases} \tag{2.33}$$
For least squares, Taubin, and HyperLS, the matrices M and N do not contain the
unknown θ, thus Eq. (2.31) is a generalized eigenvalue problem that can be directly
solved without iterations. For other methods (iterative reweight, renormalization,
and hyper-renormalization), the unknown θ is contained in the denominators in the
expressions of M and N. We let the part that contains θ be 1/Wα , compute Wα
using the value of θ obtained in the preceding iteration, and solve the generalized
eigenvalue problem in the form of Eq. (2.31). We then use the resulting θ to update
Wα and repeat this process.
According to experiments, HyperLS has accuracy comparable to renormalization,
and hyper-renormalization has even higher accuracy. Because the iteration starts from
the HyperLS solution, the convergence is very fast; usually, three to four iterations
are sufficient.
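When $N$ is not positive definite (Taubin, HyperLS, hyper-renormalization), the rewriting of Eq. (2.24) applies: for noisy data $M$ is positive definite, so one can take the eigenvector of $M^{-1}N$ for the eigenvalue $1/\lambda$ of largest absolute value. A sketch of ours (plain NumPy in place of a dedicated generalized-eigenvalue routine; `solve_gep` is an assumed name):

```python
import numpy as np

def solve_gep(M, N):
    """Solve M theta = lambda N theta for the unit generalized eigenvector
    with the smallest |lambda|, via the rewritten form of Eq. (2.24):
    N theta = (1/lambda) M theta.  Assumes M is positive definite."""
    # Eigenvalues of M^{-1} N are real, since M is SPD and N is symmetric.
    mu, vecs = np.linalg.eig(np.linalg.solve(M, N))
    k = np.argmax(np.abs(mu))      # 1/lambda of largest magnitude
    theta = np.real(vecs[:, k])
    return theta / np.linalg.norm(theta)

# Tiny 3x3 illustration with a rank-deficient N (last row/column zero,
# mimicking the structure of V0[xi] in Eq. (2.15)):
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
N = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
theta = solve_gep(M, N)
```

The zero eigenvalues of $M^{-1}N$ correspond to infinite $\lambda$ and are automatically avoided by taking the largest $|1/\lambda|$.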
some favor for him to demand.
“Ask gold, Sebastian.”
“Ask rich dresses, Sebastian.”
“Ask to be received as a pupil, Sebastian.”
A faint smile passed over the countenance of the slave at the last
words, but he hung down his head and remained silent.
“Ask for the best place in the studio,” said Gonzalo, who, from
being the last pupil, had the worst light for his easel.
“Come, take courage,” said Murillo gaily.
“The master is so kind to-day,” said Ferdinand, “that I would risk
something. Ask your freedom, Sebastian.”
At these words Sebastian uttered a cry of anguish, and raising his
eyes to his master, he exclaimed, in a voice choked with sobs, “The
freedom of my father! the freedom of my father!”
“And thine, also,” said Murillo, who, no longer able to conceal his
emotion, threw his arms around Sebastian, and pressed him to his
breast.
“Your pencil,” he continued, “shows that you have talent; your
request proves that you have a heart; the artist is complete. From
this day, consider yourself not only as my pupil, but my son. Happy
Murillo! I have done more than paint—I have made a painter!”
Murillo kept his word, and Sebastian Gomez, known better under
the name of the mulatto of Murillo, became one of the most
celebrated painters in Spain. There may yet be seen in the churches
of Seville the celebrated picture which he had been found painting by
his master; also a St. Anne, admirably done; a holy Joseph, which is
extremely beautiful; and others of the highest merit.

At a crowded lecture the other evening, a young lady standing at the door of the church was addressed by an honest Hibernian, who
was in attendance on the occasion, with, “Indade, Miss, I should be
glad to give you a sate, but the empty ones are all full.”
Sketches of the Manners, Customs, and History
of the Indians of America.

CHAPTER V.
Peru discovered by Francisco Pizarro.—​He invites the Inca to visit
him.—​Description of the Inca.—​Rejects the Bible.—​
Treacherously seized by Pizarro.—​The Inca proposes to
ransom himself.—​The ransom brought.—​Pizarro seizes the
gold, then murders the Inca.—​Conquers Peru.

When the Spaniards first discovered the Pacific, Peru was a mighty empire. It extended from north to south more than 2000
miles. Cuzco, the capital city, was filled with great buildings, palaces,
and temples, which last were ornamented, or covered, rather, with
pure gold. The improvements of civilized life were far advanced;
agriculture was the employment of the quiet villagers; in the cities
manufactures flourished; and science and literature were in a course
of improvement which would, doubtless, have resulted in the
discovery of letters.
Their government was a regular hereditary monarchy; but the
despotism of the emperor was restricted by known codes of law.
They had splendid public roads. That from Cuzco to Quito extended
a distance of 1500 miles or more. It passed over mountains, through
marshes, across deserts. Along this route, at intervals, were large
stone buildings, like the caravanseras of the East, large enough to
contain thousands of people. In some instances these caravanseras
were furnished with the means of repairing the equipments and arms
of the troops or travellers.
Such was the ancient empire of Peru, when Francisco Pizarro, an
obscure Spanish adventurer, with an army of only sixty-two
horsemen and a hundred or two foot-soldiers, determined to invade
it. He, like all the other Spaniards who went out to South America,
was thirsting to obtain gold. These men, miscalled Christians, gave
up their hearts and souls to the worship of mammon, and they
committed every horrible crime to obtain riches. But the Christian
who now cheats his neighbor in a quiet way-of-trade manner, to
obtain wealth—is he better than those Spaniards? I fear not. Had he
the temptation and the opportunity, he would do as they did.
At the time Pizarro invaded Peru, there was a civil war raging
between Atahualpa, the reigning monarch, or Inca, as he was called,
and his brother Huascar. These brothers were so engaged in their
strife, that Pizarro had marched into the country without being
opposed, and entered the city of Caxamala on the 15th of
November, 1532. Here the army of the Inca met the Spaniards.
Pizarro was sensible he could not contend with such a multitude, all
well armed and disciplined, so he determined by craft to get
possession of the person of the Inca.
He sent to invite the Inca to sup with him in the city of Caxamala,
and promised then to give an account of his reasons for coming to
Peru. The simple-hearted Inca believed the Spaniards were children
of the sun. Now the Inca worshiped the sun, and thought he himself
had descended from that bright luminary. He was very anxious,
therefore, to see the Spaniards, and could not believe they meant to
injure him; so he consented to visit Pizarro.
Atahualpa took with him twenty thousand warriors, and these
were attended by a multitude of women as bearers of the luggage,
when he set out to visit the Spaniards. The person of the sovereign
was one blaze of jewels. He was borne on a litter plated with gold,
overshadowed with plumes, and carried on the shoulders of his chief
nobles. On his forehead he had the sacred tuft of scarlet, which he
wore as the descendant of the sun. The whole moved to the sound
of music, with the solemnity of a religious procession.
The Inca putting the Bible to his ear.
When the Inca entered the fatal gates from which he was never to
return, his curiosity was his chief emotion. Forgetting the habitual
Oriental gravity of the throne, he started up, and continued standing
as he passed along, gazing with eagerness at every surrounding
object. A friar, named Valverde, now approached, bearing a cross
and a Bible. The friar commenced his harangue by declaring that the
pope had given the Indies to Spain; that the Inca was bound to obey;
that the book he carried contained the only true mode of worshiping
Heaven.
“Where am I to find your religion?” said the Inca.
“In this book,” replied the friar.
The Inca declared that whatever might be the peaceful intentions
of the Spaniards, “he well knew how they had acted on the road,
how they had treated his caciques, and burned his cottages.” He
then took the Bible, and turning over some of the leaves, put it
eagerly to his ear.
“This book,” said he, “has no tongue; it tells me nothing.” With
these words he flung it contemptuously on the ground.
The friar exclaimed at the impiety, and called on his countrymen
for revenge. The Inca spoke a few words to his people, which were
answered by murmurs of indignation. At this moment Pizarro gave
the signal to his troops: a general discharge of cannon, musketry,
and crossbows followed, and smote down the unfortunate Peruvians.
The cavalry were let loose, and they broke through the Inca’s guard
at the first shock. Pizarro rushed forward at the head of a chosen
company of shield-bearers, to seize the Inca.

Pizarro seizing the Inca.


That sovereign was surrounded by a circle of his high officers and
devoted servants. They never moved except to throw themselves
upon the Spanish swords. They saw that their prince was doomed,
and they gave themselves up to his fate. The circle rapidly thinned,
and the Inca would soon have been slain, had not Pizarro called to
his soldiers to forbear. He wished to take the Inca alive, that he might
extort gold from him for his ransom.
Pizarro, therefore, rushed forward, and, seizing the Inca by the
mantle, dragged him to the ground. The Peruvians, seeing his fall in
the midst of the Spanish lances, thought he was slain, and instantly
gave up the battle. In the force of their despair they burst through
one of the walls and fled over the open country. More than two
thousand were left dead within the gates, while not a single Spaniard
had been killed. It was a murder rather than a battle.
The Spaniards proceeded to plunder the camp of the Inca, and
he, seeing their passion for gold, offered to purchase his ransom. He
offered to cover the floor of the chamber where he was confined with
wedges of gold and silver. The Spaniards laughed at this, as they
conceived, impossible proposal. The Inca thought they despised the
small sum he had offered, and starting to his feet, he haughtily
stretched his arm as high as he could reach, and told them he would
give them that chamber full to the mark he then touched with his
hand. The chamber was twenty-two feet long, sixteen wide, and the
point he touched on the wall was nine feet high.
Pizarro accepted the proposal, and sent messengers to Cuzco to
obtain the ransom. These brought back twenty-six horse loads of
gold, and a thousand pounds’ weight of silver. The generals of the
Inca also brought additional treasures of gold and silver vessels, and
the room was filled. Pizarro grasped the treasure, and divided it
among his troops, after deducting one fifth for the king, and taking a
large share for himself.
Pizarro had promised to set the Inca at liberty; but it is probable
he never intended it. After he had, in the name of the Inca, drawn all
the gold he could from the country, he barbarously murdered the
poor Indian chief!
There is a tradition that the fate of the Inca was hastened by the
following circumstance. One of the soldiers on guard over him, wrote
the name of God on the thumb nail of the Inca, explaining to him at
the same time the meaning of the word. The Inca showed it to the
first Spaniard who entered. The man read it. The Inca was delighted;
and Pizarro appearing at the moment, the important nail was
presented to him. But Pizarro could not read! the conqueror of Peru
could not write his name; and the Inca manifested such contempt
towards him for this ignorance, that Pizarro resolved he should not
live.
After the Inca’s death, another long and bloody war, or, rather,
ravage, commenced. The Spaniards finally took Cuzco, the royal
city, plundered the temples, and desolated the land, till the
Peruvians, in despair, submitted to their chains, and became the
slaves of the Spaniards.
Since that time the Spanish power has always governed Peru, till
the revolution in 1823, when the colonists threw off the yoke of the
mother country. But, in justice to the kings of Spain, it should be
remembered that they have frequently made laws to protect their
Indian subjects in South America. Still the poor natives were often,
indeed always, cruelly oppressed by the colonists. But now the spirit
of liberality and improvement is ameliorating the condition of all the
laboring classes in the independent Republic of Peru, and the
Indians are entitled to the privileges of free citizens.

CHAPTER VI.
Indian tradition.—​Manco Capac.—​His reign.—​Religion.—​Property.—​
Agriculture.—​Buildings.—​Public roads.—​Manufactures.—​
Domestic animals.—​Results of the conquest of the country by
the Spaniards.

The Peruvians have a tradition that the city of Cuzco was founded
in this manner. The early inhabitants of the country were ignorant,
and brutal as the wild beasts of the forest, till a man and woman of
majestic form, and clothed in decent garments, appeared among
them. They declared themselves to be children of the sun, sent to
instruct and to reclaim the human race. They persuaded the savages
to conform to the laws they proposed, united them, the Indians,
together in a society, and taught them to build the city.
Manco Capac was the name of this wonderful man; the woman
was called Mama Ocollo. Though they were the children of the sun,
it seems they had been brought up very industriously; for Manco
Capac taught the Indians agriculture, and other useful arts; and
Mama Ocollo taught the women to spin and weave, and make
feather garments.
After the people had been taught to work, and had built houses
and cultivated fields, and so on, Manco Capac introduced such laws
and usages as were calculated to perpetuate the good habits of the
people. And thus, according to the Indian tradition, was founded the
empire of the Incas.
The territory was, at first, small; but it was gradually enlarged by
conquering the neighboring tribes,—merely, however, to do good by
extending the blessings of their laws and arts to the barbarians,—till
the dominions of the Inca Atahualpa, the twelfth in succession,
extended from north to south along the Pacific Ocean above 2000
miles; its breadth from east to west was from the ocean to the
Andes. The empire had continued four hundred years.
The most singular and striking circumstance in the Peruvian
government, was the influence of religion upon its genius and its
laws. The whole civil policy was founded on religion. The Inca
appeared not only as a legislator, but as the messenger of heaven.
His precepts were received as the mandates of the Deity. Any
violation of his laws was punished with death; but the people were so
impressed with the power and sacred character of their ruler that
they seldom ventured to disobey.
Manco Capac taught the Peruvians to worship the sun, as the
great source of light, of joy, and fertility. The moon and stars were
entitled to secondary honors. They offered to the sun a part of those
productions which his genial warmth had called forth from the bosom
of the earth, and his beams had ripened. They sacrificed some of the
animals which were indebted to his influence for nourishment. They
presented to him choice specimens of those works of ingenuity
which his light had guided the hand of man in forming. But the Incas
never stained the altar of the sun with human blood.
Thus the Peruvians were formed, by the spirit of the religion which
they had adopted, till they possessed a national character more
gentle than that of any other people in America.
The state of property in Peru was no less singular than that of
religion, and contributed, likewise, towards giving a mild turn of
character to the people. All the lands capable of cultivation, were
divided into three shares. One was consecrated to the sun, and the
product of it was applied to the erection of the temples, and
furnishing what was requisite towards celebrating the public rites of
religion.
The second share belonged to the Inca, or was set apart as the
provision made by the community for the support of government.
The third and largest share was reserved for the maintenance of the
people, among whom it was parcelled out. All such lands were
cultivated by the joint industry of the community.
A state thus constituted may be considered like one great family,
in which the union of the members was so complete, and the
exchange of good offices so perceptible, as to create stronger
attachment between man and man than subsisted under any other
form of society in the new world. The Peruvians were advanced far
beyond any of the nations in America, both in the necessary arts of
life, and in such as have some title to be called elegant.
Agriculture was carried on by the Peruvians with a good deal of
skill. They had artificial canals to water their fields; and to this day
the Spaniards have preserved and use some of the canals made in
the days of the Incas. They had no plough, but turned up the earth
with a kind of mattock of hard wood. The men labored in the fields
with the women, thus showing the advance of civilization over the
rude tribes which imposed all the drudgery upon females.
The superior ingenuity of the Peruvians was also obvious in their
houses and public buildings. In the extensive plains along the Pacific
Ocean, where the sky is always serene and the climate mild, the
houses were, of course, very slight fabrics. But in the higher regions,
where rain falls and the rigor of the changing seasons is felt, houses
were constructed with great solidity. They were generally of a square
form, the walls about eight feet high, built of bricks hardened in the
sun, without any windows, and the door strait and low. Many of these
houses are still to be seen in Peru.
But it was in the temples consecrated to the sun, and in the
buildings intended for the residence of their monarchs, that the
Peruvians displayed the utmost extent of their art. The temple of
Pachacmac, together with a palace of the Inca and a fortress, were
so connected together as to form one great structure, nearly two
miles in circuit.
Still this wide structure was not a very lofty affair. The Indians,
being unacquainted with the use of the pulley and other mechanical
powers, could not elevate the large stones and bricks which they
employed in building; and the walls of this, their grandest edifice, did
not rise above twelve feet from the ground. There was not a single
window in any part of the building. The light was only admitted by the
doors; and the largest apartments must have been illuminated by
some other means.
The noblest and most useful works of the Incas, were their public
roads. They had two, from Cuzco to Quito, extending,
uninterruptedly, above fifteen hundred miles. These roads were not,
to be sure, equal to our modern turnpikes; but at the time Peru was
discovered there were no public roads in any kingdom of Europe that
could be compared to the great roads of the Incas.
The Peruvians had, likewise, made considerable advances in
manufactures and the arts which may be called elegant. They made
cloth, and they could refine silver and gold. They manufactured
earthen ware; and they had some curious instruments formed of
copper, which had been made so hard as to answer the purposes of
iron. This metal they had not discovered. If they had only understood
the working of iron and steel as well as they did that of gold and
silver, they would have been a much richer and more civilized
people.
The Peruvians had tamed the duck and the llama, and rendered
them domestic animals. The llama is somewhat larger than the
sheep, and in appearance resembles a camel. The Indians
manufactured its wool into cloth; its flesh they used for food;
moreover, the animal was employed as a beast of burden, and would
carry a moderate load with much patience and docility. The aid of
domestic animals is essential to the improvement and civilization of
human society.
In short, the Peruvians, when contrasted with the naked, indolent,
and ignorant inhabitants of the West Indian Islands, seem to have
been a comfortable, ingenious, and respectable nation. The
conquest of their country destroyed their system of government.
They were made not merely to pay tribute to their new rulers, but, far
worse, they were reduced to the condition of slaves. They were
compelled to leave the pleasant fields they used to cultivate, and
driven in crowds to the mountains in search of gold. They were
forced to labor hard, and allowed only a scanty subsistence; till,
heart-broken and despairing of any change for the better, they sunk
under their calamities and died!

An Indian girl feeding a duck. Llama carrying a burden on its back.


In a few years after Pizarro entered Cuzco, a great part of the
ancient population of Peru had been swept away, destroyed by the
avarice and cruelty of their conquerors.
The Alligator.

I am not about to recommend this creature to you on account of his beauty or amiable qualities. He has, in fact, too large a mouth,
and too long a tail, to be handsome, and his reputation is not of the
pleasantest kind. However, it is interesting to hear about all the
works of nature, and as this is one of the most wonderful, I shall
proceed to describe it.
Alligators live in warm climates, and spend the greater part of
their time in the water. There are four or five kinds in America, but
the most dangerous are found along the banks of the river
Mississippi. These creatures are sometimes fifteen or even twenty
feet in length; their mouths are two or three feet long and fourteen or
fifteen inches wide. Their teeth are strong and sharp, and their claws
are also very strong.
During the middle of the day the alligators are generally at rest—
lying lazily upon the shore, or in the water. Toward evening, however,
they begin to move about in search of prey, and then the roar of the
larger ones is terrific. It is louder and deeper than the lowing of the
bull, and it has all the savage wildness of the bittern’s cry. It would
seem that this bellowing could not be agreeable to anything, for as
soon as the birds and beasts hear it, they fly as if smitten with terror;
but still, when an alligator wishes to speak something loving into the
ear of another, he goes to bellowing with all his might, and this
sound, so awful to other creatures, seems very pleasant and musical
to the alligator which is thus addressed. This shows that there is a
great difference in tastes.
THE CROCODILE.
The male alligators sometimes engage in ferocious battles. These
usually take place in shallow water, where their feet can touch the
ground. At first they only cudgel each other with their tails; but the
blows given are tremendous, and soon rouse the anger of the
parties. They then go at it with teeth and claws. The snapping,
scratching, rending and thumping, are now tremendous; the water
boils around with the struggle; streams of blood mingle with the
waves; and at last one of the combatants is actually torn in pieces by
his adversary.
The appetite of the alligator is voracious; I never heard of one that
had the dyspepsia. Nothing of the animal kind comes amiss;
mountain cat, monkey, vulture, parrot, snake-lizard, and even the
electric eel, rattlesnake, and venomous bush-master, are alike
swallowed down! Nor does it matter whether the creature be alive or
dead, save only that it seems most admired when in a putrid state. It
frequently happens that the creature will deposit an animal he has
killed in the water till partly decayed, and when most offensive to us,
it seems most delicious to the alligator.
In some of the rivers of North and South America, within the
tropics, these creatures are very numerous. They also infest the
lakes and lagoons all around the Gulf of Mexico; and it is here that
the alligator’s paradise is found. When the spring rains come these
creatures have a perfect carnival. Many fishes, birds, and animals,
are killed during the freshets, and are borne along in the floods; upon
their remains these creatures feast; and as the vulture is provided by
providence to devour and remove offal from the land, which would
otherwise infect the air and produce pestilence; so the alligators are
the scavengers of the waters, and clear away putrescence that
would otherwise render them poisonous and unapproachable to
man. So, after all, the alligator has his part to play in the great
economy of nature, and is actually very useful.
The alligator is nearly the same as the crocodile of the eastern
continent. The females lay eggs, and one of them is said to produce
a hundred in a season. They are of the size of geese eggs, and are
often eaten, being esteemed tolerable food. The eggs, being
deposited in the sand and covered up, are hatched by the heat.

Braham’s Parrot.—Parrots, like cuckoos, form their notes deep in the throat, and show great aptitude in imitating the human voice. A
lady who admired the musical talents of Braham, the celebrated
singer, gave him a parrot, which she had taught with much care. A
person who saw it at Braham’s house, thus describes it:—“After
dinner, during a pause in the conversation, I was startled by a voice
from one corner of the room, calling out in a strong, hearty manner,
‘Come, Braham, give us a song!’ Nothing could exceed the surprise
and admiration of the company. The request being repeated and not
answered, the parrot struck up the first verse of God save the
King, in a clear, warbling tone, aiming at the style of Braham, and
sung it through. The ease with which the bird was taught was equally
surprising with his performance. The same lady prepared him to
accost Catalani, when dining with Mr. Braham, which so alarmed
Madame that she nearly fell from her chair. Upon his commencing
Rule Britannia, in a loud and intrepid tone, the chantress fell upon her
knees before the bird, expressing, in terms of delight, her admiration
of its talents.”
This parrot has only been exceeded by Lord Kelly’s, who, upon
being asked to sing, replied, “I never sing on a Sunday.” “Never mind
that, Poll; come, give us a song.” “No, excuse me. I’ve got a cold—
don’t you hear how hoarse I am?” This extraordinary creature
performed the three verses entire of God save the King, words and
music, without hesitation, from beginning to end.
Mungo Park and the Frogs.

The tales of travellers often appear to us incredible, merely because they relate things different from our own observation and
experience. You know that there are some countries so hot that they
never have ice or snow there. Now it chanced that a man from some
northern portion of the world, happening to be in one of those hot
places, told the people, that, where he lived, the water sometimes
became solid, in consequence of the cold, and almost as hard as a
stone.
Now this was so different from the experience of the people, that
they would not credit the traveller’s story. This shows us that a thing
may be a reality, which is, at the same time, very different from our
own observation and experience.
Mungo Park was a famous traveller in Africa. He went into
countries where no white man had been before, and he saw places
which no white man had seen. He tells us many curious things, but
perhaps nothing is more amusing than what he says about the frogs.
At a certain place that he visited, he went to a brook to let his horse
drink; but what was his surprise to find it almost covered with frogs,
who kept bobbing up and down, so that his horse was afraid to put
his nose into the water. At last Mr. Park was obliged to take a bush
and give the frogs a flogging, before he could make them get out of
the way so as to let his poor beast quench his thirst.
A Child lost in the Woods.

The Bangor Whig of the 11th of June contains an affecting account of a search made at Linnæus, in the Aroostook country, for a
little girl of nine years, the daughter of Mr. David W. Barbar, who, on
the 4th, was sent through the woods to a neighbor’s, half a mile
distant, to borrow a little flour for breakfast. Not returning that day,
the next morning about forty of the neighbors set out to hunt for her,
but spent the day without success. The next day sixty searched the
woods, with no better fortune. The following morning between two
and three hundred of the settlers assembled early, anxious and
fearful for the safety of the lost child.
“The company set out,” says the Whig, “for a thorough and a last
search. The child had been in the woods three days and nights, and
many hearts were sunk in despondency at the utter hopelessness of
finding it alive. But to learn its fate or restore it was the determined
purpose of each. Half the day had been expended in advancing into
the forest. It was time for returning; but who could think of doing so
while an innocent child might be wandering but a few rods in
advance? On the company pushed, still deeper into the dense wilds.
The sun had reached the meridian, and was dipping down toward
the west. It seemed vain to look farther, and slowly and heavily those
stout-hearted men brushed a tear from their cheeks, gave up all as
lost, and, as their hearts seemed to die within them, commenced
their return. The line was stretched to include a survey of the
greatest possible ground; not a bush or tree, where it was possible
for a child to be concealed, within the limits of the line, was passed
without diligent search. Those at the extremities of the lines tasked
themselves to the utmost in examining the woods beyond the lines.
They had travelled for some time, when, at the farthest point of
vision, the man on one flank thought he saw a bush bend. He ran
