1063-6919/96 $5.00 © 1996 IEEE
the bias in the computation of the coefficients of the conic section. We will explain and show in the experiments that the image centers can be computed more reliably than the scaling factors. The main factor affecting accuracy is the amount of rotation carried out by the camera. We test the sensitivity of the approach to error in the line parameters. We compare our results to results obtained by the implementation of another active calibration approach that uses point trajectories under varying zoom to compute the image center and orthogonal sets of parallel lines (conjugate vanishing points) to estimate the scaling factors [4].

2 Related Work

Considering related work we refer only to approaches that use motion in order to calibrate the intrinsic parameters without knowledge of world points or angles. Since the seminal paper by Maybank and Faugeras [5] it is known that the intrinsic parameters encoded in the image of the absolute conic can be obtained from point correspondences in four views. Later it was proven [6] that the image of the absolute conic can also be obtained from point correspondences of three views if they are projections of points on the plane at infinity. For projections of points with finite depth this is equivalent to three views arising from two pure rotations of the camera [7]. Although the computation of the intrinsics is in principle linear according to the latter approaches, a non-linear refinement step is applied to increase accuracy. Dron's work [8] can be classified in the same framework of absent or known translation; however, it uses only linear least squares techniques.

We proceed with approaches using point correspondences and known rotation angles. Du and Brady [9] as well as Basu [10] use the optical flow arising from small rotations. Basu applies two independent rotations (pan and tilt), uses contours instead of points, and makes an initial hypothesis on the location of the image center. Du and Brady use the conic sections traced by the trajectories of points to obtain a more accurate image center estimate. Stein [11] and Vieville [12] solve the nonlinear equations of motion arising from a rotation around a fixed axis using known rotation angles. Vieville also gives a solution for an unknown but constant rotation axis.

The work most closely related to ours is the approach by Beardsley et al. [13]. They use the trajectories of the vanishing points arising from the fixed-axis rotation of parallel lines on a turntable. Since they can produce a full-cycle rotation they obtain a full ellipse on the image. As they do not know the rotation axis, one trajectory is not enough to recover the intrinsic parameters. From three ellipses the aspect ratio as well as the image center (the intersection of the major axes) can be obtained. The focal length is then computed from the locus of vertices of the circular cones that could give rise to the given ellipse. Our approach avoids the use of a special apparatus (turntable) by producing the trajectories with camera rotations. We gain in the sense that the fixed axis is then known, but we pay in accuracy since the arising hyperbolic arcs are of limited extent due to the small amount of feasible rotation.

3 From parallel lines to conic sections

To describe the camera model we introduce three coordinate systems: the standard coordinate system (x_s, y_s, z_s) with origin at the projection center (or optical center or nodal point) and z_s-axis parallel to the optical axis of the camera, the normalized camera coordinate system (x_n, y_n) on the plane z_s = 1 with origin at (0, 0, 1) of the standard coordinate system, and the image coordinate system (x_p, y_p) where the image data are measured in pixel units. The perspective projection from a point (x_s, y_s, z_s) in space to a point (x_n, y_n) in the plane z_s = 1 is given by x_n = x_s/z_s and y_n = y_s/z_s. Thus, a point (x_n, y_n) in the normalized system defines the ray lambda*(x_n, y_n, 1) through the point in space. The mapping from the normalized to the image coordinate system is an affine transformation

    x_p = s_x x_n + s_xy y_n + x_0,    y_p = s_y y_n + y_0.

The task of intrinsic calibration is to correspond ray directions to pixel positions. Since a ray direction is given by a point in the normalized coordinate system, intrinsic calibration means recovering the affine transformation defined above. The parameter pair (x_0, y_0) is the image center (or principal point), defined as the intersection of the optical axis with the image plane in image coordinates (pixel units). The parameter s_y is equal to the focal length divided by the vertical dimension of a cell in the CCD chip. The parameter s_x is equal to the focal length divided by the horizontal dimension of a cell and multiplied by the sampling ratio between the digitizer and the CCD signal. The parameter s_xy accounts for non-orthogonality of the pixel grid and will be regarded as negligible (see [1] for its interpretation). Furthermore, we do not consider in this study the non-linear radial distortions that appear in the case of wide-angle lenses.

The description given above is not sufficient to model projections of points or lines at infinity; therefore we introduce homogeneous coordinates (x~_s, y~_s, z~_s, w~_s) in P^3 for the standard coordinate system, x~_n = (x~_n, y~_n, z~_n) in P^2 for the normalized system, and x~_p = (x~_p, y~_p, w~_p) also in P^2 for the image coordinate system. We always obtain the inhomogeneous coordinates by dividing by the last homogeneous coordinate if the latter is not zero. The equations w~_s = 0 and z~_n = 0 define the plane at infinity in P^3 and the line at infinity in P^2, respectively. As a 2D affine transformation leaves the line at infinity invariant, its mapping on the image coordinate system is also w~_p = 0.

A set of parallel lines with direction (l_x, l_y, l_z) in projective space meets at the point (l_x, l_y, l_z, 0) of the plane at infinity. The projection of this point on the image plane is the vanishing point, the intersection of the projections of the parallel lines. Its normalized homogeneous coordinates are (l_x, l_y, l_z) and its position on the image plane is at infinity if l_z = 0.
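The camera model above can be exercised numerically. The following Python sketch (all intrinsic values are made-up illustrations, not values from the paper) projects two parallel 3D lines and checks that their images approach the vanishing point, i.e. the projection of the common direction:

```python
import numpy as np

# Hypothetical intrinsic parameters for illustration only:
# image center (x0, y0) and scaling factors (sx, sy); the skew
# term s_xy is taken as zero, as the text assumes.
sx, sy, x0, y0 = 800.0, 820.0, 320.0, 240.0

def project(p):
    """Perspective projection to the normalized plane z_s = 1,
    followed by the affine mapping to pixel coordinates."""
    xn, yn = p[0] / p[2], p[1] / p[2]
    return np.array([sx * xn + x0, sy * yn + y0])

# Two parallel 3D lines with common direction l = (lx, ly, lz):
l = np.array([1.0, 0.5, 2.0])
a1 = np.array([0.0, 0.0, 4.0])   # a point on line 1
a2 = np.array([1.0, -1.0, 5.0])  # a point on line 2

# Far along either line the projection approaches the same pixel:
# the vanishing point, which is the image of the direction itself.
vp = project(l)                   # image of the point at infinity
far1 = project(a1 + 1e6 * l)
far2 = project(a2 + 1e6 * l)
print(vp, far1, far2)             # all three (nearly) coincide
```

The design mirrors the text: the vanishing point depends only on the direction (l_x, l_y, l_z), not on where the lines lie in space.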
Suppose now that a camera moves from pose 1 to pose 2 with translation T and a rotation R [14]:
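A numerical sanity check of the section's central claim helps here: under a pure rotation about the y-axis, the vanishing point of a fixed line bundle moves along a conic. The sketch below (the direction vector is chosen arbitrarily for illustration) verifies in normalized coordinates that the trajectory satisfies one fixed quadratic equation, a hyperbola when l_y != 0:

```python
import numpy as np

# Rotate a line direction l about the y-axis and track its vanishing
# point (xn, yn) in normalized coordinates. Since rotation about y
# preserves lx^2 + lz^2 and leaves ly fixed, the trajectory obeys
#   (lx^2 + lz^2) * yn^2 - ly^2 * xn^2 = ly^2
# for every rotation angle -- a hyperbola when ly != 0.
l = np.array([0.3, 0.8, 1.0])    # illustrative direction, ly != 0
lx, ly, lz = l
c2 = lx**2 + lz**2               # invariant under rotation about y

for theta in np.linspace(-1.0, 1.0, 9):
    ct, st = np.cos(theta), np.sin(theta)
    R = np.array([[ct, 0.0, st],
                  [0.0, 1.0, 0.0],
                  [-st, 0.0, ct]])
    d = R @ l                     # rotated direction
    xn, yn = d[0] / d[2], d[1] / d[2]
    # residual of the conic constraint; constant (= ly^2 on the RHS)
    print(c2 * yn**2 - ly**2 * xn**2)
```

This is only a sketch in normalized coordinates; in the paper the same conic is observed in pixel coordinates after the affine intrinsic mapping, which is what makes the hyperbolic arcs informative about the intrinsics.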
according to the applied resolution of the Hough space. The MLE yields a least squares minimization of the linear system cos(phi_i) x_p + sin(phi_i) y_p = d_i for i = 1...N.

The equation of the conic (6) can be written as a quadratic form x_p^T A x_p, where A is a symmetric matrix with A_12 = A_21 = 0 because the rotation axis in (6) is either (1,0,0) or (0,1,0). Decoupling the vanishing point coordinates from the unknowns we obtain d^T a = 0 with

    d^T = ( x_p^2   y_p^2   x_p   y_p   1 )                    (9)

and

    a^T = ( A_11   A_22   2A_13   2A_23   A_33 ).              (10)

We also apply here MLE estimation, which results in the weighted minimization of

    sum_{i=1}^{M} (d_i^T a)^2 / Var[d_i^T a].                  (11)

If we exactly compute the variance Var[d_i^T a] we obtain a nonlinear minimization problem because the unknown a appears in the denominator of every summation term. Therefore, we apply approximate weights equal to the trace tr(C_i) of the covariance matrix of the vanishing point estimates. As the vector of unknowns a has a degree of freedom we have to introduce a constraint if we do not use the exact weightings in (11), where the coefficients appear in the denominators. As Kanatani [3] argues, minimizing with respect to the translation- and rotation-invariant constraint A_11 + A_22 = 1 does not apply in the case of points with anisotropic and unequal weightings. He proposes the usual constraint ||a|| = 1, leading to the minimization

    min_{||a||=1} a^T C a,   with   C = sum_{i=1}^{M} d_i d_i^T / tr(C_i).   (12)

The solution is the eigenvector of C corresponding to the smallest eigenvalue. From the perturbation theorem for eigenvectors of symmetric matrices [3, 15] we can derive the bias of the solution a^ as in (13), where (lambda_l, u_l) are the four remaining eigenvalue-eigenvector pairs (we omit the derivation due to space limitations). Hence, E[delta a] in (13) is also unequal to zero and the estimate a^ is biased. To eliminate the bias we just have to subtract this variance from every data matrix C_i. The final minimization that yields an unbiased estimate reads as in (14).

As simulation tests in [16] showed, the Kanatani procedure is superior for data from small portions of the conic. The same authors showed experimentally that this algorithm does not always detect the desired conic kind (here a hyperbola) for high noise levels. However, such estimates were observed in our experiments to a much smaller extent than reported in [16].

5 Experiments

We begin with experimental results on synthetic data. The image lines used are noise-corrupted images of the lines on a cube covered with a checkerboard-like pattern. Noise is added to the line parameters (phi, d) subject to a normal distribution. Computation of the vanishing points and the conic coefficients is carried out as described in the previous section. We present the results only for the computation of x_0 and s_x from a rotation around the y-axis. The computation of y_0 and s_y using synthetic data is exactly symmetric.

We compare our approach with the two-step approach in [4]. In this approach the image center (x_0, y_0) is computed from the intersection of the radial point trajectories arising from a varying focal length (zooming). Then, the scaling factors (s_x, s_y) are estimated from three vanishing points arising from three mutually orthogonal sets of parallel lines. Of course we expect that this approach exhibits a much better performance than ours since three sets of parallel lines and over twenty zoom point trajectories are exploited.

We first inspect the sensitivity to the noise level in the image line parameters. The rotation amount is 120 degrees around the y-axis. We observe (Fig. 3) that the relative error in the image center (x-coordinate x_0) is an order of magnitude lower than the error in the scaling factor s_x.

To better explain the sensitivity behavior we proceed by varying the most important factor of our approach: the total amount of rotation. We see in Fig. 4(a-b) the distribution of the vanishing points for rotation amounts of 120 and 20 degrees. As expected, the relative error in the intrinsics (Fig. 5) decreases with the amount of rotation.
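The eigenvector solution of the constrained minimization can be sketched as follows. This is a simplified illustration, with unit weights in place of the trace weights tr(C_i) and synthetic points on a known hyperbola; the data vectors d_i = (x^2, y^2, x, y, 1) match the skew-free conic of the text (A_12 = 0):

```python
import numpy as np

def fit_conic(pts, w=None):
    """Minimize sum_i w_i (d_i^T a)^2 subject to ||a|| = 1: the
    solution is the eigenvector of C = sum_i w_i d_i d_i^T that
    belongs to the smallest eigenvalue."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, y**2, x, y, np.ones_like(x)])
    if w is None:
        w = np.ones(len(pts))         # unit weights for brevity
    C = (D * w[:, None]).T @ D
    evals, evecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    return evecs[:, 0]                # smallest-eigenvalue eigenvector

# Points on the hyperbola x^2 - y^2 = 1 (an axis-aligned example):
t = np.linspace(-1.0, 1.0, 20)
pts = np.column_stack([np.cosh(t), np.sinh(t)])
a = fit_conic(pts)
a = a / a[0]                          # scale so the x^2 term is 1
print(np.round(a, 6))                 # close to (1, -1, 0, 0, -1)
```

With noisy vanishing points one would pass the per-point weights and then apply the bias correction discussed above; this sketch only shows the core eigenvector step.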
Figure 3. The relative error in x_0 (a) and in s_x (b) as a function of the standard deviation of noise in the line parameters. The amount of rotation was 120 degrees. The lower curve shows the error of the zoom-vanishing point algorithm.

Figure 5. The relative error in x_0 (a) and s_x (b) as a function of the rotation amount. The dashed line is the error in the zoom-vanishing point approach, which does not use rotations.

A symmetry index of 45 means moving from 5 to 85 degrees. We see the two extremal distributions in Fig. 6. The total amount of rotation in this experiment is 80 degrees. The relative error in the x-coordinate of the image center increases if symmetry is violated. Since x_0 gives the position of the transverse symmetry axis of the hyperbola, it is expected that an asymmetric distribution destabilizes the transverse axis. On the other hand, the error in the scaling factor s_x decreases as more points lie laterally on the hyperbola because the slope of the asymptote is constrained.

In all the experiments our algorithm could never compete with the zoom-vanishing points approach [4] due to the much smaller amount of information used here. We tested both steps of our algorithm with two sequences of the image of a wall obtained with one of the cameras of an active binocular camera mount. The left column in Fig. 8 shows a pan movement and the right column a tilt movement. The image lines were detected using the Hough transform applied on the edge image. In Fig. 8 (c) and (f) we see the trajectories of the vanishing points arising from a 120 degrees rotation. Ground truth for the camera is not known, but the approach [4] applied before the experiment using a calibration cube serves as a reference. The relative error with respect to this reference was 5.0% for x_0, 9.8% for y_0, 2.7% for s_x, and 5.3% for s_y. With respect to the data used and the fact that the rotation was not 360 degrees as in other approaches, the results are more than satisfactory.

Figure 7. The relative error in x_0 (a) and s_x (b) as a function of the symmetry index. As the symmetry index varies from 0 to 45 degrees the set of vanishing points is shifted to the left of the hyperbola. For comparison the constant error in the zoom-vanishing point approach is shown (dashed line).

Figure 8. The left column describes a pan rotation around the y-axis from left to right. Images (a) and (b) are the first and the last in the sequence, respectively. The regarded set of parallel lines is the set with vanishing point on the upper right of image (a). Fig. (c) shows the vanishing points moving from right to left and the fitted hyperbolae. In the right column we show the tilt rotation around the x-axis from bottom to top. Images (d) and (e) are the first and the last in the sequence, respectively. The regarded set of parallel lines is the set with vanishing point on the left of image (d). The vanishing point in Fig. (f) is moving from top to bottom.

References

[1] O. Faugeras, "Stratification of three-dimensional vision: projective, affine, and metric representations," Journal Opt. Soc. Am. A, vol. 12, pp. 465-484, 1995.
[2] R. Tsai, "An efficient and accurate camera calibration technique for 3d machine vision," in IEEE Conf. Computer Vision and Pattern Recognition, pp. 364-374, Miami Beach, FL, June 22-26, 1986.
[3] K. Kanatani, Geometric Computation for Machine Vision. Oxford University Press, Oxford, UK, 1993.
[4] M. Li, "Camera calibration of a head-eye system for active vision," in Proc. Third European Conference on Computer Vision, pp. 543-554, Stockholm, Sweden, May 2-6, J.O. Eklundh (Ed.), Springer LNCS 800, 1994.
[5] S. Maybank and O. Faugeras, "A theory of self-calibration of a moving camera," International Journal of Computer Vision, vol. 8, pp. 123-151, 1992.
[6] Q. Luong and T. Vieville, "Canonic representations for the geometries of multiple projective views," in Proc. Third European Conference on Computer Vision, pp. 589-599, Stockholm, Sweden, May 2-6, J.O. Eklundh (Ed.), Springer LNCS 800, 1994.
[7] R. Hartley, "Self-calibration from multiple views with a rotating camera," in Proc. Third European Conference on Computer Vision, pp. 471-478, Stockholm, Sweden, May 2-6, J.O. Eklundh (Ed.), Springer LNCS 800, 1994.
[8] L. Dron, "Dynamic camera self-calibration from controlled motion sequences," in IEEE Conf. Computer Vision and Pattern Recognition, pp. 501-506, New York, NY, June 15-17, 1993.
[9] F. Du and M. Brady, "Self-calibration of the intrinsic parameters of cameras for active vision systems," in IEEE Conf. Computer Vision and Pattern Recognition, pp. 477-482, New York, NY, June 15-17, 1993.
[10] A. Basu, "Active calibration of cameras," IEEE Trans. Systems, Man, and Cybernetics, vol. 25, pp. 256-265, 1995.
[11] G. Stein, "Accurate internal camera calibration using rotation, with analysis of sources of errors," in Proc. Int. Conf. on Computer Vision, pp. 230-236, Boston, MA, June 20-23, 1995.
[12] T. Vieville, "Auto-calibration of visual sensor parameters on a robotic head," Image and Vision Computing, vol. 12, pp. 227-237, 1994.
[13] P. Beardsley, D. Murray, and A. Zisserman, "Camera calibration using multiple images," in Proc. Second European Conference on Computer Vision, pp. 312-320, Santa Margherita Ligure, Italy, May 23-26, G. Sandini (Ed.), Springer LNCS 588, 1992.
[14] J. Craig, Introduction to Robotics: Mechanics and Control. Addison-Wesley, Reading, MA, 2nd ed., 1989.
[15] G. Golub and C. van Loan, Matrix Computations. The Johns Hopkins University Press, Baltimore, MD, 2nd ed., 1989.
[16] A. Fitzgibbon and R. Fisher, "A buyer's guide to conic fitting," in Sixth British Machine Vision Conference, Birmingham, UK, Sept. 11-14, 1995.