
2009 Second International Symposium on Electronic Commerce and Security

A Novel Camera Calibration Method of Variable Focal Length Based on Single-View

Gui-Hua Liu1,2  Wen-Bin Wang1  Jian-Yin Yuan1  Xian-Yong Liu1  Quan-Yuan Feng2

1 School of Information Engineering, Southwest University of Science and Technology, Mianyang, Sichuan, 621010, China
2 School of Information Science & Technology, Southwest Jiaotong University, Chengdu, Sichuan, 610031, China
liughua_swit@163.com, fengquanyuan@163.com

Abstract— The key to camera calibration is to calculate the initial values of the extrinsic and intrinsic camera parameters. Since the calculation of the initial values is not a purely linear problem, 3D measurement products often use a fixed-focal-length camera calibration method, which is inconvenient to use and limits the measurement range. We have derived a novel camera calibration method in which the focal length of the camera can be varied. By using the orthogonality of the rotation matrix and the constraints among camera parameters in single-view vision, a new algorithm is developed that linearly and exactly calibrates the camera from a single view of at least six known feature points. Theoretical analysis and experiments demonstrate that the suggested algorithm is fast, exact, efficient and rather robust against noise, and that it can be applied to the area of 3D measurement with variable focal length.

Keywords - Camera calibration; Projection matrix; 3D measurement; Single-view

Ⅰ. INTRODUCTION

Camera calibration is essential for obtaining 3D information from 2D images. It was first put forward in photogrammetry [1][2], and at present it has become a hot issue in the field of computer vision. With the rapid development of modern digital cameras and networks, finding a simple, flexible and highly reliable calibration method has become an urgent requirement.

Many calibration methods have emerged. Tsai's 3D calibration [3] is a famous method that uses the parallel (radial alignment) principle of radial projection and can calculate the extrinsic and intrinsic camera parameters quickly and easily; however, it only takes radial distortion into account, not tangential distortion, so its precision is limited. Zhang's 2D calibration [4] is a mature and commonly used method that adopts a standard planar target, but the focal length cannot be varied during calibration. Zhang's 1D calibration [5] works under the conditions of a fixed camera and a mobile marker; it is difficult to apply to hand-held cameras and suffers from poor flexibility. Calibration based on epipolar geometry [8] uses corresponding image points and does not require space markers, but it is sensitive to noise and its accuracy is poor, which makes it difficult to meet the demands of modern high-accuracy measurement. We therefore give a new algorithm aimed at simplicity and insensitivity to noise. The calibration method proposed in this paper is based mainly on the projection-matrix approach to 3D calibration: the focal length can be varied at will, the intrinsic and extrinsic camera parameters can be restored from each image, and the algorithm is simple to realize.

Ⅱ. THE PRINCIPLE OF THE PROPOSED CAMERA CALIBRATION ALGORITHM

The classical intrinsic camera parameter matrix K of the pinhole model is given in equation (1):

    K = [ fu  s   u0 ]
        [ 0   fv  v0 ]      (1)
        [ 0   0   1  ]

The model consists of 5 parameters: fu and fv are the image scale factors along the u-axis and v-axis respectively, (u0, v0) is the coordinate of the principal point, and s is the skew (distortion) factor. Owing to the rapid development of camera hardware, s is close to zero in modern cameras, so s is often set to zero in K as a simplified model; this simplified model is used in this paper.

The calibration method proposed in this paper proceeds mainly in the following steps. The algorithm first solves for the projection matrix P from at least six known space points. Then the intrinsic camera parameters are obtained from P. Finally, the extrinsic camera parameters are derived from the intrinsic parameters and the projection matrix. All steps of the algorithm use the redundancy in the data to combat noise. Each step is described in detail below.

A. Solving for the projection matrix

Let the image pixel coordinate of a point be m_i = (u_i, v_i, 1)^T and the homogeneous coordinate of the corresponding space point be M_i = (X_i, Y_i, Z_i, 1)^T. The projective relation between m_i and M_i is

    s m_i = P M_i      (2)

where s is a non-zero scale factor and P is the 3×4 projection matrix.

978-0-7695-3643-9/09 $25.00 © 2009 IEEE
DOI 10.1109/ISECS.2009.235
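As a numerical illustration of the pinhole model (1) and the projection relation (2), the following sketch builds P = K[R T] and projects one space point. The matrices and the point are illustrative values chosen for this example, not data from the paper.

```python
import numpy as np

# Illustrative intrinsic matrix K (simplified model, skew s = 0).
K = np.array([[1000.0,    0.0, 512.0],
              [   0.0, 1200.0, 384.0],
              [   0.0,    0.0,   1.0]])

# Illustrative extrinsics: a small rotation about the z-axis and a translation.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([[400.0], [700.0], [2000.0]])

# 3x4 projection matrix P = K [R | T].
P = K @ np.hstack([R, T])

# Project a homogeneous space point M = (X, Y, Z, 1)^T via s*m = P*M, equation (2).
M = np.array([100.0, 50.0, 300.0, 1.0])
m = P @ M
u, v = m[0] / m[2], m[1] / m[2]   # divide out the scale factor s = m[2]
print(u, v)                       # ≈ (727.00, 780.38)
```

The scale factor s that (2) leaves undetermined is simply the third component of P·M, i.e. the point's depth along the optical axis.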
P contains 12 unknown parameters but, being defined only up to scale, has 11 degrees of freedom. Only when the coordinates of at least six space points and their corresponding image points are known can the projection matrix P be solved linearly, and the solution is then unique. If more than six space points are known, the optimal solution of P can be obtained by the least-squares method. The specific method of solving P is as follows.

Let

    P = [ p1  p2  p3  q1 ]
        [ p4  p5  p6  q2 ]      (3)
        [ p7  p8  p9  q3 ]

Using the known space-point coordinates, the corresponding image-point coordinates and (2), each point pair contributes two equations of the form (4):

    [ Xi  Yi  Zi  1   0   0   0   0   -uiXi  -uiYi  -uiZi  -ui ]
    [ 0   0   0   0   Xi  Yi  Zi  1   -viXi  -viYi  -viZi  -vi ] x = 0      (4)

in which i = 1…n, n ≥ 6, and x = (p1, p2, p3, q1, p4, p5, p6, q2, p7, p8, p9, q3)^T. Stacking the 2n equations gives the homogeneous system Ax = 0, where A is the 2n×12 coefficient matrix of (4). Performing SVD (Singular Value Decomposition) on A, the required solution x is the right singular vector corresponding to the smallest singular value of A.

B. Determining the intrinsic camera parameters

Combining the properties of the rotation matrix R with the projection matrix P solved above, the intrinsic camera parameters can be obtained as follows. The projection matrix P satisfies (5), which presents the relationship between P and the intrinsic and extrinsic camera parameters:

    P = λ K [R T]

      = λ [ fu  0   u0 ] [ r1  r2  r3  t1 ]
          [ 0   fv  v0 ] [ r4  r5  r6  t2 ]      (5)
          [ 0   0   1  ] [ r7  r8  r9  t3 ]

      = [ λ(fu·r1 + u0·r7)  λ(fu·r2 + u0·r8)  λ(fu·r3 + u0·r9)  λ(fu·t1 + u0·t3) ]
        [ λ(fv·r4 + v0·r7)  λ(fv·r5 + v0·r8)  λ(fv·r6 + v0·r9)  λ(fv·t2 + v0·t3) ]
        [       λ·r7              λ·r8              λ·r9              λ·t3       ]

From (3) and (5) we get

    p7² + p8² + p9² = λ²
    p1² + p2² + p3² = λ²[fu²(r1² + r2² + r3²) + u0²(r7² + r8² + r9²) + 2u0fu(r1r7 + r2r8 + r3r9)]
    p4² + p5² + p6² = λ²[fv²(r4² + r5² + r6²) + v0²(r7² + r8² + r9²) + 2v0fv(r4r7 + r5r8 + r6r9)]      (6)
    p1p7 + p2p8 + p3p9 = λ²[fu(r1r7 + r2r8 + r3r9) + u0(r7² + r8² + r9²)]
    p4p7 + p5p8 + p6p9 = λ²[fv(r4r7 + r5r8 + r6r9) + v0(r7² + r8² + r9²)]

Because the rotation matrix

    R = [ r1  r2  r3 ]
        [ r4  r5  r6 ]
        [ r7  r8  r9 ]

is itself orthogonal, it has the following properties:
1) Each row has unit norm: r1² + r2² + r3² = 1, r4² + r5² + r6² = 1, r7² + r8² + r9² = 1.
2) Any two rows are orthogonal: r1r7 + r2r8 + r3r9 = 0, r4r7 + r5r8 + r6r9 = 0.

According to these special properties of the rotation matrix, (6) can be simplified to

    p7² + p8² + p9² = λ²
    p1² + p2² + p3² = λ²(fu² + u0²)
    p4² + p5² + p6² = λ²(fv² + v0²)      (7)
    p1p7 + p2p8 + p3p9 = λ²u0
    p4p7 + p5p8 + p6p9 = λ²v0

The elements of the intrinsic camera parameter matrix K can then be obtained from (7):

    λ² = p7² + p8² + p9²
    u0 = (p1p7 + p2p8 + p3p9) / λ²
    v0 = (p4p7 + p5p8 + p6p9) / λ²      (8)
    fu = sqrt((p1² + p2² + p3²)/λ² - u0²)
    fv = sqrt((p4² + p5² + p6²)/λ² - v0²)

The whole solution of the intrinsic camera parameters takes full advantage of the special properties of the rotation matrix as constraint conditions to realize the camera calibration. From this process it can be seen that the projection matrix can be solved as long as enough points are known; camera calibration from a single view can then be achieved, and the algorithm can be applied to the field of variable-focal-length measurement.

C. Determining the extrinsic camera parameters

The extrinsic camera parameters can be determined from the intrinsic camera parameters and the projection matrix solved above. Since the intrinsic camera parameter matrix is invertible and λ is a non-zero scale factor, they can be calculated from P = λK[R T]:

    [R T] = (1/λ) K⁻¹ P = Q      (9)

The rotation matrix R consists of the first three rows and first three columns of Q; the translation vector T is the fourth column of Q.

Because λ can take either a positive or a negative value, Q is ambiguous. To determine the correct R and T, the following steps are used.
1) Compute R and T according to (9), taking λ positive.

2) Recompose K, R and T into a new projection matrix P′.
3) Compare the signs of corresponding non-zero elements of the new P′ and the original P. If the signs agree, the current R and T are correct; otherwise the correct R and T are the negatives of the current values.
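The pipeline above, solving (4) by SVD, recovering the intrinsics via (7) and (8), then [R T] via (9), can be sketched as follows. This is a minimal reading of the algorithm, not the authors' code; the synthetic data at the bottom use illustrative values, and the sign of λ is fixed here by requiring det(R) = +1, a compact practical equivalent of the sign-comparison step.

```python
import numpy as np

def solve_projection_matrix(M, m):
    """DLT solution of equation (4): M is (n, 3) space points, m is (n, 2) pixels, n >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(M, m):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows)
    # The solution of Ax = 0 is the right singular vector of A
    # associated with its smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # rows (p1 p2 p3 q1), (p4 p5 p6 q2), (p7 p8 p9 q3)

def intrinsics_from_P(P):
    """Closed-form intrinsics of equations (7)-(8)."""
    p1, p2, p3 = P[0, :3]
    p4, p5, p6 = P[1, :3]
    p7, p8, p9 = P[2, :3]
    lam = np.sqrt(p7**2 + p8**2 + p9**2)          # |lambda|
    u0 = (p1*p7 + p2*p8 + p3*p9) / lam**2
    v0 = (p4*p7 + p5*p8 + p6*p9) / lam**2
    fu = np.sqrt((p1**2 + p2**2 + p3**2) / lam**2 - u0**2)
    fv = np.sqrt((p4**2 + p5**2 + p6**2) / lam**2 - v0**2)
    return fu, fv, u0, v0, lam

def extrinsics_from_P(P, K, lam):
    """Equation (9): [R T] = K^-1 P / lambda, sign fixed so that det(R) = +1."""
    Q = np.linalg.inv(K) @ P / lam
    if np.linalg.det(Q[:, :3]) < 0:               # wrong sign of lambda: negate Q
        Q = -Q
    return Q[:, :3], Q[:, 3]

# --- Round-trip demonstration on synthetic data (illustrative values) ---
K_true = np.array([[1000.0, 0, 512.0], [0, 1200.0, 384.0], [0, 0, 1.0]])
ax, ay, az = np.pi/5, np.pi/9, np.pi/7
Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
R_true = Rz @ Ry @ Rx
T_true = np.array([400.0, 700.0, 2000.0])

# Eight corners of a cube as the known (non-coplanar) control points.
M = np.array([[x, y, z] for x in (0, 100) for y in (0, 100) for z in (0, 100)], float)
cam = (R_true @ M.T).T + T_true               # points in camera coordinates
pix = (K_true @ cam.T).T
m = pix[:, :2] / pix[:, 2:3]                  # perspective division

P = solve_projection_matrix(M, m)
fu, fv, u0, v0, lam = intrinsics_from_P(P)
K = np.array([[fu, 0, u0], [0, fv, v0], [0, 0, 1.0]])
R, T = extrinsics_from_P(P, K, lam)
print(fu, fv, u0, v0)                         # ~1000, 1200, 512, 384
```

On noise-free data the round trip recovers K, R and T essentially to machine precision; with noisy image points the SVD step acts as the least-squares estimator the paper relies on.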
Ⅲ. THE EXPERIMENTS OF THE METHOD

A. Simulation experiments

The camera calibration errors are estimated in terms of the variance of the noise in the images. The theoretical values of the intrinsic camera parameters were given as fu = 1000, fv = 1200, u0 = 512, v0 = 384, and the image resolution was 1024×768. The extrinsic camera parameters were set as follows: rotation vector r = [π/5  π/9  π/7]^T, translation vector T = [400  700  2000]^T. Eighteen points composing a space cuboid were taken as the known control points, and an image was collected for the calibration experiments. To verify the stability of the algorithm, Gaussian white noise was added to the image coordinates, and 100 random experiments were carried out at each noise level. Table Ⅰ shows the average statistical results of the intrinsic camera parameters under different noise levels.

Table Ⅰ. THE AVERAGE STATISTICAL RESULTS OF INTRINSIC CAMERA PARAMETERS UNDER DIFFERENT NOISE LEVELS

Noise level (variance)    fu        fv        u0       v0
0                          990.1    1188.1    495.1    396.0
0.125                      988.0    1185.3    497.2    396.5
0.25                       982.5    1180.2    497.2    397.7
0.375                     1001.2    1211.3    474.1    401.9
0.5                       1016.2    1236.9    465.8    407.9

The same parameters were used when the extrinsic camera parameters were determined. Tables Ⅱ and Ⅲ list the averages of the rotation angles and of the translation vector, respectively, under different noise levels. The average absolute errors of the three rotation angles and of the three components of the translation vector as the noise level changes are shown in Figures 1 and 2, respectively. The simulation experiments prove that the method can solve the camera calibration problem robustly and with high accuracy even at high noise levels.

Table Ⅱ. THE AVERAGE ROTATION ANGLES UNDER DIFFERENT NOISE LEVELS

Noise level (variance)    around X-axis    around Y-axis    around Z-axis
0                         0.6221           0.3456           0.4444
0.125                     0.6202           0.3437           0.4448
0.25                      0.6178           0.3432           0.4451
0.375                     0.6343           0.3546           0.4467
0.5                       0.6424           0.3510           0.4501

Table Ⅲ. THE AVERAGE TRANSLATION VECTOR UNDER DIFFERENT NOISE LEVELS

Noise level (variance)    Tx       Ty       Tz
0                         396.0    693.1    1980.2
0.125                     392.0    692.7    1975.9
0.25                      394.4    691.6    1968.0
0.375                     443.9    685.0    2016.6
0.5                       467.6    675.2    2057.6

Figure 1. The average absolute errors of rotation angles
Figure 2. The average absolute errors of translation vectors

B. Real experiments

In the real experiment, a CCD camera was used, and the image resolution was 3264×2448. Markers in the shapes of "π" and "工", constructed from coded points, were placed in the scene. The 3D coordinates of some coded points were calculated by another algorithm (such as Zhang's 2D calibration approach) and used as the known control points. Two images were taken from different angles and with different focal lengths, as shown in Figure 3.
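Table Ⅱ compares the recovered rotation against the ground-truth rotation vector r = [π/5 π/9 π/7]^T ≈ [0.6283, 0.3491, 0.4488] rad. The paper does not state its angle convention, so the sketch below assumes X-Y-Z Euler angles with R = Rz·Ry·Rx (an assumption, not taken from the paper) to extract the three angles from a recovered rotation matrix:

```python
import numpy as np

def euler_xyz_from_R(R):
    """Extract (ax, ay, az) assuming R = Rz(az) @ Ry(ay) @ Rx(ax)."""
    ay = np.arcsin(-R[2, 0])                 # R[2,0] = -sin(ay)
    ax = np.arctan2(R[2, 1], R[2, 2])        # cos(ay)*sin(ax), cos(ay)*cos(ax)
    az = np.arctan2(R[1, 0], R[0, 0])        # sin(az)*cos(ay), cos(az)*cos(ay)
    return ax, ay, az

# Round-trip check with the simulation's ground-truth angles.
ax, ay, az = np.pi / 5, np.pi / 9, np.pi / 7
Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
R = Rz @ Ry @ Rx
print(euler_xyz_from_R(R))   # close to (0.6283, 0.3491, 0.4488)
```

Under this convention the noise-free row of Table Ⅱ (0.6221, 0.3456, 0.4444) is indeed close to the ground-truth angles, which supports reading the table in radians.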

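The verification in the real experiment reconstructs the coded points from the two views by triangulation. A standard linear (DLT) triangulation sketch, using illustrative camera matrices rather than the paper's recovered ones, is:

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Linear two-view triangulation: find M satisfying s1*m1 = P1*M and
    s2*m2 = P2*M in the least-squares sense, via the SVD null vector."""
    u1, v1 = m1
    u2, v2 = m2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1]
    return M[:3] / M[3]              # back to inhomogeneous coordinates

# Illustrative setup: one camera at the origin, a second translated along x.
K = np.array([[1000.0, 0.0, 512.0], [0.0, 1000.0, 384.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-200.0], [0.0], [0.0]])])

def project(P, M):
    m = P @ np.append(M, 1.0)
    return m[:2] / m[2]

M_true = np.array([50.0, -30.0, 1000.0])
M_est = triangulate(P1, P2, project(P1, M_true), project(P2, M_true))
residual = np.linalg.norm(M_est - M_true)
print(residual)   # ~0 for noise-free points
```

The per-point residual reported in Table Ⅳ is this kind of distance between the triangulated point and its known 3D coordinate.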
(a) (b)
Figure 3. Real images

The intrinsic camera parameter matrix of the camera in image (a) was given as:

    K1 = [ 3615.8   -11.5   1544.8 ]
         [    0    3635.7   1414.0 ]
         [    0       0        1.0 ]

The intrinsic camera parameter matrix K1, the rotation matrix R1 and the translation vector T1 were then recovered by the proposed method:

    K1 = [ 3550.1     0     1797.0 ]
         [    0    3660.9   1820.5 ]
         [    0       0        1.0 ]

    R1 = [ -0.8491  -0.5167  -0.1100 ]     T1 = [  7.4192 ]
         [ -0.4782   0.8166  -0.3231 ]          [ 35.6849 ]
         [  0.2568  -0.2218  -0.9407 ]          [ 97.6227 ]

After changing the camera's focal length, image (b) in Figure 3 was taken. The given intrinsic camera parameter matrix K2 of the camera in image (b) was:

    K2 = [ 4621.3   -13.2   1319.9 ]
         [    0    4715.3   1592.4 ]
         [    0       0        1.0 ]

The matrices K2, R2 and T2 recovered by the proposed method are:

    K2 = [ 4680.3     0     1508.5 ]
         [    0    4837.2   1758.3 ]
         [    0       0        1.0 ]

    R2 = [ -0.6647  -0.7381  -0.1158 ]     T2 = [ -1.0067 ]
         [ -0.6724   0.6554  -0.3440 ]          [ 32.1954 ]
         [  0.3298  -0.1508  -0.9319 ]          [ 99.3627 ]

From the experimental results it can be seen that the recovered intrinsic camera parameters conform to the actual values.

To verify the correctness of the recovered extrinsic parameters, the two images in Figure 3 were treated as a dual view, and the projection matrices P were reconstituted from the recovered intrinsic and extrinsic camera parameters. The 3D coordinates of the coded points were then calculated according to the triangulation principle, and the residual error between each calculated point and the known 3D coded point was obtained; the results are shown in Table Ⅳ. The recovered 3D point cloud of the images is presented in Figure 4. It can be seen that the recovered 3D shape is in line with the actual appearance of the objects, and that the 3D coordinates of the target points are also correct with high accuracy.

Table Ⅳ. RESIDUAL ERROR OF EACH CODED POINT

#Point   Residual error    #Point   Residual error
1        0.0235            10       0.0006
2        0.0213            11       0.0037
3        0.0778            12       0.0034
4        0.0014            13       0.0006
5        0.0039            14       0.0160
6        0.0019            15       0.0290
7        0.0027            16       0.0064
8        0.0244            17       0.0078
9        0.0044            18       0.0038

Figure 4. Recovered 3D point cloud of the images

Ⅳ. CONCLUSION

A novel camera calibration method based on the projection matrix is put forward in this paper. All the intrinsic and extrinsic camera parameters in each image can be worked out linearly by this means, as long as at least six known points are captured in the image. The whole calibration process is simple to realize, and the focal length of the camera can be varied at will. The method achieves fully automatic operation and meets the need for flexible and easy-to-use camera calibration. Simulated experiments as well as real image tests show that the new method gives accurate and robust results even in high-noise situations.

ACKNOWLEDGMENT

The authors acknowledge the support of the National Science Foundation of China under Grant No. 10876029.

REFERENCES

[1] D. C. Brown. "Close-range camera calibration", Photogrammetric Engineering, Vol. 37, No. 8, pp. 855-866, 1971.
[2] W. Faig. "Calibration of close-range photogrammetric systems: Mathematical formulation", Photogrammetric Engineering and Remote Sensing, Vol. 41, No. 12, pp. 1479-1485, 1975.
[3] R. Y. Tsai. "An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, pp. 364-374, 1986.
[4] Zhengyou Zhang. "A flexible new technique for camera calibration", Microsoft Corporation Technical Report MSR-TR-98-71, 1998.
[5] Zhengyou Zhang. "Camera Calibration with One-Dimensional Objects", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 7, July 2004.
[6] Richard Hartley and Andrew Zisserman. "Multiple View Geometry in Computer Vision", Second Edition, Cambridge University Press, 2004.
[7] Richard Hartley. "Cheirality invariants", Proc. DARPA Image Understanding Workshop, pp. 745-753, 1993.
[8] R. I. Hartley. "Estimation of relative camera positions for uncalibrated cameras", Proc. ECCV, pp. 579-587, 1992.

