
Optics and Lasers in Engineering 147 (2021) 106733


Calibration of stereo cameras with a marked-crossed fringe pattern


Xiangcheng Chen a,b, Ying Chen a,b, Xiaokai Song a,b, Wenyuan Liang c, Yuwei Wang d,∗
a School of Automation, Wuhan University of Technology, Wuhan 430070, China
b Key Laboratory of Icing and Anti/De-icing, China Aerodynamics Research and Development Center, Mianyang 621000, China
c National Research Center for Rehabilitation Technical Aids, Beijing 100176, China
d College of Engineering, Anhui Agricultural University, Hefei 230036, China

Keywords: Stereo cameras; Crossed fringe; Phase target; Global coding; Feature points; Fourier transform

Abstract

Stereo cameras have been widely used for three-dimensional (3D) imaging, and their performance is highly dependent on the calibration accuracy. This paper presents a marked-crossed fringe pattern for the calibration of stereo cameras. Coded markers are encoded into the crossed fringe pattern, and the feature points are encoded into the carried phase. Two wrapped phase maps are recovered using the Fourier transform, and the markers are extracted and decoded for absolute phase unwrapping. Referring to the absolute phase values, the one-to-one mapping between the world points and the image points can be established to calibrate the stereo cameras. Compared with the traditional method, the proposed method adopts global coding, which can flexibly select any part of the feature points for calibration and obtain high-precision camera parameters. Simulations and experiments demonstrate the effectiveness of the method.

1. Introduction

Geometric calibration plays an important role in machine vision systems based on stereo cameras; it attempts to estimate the intrinsic and extrinsic parameters that are required for triangulation in the measurement volume [1]. In general, precise targets with a large number of feature points are used for camera calibration, and the detection precision directly affects the calibration accuracy [2]. Therefore, much research has focused on designing targets with salient features that can be accurately detected in the target images.

Currently, various targets have been designed for the calibration of stereo cameras [3], such as 3D targets [4-6], 2D targets [7-9], and 1D targets [10,11]. In addition, there are several available methods based on spheres [12,13], auxiliary equipment [14-16], and self-calibration [17]. The 3D targets achieve high calibration accuracy, but they are difficult to manufacture and expensive. The 1D targets have a simple structure but only a few collinear feature points, so it is difficult to achieve high calibration accuracy. The 2D targets combine the advantages of the two while avoiding their disadvantages, offering low cost, simple structure, and convenient use.

As is well known, chessboards and circle grids are the two most common patterns of 2D targets. The chessboard contains many squares, the corners of which are extracted as feature points. Similarly, the circular pattern contains many circles, whose centers are extracted as feature points. Such a pattern must be placed within the overlapping field-of-view (FOV) of the stereo cameras to obtain complete pattern images, which introduces some inconvenience for the practical calibration of stereo cameras.

Recently, several active targets based on phase-shifting algorithms have been developed for camera calibration. These active targets include sinusoidal fringes [18-26], circular fringes [27-29], and some others [30-32]. Sinusoidal fringes always require six or more patterns to carry two phase maps, which can generate abundant feature points. Circular fringes require fewer patterns than sinusoidal fringes, but the number of feature points is limited. Although these active targets are robust against camera defocusing, multiple pattern images must be captured per camera pose, which limits their application range. Crossed fringes provide an alternative solution that requires only one pattern at each position for camera calibration [33-35]. Based on the Fourier transform, the feature points can be extracted with high accuracy and high robustness. These feature points, however, do not have global coding features, which limits the flexibility of crossed-fringe calibration patterns.

This paper presents a marked-crossed fringe pattern with global coding to calibrate stereo cameras, which can flexibly select any part of the feature points for calibration and obtain high-precision camera parameters. Simulations and experiments have been carried out, and their results demonstrate the performance of the proposed method.

2. Camera model

2.1. Pinhole camera model

This paper uses the well-known pinhole camera model, as shown in Fig. 1. In the figure, given a 3D world point 𝑷 = [𝑋, 𝑌, 𝑍], it corresponds


Corresponding author.
E-mail address: chenxgcg@ustc.edu (X. Chen).

https://doi.org/10.1016/j.optlaseng.2021.106733
Received 2 March 2021; Received in revised form 25 May 2021; Accepted 27 June 2021
Available online 10 July 2021
0143-8166/© 2021 Elsevier Ltd. All rights reserved.

to a 2D image point 𝒑 = [𝑢, 𝑣]. The relationship between 𝑷 and 𝒑 can be described as:

𝜆[𝒑, 1]𝑇 = 𝑲[𝑹, 𝒕][𝑷, 1]𝑇 (1)

and

𝑲 = [𝑓𝑢, 𝛾, 𝑢0; 0, 𝑓𝑣, 𝑣0; 0, 0, 1], 𝑹 = [𝑟11, 𝑟12, 𝑟13; 𝑟21, 𝑟22, 𝑟23; 𝑟31, 𝑟32, 𝑟33], 𝒕 = [𝑡𝑥, 𝑡𝑦, 𝑡𝑧]𝑇 (2)

where 𝜆 is an arbitrary scale factor; 𝑹 and 𝒕 denote the rotation matrix and translation vector, respectively, and [𝑹, 𝒕] denotes the extrinsic matrix; 𝑲 denotes the intrinsic matrix with the focal lengths [𝑓𝑢, 𝑓𝑣], the principal point [𝑢0, 𝑣0] and a skew factor 𝛾. In general, the rotation angles [𝜃𝑥, 𝜃𝑦, 𝜃𝑧] are related to 𝑹 through the Rodrigues formula.

Fig. 1. The pinhole camera model.

Besides, lens distortions are so common that they should be taken into consideration. The relationship between the observed distorted points (𝑢𝑑, 𝑣𝑑) and the undistorted pixel points (𝑢, 𝑣) can be described as:

[𝑥, 𝑦, 1]𝑇 = 𝑲⁻¹[𝑢, 𝑣, 1]𝑇 (3)

[𝑥𝑑, 𝑦𝑑]𝑇 = (1 + 𝑘1𝑟² + 𝑘2𝑟⁴)[𝑥, 𝑦]𝑇 (4)

[𝑢𝑑, 𝑣𝑑, 1]𝑇 = 𝑲[𝑥𝑑, 𝑦𝑑, 1]𝑇 (5)

where 𝑘1 and 𝑘2 are radial distortion parameters, and 𝑟² = 𝑥² + 𝑦². Once the one-to-one mapping between 𝑷 and 𝒑 is built from feature detection of calibration targets, these unknown camera parameters can be estimated.

2.2. Stereo camera model

Fig. 2 shows the model of stereo cameras, which uses two cameras as an example, where 𝑂𝑤−𝑋𝑤𝑌𝑤𝑍𝑤, 𝑂𝑙−𝑋𝑙𝑌𝑙𝑍𝑙 and 𝑂𝑟−𝑋𝑟𝑌𝑟𝑍𝑟 represent the coordinate systems of the world, left camera and right camera, respectively. The relationship between the camera coordinate systems and the world coordinate system can be expressed as follows:

𝑷𝑙 = 𝑹𝑤𝑙𝑷𝑤 + 𝒕𝑤𝑙, 𝑷𝑟 = 𝑹𝑤𝑟𝑷𝑤 + 𝒕𝑤𝑟 (6)

where 𝑷𝑤, 𝑷𝑙 and 𝑷𝑟 are the coordinates of the same point defined in 𝑂𝑤−𝑋𝑤𝑌𝑤𝑍𝑤, 𝑂𝑙−𝑋𝑙𝑌𝑙𝑍𝑙 and 𝑂𝑟−𝑋𝑟𝑌𝑟𝑍𝑟, respectively; 𝑹𝑤𝑙 and 𝒕𝑤𝑙 are the rotation matrix and translation vector from the world coordinate system to the left camera coordinate system, respectively; 𝑹𝑤𝑟 and 𝒕𝑤𝑟 are the rotation matrix and translation vector from the world coordinate system to the right camera coordinate system, respectively. According to Eq. (6), the relationship between the two camera coordinate systems can be expressed as follows:

𝑷𝑟 = 𝑹𝑙𝑟𝑷𝑙 + 𝒕𝑙𝑟 = 𝑹𝑤𝑟𝑹𝑤𝑙𝑇𝑷𝑙 + (𝒕𝑤𝑟 − 𝑹𝑤𝑟𝑹𝑤𝑙𝑇𝒕𝑤𝑙) (7)

where 𝑹𝑙𝑟 and 𝒕𝑙𝑟 are the rotation matrix and translation vector between the coordinate systems of the two cameras; their estimation is often referred to as the extrinsic calibration of the stereo cameras.

Fig. 2. The model of stereo cameras.

3. Calibration theory

3.1. Crossed fringe pattern

To calibrate the stereo cameras, this paper uses a marked-crossed fringe pattern as the calibration target. As shown in Fig. 3(a), the intensity of the crossed fringe pattern can be described as:

𝐽(𝑥, 𝑦) = 𝑎 + 𝑏𝑥 cos(2𝜋𝑥∕𝑝𝑥) + 𝑏𝑦 cos(2𝜋𝑦∕𝑝𝑦) (8)

where (𝑥, 𝑦) denotes the world coordinates of the fringe pattern; 𝑎, 𝑏𝑥, 𝑏𝑦 are three constants, and we set 𝑎 = 1∕2 and 𝑏𝑥 = 𝑏𝑦 = 1∕4 in this paper; 𝑝𝑥 and 𝑝𝑦 are the fringe periods along the 𝑥 and 𝑦 axes, respectively. The captured images of the fringe pattern can be expressed as:

𝐼(𝑢, 𝑣) = 𝐴(𝑢, 𝑣) + 𝐵𝑢(𝑢, 𝑣) cos[𝜙𝑢(𝑢, 𝑣)] + 𝐵𝑣(𝑢, 𝑣) cos[𝜙𝑣(𝑢, 𝑣)] (9)

where (𝑢, 𝑣) denotes a point on the camera sensor plane; 𝐴 denotes the intensity of the background; 𝐵𝑢 and 𝐵𝑣 denote the intensities of the modulation lights; 𝜙𝑢 and 𝜙𝑣 denote the phase distributions along the 𝑢 and 𝑣 axes, and note that they are also the transformed versions of the encoded phases 2𝜋𝑥∕𝑝𝑥 and 2𝜋𝑦∕𝑝𝑦. Generally, there are two main steps to recover the absolute phase distributions: wrapped phase calculation and absolute phase unwrapping.

This paper employs the Fourier fringe analysis algorithm to calculate the wrapped phase. As shown in Fig. 3(b), the Fourier transform of the pattern image 𝐼(𝑢, 𝑣) can be expressed as:

𝑄(𝑓𝑢, 𝑓𝑣) = 𝑄0(𝑓𝑢, 𝑓𝑣) + 𝑄𝑢(𝑓𝑢 − 𝑓𝑢0, 𝑓𝑣) + 𝑄𝑢*(𝑓𝑢 + 𝑓𝑢0, 𝑓𝑣) + 𝑄𝑣(𝑓𝑢, 𝑓𝑣 − 𝑓𝑣0) + 𝑄𝑣*(𝑓𝑢, 𝑓𝑣 + 𝑓𝑣0) (10)

where 𝑄0 denotes the Fourier spectrum of 𝐴, or the zero-order spectrum; 𝑄𝑢 denotes the Fourier spectrum of (1∕2)𝐵𝑢e^(𝑖𝜙𝑢), or the fundamental spectrum along the 𝑢 axis, and 𝑄𝑢* denotes the conjugate of 𝑄𝑢; similarly, 𝑄𝑣 denotes the Fourier spectrum of (1∕2)𝐵𝑣e^(𝑖𝜙𝑣), or the fundamental spectrum along the 𝑣 axis, and 𝑄𝑣* denotes the conjugate of 𝑄𝑣. If we select a suitable filter to separate 𝑄𝑢 and 𝑄𝑣, and perform the inverse Fourier transform to obtain 𝑞𝑢 = (1∕2)𝐵𝑢e^(𝑖𝜙𝑢) and 𝑞𝑣 = (1∕2)𝐵𝑣e^(𝑖𝜙𝑣), then 𝜙𝑢 and 𝜙𝑣 can be derived from 𝑞𝑢 and 𝑞𝑣, as shown in Figs. 3(c)-(d). Unfortunately, 𝜙𝑢 and 𝜙𝑣 obtained above are always wrapped in the range of [−𝜋, +𝜋], so a phase unwrapping process should be carried out to recover the absolute phase maps Φ𝑢 and Φ𝑣. In general, the relationship between the wrapped phase 𝜙 and the absolute phase Φ can be described as Φ = 𝜙 + 2𝜋𝑘. The key of phase unwrapping is to find the integer number 𝑘, or the fringe order.

3.2. Marked-crossed fringe pattern

To determine the fringe order, this paper designs a marked-crossed fringe pattern, as shown in Fig. 4. Some coded markers are encoded into


Fig. 3. (a) The crossed fringe pattern, and its (b) Fourier spectra, (c) horizontal wrapped phase, (d) vertical wrapped phase.

the crossed fringe pattern to distinguish different fringe periods on the basis of a code sequence 𝐶𝑆, and each fringe period along the horizontal or vertical direction is assigned a 1-bit code in order. If the 1-bit code of the current period is ‘0’, no markers are added to this period. On the contrary, if the 1-bit code of the current period is ‘1’, white-dot markers are added to this period. Let us assume that the 1-bit code of the i-th fringe period is 𝑚𝑖 along the horizontal direction, and the 1-bit code of the j-th fringe period is 𝑛𝑗 along the vertical direction. The location coordinates of these markers are given as follows:

(𝑥, 𝑦) = (𝑚𝑖𝑛𝑗 × 𝑖𝑝𝑥 − 𝑝𝑥∕2, 𝑚𝑖𝑛𝑗 × 𝑗𝑝𝑦 − 𝑝𝑦∕2) (11)

where 1 ≤ 𝑖 ≤ 𝑀, 1 ≤ 𝑗 ≤ 𝑁, and 𝑀, 𝑁 are the total numbers of fringe periods along the horizontal and vertical directions. A multi-bit codeword formed by the current fringe period and its several neighboring periods can be obtained by referring to the distribution of these white-dot markers, and then the fringe order can be determined by looking up the position of the codeword in 𝐶𝑆. To this end, we should ensure that any codeword with a given length appears only once in the whole code sequence. If the codeword has a length of 6 bits, there are 2⁶ = 64 different codewords. Therefore, the length of 𝐶𝑆 can be up to 64 bits while ensuring that each fringe period is identified by a unique 6-bit codeword. Without loss of generality, the 64-bit 𝐶𝑆 selected in this paper is ‘0110111010101111001011001101001001110001010001100001000000111111’, among which any 6-bit codeword appears only once.

Fig. 4 illustrates the encoding principle of the white-dot markers based on a part of the 64-bit code sequence. For example, for the periods with fringe orders 𝑘𝑣 = 1, 4 and 𝑘ℎ = 8, 10, 12, their corresponding codes are all ‘0’, thus no markers are added to these periods. For the periods with fringe orders 𝑘𝑣 = 2, 3, 5, 6 and 𝑘ℎ = 7, 9, 11, their corresponding codes are all ‘1’, thus white-dot markers are added to these periods according to Eq. (11) introduced above.

Fig. 4. The marked-crossed fringe pattern.

3.3. Global phase unwrapping

This section introduces how to perform the phase unwrapping by using the white-dot markers on the fringe pattern. Fig. 5 shows the flowchart of the global phase unwrapping, and the detailed steps are summarized as follows:

Step 1. After capturing the image 𝐼 of the fringe pattern, we first employ the Fourier fringe analysis algorithm to calculate the two wrapped phase maps 𝜙𝑢 and 𝜙𝑣. Since the 𝑢 and 𝑣 axes of the fringe pattern are symmetrical, the following only describes the phase unwrapping along the 𝑢 axis.

Step 2. We directly use the reliability-guided method [36] for local phase unwrapping, and the continuous phase map Φ′𝑢 can be obtained, as shown in Fig. 5.

Step 3. Then Φ′𝑢 is used to compute the local fringe order 𝑘′𝑢 as follows:

𝑘′𝑢(𝑢, 𝑣) = 𝑓𝑙𝑜𝑜𝑟[Φ′𝑢(𝑢, 𝑣)∕(2𝜋)] (12)

where 𝑓𝑙𝑜𝑜𝑟[⋅] returns the truncated integer. As shown in Fig. 5, the regions of different fringe periods can be separated by referring to the different values of 𝑘′𝑢.

Step 4. The median filter is applied to the captured image 𝐼. As shown in Fig. 5, the white-dot markers disappear in the filtered image 𝐼′.

Step 5. Subtracting 𝐼′ from 𝐼 gives 𝐼𝐼′, the difference between 𝐼 and its median-filtered version, which is composed of the white-dot markers and some stray points. We binarize 𝐼′ and 𝐼𝐼′, then subtract them to get 𝐼𝐼𝐼′. Finally, we intersect the binarized 𝐼𝐼𝐼′ with the binarized 𝐼′ to get the marker map 𝑀(𝑢, 𝑣).

Step 6. To globally unwrap 𝜙𝑢, the global fringe order 𝑘𝑢 should be determined. For this purpose, 𝑀 obtained from Step 5 is used to correct the 𝑘′𝑢 obtained from Step 3. We first define:

𝑀′(𝑢, 𝑣) = 𝑀(𝑢, 𝑣) ∗ 𝑘′𝑢(𝑢, 𝑣) (13)

Then we can obtain the 1-bit code 𝐶𝑖 of the i-th fringe period as:

𝐶𝑖 = 1 if ∃(𝑢, 𝑣): 𝑀′(𝑢, 𝑣) = 𝑖; 𝐶𝑖 = 0 if ∀(𝑢, 𝑣): 𝑀′(𝑢, 𝑣) ≠ 𝑖 (14)

where 𝑖 = 1, 2, ⋯, max(𝑘′𝑢); ∃ denotes the existential quantifier, and ∀ denotes the universal quantifier. Computing 𝐶𝑖 one by one, we can obtain all the codes of 𝐼. Then every six successive codes form a 6-bit codeword, and the fringe order 𝑘𝑢 can be obtained by searching for the position of the 6-bit codeword in 𝐶𝑆. As shown in Fig. 4, the position of the 6-bit codeword ‘101010’ equals 7, thus the fringe orders of these periods should be 7, 8, 9, 10, 11, and 12, respectively.

Step 7. Finally, combining 𝜙𝑢 and 𝑘𝑢, the absolute phase map Φ𝑢 can be calculated as:

Φ𝑢(𝑢, 𝑣) = 𝜙𝑢(𝑢, 𝑣) + 2𝜋𝑘𝑢(𝑢, 𝑣) (15)

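The wrapped-phase recovery of Step 1 (the Fourier fringe analysis of Section 3.1) can be sketched in NumPy roughly as follows; the Gaussian band-pass window and its width are illustrative assumptions, not the paper's actual filter:

```python
import numpy as np

def wrapped_phases(img, fx, fy, width=0.25):
    """Recover the wrapped phases phi_u, phi_v of a crossed fringe image by
    band-pass filtering the fundamental spectra Q_u and Q_v (Eq. (10))."""
    Q = np.fft.fftshift(np.fft.fft2(img))
    H, W = img.shape
    fu = np.fft.fftshift(np.fft.fftfreq(W))[None, :]  # frequency axis along u
    fv = np.fft.fftshift(np.fft.fftfreq(H))[:, None]  # frequency axis along v
    # Gaussian windows centred on the fundamental peaks (fx, 0) and (0, fy)
    win_u = np.exp(-((fu - fx) ** 2 + fv ** 2) / (width * fx) ** 2)
    win_v = np.exp(-(fu ** 2 + (fv - fy) ** 2) / (width * fy) ** 2)
    qu = np.fft.ifft2(np.fft.ifftshift(Q * win_u))  # ~ (1/2) B_u exp(i*phi_u)
    qv = np.fft.ifft2(np.fft.ifftshift(Q * win_v))  # ~ (1/2) B_v exp(i*phi_v)
    return np.angle(qu), np.angle(qv)

# Synthetic crossed fringe pattern, Eq. (8), with a = 1/2, b = 1/4, p = 32 px
px = py = 32
y, x = np.mgrid[0:256, 0:256]
J = 0.5 + 0.25 * np.cos(2 * np.pi * x / px) + 0.25 * np.cos(2 * np.pi * y / py)
phi_u, phi_v = wrapped_phases(J, 1 / px, 1 / py)
```

On this noise-free synthetic pattern the recovered wrapped phases agree with the encoded phases 2𝜋𝑥∕𝑝𝑥 and 2𝜋𝑦∕𝑝𝑦 to well below a milliradian; with real images the filter shape and width would need tuning.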

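The code-sequence lookup of Step 6 can be sketched as follows; `is_uniquely_decodable` and `fringe_order` are hypothetical helper names, and the 64-bit sequence is the one quoted in Section 3.2:

```python
# The 64-bit code sequence CS quoted in Section 3.2
CS = "0110111010101111001011001101001001110001010001100001000000111111"

def is_uniquely_decodable(cs, n=6):
    """True if every n-bit codeword read from the sequence occurs only once."""
    words = [cs[i:i + n] for i in range(len(cs) - n + 1)]
    return len(words) == len(set(words))

def fringe_order(codeword, cs=CS):
    """1-based position of the codeword in CS, i.e. the fringe order of the
    first period covered by the codeword (Step 6)."""
    pos = cs.find(codeword)
    if pos < 0:
        raise ValueError("codeword not found in the code sequence")
    return pos + 1

assert len(CS) == 64 and is_uniquely_decodable(CS)
print(fringe_order("101010"))  # the example of Fig. 4: prints 7
```

Since ‘101010’ sits at position 7, the six periods it spans get fringe orders 7 through 12, matching the example in the text.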
Fig. 5. The flowchart of global phase unwrapping.

Similarly, 𝜙𝑣 can be unwrapped to obtain the absolute phase map Φ𝑣.

3.4. Calibration of stereo cameras

The marked-crossed fringe pattern displayed on an LCD monitor serves as the phase target. The world coordinate system is built as shown in Fig. 6. The origin 𝑂𝑤 is located at the top-left point of the LCD; the 𝑋𝑤-axis and 𝑌𝑤-axis are respectively parallel to the two sides of the LCD; the 𝑍𝑤-axis is perpendicular to the LCD. The conventional calibration method only uses the feature points in the overlapping FOV of the stereo cameras, while the calibration method proposed in this paper can use all the captured feature points. To be specific, the intrinsic parameters of the left and right cameras are calibrated with all feature points inside the FOV of each camera; then the extrinsic parameters of the stereo cameras can be calibrated by using 𝑂𝑤−𝑋𝑤𝑌𝑤𝑍𝑤 as an intermediary. The calibration procedure is given below:

Fig. 6. The calibration of stereo cameras.

Step 1. The stereo cameras capture images of the marked-crossed fringe pattern from several different viewpoints. The two absolute phase maps Φ𝑢 and Φ𝑣 of each pattern image are recovered by the global phase unwrapping process introduced above.

Step 2. Without loss of generality, this paper chooses the pixels with Φ𝑢 = 2𝜋𝑚 and Φ𝑣 = 2𝜋𝑛 as the feature points. To this end, some candidates can be extracted if they satisfy (|Φ𝑢 − 2𝜋𝑚| < 𝛿) ∩ (|Φ𝑣 − 2𝜋𝑛| < 𝛿), where 𝛿 is a small threshold. The pixel among these candidates that has the minimum value of |Φ𝑢 − 2𝜋𝑚| + |Φ𝑣 − 2𝜋𝑛| is regarded as the rough location of the feature point.

Step 3. The extraction accuracy of the feature points is optimized by a windowed least-squares linear fitting algorithm based on the following equations:

𝑢 = 𝑎1Φ𝑢 + 𝑏1Φ𝑣 + 𝑐1, 𝑣 = 𝑎2Φ𝑢 + 𝑏2Φ𝑣 + 𝑐2 (16)

where 𝑎1, 𝑏1, 𝑐1, 𝑎2, 𝑏2, 𝑐2 are fitting coefficients. Substituting Φ𝑢 = 2𝜋𝑚 and Φ𝑣 = 2𝜋𝑛 into the equations, the sub-pixel positions of the feature points are obtained.

Step 4. Let 𝑑 represent the pixel pitch of the LCD. The world coordinates of a feature point with known Φ𝑢 and Φ𝑣 can be determined as:

[𝑋, 𝑌]𝑇 = (𝑑𝑃∕(2𝜋)) [Φ𝑢, Φ𝑣]𝑇 (17)

where 𝑃 denotes the number of pixels per fringe period.

Step 5. Once the one-to-one mapping between the image coordinates and the world coordinates of the feature points is established, the intrinsic and extrinsic parameters of the stereo cameras can be calibrated by Zhang's method [37].

4. Simulations

4.1. The influence of white-dot markers

Since some white-dot markers are encoded into the crossed fringe pattern, these markers may affect the Fourier spectra and then influence the accuracy of the phase calculation. The pattern illustrated in Fig. 7 is employed to explore the influence of these markers. The resolution of the pattern is 1360 × 960 pixels. The phase distribution of the marked-crossed fringe pattern is calculated by the Fourier fringe analysis algorithm and then compared with the theoretical phase distribution. To verify the universality, we randomly generate 15 patterns with different angles by simulation, and the simulated results are shown in Fig. 8. Obviously, the phase errors are so small that they can be ignored, which confirms that the markers have very little influence on the wrapped phase calculation.

Fig. 7. Part of the images under a certain angle. (a) Image with marked points; (b) Image without marked points.

4.2. The calibration of stereo cameras

In this section, some simulations have been performed to verify the proposed method. The stereo camera system consists of two identical cameras placed in parallel.

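The sub-pixel feature localization of Steps 2 and 3 in Section 3.4 can be sketched as follows; the window size and the way the rough location is picked are illustrative assumptions:

```python
import numpy as np

def refine_feature(Phi_u, Phi_v, m, n, win=5):
    """Sub-pixel position of the feature point with Phi_u = 2*pi*m and
    Phi_v = 2*pi*n: rough integer location first (Step 2), then the
    windowed least-squares linear fit of Eq. (16) (Step 3)."""
    cost = np.abs(Phi_u - 2 * np.pi * m) + np.abs(Phi_v - 2 * np.pi * n)
    v0, u0 = np.unravel_index(np.argmin(cost), cost.shape)  # rough location
    r = win // 2
    vs, us = np.mgrid[v0 - r:v0 + r + 1, u0 - r:u0 + r + 1]
    A = np.column_stack([Phi_u[vs, us].ravel(),
                         Phi_v[vs, us].ravel(),
                         np.ones(win * win)])
    cu = np.linalg.lstsq(A, us.ravel().astype(float), rcond=None)[0]
    cv = np.linalg.lstsq(A, vs.ravel().astype(float), rcond=None)[0]
    target = np.array([2 * np.pi * m, 2 * np.pi * n, 1.0])
    return float(target @ cu), float(target @ cv)  # sub-pixel (u, v)

# Linear absolute phases with known sub-pixel crossings at u = 64.25, v = 32.5
v, u = np.mgrid[0:128, 0:128].astype(float)
Phi_u = 2 * np.pi * (u - 0.25) / 32
Phi_v = 2 * np.pi * (v - 0.5) / 32
u_sub, v_sub = refine_feature(Phi_u, Phi_v, m=2, n=1)  # ~ (64.25, 32.5)
```

With exactly linear phases the fit recovers the crossing to machine precision; on real phase maps the window averages out local noise.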

The intrinsic and extrinsic parameters of the stereo cameras are set as:

𝑲𝑙 = 𝑲𝑟 = [800, 0, 480; 0, 800, 360; 0, 0, 1] (19)

𝑹𝑙𝑟 = [1, 0, 0; 0, 1, 0; 0, 0, 1], 𝒕𝑙𝑟 = [500, 0, 0]𝑇 (20)

More concretely, the focal lengths are 𝑓𝑢 = 𝑓𝑣 = 800 pixels; the principal point is located at (𝑢0, 𝑣0) = (480, 360) pixels; the skew factor and distortion coefficients equal zero; the rotation angles are 𝜃𝑥 = 𝜃𝑦 = 𝜃𝑧 = 0°; and the translation distances are 𝑡𝑥 = 500 pixels and 𝑡𝑦 = 𝑡𝑧 = 0 pixels. The periods of the fringe pattern are 𝑝𝑥 = 𝑝𝑦 = 80 pixels. The stereo cameras then capture images from seven different viewpoints, as shown in Fig. 9. Gaussian noise is added to each image before calibration. The simulation is repeated 15 times, and the mean values of the results are taken as the obtained parameters. Table 1 and Table 2 respectively show the average values of the intrinsic and extrinsic parameters after 15 calibrations. The errors of the focal lengths are less than 0.12%, and the errors of the principal points are less than 0.07%. The errors of the rotation angles are less than 7.7 × 10⁻³°, and the errors of the translation distances are less than 0.44 mm. The simulated camera parameters are very close to the ideal camera parameters. The simulation results show that the method is effective and feasible.

Fig. 9. Target poses.

Table 1
Average intrinsic parameters of the second simulation.

Parameter    Left camera            Right camera
[fu, fv]     [800.5591, 800.5577]   [800.6280, 800.6379]
[u0, v0]     [480.0528, 360.0965]   [480.0512, 360.0239]
[k1, k2]     [-0.0001, 0.0002]      [0.0001, -0.0000]

Table 2
Average extrinsic results of the second simulation.

Parameter    θx      θy      θz      tx        ty      tz
Simulation   0.0005  0.0021  0.0000  500.0111  0.0636  0.1103

Fig. 8. The maximum and mean phase errors. (a) Maximum value; (b) Mean value.

5. Experiments

To further verify the proposed method, an experiment platform consisting of two cameras (Point Grey Chameleon3) and a tablet (iPad MW792CH/A) was built, as shown in Fig. 10. The two cameras have a resolution of 1280 × 1024 pixels. The optical lenses (Kowa LM16JCM) mounted on the cameras have a focal length of 12 mm. The tablet has a resolution of 2160 × 1620 pixels and a pixel pitch of 0.098 mm. The marked-crossed fringe pattern with 𝑝𝑥 = 𝑝𝑦 = 32 pixels was displayed on the tablet and served as the calibration target.

Fig. 10. Experiment platform.

5.1. Comparison of different calibration methods

Two different calibration methods are compared in the first experiment, in which the settings of the cameras are kept constant. Here we use the classical chessboard calibration method for comparative experiments.

The tablet is placed so as to obtain a large FOV overlap area between the two cameras. The stereo cameras capture the two targets from 10 different camera postures to get two sets of calibration data. The pictures of the crossed fringe target are calibrated by the proposed method; Fig. 11 shows the absolute phase maps of a certain camera posture. Meanwhile, the pictures of the checkerboard target are calibrated by Zhang's checkerboard calibration method. Table 3 shows the intrinsic parameters of the two calibration methods, Table 4 shows the extrinsic parameters, and Table 5 shows the re-projection errors of the two calibration methods.

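The re-projection errors reported in Table 5 are computed, in the usual sense, by projecting the world points through the estimated model of Eqs. (1)-(6) and comparing with the detected image points; a minimal sketch, with skew and tangential distortion assumed zero:

```python
import numpy as np

def project(K, R, t, Pw, k1=0.0, k2=0.0):
    """Project Nx3 world points to pixels: Eq. (6), then Eqs. (3)-(5)."""
    Pc = Pw @ R.T + t                                      # world -> camera
    x = Pc[:, 0] / Pc[:, 2]
    y = Pc[:, 1] / Pc[:, 2]
    r2 = x ** 2 + y ** 2
    s = 1.0 + k1 * r2 + k2 * r2 ** 2                       # radial distortion
    u = K[0, 0] * s * x + K[0, 2]
    v = K[1, 1] * s * y + K[1, 2]
    return np.column_stack([u, v])

def reprojection_error(K, R, t, Pw, detected, k1=0.0, k2=0.0):
    """Mean absolute re-projection error per axis (err_x, err_y) in pixels."""
    d = project(K, R, t, Pw, k1, k2) - detected
    return np.abs(d).mean(axis=0)

# Toy check with the ideal intrinsics of the simulation, Eq. (19)
K = np.array([[800.0, 0.0, 480.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 500.0])
Pw = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
uv = project(K, R, t, Pw)   # [[480, 360], [560, 360]]
```

Feeding the projections back in as the "detected" points gives a zero error, as expected; in the experiments the detected points come from the phase-based feature extraction.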

Fig. 11. Absolute phase maps of a certain camera posture. (a)-(b) The absolute phase diagrams of the left camera; (c)-(d) The absolute phase diagrams of the right camera.

Table 3
Calibration intrinsic parameters of the first experiment.

Method                         Camera   [fu, fv]                 [u0, v0]               [k1, k2]
Marked-crossed fringe pattern  Left     [2671.4102, 2672.5473]   [518.8393, 643.5421]   [-0.1166, 0.4129]
                               Right    [2686.6154, 2687.2703]   [506.5403, 627.3174]   [-0.1206, 0.4469]
Chessboard                     Left     [2663.9657, 2666.1174]   [517.3922, 639.9517]   [-0.1061, 0.2462]
                               Right    [2676.9888, 2678.4712]   [507.8445, 630.9117]   [-0.1176, 0.4082]

Table 4
Calibration extrinsic results of the first experiment.

Method                         θx       θy      θz        tx       ty         tz
Marked-crossed fringe pattern  25.1135  0.3795  -0.25715  11.7154  -126.0518  33.2526
Chessboard                     25.0355  0.4397  -0.2594   11.7084  -126.1492  32.9289

Table 5
Reprojection errors of the first experiment (pixels).

Method                         pixel_err_xl  pixel_err_yl  pixel_err_xr  pixel_err_yr
Marked-crossed fringe pattern  0.0532        0.0728        0.0484        0.0706
Chessboard                     0.07553       0.1087        0.0711        0.1046

Obviously, the calibration parameters of the two methods are very similar. Notably, the re-projection errors of the crossed fringe pattern calibration method are smaller than those of the checkerboard calibration method. The experimental results show that the proposed method has higher accuracy and better performance than the chessboard calibration method in the situation of stereo cameras with a large overlapping FOV.

5.2. Three-dimensional object reconstruction

On the basis of the first experiment, we further verify the accuracy of the crossed fringe calibration method. We reconstruct a standard ball with a radius of 12.5 mm using the calibration data from the first experiment, and use the least-squares method to fit the calculated point cloud data. Fig. 12 shows the reconstructed sphere models obtained by the two calibration methods. With the crossed fringe calibration method, the center of the reconstructed sphere is (-77.6911, 47.3050, 766.3498), the radius is 12.8549 mm, and the reconstruction error is 2.8392%. With the chessboard calibration method, the center of the reconstructed sphere is (-76.4775, 47.7434, 764.3272), the radius is 12.8795 mm, and the reconstruction error is 3.0360%. Therefore, the crossed fringe calibration method again shows higher calibration accuracy and better performance than the chessboard calibration method.

5.3. Comparison of calibration results with feature points in different regions

The crossed fringe calibration method proposed in this paper uses global coding and can select any part of the feature points for calibration. In the third experiment, we adjust the two cameras to a pose with a certain FOV overlap. Each captured image can be divided into two parts: the areas within and outside the overlapping FOV. Fig. 13 is a


Fig. 12. Reconstruction results of the standard ball. (a) Crossed fringe calibration; (b) Chessboard calibration.

Fig. 13. The schematic diagrams of different areas of the target captured by a stereo camera in a certain shooting posture. (a) and (c) are schematic diagrams of different areas of the left and right cameras; (b) and (d) are the feature points taken by the left and right cameras.

schematic diagram of the two regions of the target in the shooting attitude of the stereo cameras. Fig. 13(b) and (d) are the points captured by the left and right cameras, respectively. Since each feature point is coded, whether a point lies in the overlapping area of the FOV can be judged according to whether its coordinates in the world coordinate system are the same in both views. As shown in Fig. 13(a) and (c), red represents the FOV non-overlapping area, and blue represents the FOV overlapping area.

Then the points in the different regions are calibrated respectively. Table 6 shows the extrinsic parameters of the different areas after calibration. The experiments show that the extrinsic parameters of the different regions are similar, the method is ergodic, and any part of the feature points can be flexibly selected for calibration. Therefore, the method is also suitable for the case of a non-overlapping FOV area.

Table 6
Calibration extrinsic results of the third experiment.

Parameter            θx       θy      θz       tx      ty         tz
FOV overlapping      17.2518  0.4927  -0.3438  1.7088  -136.6721  -0.8755
FOV non-overlapping  17.5898  0.3151  -0.3094  1.5987  -136.5243  -0.0669
Full FOV area        17.6929  0.1261  -0.3094  1.6632  -136.5506  -0.4552

5.4. Calibration for non-overlapping FOV cameras

In the fourth experiment, the two cameras are adjusted to a non-overlapping field of view. As in the first experimental calibration process, the stereo cameras acquire fringe images from 10 different camera positions. The calibration parameters are shown in Table 7 and Table 8.

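Using the world coordinate system as the intermediary, the left-to-right extrinsics follow from each camera's world pose via Eq. (7), which is what makes calibration without an overlapping FOV possible; a minimal sketch:

```python
import numpy as np

def stereo_extrinsics(R_wl, t_wl, R_wr, t_wr):
    """Compose R_lr, t_lr from the world-to-camera poses of both cameras,
    Eq. (7): P_r = R_wr R_wl^T P_l + (t_wr - R_wr R_wl^T t_wl)."""
    R_lr = R_wr @ R_wl.T
    t_lr = t_wr - R_lr @ t_wl
    return R_lr, t_lr

# Toy check: a world point must map consistently through either path
Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0, 0.0, 1.0]])
R_wl, t_wl = Rz(0.1), np.array([10.0, 0.0, 500.0])
R_wr, t_wr = Rz(-0.2), np.array([-40.0, 5.0, 520.0])
R_lr, t_lr = stereo_extrinsics(R_wl, t_wl, R_wr, t_wr)
Pw = np.array([3.0, -2.0, 1.0])
Pl = R_wl @ Pw + t_wl
Pr = R_wr @ Pw + t_wr
assert np.allclose(R_lr @ Pl + t_lr, Pr)
```

In practice the per-pose world-to-camera poses come from Zhang's method applied to each camera's own feature points, and the composed extrinsics are averaged or refined over all poses.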

Table 7
Calibration intrinsic parameters of the fourth experiment.

Method                         Camera   [fu, fv]                 [u0, v0]               [k1, k2]
Marked-crossed fringe pattern  Left     [2648.5192, 2650.1752]   [534.9413, 633.2711]   [-0.1255, 0.3954]
                               Right    [2661.2514, 2662.6524]   [638.3250, 496.9087]   [-0.1217, 0.3098]

Table 8
Calibration extrinsic results of the fourth experiment.

Method                         θx      θy       θz       tx        ty         tz
Marked-crossed fringe pattern  7.9154  -0.2345  -0.4913  -20.6257  -143.9624  10.2968

The experimental results show that this method is indeed suitable for non-overlapping FOVs.

6. Conclusion

This paper presents a simple calibration method. Compared with the chessboard calibration method, the calibration of stereo cameras with a marked-crossed fringe pattern achieves higher calibration accuracy. Meanwhile, the global coding method can flexibly select any part of the feature points for calibration, thereby making the calibration process more flexible. In the case of non-overlapping FOVs, this method can also obtain the camera parameters. Overall, the proposed method has potential applications for large-range vision systems.

CRediT authorship contribution statement

Xiangcheng Chen: Conceptualization, Methodology, Writing - review & editing. Ying Chen: Writing - original draft, Visualization. Xiaokai Song: Investigation, Data curation. Wenyuan Liang: Investigation, Funding acquisition. Yuwei Wang: Conceptualization, Funding acquisition, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) (51905005, 51605130), the Natural Science Foundation of Anhui Province under Grant 2008085QF318, the Beijing Natural Science Foundation under Grant L192039, the Open Fund of the Key Laboratory of Icing and Anti-/De-icing under Grant IADL20190303, and the National Key R&D Program of China under Grant 2017YFA0701105.

References

[1] Xia R, Hu M, Zhao J, Chen S, Chen Y, Fu SP. Global calibration of non-overlapping cameras: State of the art. Optik - Int. J. Light Electron Optics 2017;158:S0030402617317977.
[2] Wang Y, Wang Y, Liu L, Chen X. Defocused camera calibration with a conventional periodic target based on Fourier transform. Opt. Lett. 2019;44:3254-7.
[3] Loaiza ME, Raposo AB, Gattass M. Multi-camera calibration based on an invariant pattern. Comput. Graphics 2011;35:198-207.
[4] Zhang J, Zhu J, Deng H, Chai Z, Ma M, Zhong X. Multi-camera calibration method based on a multi-plane stereo target. Appl. Opt. 2019;58:9353-9.
[5] An P, Liu Q, Abedi F, Yang Y. Novel calibration method for camera array in spherical arrangement. Signal Process. Image Commun. 2020;80:115682.
[6] Shen E, Hornsey R. Multi-camera network calibration with a non-planar target. IEEE Sens. J. 2011;11:2356-64.
[7] Gai S, Da F, Dai X. A novel dual-camera calibration method for 3D optical measurement. Opt. Lasers Eng. 2018;104:126-34.
[8] Liu J, Fang Z, Zhang K, Tan M. Algorithm for camera parameter adjustment in multicamera systems. Opt. Eng. 2015;54:104108.
[9] Liu X, Liu Z, Duan G, Cheng J, Jiang X, Tan J. Precise and robust binocular camera calibration based on multiple constraints. Appl. Opt. 2018;57:5130-40.
[10] Wang L, Wang W, Shen C, Duan F. A convex relaxation optimization algorithm for multi-camera calibration with 1D objects. Neurocomputing 2016;215:82-9.
[11] Liu Z, Zhang G, Wei Z, Sun J. Novel calibration method for non-overlapping multiple vision sensors based on 1D target. Opt. Lasers Eng. 2011;49:570-7.
[12] Yu J, Da F. Bi-tangent line based approach for multi-camera calibration using spheres. J. Opt. Soc. Amer. A 2018;35:221-9.
[13] Wei Z, Zhao K. Structural parameters calibration for binocular stereo vision sensors using a double-sphere target. Sensors (Basel) 2016;16:1074.
[14] Yang S, Gao Y, Liu Z, Zhang G. A calibration method for binocular stereo vision sensor with short-baseline based on 3D flexible control field. Opt. Lasers Eng. 2020;124:105817.
[15] Liu Z, Yin Y, Wu Q, Li X, Zhang G. On-site calibration method for outdoor binocular stereo vision sensors. Opt. Lasers Eng. 2016;86:75-82.
[16] Wei Z, Zou W, Zhang G, Zhao K. Extrinsic parameters calibration of multi-camera with non-overlapping fields of view using laser scanning. Opt. Express 2019;27:16719-37.
[17] Guan B, Shang Y, Yu Q. Planar self-calibration for stereo cameras with radial distortion. Appl. Opt. 2017;56:9257-67.
[18] Schmalz C, Forster F, Angelopoulou E. Camera calibration: active versus passive targets. Opt. Eng. 2011;50:113601.
[19] Huang L, Zhang Q, Asundi A. Camera calibration with active phase target: improvement on feature detection and optimization. Opt. Lett. 2013;38:1446-8.
[20] Ma M, Chen X, Wang K. Camera calibration by using fringe patterns and 2D phase-difference pulse detection. Optik - Int. J. Light Electron Optics 2014;125:671-4.
[21] Bell T, Xu J, Zhang S. Method for out-of-focus camera calibration. Appl. Opt. 2016;55:2346-52.
[22] Zhao W, Su X, Chen W. Whole-field high precision point to point calibration method. Opt. Lasers Eng. 2018;111:71-9.
[23] Xu Y, Gao F, Ren H, Zhang Z, Jiang X. An iterative distortion compensation algorithm for camera calibration based on phase target. Sensors (Basel) 2017;17:1188.
[24] Xu Y, Feng G, Zhang Z, Jiang X. A calibration method for non-overlapping cameras based on mirrored absolute phase target. Int. J. Adv. Manuf. Technol. 2017:1-7.
[25] Wang Y, Liu L, Cai B, Wang K, Chen X, Wang Y, Tao B. Stereo calibration with absolute phase target. Opt. Express 2019;27:22254-67.
[26] Barone S, Neri P, Paoli A, Razionale AV. 3D acquisition and stereo-camera calibration by active devices: A unique structured light encoding framework. Opt. Lasers Eng. 2020;127:105989.
[27] Wang Y, Chen X, Tao J, Wang K, Ma M. Accurate feature detection for out-of-focus camera calibration. Appl. Opt. 2016;55:7964-71.
[28] Cai B, Wang Y, Wang K, Ma M, Chen X. Camera calibration robust to defocus using phase-shifting patterns. Sensors 2017;17:2361.
[29] Genovese K, Chi Y, Pan B. Stereo-camera calibration for large-scale DIC measurements with active phase targets and planar mirrors. Opt. Express 2019;27:9040-53.
[30] Xue J, Su X, Xiang L, Chen W. Using concentric circles and wedge grating for camera calibration. Appl. Opt. 2012;51:3811-16.
[31] Tao J, Wang Y, Cai B, Wang K. Camera calibration with phase-shifting wedge grating array. Appl. Sci. 2018;8:644.
[32] Cai B, Wang Y, Wu J, Wang M, Li F, Ma M, Chen X, Wang K. An effective method for camera calibration in defocus scene with circular gratings. Opt. Lasers Eng. 2019;114:44-9.
[33] Liu Y, Su X. Camera calibration with planar crossed fringe patterns. Optik - Int. J. Light Electron Optics 2012;123:171-5.
[34] Li L, Zhao W, Wu F, Liu Y, Gu W. Experimental analysis and improvement on camera calibration pattern. Opt. Eng. 2014;53:013104.
[35] Juarez-Salazar R, Guerrero-Sanchez F, Robledo-Sanchez C, Gonzalez-Garcia J. Camera calibration by multiplexed phase encoding of coordinate information. Appl. Opt. 2015;54:4895-906.
[36] Herráez MA, Burton DR, Lalor MJ, Gdeisat MA. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path. Appl. Opt. 2002;41:7437-44.
[37] Zhang ZY. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000;22:1330-4.
