Received September 6, 2018, accepted September 27, 2018, date of publication October 12, 2018,
date of current version November 14, 2018.
Digital Object Identifier 10.1109/ACCESS.2018.2875760
ABSTRACT Aiming at the problem of distance measurement using key sensors and sensor networks in battlefield, border patrol, search and rescue, critical structure monitoring, and surveillance applications, a plane-space algorithm is proposed in this paper, and its range and boundary are estimated. To address how two image sensors at different positions can be used to measure the coordinates of target points, the plane-space algorithm is proposed: by comparing the target point coordinates obtained from the two image sensors, the actual coordinates of the target points are calculated. The definition domain of the acquisition values of the image sensor is taken as the domain of the plane-space algorithm. To calculate the range of the plane-space algorithm over this domain, this paper presents a range scanning algorithm and uses it to compute the range of the plane-space algorithm, obtaining a set of range points. To reduce the large amount of computation in the range scanning algorithm, the extreme value elimination method is proposed: considering the boundary points and poles of the target, each variable is eliminated at its extreme value, and the final result is obtained. Finally, the Matplotlib library based on Python 2.7 is used to simulate the algorithm. The simulation data shows that the theoretical values of the plane-space algorithm are consistent with the actual values, and the measurement boundary of the plane-space algorithm is calculated for an image sensor resolution of 480 × 640 pixels with the distance between the two sensors being 1000 pixels.
INDEX TERMS Binocular stereo vision, extreme value elimination algorithm, measurement distance, plane-space algorithm, range scanning algorithm.
E. Zhang, S. Wang: Plane-Space Algorithm Based on Binocular Stereo Vision
an onsite structure-parameter calibration method for large field of view (FOV) binocular stereovision based on a small-size 2D target. Calibration experiments show that only 5 placements of the calibration target are needed to accomplish accurate calibration of the structure parameters, and the average of 10 measurements of a 1071 mm distance is 1071.057 mm, with an RMS error of less than 0.2 mm. The measurement accuracy of the stereovision system calibrated with their method is slightly higher than that of one calibrated with the traditional method [4]. Gai adopts a combination of the epipolar constraint and the ant colony algorithm to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching range of binocular 3D reconstruction is simplified, and the convergence speed and accuracy of the stereo matching process are improved [5]. Xu et al. present a novel camera calibration method with an iterative distortion compensation algorithm; both simulation and experimental results show that the proposed calibration method is more than 100% more accurate than the conventional calibration method [6]. Jia et al. proposed an improved camera calibration method; the calibration experiments show that its accuracy can reach 99.91% [7]. Yang and Fang proposed an effective geometric camera calibration method in which the eccentricity error is compensated by establishing the correspondence between the estimated model position and the real position of the fitted center of the deformed ellipse; both simulation and measurement data verify the existence of the eccentricity error and the effectiveness of the proposed method [8]. A neural network (NN) model that considers the measurement distance in addition to the position is proposed by Chung. Experiments were conducted on a 100 × 100 mm² area, and a maximum error of 0.5 mm is observed for the mathematical model; when the NN model that also considers the height of the object is used, the error is reduced by 60%, to 0.2 mm [9]. Wang et al. proposed a new model of camera lens distortion, in which lens distortion is governed by the coefficients of radial distortion and a transform from the ideal image plane to the real sensor array plane. Experiments show that the new model corrects lens distortion about as well as the conventional model that includes radial, decentering, and prism distortion; compared with the conventional model, the new model has fewer parameters to calibrate and a more explicit physical meaning [10].

Although the binocular stereo vision algorithm has been optimized at different layers, the algorithms are not unified and cannot be used to estimate the range of the algorithm adopted. In order to accurately measure the coordinates of the target points, the Plane-Space Algorithm is proposed in this paper. In order to calculate the range of the Plane-Space Algorithm, the Range Scanning Algorithm is proposed. In view of the computational overload of the Range Scanning Algorithm, the Extreme Value Elimination method is proposed.

II. PLANE-SPACE ALGORITHM
A. MODEL OF PLANE-SPACE ALGORITHM
The model of the Plane-Space Algorithm is shown in Figure 1. The coordinates of all points are given in the spatial coordinate system. The rectangles A1B1C1D1 and A2B2C2D2 in the figure are the two photosensitive elements of the image sensors. Point O1(x_{o1}, 0, 0) and point O2(x_{o2}, 0, 0) are the intersection points of the lateral field angle of the photosensitive element and the plane of the longitudinal field angle (the quadrangle is square only in special cases; in many cases the sensor is not square, so the image is cropped before the comparison). E1 and E2 are the images of the point E on the two image sensors.

FIGURE 1. Model of plane-space algorithm.

From the preceding conditions, the coordinates of the point E(x_0, y_0, z_0) can be obtained:

    x_0 = \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_{o2} - x_{o1}) + (x_1 - x_2)}
    y_0 = \frac{(x_{o2} - x_{o1}) y_1}{(x_{o2} - x_{o1}) + (x_1 - x_2)}        (2.1)
    z_0 = \frac{(x_{o2} - x_{o1}) z_2}{(x_{o2} - x_{o1}) + (x_1 - x_2)}

The specific calculation process is as follows. The following equations can be obtained from the model described in Figure 1:

    \frac{x - x_{o1}}{x_1 - x_{o1}} = \frac{y}{y_1} = \frac{z}{z_1}, \qquad y_1 = y_2, \qquad \frac{x - x_{o2}}{x_2 - x_{o2}} = \frac{y}{y_2} = \frac{z}{z_2}        (2.2)
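To make formula (2.1) concrete, the triangulation step can be sketched in Python. This is a minimal sketch of our own (the function name and the degenerate-disparity guard are not from the paper), assuming the same symbols as Figure 1:

```python
def plane_space_point(x1, x2, z2, xo1, xo2, y1):
    """Evaluate formula (2.1): recover the spatial point E(x0, y0, z0)
    from the two sensor readings x1, x2 (plus z2), given the points
    O1(xo1, 0, 0), O2(xo2, 0, 0) and the sensor-plane depth y1 (= y2)."""
    d = (xo2 - xo1) + (x1 - x2)  # common denominator of (2.1)
    if d == 0:
        # zero denominator means the two lines of sight never intersect
        raise ValueError("degenerate configuration: zero disparity denominator")
    x0 = (x1 * xo2 - x2 * xo1) / d
    y0 = (xo2 - xo1) * y1 / d
    z0 = (xo2 - xo1) * z2 / d
    return x0, y0, z0
```

For example, with xo1 = 0, xo2 = 1000, y1 = 10, the readings x1 = 20, x2 = 980, z2 = 5 give the point (500, 250, 125), as direct substitution into (2.1) confirms.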
that is,

    \frac{x - x_{o1}}{x_1 - x_{o1}} = \frac{y}{y_1}, \qquad \frac{x - x_{o2}}{x_2 - x_{o2}} = \frac{y}{y_1}, \qquad \frac{y}{y_1} = \frac{z}{z_2}        (2.3)

Solving the equations,

    x = \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_{o2} - x_{o1}) + (x_1 - x_2)}
    y = \frac{(x_{o2} - x_{o1}) y_1}{(x_{o2} - x_{o1}) + (x_1 - x_2)}        (2.4)
    z = \frac{(x_{o2} - x_{o1}) z_2}{(x_{o2} - x_{o1}) + (x_1 - x_2)}

which equals

    x_0 = \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_{o2} - x_{o1}) + (x_1 - x_2)}
    y_0 = \frac{(x_{o2} - x_{o1}) y_1}{(x_{o2} - x_{o1}) + (x_1 - x_2)}        (2.5)
    z_0 = \frac{(x_{o2} - x_{o1}) z_2}{(x_{o2} - x_{o1}) + (x_1 - x_2)}

B. DOMAIN OF THE ALGORITHM
Formula (2.1) shows that the input variables of the Plane-Space Algorithm are x_1, x_2, and z_2. The values of these variables depend on the resolution of the image sensor. The definition domain of the Plane-Space Algorithm is the set of all values that the image sensor can collect. Therefore, the range of the Plane-Space Algorithm is a set of discrete points.

The principle of the image sensor's photoreceptor imaging is to map the actual image formed by the optical system onto the photosensitive element and to transform the light signal into an electrical signal to achieve imaging; however, a certain error is introduced in the process of converting the optical signal into the electrical signal. The schematic diagram of the structure of the photosensitive element is shown in Figure 2. When the size of these elements is very small and hard for the naked eye to distinguish, the human brain understands this approximate set of points as an image. In this way, the image sensor can collect the image through photoelectric conversion and transfer the final information to the display. As long as the photosensitive element is small enough, the image is indistinguishable from the actual object to the naked eye. But from the machine's point of view, the image suffers a great loss: the working principle of the image sensor determines that the coordinates of the collected points can only be integer multiples of a photosensitive element. There are therefore many limitations when measuring a target with the binocular vision principle of the image sensor, mainly manifested in the fact that only a finite set of target points can be measured.

III. RANGE SCANNING ALGORITHM
There is no specific formula for this calculation method; it is a calculation idea and does not need to be derived from a formula, because the range is a set of values, and this range actually exists. Take the binocular stereo vision ranging device shown in Figure 1 as an example; the specific method is as follows.

First, determine the ranges of values of x_1, x_2, and z_2; since these values are discrete, all possible values can be listed. Three sets x_{R1}, x_{R2}, z_{R2} are then composed, and these values are substituted into the solution of the Plane-Space Algorithm. A set of points is obtained; this set is the measurement range of the binocular vision system.

Next, the derivation is explained. The binocular vision system performs the measurement by parallax, and the image sensor determines the parallax through the difference of pixel coordinates. But the measurement value of the image sensor can only be an integer multiple of the pixel size, so the parallax of the binocular vision system can only be an integer number of pixels. It follows that any image sensor can measure only a finite number of values, and all of these measurements can be enumerated. The set of these results is the measurement range of the binocular vision system. That is to say, the binocular vision system can only measure these points, and if a measured value is not one of these points, the measurement must be wrong.
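The enumeration described above can be sketched in a few lines of Python. This is our own sketch (function and parameter names are illustrative, not from the paper), assuming formula (2.1) and integer-valued sensor readings:

```python
from itertools import product

def scan_range(xo1, xo2, y1, x1_vals, x2_vals, z2_vals):
    """Range Scanning Algorithm sketch: enumerate every discrete sensor
    reading (x1, x2, z2), push it through formula (2.1), and collect the
    resulting points. The returned set is the measurement range of the
    binocular vision system."""
    points = set()
    for x1, x2, z2 in product(x1_vals, x2_vals, z2_vals):
        d = (xo2 - xo1) + (x1 - x2)
        if d == 0:  # skip degenerate readings (lines of sight never meet)
            continue
        points.add(((x1 * xo2 - x2 * xo1) / d,
                    (xo2 - xo1) * y1 / d,
                    (xo2 - xo1) * z2 / d))
    return points
```

Because the inputs are finite integer grids, the output is a finite point set, which is exactly the claim made above: any target point outside this set cannot be a valid measurement.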
The value range of each coordinate is discussed according to formula (2.1). x_{o1}, x_{o2}, y_1 are constants, with x_{o2} > x_{o1}. The ranges of x_1 and x_2 are [a, b] and [a + l, b + l], respectively, the range of z_2 is [−w, w], and l > b > a. It should be noted that these ranges are boundary values and the variables can only take integer values. Take x_0, for example: the range of x_0 is judged from

    x_0 = \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_{o2} - x_{o1}) + (x_1 - x_2)}.

Since there are two unknown quantities with independent ranges of values, this is an extreme value problem for a function of two variables. Although it is a constrained problem, the constraint is not a curve or a relation, so the Lagrange multiplier method cannot be used. Therefore, a new algorithm, the Extreme Value Elimination method, is developed to solve this problem. The principle of extreme value elimination is to eliminate the variables of the multivariate function at their extremum or boundary values, so that the extreme value problem of the multivariate function is transformed into extreme value problems of functions of fewer variables; the extreme values of all these functions are the extreme values of the multivariate function. In addition, the poles need to be considered. The specific calculation process is described in detail in Section IV.B.

B. CALCULATING THE RANGE OF THE PLANE-SPACE ALGORITHM USING THE EXTREME VALUE ELIMINATION ALGORITHM
1) THE CASE WHERE THE POLES OF A COLLECTION POINT ARE NOT CONSIDERED
In the Plane-Space Algorithm model shown in Figure 1, the coordinates of the target point P(x_0, y_0, z_0) can be obtained from formula (2.1). x_{o1}, x_{o2}, y_1 are constants, with x_{o2} > x_{o1}. The ranges of x_1 and x_2 are [a, b] and [a + l, b + l], and the range of z_2 is [−w, w]. Again, the ranges are boundary values and the variables can only take integer values. The Extreme Value Elimination Algorithm is used to calculate the boundary values of x_0, y_0, z_0.

Calculate the range of x_0, eliminating x_1 first and then x_2. Taking the partial derivative with respect to x_1,

    \frac{\partial x_0}{\partial x_1} = \frac{x_{o2}}{x_1 - x_2 - x_{o1} + x_{o2}} - \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_1 - x_2 - x_{o1} + x_{o2})^2}        (4.1)

Let

    \frac{\partial x_0}{\partial x_1} = 0,        (4.2)

that is,

    \frac{x_{o2}}{x_1 - x_2 - x_{o1} + x_{o2}} - \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_1 - x_2 - x_{o1} + x_{o2})^2} = 0.        (4.3)

There is no solution to this equation in the definition domain. This shows that when x_1 is a variable and x_2 is a constant (x_2 can take any value in the definition domain), the extreme value of x_0 lies on the boundary of the definition domain. When x_1 is fixed at the corresponding boundary value, the variable x_1 is removed and a function T(x) is obtained, in which the remaining variables are still present. T(x) is named the extreme value of the parameter.

Extreme value of the parameter 1:

    x_0 |_{x_1 = a} = \frac{a x_{o2} - x_2 x_{o1}}{a - x_2 - x_{o1} + x_{o2}}        (4.4)

Extreme value of the parameter 2:

    x_0 |_{x_1 = b} = \frac{b x_{o2} - x_2 x_{o1}}{b - x_2 - x_{o1} + x_{o2}}        (4.5)

Next, take the partial derivative with respect to x_2:

    \frac{\partial x_0 |_{x_1 = a}}{\partial x_2} = -\frac{x_{o1}}{a - x_2 - x_{o1} + x_{o2}} + \frac{a x_{o2} - x_2 x_{o1}}{(a - x_2 - x_{o1} + x_{o2})^2}        (4.6)

    \frac{\partial x_0 |_{x_1 = b}}{\partial x_2} = -\frac{x_{o1}}{b - x_2 - x_{o1} + x_{o2}} + \frac{b x_{o2} - x_2 x_{o1}}{(b - x_2 - x_{o1} + x_{o2})^2}        (4.7)

Let

    \frac{\partial x_0 |_{x_1 = a}}{\partial x_2} = 0        (4.8)

    \frac{\partial x_0 |_{x_1 = b}}{\partial x_2} = 0        (4.9)

There is no solution to these equations, so the extreme values of the parameters with respect to x_2 also lie at the boundary points of the range. Then,

    x_0 |_{x_1 = a, x_2 = a + l} = \frac{a x_{o2} - x_{o1}(a + l)}{-l - x_{o1} + x_{o2}}        (4.10)

    x_0 |_{x_1 = a, x_2 = b + l} = \frac{a x_{o2} - x_{o1}(b + l)}{a - b - l - x_{o1} + x_{o2}}        (4.11)

    x_0 |_{x_1 = b, x_2 = a + l} = \frac{b x_{o2} - x_{o1}(a + l)}{-a + b - l - x_{o1} + x_{o2}}        (4.12)

    x_0 |_{x_1 = b, x_2 = b + l} = \frac{b x_{o2} - x_{o1}(b + l)}{-l - x_{o1} + x_{o2}}        (4.13)

The maximum and minimum of these four solutions are the maximum and minimum values of x_0.

Then calculate the range of y_0, again eliminating x_1 first and then x_2:

    \frac{\partial y_0}{\partial x_1} = -\frac{y_1 (-x_{o1} + x_{o2})}{(x_1 - x_2 - x_{o1} + x_{o2})^2}        (4.14)

Let

    \frac{\partial y_0}{\partial x_1} = 0.        (4.15)

There is no solution to this equation, so the extreme values of the parameter are

    y_0 |_{x_1 = a} = \frac{y_1 (-x_{o1} + x_{o2})}{a - x_2 - x_{o1} + x_{o2}}        (4.16)

    y_0 |_{x_1 = b} = \frac{y_1 (-x_{o1} + x_{o2})}{b - x_2 - x_{o1} + x_{o2}}        (4.17)

And

    \frac{\partial y_0 |_{x_1 = a}}{\partial x_2} = \frac{y_1 (-x_{o1} + x_{o2})}{(a - x_2 - x_{o1} + x_{o2})^2}        (4.18)
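The boundary formulas (4.10)-(4.13) can be checked numerically: since (4.3) and (4.8)-(4.9) admit no interior solutions, the extrema of x_0 over the integer grid should coincide with the four corner values. A sketch of this check, under assumed illustrative parameters (a, b, l, x_{o1}, x_{o2} chosen by us so that x_{o2} > x_{o1}, l > b > a, and the denominator never vanishes on the grid):

```python
def x0(x1, x2, xo1, xo2):
    """x0 component of formula (2.1)."""
    return (x1 * xo2 - x2 * xo1) / (x1 - x2 - xo1 + xo2)

# assumed illustrative parameters, not taken from the paper's experiments
a, b, l, xo1, xo2 = 0, 48, 1048, 24, 1000

# the four corner values given by (4.10)-(4.13)
corners = [x0(x1, x2, xo1, xo2)
           for x1 in (a, b) for x2 in (a + l, b + l)]

# brute force over the whole integer grid [a, b] x [a + l, b + l]
grid = [x0(x1, x2, xo1, xo2)
        for x1 in range(a, b + 1) for x2 in range(a + l, b + l + 1)]

# the extrema of the grid are attained at the corners, as claimed
assert min(grid) == min(corners) and max(grid) == max(corners)
```

The check passes because, on this domain, x_0 has no interior critical points, so the extreme values can only occur on the boundary, and then only at its corners.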
1) IDEAL STATE
The ideal state is the state in which the image sensor collects the collection points without error. Here, x_0 ∈ [25, 30], y_0 ∈ [20, 25], z_0 ∈ [50, 65], and the number of collection points is 100; the coordinate performance curve is then drawn, as shown in Figure 3. Figure 3 shows that the coordinate performance curves of the three coordinates are all straight lines with a slope of 1 and an intercept of 0.

2) NON-IDEAL STATE
The non-ideal state is the state in which the image sensor collects the collection points with errors. The reason for the error is that the image sensor can only acquire coordinate values that are integer multiples of a pixel. Here, x_0 ∈ [25, 30], y_0 ∈ [20, 25], z_0 ∈ [50, 65], the number of collection points is 100, and the measurement values of the image sensor are rounded. The coordinate performance curve is shown in Figure 4.

From Figure 4, the coordinate performance curve under the non-ideal condition is not very satisfactory. The point P is relatively small at this scale, so the region is enlarged to obtain the image shown in Figure 5 (again with x_0 ∈ [25, 30], y_0 ∈ [20, 25], z_0 ∈ [50, 65] and 100 collection points).

FIGURE 5. Coordinate performance curve when enlarging the non-ideal state of collection point.

As shown in Figure 5, the coordinate performance curve is greatly improved. Thus, the more dispersed the target is, the better the algorithm performs. In the actual working environment, the target points are usually distributed in a more dispersed way, so the Plane-Space Algorithm meets the requirements. In summary, the simulation data shows that the Plane-Space Algorithm is reasonable.

B. SIMULATION OF RANGE SCANNING ALGORITHM
The purpose of the Plane-Space Algorithm is to measure distance. The distance between the target point and the measuring device is mainly reflected in the ordinate of the target point. Therefore, the performance of the algorithm can be evaluated by analyzing the longitudinal coordinates of the simulation results.

The binocular stereo vision ranging model used is shown in Figure 1. Take O_1 = (24, 0, 0), O_2 = (1072, 0, 0), x_1 ∈ (0, 48), x_2 ∈ (1048, 1096), z_2 ∈ (0, 48), y_2 = 10, and draw the calculated points in a three-dimensional coordinate system, as shown in Figure 6. It can be seen from the simulated image that the ordinates of the measurable target points mainly lie in the interval [−16000, 16000]. Considering the actual situation, the longitudinal coordinates of the target points calculated by the algorithm cannot be negative; the negative values in the image are the calculated results of the impossible
FIGURE 9. The result of simulation when x_{o1} = 320, x_{o2} = 3640, y_1 = 879.2, z_2 = 130, with the poles considered.

The ranges of the x, y, and z coordinates of the target points at this time all lie within the range calculated by the Extreme Value Elimination method. Given the above, it is reasonable to measure the range of the Plane-Space Algorithm by using the Extreme Value Elimination Algorithm.

VI. CONCLUSION
Aiming at the problem of how to use two image sensors at different positions to measure the coordinates of target points, the Plane-Space Algorithm is proposed. According to the structural characteristics of the photosensitive element of the image sensor, the method of calculating the Plane-Space Algorithm is given, and the simulation results show that the Plane-Space Algorithm is reasonable. In order to study the range of the Plane-Space Algorithm, the Range Scanning Algorithm is proposed, and the factors that affect the performance of the algorithm are explored through simulation. The conclusion is that the larger the distance between the two image sensors, the farther the distance of the target points that can be measured. In order to reduce the computational load of the Range Scanning Algorithm, the Extreme Value Elimination method is proposed to calculate the measurement boundary of the Plane-Space Algorithm. The simulation data shows that the Extreme Value Elimination method meets the requirements of the measurement range of the Plane-Space Algorithm.

REFERENCES
[1] T. Junchao and Z. Liyan, ''Effective data-driven calibration for a galvanometric laser scanning system using binocular stereo vision,'' Sensors, vol. 18, no. 1, p. 197, 2018.
[2] W. Zhenzhong and Z. Kai, ''Structural parameters calibration for binocular stereo vision sensors using a double-sphere target,'' Sensors, vol. 16, no. 7, p. 1074, 2016.
[3] Z. Liu, S. Wu, Y. Yin, and J. Wu, ''Calibration of binocular vision sensors based on unknown-sized elliptical stripe images,'' Sensors, vol. 17, no. 12, p. 2873, 2017.
[4] Z. Wang, Z. Wu, X. Zhen, R. Yang, and J. Xi, ''An onsite structure parameters calibration of large FOV binocular stereovision based on small-size 2D target,'' Optik—Int. J. Light Electron Opt., vol. 124, no. 21, pp. 5164–5169, 2013.
[5] Q. Gai, ''Optimization of stereo matching in 3D reconstruction based on binocular vision,'' J. Phys., Conf., vol. 960, no. 1, p. 012029, 2018.
[6] Y. Xu, F. Gao, H. Ren, Z. Zhang, and X. Jiang, ''An iterative distortion compensation algorithm for camera calibration based on phase target,'' Sensors, vol. 17, no. 6, p. 1188, 2017.
[7] Z. Jia et al., ''Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system,'' Opt. Express, vol. 23, no. 12, pp. 15205–15223, 2015.
[8] X. Yang and S. Fang, ''Eccentricity error compensation for geometric camera calibration based on circular features,'' Meas. Sci. Technol., vol. 25, no. 2, pp. 149–156, 2014.
[9] B.-M. Chung, ''Neural-network model for compensation of lens distortion in camera calibration,'' Int. J. Precis. Eng. Manuf., vol. 19, no. 7, pp. 959–966, 2018.
[10] J. Wang, F. Shi, J. Zhang, and Y. Liu, ''A new calibration model of camera lens distortion,'' Pattern Recognit., vol. 41, no. 2, pp. 607–615, 2008.

ENSHUO ZHANG was born in Chifeng, Inner Mongolia, China, in 1992. He received the bachelor's degree from the Inner Mongolia University of Technology, Hohhot, China, in 2014, and the master's degree from Inner Mongolia University, Hohhot, China, in 2018.
He studied binocular stereo vision-based ranging algorithms at Inner Mongolia University between 2015 and 2018, during which he published two papers and applied for two Chinese patents.

SHUBIN WANG was born in Chifeng, Inner Mongolia, China, in 1971. He received the B.S. and M.S. degrees in physics and electronic engineering from Inner Mongolia University, China, in 1996 and 2003, respectively, and the Ph.D. degree from the Beijing University of Posts and Telecommunications in 2010.
From 2009 to 2010, he was a Research Fellow supervised by Prof. K. S. Kwak at Inha University, Incheon, South Korea. Since 2013, he has been a Professor with the College of Electronic Information Engineering, Inner Mongolia University. He is the author of over 40 articles and over 7 inventions, and holds four Chinese patents. His research interests include wireless sensor networks, cognitive radio networks, machine vision, binocular stereo vision, and signal processing.
Dr. Wang was a recipient of the eighth China Inner Mongolia Autonomous Region Youth Science and Technology Award in 2011, the Inner Mongolia Autonomous Region Science and Technology Progress Award in 2009, and the CSPS Best Symposium Paper Award in 2014.