
SPECIAL SECTION ON MISSION CRITICAL SENSORS AND SENSOR NETWORKS (MC-SSN)

Received September 6, 2018, accepted September 27, 2018, date of publication October 12, 2018,
date of current version November 14, 2018.
Digital Object Identifier 10.1109/ACCESS.2018.2875760

Plane-Space Algorithm Based on Binocular Stereo Vision With Its Estimation of Range and Measurement Boundary
ENSHUO ZHANG AND SHUBIN WANG
College of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China
Corresponding author: Shubin Wang (wangshubin@imu.edu.cn)
This work was supported in part by the National Natural Science Foundation of China under Grant 61761034 and Grant 61261020,
and in part by the Natural Science Foundation of Inner Mongolia, China under Grant 2016MS0616.

ABSTRACT Aiming at the problem of distance measurement using key sensors and sensor networks in battlefield, border patrol, search and rescue, critical structure monitoring, and surveillance applications, a plane-space algorithm is proposed in this paper, and its range and measurement boundary are estimated. To address how two image sensors at different positions can be used to measure the coordinates of target points, the plane-space algorithm compares the target point coordinates acquired by the two image sensors and calculates the actual coordinates of the target points. The definition domain of the acquisition values of the image sensor is taken as the domain of the plane-space algorithm. To calculate the range of the plane-space algorithm over this domain, this paper presents a range scanning algorithm, uses it to compute the range of the plane-space algorithm, and obtains a set of range points. To reduce the large computational load of the range scanning algorithm, the extreme value elimination method is proposed: considering the boundary points and poles of the variables, each variable is eliminated at its extreme or boundary value until the final boundary is obtained. Finally, the algorithm is simulated with the Matplotlib library under Python 2.7. The simulation data show that the theoretical values of the plane-space algorithm are consistent with the actual values, and the measurement boundary of the plane-space algorithm is calculated for an image sensor resolution of 480 × 640 pixels with a sensor spacing of 1000 pixels.

INDEX TERMS Binocular stereo vision, extreme value elimination algorithm, measurement distance, plane-space algorithm, range scanning algorithm.

I. INTRODUCTION
In the fields of battlefield operations, border patrol, search and rescue, critical structure monitoring, and surveillance, key sensors and sensor networks are needed to measure targets. A device based on a binocular stereo vision algorithm can meet these requirements, but the working range of the device needs to be estimated before the equipment is used, so it is necessary to study both the target measurement algorithm and the range or boundary of the binocular stereo vision algorithm.
The core of a binocular stereo vision algorithm is to measure the coordinates of target points. At present, many scholars have studied binocular stereo vision ranging algorithms. Junchao Tu et al. proposed a method to calibrate a laser scanning system using a binocular stereo vision algorithm. A calibration experiment, a projection experiment, and a 3D reconstruction experiment are carried out respectively; the validity of the method is verified, and the availability of the binocular stereo vision algorithm is confirmed [1]. Zhenzhong and Kai start with the calibration of the structural parameters of a binocular stereo vision sensor (BSVS) and propose a method to calibrate the structural parameters of a BSVS based on a double-sphere target. Experimental results on real data show that the accuracy of measurement is higher than 0.9%, with a distance of 800 mm and a view field of 250 × 200 mm² [2]. Liu et al. proposed a calibration method based on unknown-sized elliptical stripe images. Simulative and physical experiments are conducted to validate the efficiency of the proposed method. When the field of view is approximately 400 mm × 300 mm, the proposed method can reach a calibration accuracy of 0.03 mm, which is comparable with that of Zhang's method [3]. Wang et al. proposed


an onsite structure-parameter calibration method for large field of view (FOV) binocular stereovision based on a small-size 2D target. Calibration experiments show that only 5 placements of the calibration target can accomplish the accurate calibration of the structure parameters, and the average value of 10 measurements of a distance of 1071 mm is 1071.057 mm with an RMS error of less than 0.2 mm. The measurement accuracy of the stereovision system calibrated with their method is a little higher than that of the one calibrated with the traditional method [4]. Gai adopts a combination of the polar constraint and the ant colony algorithm to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision. The simulation results show that, by combining the advantages of the polar constraint and the ant colony algorithm, the stereo matching range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved [5]. Xu et al. present a novel camera calibration method with an iterative distortion compensation algorithm; both the simulation work and the experimental results show that the proposed calibration method is more than 100% more accurate than the conventional calibration method [6]. Jia et al. proposed an improved camera calibration method; the results of calibration experiments show that the accuracy of the calibration method can reach 99.91% [7]. Yang and Fang proposed an effective geometric camera calibration method in which the eccentricity error is compensated by establishing the correspondence between the estimated model position and the real position of the fitted center of the deformed ellipse; both simulation and measurement data verify the existence of the eccentricity error and the effectiveness of the proposed method [8]. A neural network (NN) model that considers the measurement distance in addition to the position is proposed by Chung. Experiments were conducted on a 100 × 100 mm² area, and a maximum error of 0.5 mm is observed for the mathematical model; however, when the NN model considering the height of the object is used, the error is reduced by 60% to 0.2 mm [9]. Wang et al. proposed a new model of camera lens distortion, according to which lens distortion is governed by the coefficients of radial distortion and a transform from the ideal image plane to the real sensor array plane. Experiments show that the new model has about the same correcting effect upon lens distortion as the conventional model including all of the radial distortion, decentering distortion, and prism distortion; compared with the conventional model, the new model has fewer parameters to be calibrated and a more explicit physical meaning [10].
Although the binocular stereo vision algorithm has been optimized at different levels, the algorithm is not unified and cannot be used to estimate the range of the algorithm adopted. In order to accurately measure the coordinates of the target points, the Plane-Space Algorithm is proposed in this paper. In order to calculate the range of the Plane-Space Algorithm, the Range Scanning Algorithm is proposed. In view of the heavy computational load of calculating the range with the Range Scanning Algorithm, the Extreme Value Elimination method is proposed.

II. PLANE-SPACE ALGORITHM
A. MODEL OF PLANE-SPACE ALGORITHM
The model of the Plane-Space Algorithm is shown in Figure 1. The coordinates of all points are given in the spatial coordinate system. The rectangles A1B1C1D1 and A2B2C2D2 in the figure are the two photosensitive elements of the image sensors. Point O1(xo1, 0, 0) and point O2(xo2, 0, 0) are the intersection points of the lateral field angle of each photosensitive element with the plane of the longitudinal field angle (only when the quadrangle is square; in many cases the sensor is not square, so the image is cropped before the comparison). E1 and E2 are the images of the point E on the two image sensors.

FIGURE 1. Model of plane-space algorithm.

From the preceding conditions, the coordinates of the point E(x0, y0, z0) can be obtained:

  x_0 = \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_{o2} - x_{o1}) + (x_1 - x_2)}
  y_0 = \frac{(x_{o2} - x_{o1}) y_1}{(x_{o2} - x_{o1}) + (x_1 - x_2)}        (2.1)
  z_0 = \frac{(x_{o2} - x_{o1}) z_2}{(x_{o2} - x_{o1}) + (x_1 - x_2)}

The specific calculation process is as follows. The following equations can be obtained from the model described in Figure 1:

  \frac{x - x_{o1}}{x_1 - x_{o1}} = \frac{y}{y_1} = \frac{z}{z_1}
  y_1 = y_2                                                                  (2.2)
  \frac{x - x_{o2}}{x_2 - x_{o2}} = \frac{y}{y_2} = \frac{z}{z_2}

that is,

  \frac{x - x_{o1}}{x_1 - x_{o1}} = \frac{y}{y_1}
  \frac{x - x_{o2}}{x_2 - x_{o2}} = \frac{y}{y_2}                            (2.3)
  \frac{y}{y_1} = \frac{z}{z_2}

Solving these equations gives

  x = \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_{o2} - x_{o1}) + (x_1 - x_2)}
  y = \frac{(x_{o2} - x_{o1}) y_1}{(x_{o2} - x_{o1}) + (x_1 - x_2)}          (2.4)
  z = \frac{(x_{o2} - x_{o1}) z_2}{(x_{o2} - x_{o1}) + (x_1 - x_2)}

That is,

  x_0 = \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_{o2} - x_{o1}) + (x_1 - x_2)}
  y_0 = \frac{(x_{o2} - x_{o1}) y_1}{(x_{o2} - x_{o1}) + (x_1 - x_2)}        (2.5)
  z_0 = \frac{(x_{o2} - x_{o1}) z_2}{(x_{o2} - x_{o1}) + (x_1 - x_2)}
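To make formula (2.5) concrete, the following Python sketch (our own illustration, not code supplied with the paper; the function name and the commented example numbers are assumptions) evaluates the plane-space coordinates from one pair of acquisition values:

```python
def plane_space_point(x1, x2, z2, y1, xo1, xo2):
    """Evaluate formula (2.5)/(2.1): recover the target point P(x0, y0, z0)
    from the acquisition values x1, x2, z2, the sensor-plane coordinate y1
    (y1 = y2 in the model), and the optical-centre abscissas xo1 < xo2."""
    d = (xo2 - xo1) + (x1 - x2)           # common denominator of (2.5)
    if d == 0:
        # Pole of the algorithm: the parallax exactly cancels the baseline.
        raise ZeroDivisionError("plane-space pole: (xo2-xo1)+(x1-x2) == 0")
    x0 = (x1 * xo2 - x2 * xo1) / float(d)
    y0 = (xo2 - xo1) * y1 / float(d)
    z0 = (xo2 - xo1) * z2 / float(d)
    return x0, y0, z0

# Assumed example values only:
# print(plane_space_point(x1=24.5, x2=548.5, z2=25.0, y1=10.0, xo1=24.0, xo2=1072.0))
```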
B. DOMAIN OF THE ALGORITHM
Formula (2.1) shows that the input variables of the Plane-Space Algorithm are x1, x2, and z2. The values of these variables depend on the resolution of the image sensor. The definition domain of the Plane-Space Algorithm is the set of all values that can be collected by the image sensor. Therefore, the range of the Plane-Space Algorithm is a discrete set of points.
The principle of the image sensor's photoreceptor imaging is to map the actual image formed by the optical system of the image sensor onto the photosensitive element and to transform the light signal into an electrical signal to achieve imaging; however, a certain error is introduced in the process of converting the optical signal into the electrical signal. A schematic diagram of the structure of the photosensitive element is shown in Figure 2.

FIGURE 2. A schematic diagram of a photosensitive original of an image sensor.

In Figure 2, the photosensitive element converts the real image formed by the optical system of the image sensor into a set of points. The point is only an abstraction; in the actual device it is a phase element (a pixel), that is, a region. When these elements are very small and hard for the naked eye to distinguish, the human brain perceives this approximate set of points as an image. In this way, the image sensor can collect the image through photoelectric conversion and transfer the final information to the display. As long as the phase element is small enough, the image is indistinguishable from the actual object to the naked eye. From the point of view of the machine, however, the image suffers a great loss, because the working principle of the image sensor determines that the coordinates of the collected points can only be integer multiples of the phase element. There are therefore many limitations when measuring a target with the binocular vision principle using image sensors, which is mainly manifested in the fact that only a limited number of target points can be measured.

III. RANGE SCANNING ALGORITHM
There is no specific formula for the calculation method here; it is only a calculation procedure and does not need to be derived from a formula, because the range is a set of values, and this set actually exists. Taking the binocular stereo vision ranging device shown in Figure 1 as an example, the specific method is as follows:
First, determine the ranges of values of x1, x2, and z2; note that these values are discrete, so all possible values can be listed. Three sets xR1, xR2, and zR2 are then composed, and each combination of these values is substituted into the solution of the Plane-Space Algorithm. A set of points is obtained, and this set is the measurement range of the binocular vision system.
Next, the reasoning is explained. The binocular vision system performs the measurement by parallax, and the image sensor determines the parallax through the difference of pixel coordinates. But the measured value of the image sensor is only an integer multiple of the pixel size, so the parallax of the binocular vision system can only be an integer number of pixels. It follows that any image sensor can measure only a finite number of values, and all of these measurements can be enumerated. The set of these results is the measurement range of the binocular vision system. That is to say, the binocular vision system can only measure these points; if a measured value is not one of these points, the measurement must be wrong.
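A rough sketch of this enumeration is given below (our own illustrative Python, not the authors' implementation; the pixel ranges in the commented example are assumptions in the spirit of the simulation in Section V-B):

```python
def range_scan(x1_values, x2_values, z2_values, y1, xo1, xo2):
    """Enumerate every target point the binocular system can report.

    The inputs are the discrete, integer-pixel values each image sensor can
    return; every combination is substituted into the plane-space solution
    (formula (2.1)), and the resulting point set is the measurement range."""
    points = set()
    for x1 in x1_values:
        for x2 in x2_values:
            for z2 in z2_values:
                d = (xo2 - xo1) + (x1 - x2)
                if d == 0:               # skip the pole of the formula
                    continue
                points.add(((x1 * xo2 - x2 * xo1) / float(d),
                            (xo2 - xo1) * y1 / float(d),
                            (xo2 - xo1) * z2 / float(d)))
    return points

# Assumed example: two 48-pixel-wide sensors roughly 1000 pixels apart.
# pts = range_scan(range(0, 48), range(1048, 1096), range(0, 48),
#                  y1=10, xo1=24, xo2=1072)
```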
IV. EXTREME VALUE ELIMINATION ALGORITHM
A. INTRODUCTION OF ALGORITHM
In the Plane-Space Algorithm model shown in Figure 1, the coordinates of the target point P(x0, y0, z0) can be obtained from formula (2.1), and this conclusion holds for continuous acquisition points. But the set of measurable values of the image sensor is limited, because the coordinates of the points obtained by phase measurement are integers. That is to say, when the acquisition point of the image sensor is continuous, the point E can be at infinity. The measuring range of the binocular stereo vision measuring device can be determined according to the parameters of the image sensor.




The range of each coordinate is now discussed according to formula (2.1). xo1, xo2, and y1 are constant, with xo2 > xo1. The ranges of values of x1 and x2 are [a, b] and [a + l, b + l], respectively, the range of z2 is [−w, w], and l > b > a. It should be noted that these bounds are boundary values and can only be integers. Take x0 as an example, and judge its range of values from x0 = (x1·xo2 − x2·xo1)/((xo2 − xo1) + (x1 − x2)). Since there are two unknown quantities with independent ranges of values, this is an extreme value problem for a function of two variables; although it is a constrained problem, the constraint is not a curve or a relation, so the Lagrange multiplier method cannot be used to solve it. Therefore, we need to develop a new algorithm, the extreme value elimination method, to solve this problem. The principle of extreme value elimination is to eliminate the variables of the multivariate function one by one at their extremum or boundary values, so that the extreme value problem of the multivariate function is transformed into extreme value problems of functions with fewer variables. The extreme values of all of these functions give the extreme value of the multivariate function. In addition to this, we need to consider the poles. The specific calculation process is described in detail in Section IV.B.

B. CALCULATING THE RANGE OF PLANE SPACE ALGORITHM USING EXTREME VALUE ELIMINATION ALGORITHM
1) THE CASE OF THE POLES OF A COLLECTION POINT IS NOT CONSIDERED
In the Plane-Space Algorithm model shown in Figure 1, the coordinates of the target point P(x0, y0, z0) are obtained from formula (2.1). xo1, xo2, and y1 are constant, with xo2 > xo1. The ranges of x1 and x2 are [a, b] and [a + l, b + l], and the range of z2 is [−w, w]. It should be noted that these bounds are boundary values and can only be integers. The Extreme Value Elimination Algorithm is used to calculate the boundary values of x0, y0, and z0.
Calculate the range of x0, eliminating x1 first and x2 afterwards. Taking the partial derivative with respect to x1,

  \frac{\partial x_0}{\partial x_1} = \frac{x_{o2}}{x_1 - x_2 - x_{o1} + x_{o2}} - \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_1 - x_2 - x_{o1} + x_{o2})^2}    (4.1)

Let

  \frac{\partial x_0}{\partial x_1} = 0    (4.2)

that is,

  \frac{x_{o2}}{x_1 - x_2 - x_{o1} + x_{o2}} - \frac{x_1 x_{o2} - x_2 x_{o1}}{(x_1 - x_2 - x_{o1} + x_{o2})^2} = 0    (4.3)

There is no solution to this equation. This shows that when x1 is a variable and x2 is a constant (x2 can take any value in the definition domain), the extreme of x0 is attained on the boundary of the definition domain. When x1 is fixed at the corresponding boundary value, the variable x1 is removed and a value T(x) is obtained, in which the remaining variables are still present. T(x) is named the extreme value of the parameter.
Extreme value of the parameter 1:

  x_0|_{x_1=a} = \frac{a x_{o2} - x_2 x_{o1}}{a - x_2 - x_{o1} + x_{o2}}    (4.4)

Extreme value of the parameter 2:

  x_0|_{x_1=b} = \frac{b x_{o2} - x_2 x_{o1}}{b - x_2 - x_{o1} + x_{o2}}    (4.5)

Next, take the partial derivative with respect to x2:

  \frac{\partial (x_0|_{x_1=a})}{\partial x_2} = -\frac{x_{o1}}{a - x_2 - x_{o1} + x_{o2}} + \frac{a x_{o2} - x_2 x_{o1}}{(a - x_2 - x_{o1} + x_{o2})^2}    (4.6)

  \frac{\partial (x_0|_{x_1=b})}{\partial x_2} = -\frac{x_{o1}}{b - x_2 - x_{o1} + x_{o2}} + \frac{b x_{o2} - x_2 x_{o1}}{(b - x_2 - x_{o1} + x_{o2})^2}    (4.7)

Let

  \frac{\partial (x_0|_{x_1=a})}{\partial x_2} = 0    (4.8)

  \frac{\partial (x_0|_{x_1=b})}{\partial x_2} = 0    (4.9)

There is no solution to these equations, so the extreme values of the parameters with respect to x2 are also attained at the boundary points of the range. Then,

  x_0|_{x_1=a, x_2=a+l} = \frac{a x_{o2} - x_{o1}(a + l)}{-l - x_{o1} + x_{o2}}    (4.10)

  x_0|_{x_1=a, x_2=b+l} = \frac{a x_{o2} - x_{o1}(b + l)}{a - b - l - x_{o1} + x_{o2}}    (4.11)

  x_0|_{x_1=b, x_2=a+l} = \frac{b x_{o2} - x_{o1}(a + l)}{-a + b - l - x_{o1} + x_{o2}}    (4.12)

  x_0|_{x_1=b, x_2=b+l} = \frac{b x_{o2} - x_{o1}(b + l)}{-l - x_{o1} + x_{o2}}    (4.13)

The maximum and minimum of these four values are the maximum and minimum values of x0.
Then calculate the range of y0, again eliminating x1 first and x2 afterwards:

  \frac{\partial y_0}{\partial x_1} = -\frac{y_1 (-x_{o1} + x_{o2})}{(x_1 - x_2 - x_{o1} + x_{o2})^2}    (4.14)

Let

  \frac{\partial y_0}{\partial x_1} = 0    (4.15)

There is no solution to this equation, so the extreme values of the parameters are

  y_0|_{x_1=a} = \frac{y_1 (-x_{o1} + x_{o2})}{a - x_2 - x_{o1} + x_{o2}}    (4.16)

  y_0|_{x_1=b} = \frac{y_1 (-x_{o1} + x_{o2})}{b - x_2 - x_{o1} + x_{o2}}    (4.17)

and

  \frac{\partial (y_0|_{x_1=a})}{\partial x_2} = \frac{y_1 (-x_{o1} + x_{o2})}{(a - x_2 - x_{o1} + x_{o2})^2}    (4.18)

  \frac{\partial (y_0|_{x_1=b})}{\partial x_2} = \frac{y_1 (-x_{o1} + x_{o2})}{(b - x_2 - x_{o1} + x_{o2})^2}    (4.19)

Let

  \frac{\partial (y_0|_{x_1=a})}{\partial x_2} = 0    (4.20)

  \frac{\partial (y_0|_{x_1=b})}{\partial x_2} = 0    (4.21)

There is no solution to these equations, so the extreme values of the parameters are

  y_0|_{x_1=a, x_2=a+l} = \frac{y_1 (-x_{o1} + x_{o2})}{-l - x_{o1} + x_{o2}}    (4.22)

  y_0|_{x_1=a, x_2=b+l} = \frac{y_1 (-x_{o1} + x_{o2})}{a - b - l - x_{o1} + x_{o2}}    (4.23)

  y_0|_{x_1=b, x_2=a+l} = \frac{y_1 (-x_{o1} + x_{o2})}{-a + b - l - x_{o1} + x_{o2}}    (4.24)

  y_0|_{x_1=b, x_2=b+l} = \frac{y_1 (-x_{o1} + x_{o2})}{-l - x_{o1} + x_{o2}}    (4.25)

The maximum and minimum of these four values are the maximum and minimum values of y0.
Calculate z0 in the same way:

  z_0|_{x_1=a, x_2=a+l, z_2=-w} = -\frac{w (-x_{o1} + x_{o2})}{-l - x_{o1} + x_{o2}}    (4.26)

  z_0|_{x_1=a, x_2=b+l, z_2=-w} = -\frac{w (-x_{o1} + x_{o2})}{a - b - l - x_{o1} + x_{o2}}    (4.27)

  z_0|_{x_1=b, x_2=a+l, z_2=-w} = -\frac{w (-x_{o1} + x_{o2})}{-a + b - l - x_{o1} + x_{o2}}    (4.28)

  z_0|_{x_1=b, x_2=b+l, z_2=-w} = -\frac{w (-x_{o1} + x_{o2})}{-l - x_{o1} + x_{o2}}    (4.29)

  z_0|_{x_1=a, x_2=a+l, z_2=w} = \frac{w (-x_{o1} + x_{o2})}{-l - x_{o1} + x_{o2}}    (4.30)

  z_0|_{x_1=a, x_2=b+l, z_2=w} = \frac{w (-x_{o1} + x_{o2})}{a - b - l - x_{o1} + x_{o2}}    (4.31)

  z_0|_{x_1=b, x_2=a+l, z_2=w} = \frac{w (-x_{o1} + x_{o2})}{-a + b - l - x_{o1} + x_{o2}}    (4.32)

  z_0|_{x_1=b, x_2=b+l, z_2=w} = \frac{w (-x_{o1} + x_{o2})}{-l - x_{o1} + x_{o2}}    (4.33)

The maximum and minimum of these eight values are the maximum and minimum values of z0.
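In code, the boundary search above amounts to evaluating formula (2.1) at the corners of the definition domain, mirroring equations (4.10)-(4.13), (4.22)-(4.25), and (4.26)-(4.33). The sketch below is our own hedged illustration (the function name and the commented example values are assumptions, not taken from the paper):

```python
from itertools import product

def boundary_without_poles(a, b, l, w, y1, xo1, xo2):
    """Min/max of x0, y0, z0 over the corner points x1 in {a, b},
    x2 in {a+l, b+l}, z2 in {-w, w}, i.e. the extreme value elimination
    result when the poles of formula (2.1) are ignored."""
    xs, ys, zs = [], [], []
    for x1, x2, z2 in product((a, b), (a + l, b + l), (-w, w)):
        d = (xo2 - xo1) + (x1 - x2)
        if d == 0:                  # a pole; treated separately in Sec. IV-B.2
            continue
        xs.append((x1 * xo2 - x2 * xo1) / float(d))
        ys.append((xo2 - xo1) * y1 / float(d))
        zs.append((xo2 - xo1) * z2 / float(d))
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

# Assumed example values only:
# print(boundary_without_poles(a=0, b=48, l=1048, w=24, y1=10, xo1=24, xo2=1072))
```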
2) THE CASE OF THE POLES OF A COLLECTION POINT IS CONSIDERED
The measurement boundary of the binocular stereo vision ranging system can be calculated from formula (2.1) and the corresponding conditions. Attention should be paid to the measurement boundary rather than to the range of values, because the definition domain of x1, x2, and z2 is discrete at this time and only integer values can be taken. When xo1, xo2, and y1 are constant, the ranges of x1 and x2 are [a, b] and [a + l, b + l], the range of z2 is [−w, w], and l > b > a. One thing to note is that if a pole is not an integer, the value at the pole is never actually attained; however, the limit of the value range appears near this point.
For x0, the variable x1 is eliminated first. According to formula (2.1), x0 = (x1·xo2 − x2·xo1)/((xo2 − xo1) + (x1 − x2)). According to formula (4.3), the extreme value of x0 for x1 ∈ [a, b] is attained on the boundary. When x1 = x2 + xo1 − xo2, x0 becomes infinite, that is, lim_{x1→(x2+xo1−xo2)+} x0 = +∞. According to formula (4.1), ∂x0/∂x1 = (x2 − xo2)(xo1 − xo2)/(x1 − x2 − xo1 + xo2)², and xo2 > xo1; x2 is considered to be a constant. The domain of x2 is divided into [a + l, xo2] and (xo2, b + l). When x1 ∈ [a, b] and x2 ∈ [a + l, xo2], the range of values is bounded by x0(a, x2) = (a·xo2 − x2·xo1)/(a − x2 − xo1 + xo2) and x0(b, x2) = (b·xo2 − x2·xo1)/(b − x2 − xo1 + xo2), together with +∞, and then x0(x1, x2) ∈ [(a·xo1 − b·xo2 + l·xo1)/(a − b + l + xo1 − xo2), +∞). When x1 ∈ [a, b] and x2 ∈ (xo2, b + l), the range of values is bounded in the same way by x0(b, x2) and x0(a, x2) together with +∞, and again x0(x1, x2) ∈ [(a·xo1 − b·xo2 + l·xo1)/(a − b + l + xo1 − xo2), +∞). Finally, x0 ∈ (−∞, +∞).
For y0, according to formula (2.1), y0 = (xo2 − xo1)·y1/((xo2 − xo1) + (x1 − x2)). Then ∂y0/∂x1 = y1(xo1 − xo2)/(x1 − x2 − xo1 + xo2)²; in most situations y1 > 0, so ∂y0/∂x1 < 0 and y0 is a decreasing function of x1 on each continuous interval. According to x2 + xo1 − xo2 ∈ [2a − b, a], we have lim_{x1→(x2+xo1−xo2)+} y0 = +∞. The domain of y0 can therefore be divided using y0(a, x2) = (xo2 − xo1)·y1/((xo2 − xo1) + (a − x2)) and y0(b, x2) = (xo2 − xo1)·y1/((xo2 − xo1) + (b − x2)), together with +∞. According to y0(a, x2) ∈ (−∞, (l + b − a)·y1/(b − a)] and y0(b, x2) ∈ [(l + b − a)·y1/(2(b − a)), (l + b − a)·y1/(b − a)], we have y0 ∈ (−∞, +∞).
For z0, according to formula (2.1), z0 = (xo2 − xo1)·z2/((xo2 − xo1) + (x1 − x2)). According to ∂z0/∂z2 = (xo2 − xo1)/((xo2 − xo1) + (x1 − x2)) and (xo2 − xo1) + (x1 − x2) ∈ [0, 2b], we have ∂z0/∂z2 ∈ [(l + b − a)/(2b), +∞). So the domain of z0 can be divided into

  z_0(x_1, x_2, -w) = -\frac{(x_{o2} - x_{o1}) w}{(x_{o2} - x_{o1}) + (x_1 - x_2)}    (4.34)

  z_0(x_1, x_2, w) = \frac{(x_{o2} - x_{o1}) w}{(x_{o2} - x_{o1}) + (x_1 - x_2)}    (4.35)

The ranges of formula (4.34) and formula (4.35) are both (−∞, +∞).
Above all, x0 ∈ (−∞, +∞), y0 ∈ (−∞, +∞), and z0 ∈ (−∞, +∞).
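The effect of such a pole can be seen numerically. The following short sketch (our own, with arbitrary assumed numbers) lets x1 approach the pole x1 = x2 + xo1 − xo2 from above, so that the denominator of formula (2.1) tends to zero and the computed coordinates grow without bound:

```python
# Assumed parameters: baseline xo2 - xo1 = 100, second acquisition value x2 = 160.
xo1, xo2, x2, y1 = 20.0, 120.0, 160.0, 10.0
pole = x2 + xo1 - xo2                    # x1 value at the pole (here 60)

for x1 in (80.0, 65.0, 61.0, 60.1, 60.01):
    d = (xo2 - xo1) + (x1 - x2)          # denominator of formula (2.1)
    x0 = (x1 * xo2 - x2 * xo1) / d
    y0 = (xo2 - xo1) * y1 / d
    print("x1=%-6.2f d=%-6.2f x0=%-10.1f y0=%-10.1f" % (x1, d, x0, y0))
# As x1 -> 60 from above, d -> 0+ and x0, y0 diverge toward +infinity.
```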
V. SIMULATION AND ANALYSIS
All of the simulation analysis in this article is carried out with the Matplotlib library under Python 2.7. The unit of the simulation data is 1.

A. SIMULATION OF PLANE-SPACE ALGORITHM
First, the concept of the coordinate performance curve of the algorithm is introduced. The coordinate performance curve is a curve drawn with the value of the calculated object as the ordinate and the value calculated by the algorithm as the abscissa. The ideal coordinate performance curve is a straight line with a slope of 1 and an intercept of 0.




The simulation method for the plane-space algorithm is to compare the coordinates of a given point P(x0, y0, z0) with the coordinates actually produced by the Plane-Space Algorithm, so as to evaluate the performance of the algorithm. The specific method is as follows: a target point P(x0, y0, z0) with known coordinates is given; its corresponding collection point coordinates E1(x1, y1, z1) and E2(x2, y2, z2) are calculated; the coordinates of the target point P′(x0, y0, z0) corresponding to E1(x1, y1, z1) and E2(x2, y2, z2) are then calculated; and the coordinate performance curves of P(x0, y0, z0) and P′(x0, y0, z0) are drawn.
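A minimal sketch of this procedure is given below (our own illustrative code, not the authors' simulation script). The projection used to obtain E1 and E2 from P is an assumption of ours obtained by inverting formula (2.1) for a sensor plane at y = y1, and the commented-out rounding step models the integer-pixel behaviour of the non-ideal state described next.

```python
import matplotlib.pyplot as plt

def project(p, xo, y1):
    """Project the target point p = (x0, y0, z0) through the optical centre
    (xo, 0, 0) onto the sensor plane y = y1 (our modelling assumption)."""
    x0, y0, z0 = p
    s = y1 / float(y0)
    return (xo + (x0 - xo) * s, y1, z0 * s)

def reconstruct(e1, e2, xo1, xo2):
    """Recover the target point from the two acquisition points via (2.1)."""
    x1, y1, _ = e1
    x2, _, z2 = e2
    d = (xo2 - xo1) + (x1 - x2)
    return ((x1 * xo2 - x2 * xo1) / d,
            (xo2 - xo1) * y1 / d,
            (xo2 - xo1) * z2 / d)

xo1, xo2, y1 = 24.0, 1072.0, 10.0        # assumed sensor geometry
given, recovered = [], []
for i in range(100):                     # 100 collection points, as in Sec. V-A
    p = (25.0 + 0.05 * i, 20.0 + 0.05 * i, 50.0 + 0.15 * i)
    e1, e2 = project(p, xo1, y1), project(p, xo2, y1)
    # Non-ideal state: quantise the acquisition values to whole pixels instead,
    # e.g. e1 = tuple(round(v) for v in e1); e2 = tuple(round(v) for v in e2)
    given.append(p)
    recovered.append(reconstruct(e1, e2, xo1, xo2))

plt.plot([q[0] for q in recovered], [p[0] for p in given], ".")
plt.xlabel("x0 calculated by the algorithm")
plt.ylabel("given x0")
plt.show()
```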

1) IDEAL STATE
The ideal state is the state in which the image sensor collects the acquisition points without error. At this time x0 ∈ [25, 30], y0 ∈ [20, 25], z0 ∈ [50, 65], and 100 collection points are used; the coordinate performance curves are then drawn, as shown in Figure 3.

FIGURE 3. Coordinate performance curves in ideal state.

Figure 3 shows that the coordinate performance curves of the three coordinates are all straight lines with a slope of 1 and an intercept of 0.

2) NON-IDEAL STATE
The non-ideal state is the state in which the image sensor collects the acquisition points with errors. The reason for the error is that the image sensor can only acquire coordinate values that are integer multiples of a pixel. At this time x0 ∈ [25, 30], y0 ∈ [20, 25], z0 ∈ [50, 65], 100 collection points are used, and the measurement values of the image sensor are rounded. The coordinate performance curves are shown in Figure 4.

FIGURE 4. Coordinate performance curves in non-ideal state.

From Figure 4, the coordinate performance curves under the non-ideal condition are not very satisfactory. The plotted points of P are closely clustered at this scale, so the view is enlarged to obtain the image shown in Figure 5, again with x0 ∈ [25, 30], y0 ∈ [20, 25], z0 ∈ [50, 65] and 100 collection points.

FIGURE 5. Coordinate performance curve when enlarging the non-ideal state of collection point.

As shown in Figure 5, the coordinate performance curves are greatly improved. Thus, the more dispersed the target points are, the better the algorithm performs. In the actual working environment, the target points are usually distributed in a dispersed way, so the Plane-Space Algorithm meets the requirements.
In summary, the simulation data show that the plane-space algorithm is reasonable.

B. SIMULATION OF RANGE SCANNING ALGORITHM
The purpose of the Plane-Space Algorithm is to measure distance. The distance between the target point and the measuring device is mainly determined by the ordinate of the target point. Therefore, the performance of the algorithm can be evaluated by analyzing the longitudinal coordinates of the simulation results.
The binocular stereo vision ranging model used is shown in Figure 1. Take O1 = (24, 0, 0), O2 = (1072, 0, 0), x1 ∈ (0, 48), x2 ∈ (1048, 1096), z2 ∈ (0, 48), and y2 = 10, and draw the calculated points in a three-dimensional coordinate system, as shown in Figure 6.

FIGURE 6. Range of range calculation results image.

It can be seen from the simulated image that the ordinates of the measurable target points lie mainly in the interval [−16000, 16000]. Considering the actual situation, the longitudinal coordinate of a target point calculated by the algorithm cannot be negative; the negative values in the figure are results produced by acquisition points that are impossible in practice.
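How such a range image can be generated is sketched below (a hedged illustration of ours using the parameters quoted above; the paper does not state that Figure 6 was produced exactly this way):

```python
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the "3d" projection)
import matplotlib.pyplot as plt

xo1, xo2, y1 = 24, 1072, 10              # parameters quoted above for Figure 6
pts = [((x1 * xo2 - x2 * xo1) / float(d),
        (xo2 - xo1) * y1 / float(d),
        (xo2 - xo1) * z2 / float(d))
       for x1 in range(0, 48)
       for x2 in range(1048, 1096)
       for z2 in range(0, 48)
       for d in [(xo2 - xo1) + (x1 - x2)] if d != 0]   # skip the pole

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter([p[0] for p in pts], [p[1] for p in pts], [p[2] for p in pts], s=1)
ax.set_xlabel("x0")
ax.set_ylabel("y0 (distance)")
ax.set_zlabel("z0")
plt.show()
```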




In order to explore the effect of the distance between the two image sensors on the range of the algorithm, the spacing of the two image sensors is increased. Take O1 = (24, 0, 0), O2 = (3072, 0, 0), x1 ∈ (0, 48), x2 ∈ (3048, 3096), z2 ∈ (0, 48), and y2 = 10 to obtain the simulation image shown in Figure 7.

FIGURE 7. The measurement range of image sensor spacing is increased to calculate the result image.

From the simulation image we can see that the ordinates of the measurable target points are mainly concentrated in the interval [−30000, 30000]. The cause of the negative values has been given in the analysis of Figure 6 and is not described again. By comparing the simulation images of Figure 6 and Figure 7, the greater the distance between the two image sensors, the greater the absolute value of the longitudinal coordinate of the target points that can be measured.
The Range Scanning Algorithm can calculate the range of the plane-space algorithm, but its calculation load is very large. In most cases, it is not necessary to know every measurable target point of the algorithm. Therefore, we need a method that can calculate the boundary of the algorithm directly from the algorithm parameters.

C. SIMULATION OF EXTREME VALUE ELIMINATION ALGORITHM
1) THE CASE OF THE POLES OF A COLLECTION POINT IS NOT CONSIDERED
The principle of the simulation is to calculate all the values of formula (2.1), draw the results as an image, and compare them with the range of values displayed by the image.
Let xo1 = 320, xo2 = 3640, y1 = 879.2, z2 = 130, l = 3000, a = 0, and b = 640, and simulate the measurement range. The result is shown in Figure 8.

FIGURE 8. The result of simulation when xo1 = 320, xo2 = 3640, y1 = 879.2, z2 = 130 and do not consider the poles.

Calculate the ranges of x0, y0, and z0. Bringing the relevant parameters into the formulas of Section IV.B gives x0Max = 3640, x0Min = −3000; y0Min = −9121, y0Max = 9121; z0Min = −6640, z0Max = 6640. The y0 value in the actual system is not negative, but negative numbers are nevertheless calculated, because the traversal calculation enumerates all possibilities of the collection points, including possibilities that exist mathematically but do not match the actual situation. Because there is at present no stable matching algorithm that restricts the collected coordinates to the actual conditions, it is reasonable for such values to appear. By observing the image, some values will also be found outside the estimated range; this is due to the fact that several points have not been taken into consideration.

2) THE CASE OF THE POLES OF A COLLECTION POINT IS CONSIDERED
Take xo1 = 320, xo2 = 3640, y1 = 879.2, z2 = 130, l = 3000, a = 0, and b = 640, and simulate the algorithm; the obtained simulated image is shown in Figure 9.

FIGURE 9. The result of simulation when xo1 = 320, xo2 = 3640, y1 = 879.2, z2 = 130 and considered the poles.

The interval boundary data in the graph can be considered to be approximately infinite. According to the image obtained,




the range of the x, y, and z coordinates of the target points is entirely within the range calculated by the extreme value elimination method.
Given the above, it is reasonable to measure the range of the Plane-Space Algorithm by using the Extreme Value Elimination Algorithm.

VI. CONCLUSION
Aiming at the problem of how to use two image sensors at different positions to measure the coordinates of target points, the Plane-Space Algorithm is proposed. According to the structural characteristics of the photosensitive element of the image sensor, the method of calculating the Plane-Space Algorithm is given. The simulation results show that the Plane-Space Algorithm is reasonable. In order to study the range of the Plane-Space Algorithm, the Range Scanning Algorithm is proposed, and the factors that affect the performance of the algorithm are explored through simulation; the conclusion is that the larger the distance between the two image sensors, the farther the distance of the target points that can be measured. In order to reduce the calculation load of the Range Scanning Algorithm, the Extreme Value Elimination method is proposed to calculate the measurement boundary of the Plane-Space Algorithm. The simulation data show that the Extreme Value Elimination method meets the requirements for determining the measurement range of the Plane-Space Algorithm.

REFERENCES
[1] T. Junchao and Z. Liyan, ‘‘Effective data-driven calibration for a galvanometric laser scanning system using binocular stereo vision,’’ Sensors, vol. 18, no. 1, p. 197, 2018.
[2] W. Zhenzhong and Z. Kai, ‘‘Structural parameters calibration for binocular stereo vision sensors using a double-sphere target,’’ Sensors, vol. 16, no. 7, p. 1074, 2016.
[3] Z. Liu, S. Wu, Y. Yin, and J. Wu, ‘‘Calibration of binocular vision sensors based on unknown-sized elliptical stripe images,’’ Sensors, vol. 17, no. 12, p. 2873, 2017.
[4] Z. Wang, Z. Wu, X. Zhen, R. Yang, and J. Xi, ‘‘An onsite structure parameters calibration of large FOV binocular stereovision based on small-size 2D target,’’ Optik—Int. J. Light Electron Opt., vol. 124, no. 21, pp. 5164–5169, 2013.
[5] Q. Gai, ‘‘Optimization of stereo matching in 3D reconstruction based on binocular vision,’’ J. Phys., Conf. Ser., vol. 960, no. 1, p. 012029, 2018.
[6] Y. Xu, F. Gao, H. Ren, Z. Zhang, and X. Jiang, ‘‘An iterative distortion compensation algorithm for camera calibration based on phase target,’’ Sensors, vol. 17, no. 6, p. 1188, 2017.
[7] Z. Jia et al., ‘‘Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system,’’ Opt. Express, vol. 23, no. 12, pp. 15205–15223, 2015.
[8] X. Yang and S. Fang, ‘‘Eccentricity error compensation for geometric camera calibration based on circular features,’’ Meas. Sci. Technol., vol. 25, no. 2, pp. 149–156, 2014.
[9] B.-M. Chung, ‘‘Neural-network model for compensation of lens distortion in camera calibration,’’ Int. J. Precis. Eng. Manuf., vol. 19, no. 7, pp. 959–966, 2018.
[10] J. Wang, F. Shi, J. Zhang, and Y. Liu, ‘‘A new calibration model of camera lens distortion,’’ Pattern Recognit., vol. 41, no. 2, pp. 607–615, 2008.

ENSHUO ZHANG was born in Chifeng, Inner Mongolia, China, in 1992. He received the bachelor's degree from the Inner Mongolia University of Technology, Hohhot, China, in 2014, and the master's degree from Inner Mongolia University, Hohhot, China, in 2018.
He studied binocular stereo vision-based ranging algorithms at Inner Mongolia University between 2015 and 2018, during which he published two papers and applied for two Chinese patents.

SHUBIN WANG was born in Chifeng, Inner Mongolia, China, in 1971. He received the B.S. and M.S. degrees in physics and electronic engineering from Inner Mongolia University, China, in 1996 and 2003, respectively, and the Ph.D. degree from the Beijing University of Posts and Telecommunications in 2010.
From 2009 to 2010, he was a Research Fellow supervised by Prof. K. S. Kwak with Inha University, Inchon, South Korea. Since 2013, he has been a Professor with the College of Electronic Information Engineering, Inner Mongolia University. He is the author of over 40 articles and over 7 inventions, and holds four Chinese patents. His research interests include wireless sensor networks, cognitive radio networks, machine vision, binocular stereo vision, and signal processing.
Dr. Wang was a recipient of the eighth China Inner Mongolia Autonomous Region Youth Science and Technology Award in 2011, the Inner Mongolia Autonomous Region Science and Technology Progress Award in 2009, and the CSPS Best Symposium Paper Award in 2014.

