Junho Yang1, Soon-Jo Chung1, Seth Hutchinson1, David Johnson2, and Michio Kise2
I. INTRODUCTION
In this paper, we present a vision-based localization and
mapping algorithm for an autonomous mower. The objective
for the mowing robot is to maintain turf health and aesthetics
of the lawn autonomously by performing complete and
uniform coverage of a desired area. To safely achieve this
goal, the system must determine and maintain containment
inside the permitted area. Vision sensors are an attractive option for mowing robots for their potential to enable no-infrastructure installations and to perform sensing functions beyond positioning, such as determining and diagnosing turf problems.
There have been several research prototypes as well as manufactured products developed for robotic lawn mowing [1]-[4]. Boundary wires are widely used to ensure containment in available products [1]. However, they require users to add infrastructure to the environment, which increases set-up time and decreases portability. GPS has been widely used for navigation purposes [5] but performs best in wide open areas; it can be difficult to obtain accurate position estimates with GPS in a residential area occluded by walls and tree canopies. RF- and infrared-signal-based methods have also been demonstrated for localization [4], [6], but they can require costly infrastructure.
1 Junho Yang is with the Department of Mechanical Science and Engineering, and Soon-Jo Chung and Seth Hutchinson are with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. {yang125, sjchung, seth}@illinois.edu
2 David Johnson is with Turf & Utility Advanced Development, John Deere, Cary, NC 27513, USA, and Michio Kise is with the Intelligent Solution Group, John Deere, Urbandale, IA 50322, USA. {johnsondavida2, kisemichio}@johndeere.com
[Figure: system overview. An omnidirectional camera, an IMU, and a speed sensor feed landmark mapping and localization; boundary teaching runs in the teaching phase and containment determination in the mowing phase.]
The position $x_i \in \mathbb{R}^3$ of a static landmark $i$, expressed in the robot's body frame, evolves with the body's angular velocity $\omega$ and linear velocity $v$ as

$$\dot{x}_i = -\lfloor \omega \rfloor_\times x_i - v \qquad (1)$$
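For illustration, (1) can be propagated numerically. The following is a minimal numpy sketch assuming the kinematics above; the helper names and the single Euler step are our own choices, not the paper's implementation.

```python
import numpy as np

def skew(w):
    """Matrix form of the cross product: skew(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate_landmark(x_i, omega, v, dt):
    """One Euler step of (1): a static landmark, seen from the moving
    body frame, drifts with -skew(omega) @ x_i - v."""
    return x_i + (-skew(omega) @ x_i - v) * dt
```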
The pseudo-measurement of the landmark depth at timestep $k$ is

$$d_k = \frac{\left\| \lfloor y_j \rfloor_\times \bar{x}_b^{b_j} \right\|_2}{\left\| \lfloor y_j \rfloor_\times R^T(q^{b_j})\, y_k \right\|_2} + \nu_{d,k} \qquad (3)$$
where $y_k$ and $y_j$ are the direction vector measurements of a landmark at timestep $k$ and at a previous timestep $j$. Noise in the pseudo-measurements $d_k$ is denoted by $\nu_{d,k} \in \mathbb{R}$, and $R(\cdot)$ is a rotation matrix. The robot's body frame is located at $x_b^w$ at timestep $k$ and at $x_{b_j}^w$ at timestep $j$. In Section III, we estimate the location $x^{b_j}$ and orientation quaternion $q^{b_j}$ of the robot's body frame at timestep $j$ with respect to its current body frame. The current location of the mower with respect to its body frame at timestep $j$, which is used in (3), is $\bar{x}_b^{b_j} = R^T(q^{b_j})\, x_b^{b_j}$.
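Since $\lfloor a \rfloor_\times b = a \times b$, (3) can be evaluated directly with cross products. The sketch below is a noise-free illustration with hypothetical argument names, assuming $R(q^{b_j})$ has already been converted to a rotation matrix.

```python
import numpy as np

def depth_pseudo_measurement(y_j, y_k, R_bj, x_b_bj):
    """Noise-free version of (3): crossing with y_j eliminates the unknown
    depth at timestep j, leaving the depth d_k along the current bearing.
    R_bj stands in for R(q^{b_j}); x_b_bj for the current location x_b^{b_j}."""
    x_bar = R_bj.T @ x_b_bj                # current location in frame-j axes
    num = np.linalg.norm(np.cross(y_j, x_bar))
    den = np.linalg.norm(np.cross(y_j, R_bj.T @ y_k))
    return num / den
```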
C. Observer Design with Hybrid Contraction Analysis
The observer presented in this section updates the estimates of the states using vision measurements at discrete-time instances and propagates the motion between the measurements in continuous time. We use the dwell-time $\Delta t_k = t_k - t_{k-1}$ for the vision measurements $y_k$, since image processing can be much slower than the inertial measurements. The observer is given by
$$\frac{d}{dt}\begin{bmatrix} \hat{d} \\ \hat{y} \end{bmatrix} = \begin{bmatrix} -v^T \hat{y} \\ -(I - \hat{y}\hat{y}^T)\, v\, \hat{d}^{-1} - \lfloor \omega \rfloor_\times \hat{y} \end{bmatrix} \qquad (4)$$

$$\begin{bmatrix} \hat{d}_k^+ \\ \hat{y}_k^+ \end{bmatrix} = \begin{bmatrix} \hat{d}_k^- + l_{k,1}\,(d_k - \hat{d}_k^-) \\ \hat{y}_k^- + I\, l_{k,2}\,(y_k - \hat{y}_k^-) \end{bmatrix} \qquad (5)$$
k )
y
y
where d R is an estimate of a landmarks depth between vi+
sion measurements, d
k R and dk R are estimates of the
depth before and after the measurement update at timestep k,
R3 is an estimate of a unit vector from the robots body
y
k R3
frame to the landmark between its measurements, y
+
3
k R are estimates of the unit vector before and after
and y
the measurement update at timestep k, and I R33 is an
identity matrix. The user can select observer gains lk,1 R
k respectively.
and lk,2 R for dk and y
The continuous system (4) for prediction of the states $\hat{z} = (\hat{d}, \hat{y}^T)^T \in \mathbb{R}^4$ is switched to the discrete system (5) at every $t_k$ to update the states with measurements. The estimation error of the hybrid system is defined as $e_k \triangleq z_k - \hat{z}_k \in \mathbb{R}^4$, where $z_k \in \mathbb{R}^4$ is the ground truth of the states.
Theorem 1: The observer in (4) and (5) is globally exponentially stable such that

$$\|e_{k+1}\| \le \gamma_k\, \|e_k\| \exp(\lambda\, \Delta t_k), \quad \forall k \in \mathbb{N} \qquad (6)$$

if Assumption 1 is satisfied and the states are constrained by $\|y_k\| = 1$ and $d > \underline{d}$, and if the observer gains are given by

$$l_{k,m} = 1 - \exp\!\left(-\tfrac{1}{2}\,\bar{\lambda}_m\, \Delta t_k\right), \quad m \in \{1, 2\} \qquad (7)$$

where $\bar{\lambda}_m \in \mathbb{R}$ is defined by the user for $\hat{d}_k$ and $\hat{y}_k$. The convergence rate of the system in the prediction stage (4) is given by $\lambda = \lambda_{\max}(F^T + F)$, where $\lambda_{\max}$ denotes the maximum eigenvalue and $F \in \mathbb{R}^{4\times4}$ is the Jacobian matrix in (8).
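A compact sketch of the resulting prediction-update cycle of (4), (5) and the gain schedule (7) is given below. The single Euler integration step and the re-normalization of $\hat{y}$ are our own choices, not prescribed by the paper.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def predict(d_hat, y_hat, omega, v, dt):
    """One Euler step of the continuous-time prediction (4)."""
    d_dot = -v @ y_hat
    y_dot = -(np.eye(3) - np.outer(y_hat, y_hat)) @ v / d_hat - skew(omega) @ y_hat
    d_hat, y_hat = d_hat + d_dot * dt, y_hat + y_dot * dt
    return d_hat, y_hat / np.linalg.norm(y_hat)

def update(d_hat, y_hat, d_k, y_k, lam_1, lam_2, dt_k):
    """Discrete-time update (5) with the gains of (7)."""
    l_1 = 1.0 - np.exp(-0.5 * lam_1 * dt_k)
    l_2 = 1.0 - np.exp(-0.5 * lam_2 * dt_k)
    d_plus = d_hat + l_1 * (d_k - d_hat)
    y_plus = y_hat + l_2 * (y_k - y_hat)
    return d_plus, y_plus / np.linalg.norm(y_plus)
```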
Proof: We can write the first variations of systems (4) and (5) as

$$\frac{d}{dt}\begin{bmatrix} \delta \hat{d} \\ \delta \hat{y} \end{bmatrix} = \begin{bmatrix} 0 & -v^T \\ (I - \hat{y}\hat{y}^T)\, v\, \hat{d}^{-2} & \;-\lfloor \omega \rfloor_\times + (\hat{y} v^T + v^T \hat{y}\, I)\, \hat{d}^{-1} \end{bmatrix} \begin{bmatrix} \delta \hat{d} \\ \delta \hat{y} \end{bmatrix} \qquad (8)$$

$$\begin{bmatrix} \delta \hat{d}_k^+ \\ \delta \hat{y}_k^+ \end{bmatrix} = \begin{bmatrix} 1 - l_{k,1} & 0 \\ 0 & I\,(1 - l_{k,2}) \end{bmatrix} \begin{bmatrix} \delta \hat{d}_k^- \\ \delta \hat{y}_k^- \end{bmatrix} \qquad (9)$$
Fig. 3. [Landmark trajectories: true positions and estimates in x (m), y (m), z (m).]

Fig. 4. [Landmark distance (m) and unit vector estimates versus time (sec): measurements, true values, and estimates.]
From (9), the contraction factor of the discrete update stage is

$$\gamma_k = \lambda_{\max}(F_k^T F_k) = (1 - l_k)^2 \qquad (10)$$

where $F_k$ is the Jacobian matrix in (9).
The overall convergence rate of the hybrid system follows as

$$\bar{\lambda} = \frac{\lambda\, \Delta t_k + \ln \gamma_k}{\Delta t_k} = \lambda + \frac{\ln \gamma_k}{\Delta t_k} \qquad (11)$$

where $\bar{\lambda}$ is selected to be negative. We then have
$$\|\delta z_{k+1}\| \le \|\delta z_k\| \exp(\bar{\lambda}\, \Delta t_k), \quad \forall k \in \mathbb{N} \qquad (12)$$

and since $\delta z_k$ tends to zero, the estimated states converge to their true values globally exponentially fast.
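To make the rate computation concrete, the Jacobian of (8) and the quantities $\lambda$ and $\gamma_k$ used in (6), (10), and (11) can be assembled as follows; this is a numerical sketch under the same assumptions as the observer sketch above.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def jacobian_F(d_hat, y_hat, omega, v):
    """The 4x4 Jacobian of (8), assembled blockwise."""
    I = np.eye(3)
    F = np.zeros((4, 4))
    F[0, 1:] = -v                                           # d(d_dot)/dy
    F[1:, 0] = (I - np.outer(y_hat, y_hat)) @ v / d_hat**2  # d(y_dot)/dd
    F[1:, 1:] = -skew(omega) + (np.outer(y_hat, v) + (v @ y_hat) * I) / d_hat
    return F

def prediction_rate(F):
    """lambda = lambda_max(F^T + F), the prediction-stage rate."""
    return np.max(np.linalg.eigvalsh(F.T + F))

def contraction_factor(l_k):
    """gamma_k = (1 - l_k)^2 from (10)."""
    return (1.0 - l_k) ** 2
```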
2) Uncertainty Bound on the Estimation Error: The uncertainty bound on the estimation error can be analyzed by considering the disturbance and measurement noise $\nu_k = (\nu_{d,k}, \nu_{y,k}^T)^T \in \mathbb{R}^4$, where $\nu_{y,k} \in \mathbb{R}^3$ is the noise in the unit sphere vision measurements. Let $\tilde{e}_k = \|z_k - \hat{z}_k\| \in \mathbb{R}$ be the quadratic bound of the observer error [7], which considers the uncertainties. Then
$$\tilde{e}_{k+1} \le \gamma_k\, \tilde{e}_k \exp(\lambda\, \Delta t_k) + \kappa\, \Delta t_k + l_k \|\nu_k\| \qquad (13)$$
The orientation quaternion $q_b^{b_j}$ is propagated with the angular velocity $\omega$ as

$$\frac{d}{dt}\, q_b^{b_j} = \frac{1}{2}\, \Omega(\omega)\, q_b^{b_j} \qquad (14)$$

where $\Omega(\omega) \in \mathbb{R}^{4\times4}$ is a skew-symmetric matrix

$$\Omega(\omega) = \begin{bmatrix} 0 & -\omega^T \\ \omega & -\lfloor \omega \rfloor_\times \end{bmatrix} \qquad (15)$$
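A minimal propagation of the quaternion kinematics (14) and (15) might look as follows; the scalar-first quaternion ordering and the Euler step with re-normalization are our assumptions.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def Omega(w):
    """The skew-symmetric 4x4 matrix of (15), scalar-first convention."""
    O = np.zeros((4, 4))
    O[0, 1:] = -w
    O[1:, 0] = w
    O[1:, 1:] = -skew(w)
    return O

def propagate_quaternion(q, w, dt):
    """One Euler step of (14), d/dt q = 0.5 * Omega(w) @ q, followed by
    re-normalization to keep q a unit quaternion."""
    q = q + 0.5 * Omega(w) @ q * dt
    return q / np.linalg.norm(q)
```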
$$h_i(x) = y_i^{b_j} = x_i^{b_j} / \|x_i^{b_j}\|_2, \quad i \in \{1, 2, \ldots, n\} \qquad (16)$$
$$\dot{P} = AP + PA^T - PH^T V^{-1} HP + W \qquad (17)$$
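Equation (17) is a continuous-time Riccati equation for the covariance $P$. A single Euler step, with $A$, $H$, $V$, and $W$ assumed to be already evaluated at the current estimate, might read:

```python
import numpy as np

def riccati_step(P, A, H, V, W, dt):
    """One Euler step of (17): P_dot = A P + P A^T - P H^T V^{-1} H P + W."""
    P_dot = A @ P + P @ A.T - P @ H.T @ np.linalg.solve(V, H @ P) + W
    return P + P_dot * dt
```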
The measurement model $g(x_b^w, q_b^w)$ is a stacked vector of the measurements of each landmark, given by

$$g_i(x_b^w, q_b^w) = y_i = x_i / \|x_i\|_2, \quad i \in \{1, 2, \ldots, n\} \qquad (22)$$
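The stacked bearing models (16) and (22) simply normalize each landmark's relative position. A sketch, assuming the body-frame landmark positions have already been computed from the pose, is:

```python
import numpy as np

def stacked_bearings(landmarks_body):
    """Stack g_i = x_i / ||x_i||_2 over all landmarks, as in (22).
    landmarks_body: (n, 3) array of landmark positions in the body frame."""
    norms = np.linalg.norm(landmarks_body, axis=1, keepdims=True)
    return (landmarks_body / norms).reshape(-1)
```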
[Figure: estimated x, y, and z components versus time (sec), 0-60 s.]

[Figure: location (m) and orientation (quaternions q1-q4) estimates versus time (sec), 0-500 s.]
Fig. 10. A set of images collected from the autonomous mower; the selected landmarks are marked with red dots.
Fig. 13. [Landmark distance (m) and unit vector estimates versus time (sec): measurements and estimates.]
Fig. 11. Angular and linear velocities of the mower collected during the experiments.

Figure 11 shows the angular and linear velocities measured from the IMU and the ground speed sensor.
To demonstrate our algorithm, we manually selected 10 corners from the windows near the lawn as landmarks and tracked the points with the pyramidal Lucas-Kanade optical flow method [31]. Figure 10 shows a set of images collected from our autonomous mower; the landmarks are marked in Figure 10 with red dots. To extract the direction vector measurements $y$, the pixel coordinates $p = (p_u, p_v, 1)^T$ of each feature were transformed to normalized image coordinates with $p_n = C^{-1} p = (p_x, p_y, 1)^T$, where $C$ is the camera projection matrix. The calibration parameters for our camera are shown in Table I. The normalized coordinates $p_n$ were projected onto a unit sphere through the method described in [32], which is given by
$$y = \frac{\xi + \sqrt{1 + (1 - \xi^2)(p_x^2 + p_y^2)}}{p_x^2 + p_y^2 + 1} \begin{bmatrix} p_x \\ p_y \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ \xi \end{bmatrix} \qquad (24)$$
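Equation (24) is the standard lifting step of the unified omnidirectional camera model. The sketch below combines it with the $p_n = C^{-1}p$ normalization from the text; reading $\xi$ as the mirror parameter (0.92036 in Table I) is our assumption, and lens distortion correction is omitted.

```python
import numpy as np

def pixel_to_sphere(p_u, p_v, C, xi):
    """Normalize pixel coordinates with the projection matrix C and lift
    them onto the unit sphere, per (24)."""
    p_x, p_y, _ = np.linalg.solve(C, np.array([p_u, p_v, 1.0]))  # p_n = C^{-1} p
    r2 = p_x**2 + p_y**2
    eta = (xi + np.sqrt(1.0 + (1.0 - xi**2) * r2)) / (r2 + 1.0)
    return np.array([eta * p_x, eta * p_y, eta - xi])  # unit norm by construction
```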
TABLE I
CALIBRATION RESULTS

Value
(476.60667, 476.74991)
(775.11715, 778.91684)
0.92036
0
(0.17357, 0.02025, 0.00209, 0.00091, 0)
(0.50050, 0.51821)