The International Journal of Robotics Research
Published by SAGE Publications (http://www.sagepublications.com)
All content following this page was uploaded by Randall C. Smith on 16 May 2014.
Downloaded from http://ijr.sagepub.com at SAGE Publications on April 2, 2008
© 1986 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution.
Fig. 1. A sequence of approximate transformations.
and X3, Y3 are given in Eq. (1). Covariance is defined as the expectation of the squared deviates. We "square" both sides of Eq. (2) by multiplying both sides of the equation by their respective transposes. Taking the expectation of the result, we get the covariance matrix C3:

    C_3 = J_1 C_1 J_1^T + J_2 C_2 J_2^T

where J_1 and J_2 are the Jacobians of Eq. (1) with respect to the variables of A and B, respectively (the errors in A and B are assumed independent). The 3 x 3 matrix C3 expresses the covariances of the coordinates of L2 with respect to W (i.e., the AT E), computed from the given covariance matrices C1 and C2 (which express the error on the variables of A and B). Because an element of the covariance matrix is by definition C_ij = E(\Delta x_i \Delta x_j), the standard deviation of each variable can be read from the diagonal: \sigma_i = (E(\Delta x_i^2))^{1/2} = (C_ii)^{1/2}.

In Fig. 1, for example, A and B are compounded to give the AT E, then E and C are compounded to give F, and finally F and D are compounded to give G. By a simple chain-rule argument, it can be shown that equivalent first-order estimates of G are obtained by using the recursive approach above, or by defining the final relationship in terms of all the variables at once and finding the first-order estimate of the mean and covariance. Clearly, the recursive approach is simpler.

A given directed graph of uncertain relationships may not be in the form needed by the above formulas. For example, in Fig. 2A, B is pointing in the opposite direction to the form assumed above. To create the correct form, it is necessary to first reverse B, as in Fig. 2B. This reversal is easily accomplished using the following formulas (the inverse transformation):

    x' = -x\cos\theta - y\sin\theta
    y' =  x\sin\theta - y\cos\theta
    \theta' = -\theta
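The compounding and reversal operations can be sketched in code. This is a minimal first-order sketch, not the paper's implementation; the function names and the NumPy formulation are illustrative, and the Jacobians follow from differentiating the planar compounding and reversal equations.

```python
import numpy as np

def compound(x1, c1, x2, c2):
    """First-order mean and covariance of the compounded AT.

    x1, x2: mean (x, y, theta) of each approximate transformation (AT).
    c1, c2: 3x3 covariance matrices; errors assumed independent.
    """
    xb, yb, tb = x2
    s, c = np.sin(x1[2]), np.cos(x1[2])
    x3 = np.array([x1[0] + xb * c - yb * s,
                   x1[1] + xb * s + yb * c,
                   x1[2] + tb])
    # Jacobians of the compounding equations w.r.t. each AT's variables
    j1 = np.array([[1.0, 0.0, -xb * s - yb * c],
                   [0.0, 1.0,  xb * c - yb * s],
                   [0.0, 0.0,  1.0]])
    j2 = np.array([[c,  -s,  0.0],
                   [s,   c,  0.0],
                   [0.0, 0.0, 1.0]])
    return x3, j1 @ c1 @ j1.T + j2 @ c2 @ j2.T

def reverse(x, cov):
    """Reverse an AT (the inverse transformation) and propagate its covariance."""
    s, c = np.sin(x[2]), np.cos(x[2])
    xr = np.array([-x[0] * c - x[1] * s,
                    x[0] * s - x[1] * c,
                   -x[2]])
    # Jacobian of the reversal formulas above
    j = np.array([[-c,  -s,   xr[1]],
                  [ s,  -c,  -xr[0]],
                  [0.0, 0.0, -1.0]])
    return xr, j @ cov @ j.T
```

Compounding an AT with its own reversal returns the identity mean, which is a quick sanity check on both operations.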
Fig. 2. Examples of compounding situations.
a vector) of the transformations to be merged, and X3 is the resulting estimated mean. The first step in the merging operation is to compute the Kalman gain factor defined by:

    K = C_1 (C_1 + C_2)^{-1}    (8)

This factor is used to compute the required merged covariance matrix

    C_3 = C_1 - K C_1    (9)

and the merged mean

    X_3 = X_1 + K (X_2 - X_1)    (10)

These formulas are sufficient for merging any number of parallel ATs. The apparent asymmetry of Eqs. (8), (9), and (10) is not real, because the same result is obtained regardless of which transformation is labeled C1 or C2, and so on. The order in which the mergings are performed is irrelevant. In one dimension, these formulas reduce to simple scalar forms.

4.2. THE ASSUMPTIONS FOR MERGING

Kalman filter theory, or state estimation, is described in many texts on estimation theory; the derivation of the basic formulas is lengthy and is not presented here (see Nahi 1976). The basic assumptions of the theory in this context are:

- The errors in ATs to be merged are independent. Extensions can be made to handle merging of nonindependent ATs, when the dependence is given.
- The formula for merging two parallel ATs is appropriately defined as a weighted, linear combination of the two estimates. If both ATs to be merged have Gaussian distributions, it is known that the estimator that combines them and that minimizes the mean square estimation error has a linear form (Nahi 1976). Furthermore, we estimate only the first two moments of each AT's distribution (for practical reasons). When the mean and variance of an unknown probability distribution are the only information available, a simple maximum-entropy argument leads to the Gaussian assumption.
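Eqs. (8)-(10) translate directly into code. This is a minimal sketch, assuming both ATs are expressed in a common frame with independent errors; the function name is illustrative, and the one-dimensional scalar forms appear in the comments.

```python
import numpy as np

def merge(x1, c1, x2, c2):
    """Merge two parallel ATs (independent estimates of the same relation)."""
    k = c1 @ np.linalg.inv(c1 + c2)   # Eq. (8): Kalman gain factor
    c3 = c1 - k @ c1                  # Eq. (9): merged covariance
    x3 = x1 + k @ (x2 - x1)           # Eq. (10): merged mean
    return x3, c3

# In one dimension these reduce to the familiar scalar forms
#   K  = s1 / (s1 + s2)
#   x3 = x1 + K * (x2 - x1)
#   s3 = s1 * s2 / (s1 + s2)
# where s1, s2 are the variances of the two estimates.
```

Swapping the roles of the two ATs gives the same result, which is the symmetry property noted in the text.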
Fig. 3. A network of approximate transformations.
Fig. 4. Transformation of Wheatstone bridge example.
The problem is to calculate the ATs for the equivalent "Y" (X, Y, Z), given the Delta composed of ATs (A, B, C). First, compute the best estimate for each relation describing a side of the Delta (ignoring D and E). The new estimates are denoted by (A', B', C') in Fig. 4. They are A' = A ⊕ (B ⊗ C^R), B' = B ⊕ (A ⊗ C), and C' = C ⊕ (A^R ⊗ B), writing ⊗ for compounding, ⊕ for merging, and X^R for the reversal of X. Arbitrarily assign the mean value of X to be the identity transformation. With this mean value for X, the mean values for Y and Z are the same as those for B' and A', respectively. Next, calculate the covariance matrices for the ATs of the equivalent "Y" using the following formulas:

…

where K is defined in Eq. (8).

If the system is linear in the random variables (i.e., there is no significant angular error), then the mean locations found in calculating A', B', C' are consistent; that is, any one of the mean relations can be computed from the other two. For example, the mean of A' ⊗ C' is equal to the mean of B'. This will not be exactly true if there are significant nonlinearities in the compounding and reversal operations. However, it is possible to develop an iterative method for finding a consistent set of means. This only becomes necessary if the angular errors are very large (e.g., with a standard deviation greater than 5 degrees).

6. A Mobile Robot Example

A mobile robot needs to use its sensors to build and update a world map to navigate in both known and unknown environments. At any stage, this map can be viewed as a network of uncertain relations that can be used to decide important navigational questions. Initially, the robot may take its starting position as the "world" frame. As it moves from its initial position, the uncertainty of its location with respect to the world grows with each successive move, as shown in Fig. 1. Each relative move and its error, determined from the robot model, are represented by an uncertain transformation. After a number of such moves, the robot's location with respect to the world frame becomes so uncertain that the robot is unlikely to succeed in actions (e.g., going through a doorway) based purely on its current information. The procedures described above allow the robot to estimate the uncertainty of its location relative to any other coordinate frame and decide whether the relationship is known accurately enough to perform a particular task. Note that because this estimate can be made ahead of time, the robot can decide that proposed motions will create too much uncertainty before they are performed, or that sensing will be necessary to reduce the uncertainty.

6.1. ROBOT SENSING

A mobile robot is usually equipped with sensors that allow it to determine the location of objects to an accuracy determined by the sensor resolution.
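The growth of location uncertainty over successive moves can be illustrated with a short simulation built on the compounding operation. The step and the per-move error model below are hypothetical, chosen only to show the qualitative behavior of Fig. 1; the compounding function is repeated so the sketch is self-contained.

```python
import numpy as np

def compound(x1, c1, x2, c2):
    """First-order compounding of two uncertain (x, y, theta) transformations
    (the same operation as in Section 4, repeated here for self-containment)."""
    xb, yb, tb = x2
    s, c = np.sin(x1[2]), np.cos(x1[2])
    x3 = np.array([x1[0] + xb * c - yb * s,
                   x1[1] + xb * s + yb * c,
                   x1[2] + tb])
    j1 = np.array([[1.0, 0.0, -xb * s - yb * c],
                   [0.0, 1.0,  xb * c - yb * s],
                   [0.0, 0.0,  1.0]])
    j2 = np.array([[c,  -s,  0.0],
                   [s,   c,  0.0],
                   [0.0, 0.0, 1.0]])
    return x3, j1 @ c1 @ j1.T + j2 @ c2 @ j2.T

# Ten identical forward moves with a hypothetical per-move error model.
step = np.array([1.0, 0.0, 0.0])
c_step = np.diag([1e-4, 1e-4, 1e-4])

pose, cov = np.zeros(3), np.zeros((3, 3))
traces = []
for _ in range(10):
    pose, cov = compound(pose, cov, step, c_step)
    traces.append(np.trace(cov))
# The total uncertainty (trace of the covariance) grows with every move,
# and the angular error couples into position, so the growth is faster
# than a simple sum of the step covariances.
```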
Such sensed relationships are also represented as ATs in the AT network, along with those due to motion. The sensed information allows the robot to reduce the uncertainty of its location with respect to other objects or the world. Because the reduction of uncertainty through sensing can be calculated ahead of time, it is possible to decide if proposed sensing steps will reduce uncertainties enough for the accomplishment of a task before performing the sensing. If a proposed sensing step is not sufficiently accurate, alternative sensing strategies can be evaluated. This means that the AT estimation procedure can be used for off-line programming of a robot, as well as for real-time applications, such as mobile robot navigation.

It is important that the robot sense as accurately as possible the location of many reference objects while it is still in its initial position ("world"). Once the robot moves, these objects allow the robot to estimate its position with respect to the world with the greatest possible accuracy. Even if the robot is unable to sense the original reference objects, accurate world locations …

6.2. THE SENSING PROCEDURE

Given that the robot has decided to perform a sensor step, the procedure it should adopt in adding a sensor-derived AT to the current network is as follows:

1. Determine whether sensing is possible. After estimating the AT between the object and the sensor by the procedure described above, decide whether the sensor is capable of making the desired observations. For example, decide whether a wall is likely to be close enough to be detected with an acoustic range sensor, or whether the object to be sensed is in the field of view of a camera (see Appendix A).
2. Make the actual observation and decide if it is reasonable. Given the uncertain prior location of the object to be sensed (estimated from the current ATs) and the error associated with the sensor, decide whether the probability of the actual observation is below a predetermined threshold. Such errors would occur, for example, if the camera viewed the wrong object.
3. Combine multiple sensings. Merge independent measurements by the sensor into a single AT using the merging Eqs. (8), (9), and (10). If the sensing information is in a different coordinate system (e.g., polar rather than Cartesian), then the merging operation is performed in the sensor system and the final AT is mapped back into the Cartesian form (the specific formulas for performing this mapping in the case of polar to Cartesian are given in Appendix B).

6.3. MONTE CARLO SIMULATIONS

… values) is typically less than 1%, unless the angular errors are large (e.g., with a standard deviation of greater than 6 degrees).

The assumption that the resulting probability distribution is Gaussian may not be correct, however. For example, the crescent-shaped distribution of points in Fig. 5 does not match the superimposed error ellipse. Note that the mean and covariance estimates may still be accurate; it is the Gaussian assumption that is breaking down. The divergence occurs because of nonlinearities in the Jacobian (i.e., its dependence on the trigonometric functions of θ) when the given angular errors are large. Even in this case, the final probability distribution formed from two crescent distributions is very close to Gaussian, as seen in Fig. 5.
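Step 2 of the sensing procedure above specifies only "a predetermined threshold" for deciding whether an observation is reasonable. One common concrete realization, assumed here rather than taken from the paper, is a chi-square gate on the Mahalanobis distance of the innovation:

```python
import numpy as np

def observation_reasonable(z, x_pred, c_pred, c_sensor, gate=7.81):
    """Plausibility test for a sensed AT.

    z        : observed (x, y, theta) relation
    x_pred   : prior estimate of that relation from the current AT network
    c_pred   : covariance of the prior estimate
    c_sensor : covariance of the sensor error (independent of the prior)
    gate     : chi-square threshold; 7.81 is the 95% point for three
               degrees of freedom, a hypothetical choice of the paper's
               "predetermined threshold"
    """
    v = z - x_pred                      # innovation
    s = c_pred + c_sensor               # combined uncertainty
    d2 = v @ np.linalg.inv(s) @ v       # squared Mahalanobis distance
    return d2 <= gate
```

An observation near the predicted relation passes the gate, while one many standard deviations away (e.g., the camera viewing the wrong object) is rejected.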
Fig. 5. A Monte Carlo simulation.
If the probability distributions of the variables are not Gaussian, the central limit theorem assures us that the final compound distribution will be approximately Gaussian, provided that there are a "large" number of ATs.

7. Discussion

The previous sections describe a procedure for making accurate quantitative estimates of the mean and covariance of the location (position and orientation) of any frame relative to any other frame, given a network of ATs. If the distribution associated with these estimates is assumed to be Gaussian, then probabilities can be estimated. Examples of the application of this theory are presented, and the estimates are compared with Monte Carlo simulations for a three-degree-of-freedom mobile robot. The only significant divergence between the estimates and the simulation occurs when the angular error is large compared with a subsequent displacement error. Under these circumstances, the resultant probability distribution is significantly non-Gaussian (i.e., noticeably nonelliptical in Cartesian space). In all other cases, the mean and variance estimates agreed extremely well with the simulated values. Even when the angular errors are large, the effect of the nonlinearities on the resulting distribution is reduced when several ATs are combined, as seen in Fig. 5.

The formulas given for three degrees of freedom (X, Y, θ) can be easily extended to include the Z coordinate in the obvious way. Unfortunately, the extension to the full six-degrees-of-freedom case (X, Y, Z, θ, φ, ψ) is not so simple. The main problem is that the additional angular terms can introduce singularities
("poles") into the Jacobian. The estimated covariance shows extreme sensitivity to the error of particular input angles in the vicinity of a pole. These singularities destroy the value of the estimates provided by the estimation procedure unless corrective measures are taken. One approach we are investigating is to prerotate the world frame so that none of the ATs have nominal angles in the vicinity of a pole.

There are many uses for the estimation procedures described in this paper. The major use motivating this work is the ability to estimate ahead of time when sensing steps are necessary and to determine whether a particular sensor is capable of making a particular measurement or of supplying the required accuracy. The procedure also supplies a quantitative method for judging when a sensor has "glitched" (i.e., its measured value is too unlikely, given the prior expectations). Another possibility being investigated is whether the results of running this procedure can be used to revise the error models used for particular mobile robots or sensors, giving a form of autocalibration.

The limitation of the estimation procedure described in this paper is that only two moments (the mean and covariance) are estimated; hence the knowledge of the underlying probability distribution is limited. By assuming the (multivariate) distribution is Gaussian, we can still make probabilistic estimates that agree with simulations very well, except when the angular errors are large. A way around this difficulty that we are investigating is to estimate a third moment (a "skewness" measure).

Appendix A. Derivation of Ellipse Parameters

The determination of whether the probability of observing the object is greater than a given threshold assumes that the probability distribution of our knowledge of the object's location is a multivariate (x, y, θ) Gaussian distribution. The general form is given by:

    p(x) = (2\pi)^{-n/2} |C|^{-1/2} \exp\left[ -\tfrac{1}{2} (x - \bar{x})^T C^{-1} (x - \bar{x}) \right]    (A1)

where x is a vector denoting a particular point. The contours of equal probability of this distribution form ellipsoids in n-dimensional space that are centered at the mean location \bar{x}, and whose axes are only aligned with the Cartesian frame if the covariance matrix C is diagonal. The formulas for extracting the principal axes of a two-dimensional ellipsoid are given below. In the case where we are only interested in the positional error ellipse, C is the reduced (2 x 2) covariance matrix formed from the (3 x 3) matrix by extracting only the X, Y terms. In this case, the resulting marginal probability distribution is:

    p(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\left[ -\frac{1}{2(1 - \rho^2)} \left( \frac{\Delta x^2}{\sigma_x^2} - \frac{2\rho\,\Delta x\,\Delta y}{\sigma_x \sigma_y} + \frac{\Delta y^2}{\sigma_y^2} \right) \right]

where ρ is the correlation coefficient for x and y. For example, we might want to check the probability of a camera seeing a particular object given our current locational uncertainty. We decide this question by determining whether the ellipse corresponding to a given confidence limit is completely contained within the field of view.

For decision-making purposes (e.g., the field-of-view case), it is necessary to determine the explicit equiprobable contours (ellipses or ellipsoids) of the multivariate Gaussian distribution specified by the given mean vector \bar{X} and covariance matrix C_x. These ellipses can be used to determine the probability that a given vector will lie within, say, the 90% confidence ellipse. The ellipsoid formula is:

    (x - \bar{x})^T C^{-1} (x - \bar{x}) = k

where k is a constant chosen for a particular confidence threshold, and x is a point on the ellipsoid boundary. The relationship between k and the probability of a point lying within the ellipsoid specified by k is:
where N is the number of dimensions and erf is the error function.

In the particular case where the two-dimensional (x, y) error ellipse is required given the 3-D (x, y, θ) covariance matrix, the procedure is as follows. Firstly, produce the two-dimensional marginal covariance matrix from the 3-D covariance matrix by striking out the row and column of the unwanted variable. That is:

    C = \begin{bmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{bmatrix}

The corresponding family of two-dimensional ellipses is given by Eq. (A1), and in this case reduces to

    A\,\Delta x^2 + 2B\,\Delta x\,\Delta y + C\,\Delta y^2 = k

where A, B, C are found from the two-dimensional covariance matrix and Eq. (A1). The angle θ that the major axis of this ellipse makes with the positive x-axis is given by:

    \tan 2\theta = \frac{2B}{A - C}

with the following confidence ellipses for different k:

…

Appendix B. Coordinate Frame Mappings

This appendix gives the explicit formulas for mapping the mean and covariance matrix from a polar (r, φ, θ) representation to an equivalent Cartesian form, where θ is the rotation of the coordinate frame in both the polar and Cartesian case. This mapping is necessary, for example, if a camera expresses the location of an object in polar coordinates and the camera error is also given in polar coordinates, but the result of sensing (or the merged results of many sensings) is required in a Cartesian representation. The nominal (mean) transformation is given by

    x = r\cos\varphi, \quad y = r\sin\varphi, \quad \theta = \theta
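Appendix B's displayed covariance formulas do not survive in this copy. Assuming the polar triple is (r, φ, θ) with x = r cos φ and y = r sin φ (an assumption about the convention, not the paper's exact notation), a first-order sketch of the mapping is:

```python
import numpy as np

def polar_to_cartesian(mean_polar, cov_polar):
    """Map a polar AT (r, phi, theta) and its covariance to Cartesian
    (x, y, theta) form, to first order via the Jacobian of the mapping."""
    r, phi, theta = mean_polar
    c, s = np.cos(phi), np.sin(phi)
    mean_cart = np.array([r * c, r * s, theta])
    # Jacobian of (x, y, theta) with respect to (r, phi, theta)
    j = np.array([[c,  -r * s, 0.0],
                  [s,   r * c, 0.0],
                  [0.0, 0.0,   1.0]])
    return mean_cart, j @ cov_polar @ j.T
```

At zero bearing the mapping is a pure scaling of the bearing uncertainty: a bearing variance at range r contributes a lateral position variance of roughly r² times that variance.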
REFERENCES

Brooks, R. A. 1982. Symbolic error analysis and robot planning. Int. J. Robotics Res. 1(4):29-68.
Brooks, R. A. 1985. Visual map making for a mobile robot. Proc. IEEE Int. Conf. Robotics and Automation. St. Louis: IEEE, pp. 824-829.
Chatila, R., and Laumond, J-P. 1985. Position referencing and consistent world modeling for mobile robots. Proc. IEEE Int. Conf. Robotics and Automation. St. Louis: IEEE, pp. 138-145.
Gelb, A. 1984. Applied optimal estimation. Cambridge: MIT Press.
Nahi, N. E. 1976. Estimation theory and applications. New York: R. E. Krieger.
Paul, R. P. 1981. Robot manipulators: mathematics, programming and control. Cambridge: MIT Press.