
Appendix A

Units of Measurement

Tables A.1 and A.2 show the units of measurement and their abbreviations.

Table A.1 Units of measurement

  Property       Variable   Unit           Abbreviation
  Distance       s          Meter          m
  Time           t          Second         s
  Velocity       v          Meter/second   m/s
  Acceleration   a          Meter/second²  m/s²
  Frequency      f          Hertz          Hz
  Angle          θ          Radian         rad
                            Degree         °

Table A.2 Prefixes

  Prefix   Meaning      Abbreviation
  kilo-    Thousands    k
  centi-   Hundredths   c
  milli-   Thousandths  m
  micro-   Millionths   µ

Examples:

• 20 kHz = 20 kilohertz = 20,000 hertz
• 15 cm = 15 centimeters = 15/100 meters
• 50 ms = 50 milliseconds = 50/1000 seconds
• 10 µs = 10 microseconds = 10/1000 milliseconds = 10/1,000,000 seconds
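These conversions are mechanical enough to sketch in a few lines of Python; the function and table names below are ours, for illustration only:

```python
# Scale factors for the SI prefixes of Table A.2.
PREFIXES = {"k": 1e3, "c": 1e-2, "m": 1e-3, "µ": 1e-6}

def to_base_units(value, prefix=""):
    """Convert a prefixed quantity to base units, e.g. 20 kHz -> 20,000 Hz."""
    return value * PREFIXES.get(prefix, 1.0)

print(to_base_units(20, "k"))   # 20 kHz  = 20000.0 hertz
print(to_base_units(15, "c"))   # 15 cm   = 0.15 meters
print(to_base_units(50, "m"))   # 50 ms   = 0.05 seconds
print(to_base_units(10, "µ"))   # 10 µs   ≈ 1e-05 seconds
```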

© The Author(s) 2018


M. Ben-Ari and F. Mondada, Elements of Robotics,
https://doi.org/10.1007/978-3-319-62533-1

Fig. A.1 Angles in degrees (left) and radians (right)

Angles, such as the heading of a robot and the direction to an object, are measured
in degrees or radians (Fig. A.1). By convention, angles are positive in the counter-
clockwise direction and negative in the clockwise direction as measured from the
front of the robot.
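The degree–radian relationship of Fig. A.1 can be checked numerically; this small Python sketch (the helper name is ours) uses the standard formula rad = deg · π/180:

```python
import math

def to_radians(degrees):
    """Convert an angle in degrees to radians."""
    return degrees * math.pi / 180.0

# Positive angles are counterclockwise, negative angles clockwise.
print(to_radians(90))    # pi/2 ≈ 1.5708
print(to_radians(-90))   # -pi/2
print(to_radians(180))   # pi ≈ 3.1416
```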
Appendix B
Mathematical Derivations and Tutorials

This appendix collects mathematical derivations used in the text, as well as short tutorials on concepts that may not be familiar.

B.1 Conditional Probability and Bayes Rule

Given a reading z of the sensor, what is the probability that we are at position xi ?
This is expressed as a conditional probability p(xi | z). The data we have available
are the current probability p(xi ) that we are at xi , and p(z | xi ), the conditional
probability that the sensor reads z if we are in fact at xi . Let us multiply these two
probabilities:
p(z | xi ) p(xi ) .

What does this mean? The event xi occurs with probability p(xi ) and once it occurs
the event z occurs with probability p(z | xi ). Therefore, this is the probability that
both z and xi occur, called the joint probability of the two events:

p(z ∩ xi ) = p(z | xi ) p(xi ) .

The joint probability can also be obtained by multiplying the conditional probability
for xi given z by the probability of z:

p(xi ∩ z) = p(xi | z) p(z) .

The joint probability is commutative so by equating the two expressions we have:

p(xi | z) p(z) = p(xi ∩ z) = p(z ∩ xi ) = p(z | xi ) p(xi ) .


Dividing by p(z) gives:

p(xi | z) = p(z | xi) p(xi) / p(z) ,

which is known as Bayes rule.


If we know p(z | xi ) and p(xi ) for each i, p(z), the total probability of the event
z, can be computed by summing the individual known probabilities:

p(z) = Σi p(z | xi) p(xi) .

Example Let us do the computation for the example in Sect. 8.4. Let xi be the event
that we are at position i and let z be the event that the robot detects a door. Initially,
p(xi ) = 0.125 for all positions xi , and, if the robot is at a door, the probability that
the sensor detects this correctly is 0.9, while the probability that it incorrectly detects
a door is 1 − 0.9 = 0.1. The probability of detecting a door, p(z), is obtained by
summing the probabilities at each position, where the probability is 0.125 × 0.9 =
0.1125 at a position with a door and 0.125 × 0.1 = 0.0125 at a position with no
door:

p(z) = 0.1125 + 0.1125 + 0.0125 + 0.0125 + 0.1125 + 0.1125 + 0.1125 + 0.0125 = 0.6 .

By Bayes rule, the probability of being at position i with a door if a door is detected is:

p(xi | z) = p(z | xi) p(xi) / p(z) = (0.9 × 0.125) / 0.6 = 0.1875 ,

while the probability of being at position i with no door, but where a door is incorrectly detected, is:

p(xi | z) = p(z | xi) p(xi) / p(z) = (0.1 × 0.125) / 0.6 ≈ 0.021 .
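This arithmetic can be rechecked in a few lines of Python. The door layout below (five positions with doors, three without) is our assumption for illustration; only the counts matter for p(z):

```python
# Sensor model: probability of detecting a door given a door / no door.
P_DETECT_DOOR = 0.9
P_FALSE_DOOR = 0.1

# Assumed door layout for the 8 positions (5 doors, 3 gaps).
doors = [True, True, False, False, True, True, True, False]
prior = [0.125] * 8

# Likelihood p(z | x_i) for the event z = "door detected".
likelihood = [P_DETECT_DOOR if d else P_FALSE_DOOR for d in doors]

# Total probability p(z) = sum_i p(z | x_i) p(x_i).
p_z = sum(l * p for l, p in zip(likelihood, prior))
print(round(p_z, 4))  # 0.6

# Bayes rule: p(x_i | z) = p(z | x_i) p(x_i) / p(z).
posterior = [l * p / p_z for l, p in zip(likelihood, prior)]
print(round(posterior[0], 4))  # 0.1875 (a position with a door)
print(round(posterior[2], 4))  # 0.0208 (a position without a door)
```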

B.2 Normalization

The set of probabilities of the possible outcomes of an event must add up to 1 since
one of the outcomes must occur. If a door is detected, the robot must be at one of the
8 possible positions, but the sum over all positions i of the probability that the robot
is at position i was shown above to be:

0.1125 + 0.1125 + 0.0125 + 0.0125 + 0.1125 + 0.1125 + 0.1125 + 0.0125 = 0.6 .



The probabilities must be normalized by dividing each by the sum 0.6 so that they add up to 1. The normalized probabilities are 0.1125/0.6 ≈ 0.19 and 0.0125/0.6 ≈ 0.02, which do add up to 1:

0.19 + 0.19 + 0.02 + 0.02 + 0.19 + 0.19 + 0.19 + 0.02 ≈ 1 .
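Normalization is worth spelling out in code; this sketch (the function name is ours) rescales the unnormalized values given above:

```python
def normalize(weights):
    """Scale a list of non-negative weights so they sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

weights = [0.1125, 0.1125, 0.0125, 0.0125, 0.1125, 0.1125, 0.1125, 0.0125]
probs = normalize(weights)
print(probs[0])     # 0.1125 / 0.6 = 0.1875
print(sum(probs))   # ≈ 1.0
```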

B.3 Mean and Variance

The mean1 μ of a set of values {x1, . . . , xn} is:

μ = (1/n) Σi xi .

Consider five people earning 8, 9, 10, 11, 12 thousand Euros per year, respectively.
Their mean salary is:

μ = (8 + 9 + 10 + 11 + 12)/5 = 50/5 = 10 .
The mean doesn’t tell us very much because the same mean can be obtained from
very different data:

μ = (5 + 6 + 10 + 14 + 15)/5 = 50/5 = 10 .
The mean is significantly influenced by outliers: values that are much higher or lower
than the rest of the values. If the person earning 10 thousand Euros suddenly received
a bonus of 90 thousand Euros, the mean salary is now:

μ = (8 + 9 + 100 + 11 + 12)/5 = 140/5 = 28 .
A politician would jump at the opportunity of claiming that during his term of office
the average salary had risen by 180%!
Variance is a measure of the spread of a set of values. The closer together
the values are, the lower the variance. Measures like the mean are more reliable
if the values are clustered together and thus have a low variance. The formula for the
variance of a set of values {x1, . . . , xn} is:2

1 Mean is the technical term for average.


2 n − 1 counts the degrees of freedom. Since the mean is computed before the variance, we can't choose the n values arbitrarily; the last value is constrained to be the one that causes the computation to produce the given mean.

s² = (1/(n − 1)) Σi (xi − μ)² .

Each term xi − μ measures the distance of the value xi from the mean; the variance is
the average of the squares of these distances. The distances are squared so that values
on either side of the mean don't cancel each other out. For example, given the values 100 and 300, their mean is 200; if we computed the variance as (100 − 200) + (300 − 200), the result would be 0 even though the values are spread out. Using the definition above, the variance is ((100 − 200)² + (300 − 200)²)/(2 − 1) = 20,000.
For the data set {8, 9, 10, 11, 12} the variance is:

s² = ((−2)² + (−1)² + 0 + 1² + 2²)/(5 − 1) = 10/4 = 2.5 ,
while for the data set {5, 6, 10, 14, 15} the variance is:

s² = ((−5)² + (−4)² + 0 + 4² + 5²)/4 = 82/4 = 20.5 .
Since 20.5 is much larger than 2.5 the data in the second set are spread over a wider
range than the data of the first set. After the bonus is received the variance is:

s² = ((−20)² + (−19)² + 72² + (−17)² + (−16)²)/4 = 6490/4 = 1622.5 .
Clearly, one shouldn’t interpret the mean salary as meaningful if there are outliers.
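These values can be verified with Python's statistics module, which uses the same n − 1 denominator for the sample variance:

```python
import statistics

salaries = [8, 9, 10, 11, 12]
print(statistics.mean(salaries))      # 10
print(statistics.variance(salaries))  # 2.5

spread = [5, 6, 10, 14, 15]
print(statistics.variance(spread))    # 20.5 (same mean, much larger spread)

with_outlier = [8, 9, 100, 11, 12]
print(statistics.mean(with_outlier))      # 28
print(statistics.variance(with_outlier))  # 1622.5 (the outlier dominates)
```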

B.4 Covariance

Consider a group of ten people earning the following salaries:

x1 = {11, 12, 13, 14, 15, 16, 17, 18, 19, 20} ,

in thousands of Euros. The average salary is:

μ1 = (11 + 12 + 13 + 14 + 15 + 16 + 17 + 18 + 19 + 20)/10 = 15.5 .
We conjecture that people with higher salaries buy more expensive cars than those
with low salaries. Suppose that there are two models of cars being sold in that area,
one for 10 thousand Euros and one for 20 thousand Euros. The following data set
shows the cars bought by this group of people, where the ith element is the cost of
the car bought by the ith person:

x2 = {10, 10, 10, 20, 10, 20, 10, 10, 20, 20} .

To see if there is any connection between the salaries and the costs of the cars, the
covariance cov(x1 , x2 ) between the data sets x1 and x2 is calculated. The computation
is similar to that of the variance, except that instead of squaring the difference between
a value in a single set and the mean of that set, we multiply the difference between
a value from the first set and its mean by the difference between a value from the
second set and its mean:

cov(x1, x2) = (1/(n − 1)) Σi (x1,i − μ1)(x2,i − μ2) .

The covariance of the sets of values x1 and x2 is 7.8, a positive value, which indicates that salaries and car costs increase together; that is, people making more money tend
to buy more expensive cars. If the first five people buy cars worth 10 and the next
five people buy cars worth 20, the covariance becomes 13.9, indicating a stronger
connection between salary and the cost of a car. Conversely, if the first five buy
expensive cars and the next five buy cheap cars, the covariance is −13.9, so that as
the salary goes up, the cost of a car goes down. Finally, if everyone buys the same car,
the covariance is 0 and we conclude as expected that there is no connection between
one’s salary and the car one buys.
Covariance is symmetric because multiplication of real numbers is commutative:

cov(x1, x2) = (1/(n − 1)) Σi (x1,i − μ1)(x2,i − μ2)
            = (1/(n − 1)) Σi (x2,i − μ2)(x1,i − μ1)
            = cov(x2, x1) .

The covariance matrix combines the variances and the covariances:

⎡ s²(x1)        cov(x1, x2) ⎤
⎣ cov(x2, x1)   s²(x2)      ⎦ .

Since cov(x1, x2) = cov(x2, x1), the matrix contains only three distinct values.
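A small Python sketch (the function names are ours) reproduces the covariance values quoted above:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Sample covariance with the n - 1 denominator."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

salaries = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
cars = [10, 10, 10, 20, 10, 20, 10, 10, 20, 20]
print(round(cov(salaries, cars), 1))               # 7.8

sorted_cars = [10] * 5 + [20] * 5
print(round(cov(salaries, sorted_cars), 1))        # 13.9 (stronger connection)
print(round(cov(salaries, sorted_cars[::-1]), 1))  # -13.9 (reversed connection)
print(cov(salaries, [10] * 10))                    # 0.0 (no connection)
```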

B.5 Multiplication of Vectors and Matrices

Multiplication of a (two-dimensional) matrix M by a vector v gives a new vector:

Mv = ⎡ a b ⎤ ⎡ x ⎤ = ⎡ ax + by ⎤
     ⎣ c d ⎦ ⎣ y ⎦   ⎣ cx + dy ⎦ .

Multiplication of two matrices is performed by multiplying the rows of the left matrix separately with each column of the right matrix to obtain the columns of the resulting matrix:

⎡ a b ⎤ ⎡ x u ⎤ = ⎡ ax + by   au + bv ⎤
⎣ c d ⎦ ⎣ y v ⎦   ⎣ cx + dy   cu + dv ⎦ .

The identity matrix is:

I = ⎡ 1 0 ⎤
    ⎣ 0 1 ⎦

and it is easy to check that for any matrix M, MI = IM = M. For a matrix M, its inverse M⁻¹ is the matrix that results in I when multiplied by M:

M = ⎡ a b ⎤ ,    M⁻¹ = (1/det(M)) ⎡  d  −b ⎤ ,
    ⎣ c d ⎦                       ⎣ −c   a ⎦

where det(M), the determinant of M, is ad − bc. We can check this by multiplying:

⎡ a b ⎤ ⎡  d  −b ⎤ = ⎡ ad − bc   −ab + ba ⎤ = ⎡ ad − bc   0       ⎤
⎣ c d ⎦ ⎣ −c   a ⎦   ⎣ cd − dc   −cb + da ⎦   ⎣ 0         ad − bc ⎦ .

This is valid only for matrices whose determinant is non-zero, because singular
matrices—those whose determinant is zero—do not have an inverse.
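For concreteness, here is a Python sketch of 2 × 2 multiplication and inversion as defined above (the function names are ours); multiplying a matrix by its inverse gives the identity:

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_inv(M):
    """Invert a 2x2 matrix; fails if the determinant is zero."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

M = [[1.0, 2.0], [3.0, 4.0]]
print(mat_mul(M, mat_inv(M)))  # [[1.0, 0.0], [0.0, 1.0]]
```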

B.6 The Area of a Trapezoid at the Base of a Triangle

The following diagram shows a triangle of width w and height h with a parallel line at height h′ that creates a trapezoid:

We want to find a formula for the area of the trapezoid using the values w, h, h′. The area a is the difference between the areas of the two triangles:

a = wh/2 − w′(h − h′)/2 ,

where w′ is the width of the smaller triangle. By similar triangles:

h/(h − h′) = w/w′ ,

so:

w′ = w(h − h′)/h .

Substituting:

a = wh/2 − w(h − h′)(h − h′)/2h
  = w(h² − (h − h′)²)/2h
  = w(h² − h² + 2hh′ − h′²)/2h
  = w(2hh′ − h′²)/2h
  = wh′(1 − h′/2h) .
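A quick numerical check of the final formula against the difference of the two triangle areas (the sample values are arbitrary):

```python
def trapezoid_area(w, h, h1):
    """Area of the trapezoid cut from a triangle of width w and height h
    by a parallel line at height h1 (0 <= h1 <= h)."""
    return w * h1 * (1 - h1 / (2 * h))

w, h, h1 = 4.0, 3.0, 1.0
w1 = w * (h - h1) / h                    # width of the small triangle, by similar triangles
direct = w * h / 2 - w1 * (h - h1) / 2   # difference of the two triangle areas
print(trapezoid_area(w, h, h1), direct)  # both ≈ 3.3333 (= 10/3)
```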

B.7 Algebraic Formulas for cos 15°

Equation 16.1 claims that:

cos⁻¹(√(2 + √3)/2) = ±15° .

Using the formula for the cosine of the difference of two angles, we have:

cos 15° = cos(45° − 30°)
        = cos 45° cos 30° + sin 45° sin 30°
        = (√2/2)(√3/2) + (√2/2)(1/2)
        = (√6 + √2)/4 .
We now compute:

((√2 + √6)/4)² = (8 + 2√2√6)/16 = (2 + √3)/4 ,

so:

(√2 + √6)/4 = √(2 + √3)/2 .
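A numerical check of this identity (the variable names are ours):

```python
import math

lhs = math.cos(math.radians(15))                 # cos 15° computed directly
rhs = math.sqrt(2 + math.sqrt(3)) / 2            # the claimed algebraic form
alt = (math.sqrt(2) + math.sqrt(6)) / 4          # the form from the derivation
print(lhs, rhs, alt)  # all ≈ 0.9659
```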
© The Editor(s) (if applicable) and The Author(s) 2018. This book is an open access publication
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative Commons
license, unless indicated otherwise in a credit line to the material. If material is not included in the
book’s Creative Commons license and your intended use is not permitted by statutory regulation or
exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
