Reading:
▪ AIMA Chapter 25.1‐25.3, 25.7, 25.8
▪ Today’s lecture slides

Due this Friday: Update (approx 1 page)
▪ How far are you on your plan of action?
▪ Any changes from previous plans?
▪ Results and implementation so far
▪ You should already have some working prototype code and design docs for your implementation plans.
▪ Don’t need to repeat things from the proposal
▪ Upload to the same place as the proposal

Prof. Radhika Nagpal
School of Engineering and Applied Sciences
Harvard University
Navigation: How do I get there?
Localization: Where am I?
Mapping: Where have I been?
Exploration: Where haven’t I been?

Simple question: Where am I?
Not a simple answer:
▪ Do you have a map? How obvious is the environment?
▪ Can you sense your own self‐movement?
▪ Can you sense external things, like landmarks?
▪ How certain are you about what you sense?
▪ Do you know where you started and what you did in the past?
▪ Do you care about global position in the world, or just position relative to yourself?

Scenarios:
▪ Tour guide robot in a museum; indoor mail‐delivery robot
▪ Autonomous car with GPS and nav system
▪ Search and rescue robot in an indoor or outdoor environment
▪ Biological analogies: humans, bees and ants, migrating birds

Like navigation, localization is also a collection of algorithms
(this time, though, very different algorithms than previous CS182 material)
11/15/11
Take two steps forward,
Take two steps back,
Are you back where you started?
Dead‐reckoning
▪ How to keep track of where you are, given your initial position and the series of movements/actions that you made, using internal (proprioceptive) sensors.
▪ Also called: odometry, path integration, motion model, inertial navigation systems (INS)

Landmarks
▪ Triangulate your position geometrically by measuring range/bearing to one or more landmarks.
▪ E.g. visual beacons or features, radio/cell towers and signal strength, GPS!

What if there is uncertainty in motion or sensing? => Probabilistic reasoning

Kalman filters!
▪ Combine! (Dead‐reckoning + uncertainty) + (Landmarks + uncertainty)
▪ Technique: take advantage of the mathematics of Gaussians to model uncertainty.
▪ Applications: car + GPS, car + visual landmarks, lawnmower + beacons, warehouse robots (e.g. Kiva)

Particle filters (also known as Monte Carlo Localization)
▪ What if the environment is really complex, and there is lots of ambiguity?
▪ Technique: represent uncertainty by a discrete distribution of “particles” (think of sampling, or histograms).
▪ A fun and recently popular technique, which we will focus on today.
▪ Application: a robot wandering in a building

Dead‐reckoning: how it works
▪ Keep track of where you are given your initial position and the series of movements/actions that you made.
▪ Also called odometry, path integration, etc.

Common motion model
▪ Assume that you have a mobile robot with wheel encoders, and can use those to compute linear and angular velocities.
▪ Position at time t = (x_t, y_t, θ_t)
▪ Linear velocity = v_t; rotational velocity = w_t (the control input)
▪ Then for a small time dt, we can compute the new position (the “instantaneous” change in position and orientation):
  x_{t+dt} = x_t + v_t dt cos θ_t
  y_{t+dt} = y_t + v_t dt sin θ_t
  θ_{t+dt} = θ_t + w_t dt

Errors accumulate over time
▪ Odometry is best over short distances.
▪ Include gyroscopes/accelerometers (INS: inertial navigation system) that provide better measurements of instantaneous velocity (wheel encoders are unreliable due to slippage); it is expensive to reduce error.

Who are the world’s best localizers?
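The motion model above can be sketched in a few lines of Python; `motion_step` is an illustrative name, and the update is exactly the slide's instantaneous-velocity approximation applied over many small steps:

```python
import math

def motion_step(x, y, theta, v, w, dt):
    """One dead-reckoning update for the unicycle motion model.

    (x, y, theta): pose at time t; v: linear velocity;
    w: rotational velocity (the control input); dt: small time step.
    """
    x_new = x + v * dt * math.cos(theta)
    y_new = y + v * dt * math.sin(theta)
    theta_new = theta + w * dt
    return x_new, y_new, theta_new

# Drive straight east (theta = 0) at 1 m/s for 2 seconds in 0.1 s steps.
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = motion_step(*pose, v=1.0, w=0.0, dt=0.1)
# pose is now approximately (2.0, 0.0, 0.0)
```

With a nonzero w the integrated path curves, and because each step uses the stale heading, small per-step errors (and any encoder noise) accumulate, which is exactly the slide's point about odometry drifting over long distances.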
I can see the CITGO sign to my southeast, 15 miles away.
Where am I?

Landmarks: how it works
▪ The opposite of dead‐reckoning!
▪ Use external landmarks of known position in the environment; measure distance and/or angle (range and/or bearing) to the landmark.
▪ Use trigonometry to find where you are!
▪ Also highly used: artificial beacons or features in the environment, using radio towers and signal strength to triangulate a cellphone, even GPS!
(figure: robot measuring range d_L1 to landmark L1 at known position (x_L1, y_L1))

What if there is uncertainty in motion or sensing?
▪ Then both odometry and landmarks provide only an “estimate” of where you are.
▪ If you can construct models of their noise, then you can combine these motion and sensing estimates smartly!
▪ Different techniques from what we’ve covered in CS182 (closer to CS181)
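A minimal sketch of the trigonometry, assuming a single landmark and a bearing measured in the world frame (e.g. from a compass); the function name and argument names are mine:

```python
import math

def locate_from_landmark(lx, ly, rng, bearing):
    """Recover the robot's position from one landmark of known
    position (lx, ly), given the measured range and the world-frame
    bearing from the robot to that landmark.
    """
    x = lx - rng * math.cos(bearing)
    y = ly - rng * math.sin(bearing)
    return x, y

# A landmark at (10, 0), seen 5 units away due east (bearing 0):
# the robot must be 5 units west of it, at (5, 0).
print(locate_from_landmark(10.0, 0.0, 5.0, 0.0))
```

With only a relative bearing (no compass), one landmark is not enough; you would intersect range circles or bearing lines from two or more landmarks, which is the geometric triangulation the slide refers to.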
Dead‐reckoning + uncertainty
Landmarks + uncertainty
Also known as inference, sensor fusion, Kalman filters, etc.

Problem
▪ You want to know the state x at time t, but you cannot measure it directly.
▪ (x can be anything: position, velocity, temperature, happiness)
▪ Instead, you know what control input u_t you gave to arrive here from the last time step (x_{t-1}), and you can measure z_t (some measurement).
▪ If there were no noise in the control or measurement, then we would know x; but if there is noise, then the best we can do is estimate x.
▪ Concrete example: a car with GPS and input controls (velocity/steering)
(figure: robot measuring z_t to a landmark)

How it works
▪ Take a motion step: use dead‐reckoning to get position (the mean), but also keep track of the uncertainty in movement.
▪ Take a sensing step: use landmarks to triangulate position, then combine with the previous estimate based on relative confidence.
▪ Applications: car + GPS, car + visual landmarks, lawnmower + beacons, warehouse robots (Kiva)

Technique and limitations
▪ Uses the mathematics of Gaussians (bell curves) to capture uncertainty.
▪ Works well for tracking (where there is good certainty about the initial position) and for motion/sensing models that can be represented by Gaussians.

The 1‐D update, step by step (write x̂_t, “x‐hat”, for the estimate):
▪ “Belief” of my current state: x_{t-1} with stddev σ_{t-1}
▪ Inputs: control u_t and its stddev r; measurement z_t and its stddev q
▪ We are assuming that we can model the noise in our system as a Gaussian with a mean and stddev, and experimentally determine these numbers.

Step 1: Take a step and calculate the new belief (motion adds uncertainty)
▪ x̂_t = x_{t-1} + u_t  (my “estimated” position after I take a motion step)
▪ σ̂_t = σ_{t-1} + r
▪ Note that my uncertainty has increased due to the added noise from my control input.

Step 2: Take a measurement z_t (with stddev q) and calculate the new belief (measurement reduces uncertainty)
▪ We expect our new x_t to always lie between our previous estimate and our measurement (why?)
▪ Therefore, our new estimate is a combination of our old estimate and the measurement:
  x_t = a*x̂_t + (1-a)*z_t
▪ The factor “a” is determined by our relative confidence in our belief about our old state versus our confidence in the measurement:
  a = q / (q + σ̂_t)
▪ Consider the case where q = 0: then we go with our noise‐free measurement.
▪ Consider the case where σ̂_t = 0: then we ignore our measurement, since we have no uncertainty about our current position.

And repeat!
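The two-step cycle above fits in a few lines of Python. `kalman_step` is my name for it; following the slide, noise spreads are simply added in the motion step (a full Kalman filter would combine variances, and would also shrink the spread after the measurement, which the slide leaves implicit):

```python
def kalman_step(x_prev, sigma_prev, u, r, z, q):
    """One cycle of the slide's simplified 1-D filter.

    x_prev, sigma_prev: previous belief (mean, spread)
    u, r: control input and its noise spread
    z, q: measurement and its noise spread
    """
    # Step 1: motion adds uncertainty.
    x_hat = x_prev + u
    sigma_hat = sigma_prev + r
    # Step 2: measurement reduces uncertainty; "a" weighs old
    # estimate vs. measurement by relative confidence.
    a = q / (q + sigma_hat)
    x_new = a * x_hat + (1 - a) * z
    return x_new, sigma_hat  # post-measurement spread left as on the slide

# Perfect sensor (q = 0): the new estimate equals the measurement.
x, _ = kalman_step(x_prev=0.0, sigma_prev=1.0, u=1.0, r=0.5, z=3.0, q=0.0)
print(x)  # -> 3.0
```

The other limiting case works the same way: with sigma_hat = 0 the factor a becomes 1 and the measurement is ignored entirely, matching the slide's sanity checks.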
I could be TWO PLACES at once!!

Kalman filter practicalities
▪ The state x, control u_t, and measurement z_t may not be in the same space (so you may need a transform).
▪ E.g. if x = the position of the car and u_t is a velocity vector, then you must calculate the new position at time t+1. Similarly, if z_t is the position of some object relative to the car, but the car’s position is in latitude/longitude, then you will need to correct the frame of reference.
▪ Multiple sensors! (sensor fusion) If you have multiple sensors, you can simply repeat step 2 multiple times. This is especially useful if you have sensors that only occasionally give you data (like detecting a landmark).

When is a Kalman filter good to use?
▪ When control/sensor noise is well approximated by a Gaussian (e.g. GPS and car/robot controls are usually decently approximated this way).
▪ When the state (x) is representable by just a Gaussian.
▪ Example of a bad case: a bird flying straight at a pole. The expected location is best approximated by two Gaussians, one on either side of the pole, not by a single Gaussian at the pole!

What if the environment is really complex and you have no idea where you are? Even though you can see landmarks, there is a lot of ambiguity in what positions they suggest. Gaussians are not the best way to capture uncertainty.

Use Monte Carlo Localization. How it works:
▪ Represent our estimated position and uncertainty (our “belief”) using a constant‐size set of “particles”.
▪ Think of this as “sampling” from a probability distribution; particle “density” = probability.
▪ Initially, particles are distributed everywhere.
▪ Observation of corridors narrows the possibilities (a bimodal distribution).
▪ More movement disambiguates the two cases (now more Gaussian‐like).

Example world (figure: a five‐cell hallway (0,0) through (4,0) with corridor ends, plus four unique offices at (0,1), (4,1), (0,-1), (4,-1); North is up, East is right):
▪ My world consists of hallways, corridor ends, and 4 unique offices.

Motion model: Pr(x_{t+1} | x_t, action_t)
▪ Extremely simple model: move using a compass (N, S, E, W).
▪ Pr(stay) = 0.1 (fail to move); Pr(succeed) = 0.9
▪ Pr also depends on position: e.g. if an obstacle (like a wall) blocks the move, then Pr(stay) = 1.

Sensor model: Pr(z_t | x_t)
▪ Depends on where you are standing, but also on your error in feature sensing.
▪ Pr(hallway detection | (1,0)) = 0.8
▪ Pr(end detection | (1,0)) = 0.2 (error!)
▪ i.e. there is a chance you may think you are at the end of the corridor instead of in a hallway…
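The sensor update on this grid world can be sketched as weight-then-resample. The cell layout and the 0.8/0.2 hallway probabilities follow the slide; treating the offices as perfectly distinguishable, and the helper names, are my own simplifying assumptions:

```python
import random

random.seed(0)

# Free cells: a 5-cell corridor plus 4 offices (as on the slide).
corridor = [(x, 0) for x in range(5)]
offices = [(0, 1), (4, 1), (0, -1), (4, -1)]
cells = corridor + offices

def p_sense(z, cell):
    """Pr(z | x): corridor ends read 'end' 80% of the time, middle
    cells read 'hallway' 80% of the time (0.2 error rate either way);
    offices are assumed perfectly distinguishable."""
    if cell in offices:
        return 1.0 if z == "office" else 0.0
    if cell in ((0, 0), (4, 0)):            # corridor ends
        return 0.8 if z == "end" else 0.2
    return 0.8 if z == "hallway" else 0.2   # middle of the hallway

# Start with N particles spread uniformly over the cells.
N = 900
particles = [random.choice(cells) for _ in range(N)]

# Sensor says "hallway": weight every particle, then resample N
# particles from the weight distribution.
weights = [p_sense("hallway", p) for p in particles]
particles = random.choices(particles, weights=weights, k=N)

# Office particles had zero weight, so only corridor cells survive.
print(all(p in corridor for p in particles))  # -> True
```

After this one update, particle density already concentrates in the middle of the hallway (weight 0.8) relative to the ends (weight 0.2), which is the narrowing the slide describes.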
Basic question: Where am I?
▪ Instead of a Gaussian, we will represent position by a fixed number of particles distributed over space.
▪ Otherwise, just like the Kalman filter!
(figure: the same grid world, North up and East right)

At the beginning of time
▪ I could be anywhere with equal likelihood.
▪ With N particles and d locations, on average N/d particles end up in each location.

Take a sensor reading and get “evidence”
▪ Sensor => in a hallway
▪ Weight each location’s particles by likelihood: Pr(x_t | you detected a hallway)
▪ Resample N particles, but from the distribution of weights.
▪ You now get a new particle distribution that represents your believed location.

Take a motion step
▪ Let’s say you move west 1 spot.
▪ Use your motion model to predict what will happen.
▪ Consider location (1,0) and pretend there are 10 particles in it.
▪ If you take a step west, there is a 90% chance you succeed and move to (0,0).
▪ There is a 10% chance you will not move and end up still in (1,0).

Resample
▪ Roll the dice with a 90% chance of success for each of the 30 particles, and move each particle to its new location.

Then repeat: take a sensor reading and reduce your uncertainty!
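The worked motion step can be simulated with one dice roll per particle (reading the start cell as (1,0), since a west step from there reaches (0,0)); the cell set and helper name are my own encoding of the slide's map:

```python
import random

random.seed(1)

# Free cells of the slide's world: 5-cell corridor plus 4 offices.
cells = {(x, 0) for x in range(5)} | {(0, 1), (4, 1), (0, -1), (4, -1)}

def move_particle(cell, action):
    """Slide's motion model: 90% chance the step succeeds, 10% chance
    the robot stays put; a move into a wall always stays (Pr(stay)=1)."""
    dx, dy = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}[action]
    target = (cell[0] + dx, cell[1] + dy)
    if target not in cells:               # obstacle/wall: cannot move
        return cell
    return target if random.random() < 0.9 else cell

# 10 particles at (1,0), all ordered one step west.
particles = [(1, 0)] * 10
particles = [move_particle(p, "W") for p in particles]
# On average 9 particles land in (0,0) and 1 stays in (1,0).
print(particles.count((0, 0)), particles.count((1, 0)))
```

Alternating this motion step with the sensor weight-and-resample step is the whole Monte Carlo Localization loop: motion spreads the particles out, sensing pulls them back together.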
Dead‐reckoning (motion)
Landmarks (sensing)
Uncertainty in motion and sensing
▪ Kalman Filters (also known as State Estimation)
▪ Particle Filters (also known as Monte Carlo Localization)
Who are the world’s best localizers?
▪ Desert ant! Path integration and sun compass (Muller & Wehner, PNAS 1988)
▪ Honey bees! Optical flow and sun compass
▪ Argentine ant! Pheromone trails (aka bread crumbs)