
Luke Cole <cole@lc.homedns.org>, http://cole.homedns.org, 31 May 2006

1 Introduction

Robot navigation is the task of moving an autonomous robot safely from one location to another. It involves three primary questions [Leonard and Durrant-Whyte, 1991]:

• Where am I? — known as robot localisation.
• Where am I going? — known as goal recognition.
• How do I get there? — known as path planning.

This project investigated, implemented and experimented with a simulation of the localisation task using Monte Carlo techniques. Answering the question "Where am I?" from the robot's point of view is the problem of estimating the robot's pose (position and orientation) relative to its environment. The pose typically comprises the x and y coordinates and heading direction (orientation) of the robot in a global coordinate system, and is often represented by a probability distribution. The localisation technique used in this project is based on a particle filter, an algorithm using a Monte Carlo method, and is commonly known as Monte Carlo Localisation (MCL). The MCL developed worked very well in simulation: once the robot determined its pose, it was able to continue tracking its pose for the rest of the journey.

2 Motivation

In 1920, Karel Capek presented the play R.U.R. (Rossum's Universal Robots), which showed a future in which all work is automated. A worker was called a robot, a term well known today as "a mechanical device that sometimes resembles a human and is capable of performing a variety of often complex human tasks on command or by being programmed in advance" [Mifflin, 1995]. Robots would allow humans to concentrate on more important tasks, instead of, say, cleaning one's house. Currently the community employs other humans to do its unwanted tasks, but delegation to humans may not be an ideal solution: for some tasks, humans come with major limitations such as the need for sleep and food and a desire for money. A robot, on the other hand, does not need to sleep, eat or be paid, so the development of robots for the community is promising, and an autonomous mobile robot would clearly be of great benefit to human society. Some may argue that robots will create unemployment. One response, from a well known scientist specialising in robotics and computer vision, is that "at the moment the highest number of robots are in the nations with the lowest levels of unemployment. If we can make robots for the consumer, that will create a lot of employment." [Zelinsky, 2000]. A robot worker must be mobile, which raises the problem of robot navigation mentioned in the introduction. Robot navigation is occasionally referred to as the most fundamental problem of robotics [Cox, 1991], so a project looking at any area of the navigation problem, such as localisation, would be of great benefit to robotics and the robot dream.

There are a number of problems faced by mobile robot localisation [Thrun et al., 2000]:

• Position tracking - the robot is given its initial location and must track it as it moves.
• Global positioning - the robot is initially lost, and must deal with multiple ideas (hypotheses) about its location.
• Kidnapped robot problem - the robot is suddenly 'kidnapped' to a new location. Here the robot needs to determine that it has been kidnapped and, furthermore, must work out its new location.

The global positioning problem is a special case of the kidnapped robot problem in which the robot is told it has been kidnapped [Fox et al., 1999]. Ideally a solution should handle all these problems in real time, on a real robot, in the real world. For the purposes of this project, however, only a simulation was used to test these problems via the MCL algorithm presented in this report.

There has been much research on MCL, such as [Fox et al., 2000] and [Thrun et al., 2000]; most of the well known work stems from Sebastian Thrun [Thrun, 2006].

3 Localisation Theory

This section describes localisation theory by first outlining the information available to a robot, then discussing methods for representing a pose estimate, and finally outlining the approach used in this project.

3.1 Available Information

Simply speaking, a robot either drives around or senses its environment, which gives rise to the two different kinds of information available. The first is driving information, also known as relative position measurements, gathered by the robot itself as it drives around. The second is sensor information, also known as absolute position measurements, from observations of the environment as the robot moves.

The most obvious relative position measurement is odometry, which comes from the Greek words for road and measure [Moravec, 1998]. The problem cannot be solved purely with odometry, however, since odometry information generally comes from a drive system that naturally has drift and slippage, leading to errors in both distance and orientation [Borenstein, 1995] that accumulate over time. Errors in orientation generally cause the largest pose uncertainties, and differently sized wheels can also cause errors [Shoval and Borenstein, 1995]. Odometry is also sensitive to terrain: if the robot navigates over rocks, for example, it may think it has travelled further than it actually has. Even though odometry causes increasing uncertainty over time, it is rather accurate over short periods, and since it is an inexpensive way to gain position information it is an important source of information for localisation.

Sensing information is typically obtained by representing the environment as a set of landmarks such as lines and doors. When the global position of a landmark is known and an observation of the environment captures a subset of the landmarks, the robot's position can be determined. Landmarks are represented by a map or model, which can be built in a number of ways (topological, feature-based, grid-based and appearance-based) and is either learned by the robot or given to it prior to navigation [Singhal, 1997]. The map also provides a global reference frame in which the robot can approximate its pose.

3.2 Pose Estimation

Estimating the pose is very much about representing uncertain pose information [Fox et al., 2000]. The pose of the robot is often represented by a probability distribution; if the distribution is a Gaussian, the mean and covariance give the approximation of the robot's pose. The distribution is written:

    p(x_t | d_{0..t})    (1)

Here x is the state, x_t is the state at time t and d_{0..t} are the measurements from time 0 up to time t. In robot localisation there are two types of measurement, as discussed in Section 3.1. Relative position measurements will be denoted by a, for action, and absolute position measurements by o, for observation, making Equation 1:

    p(x_t | o_t, a_{t-1}, o_{t-1}, a_{t-2}, ..., a_0, o_0)    (2)

Estimating Equation 2 is the key idea of Bayesian filtering and underlies most of the pose estimation techniques presented below. There are many approaches to pose estimation; the earliest were based on dead reckoning, which has been used ever since people started travelling around [Worsley, 1978]. The following list outlines the main techniques:

• Dead reckoning: given an initial pose, odometry information allows one to calculate the final pose.
• Topological: the uncertainty of the pose information is stored as a graph. The location of the robot is returned as a particular area or room instead of actual coordinates.
• Gaussian PDF localisation: the pose is stored as a Gaussian probability density function (PDF).
• Gaussian sum PDF localisation: when the pose is multi-modal, multiple Gaussian PDFs are used.
• Position probability grids: the pose space is divided into cells, each representing a possible pose, giving a discrete approximation of the PDF.
• Monte Carlo Localisation: the PDF is represented by a set of samples; an efficient approximation of the PDF is made with a finite sample set.

Gaussian PDF localisation and Gaussian sum PDF localisation use Kalman filters, which have proved very successful for the position tracking problem [Leonard and Durrant-Whyte, 1992], quite effectively compensating for small, incremental errors. Position probability grids use Markov localisation to solve the global positioning problem; however, this method suffers from two problems: computational expense, and accuracy limited to the resolution of the grid. Monte Carlo Localisation (MCL) solves both the global positioning and kidnapping problems, and improves on grid-based Markov localisation by an order of magnitude in efficiency and accuracy.

3.3 Bayesian Filtering

Particle filters provide solutions to all the problems given in Section 2. To understand the implementation of particle filters, let us first derive the Bayes filter, which is the basis of Markov localisation. First, transform Equation 2 by Bayes' rule [Couch, 2001]:

    p(x_t | o_t, a_{t-1}, ..., o_0) = p(o_t | x_t, a_{t-1}, ..., o_0) p(x_t | a_{t-1}, ..., o_0) / p(o_t | a_{t-1}, d_{0..t-1})    (3)

A Bayesian filter assumes the environment is Markov; that is, if the current state is known, past measurements are independent of future measurements:

    p(o_t | x_t, a_{t-1}, ..., o_0) = p(o_t | x_t)    (4)

which simplifies Equation 3 to:

    p(x_t | o_t, a_{t-1}, ..., o_0) = p(o_t | x_t) p(x_t | a_{t-1}, ..., o_0) / p(o_t | a_{t-1}, d_{0..t-1})    (5)

The recursive form is then obtained by integrating out the pose x_{t-1} at time t-1, yielding from Equation 5:

    p(x_t | o_t, a_{t-1}, ..., o_0) = [p(o_t | x_t) / p(o_t | a_{t-1}, d_{0..t-1})] ∫ p(x_t | x_{t-1}, a_{t-1}, ..., o_0) p(x_{t-1} | a_{t-1}, ..., o_0) dx_{t-1}    (6)

The Markov assumption also tells us that if x_{t-1} and a_{t-1} are known, the state x_t is conditionally independent of the past measurements o_1, ..., o_{t-1} and a_1, ..., a_{t-2}:

    p(x_t | x_{t-1}, a_{t-1}, ..., o_0) = p(x_t | x_{t-1}, a_{t-1})    (7)

So Equation 6 can now be put into the recursive form known as the Bayes filter:

    p(x_t | o_t, a_{t-1}, ..., o_0) = η p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) p(x_{t-1} | o_{t-1}, a_{t-2}, ..., a_0, o_0) dx_{t-1}    (8)
                                    = η ρ ∫ α p(x_{t-1} | o_{t-1}, a_{t-2}, ..., a_0, o_0) dx_{t-1}    (9)

where η = p(o_t | a_{t-1}, d_{0..t-1})^{-1} is the normalising constant, and α = p(x_t | x_{t-1}, a_{t-1}) and ρ = p(o_t | x_t) are known as the motion model and perceptual model respectively. The equation shows that given α, ρ and the previous approximation of the robot's pose p(x_{t-1} | o_{t-1}, a_{t-2}, ..., a_0, o_0), the current approximation p(x_t | o_t, a_{t-1}, ..., a_0, o_0) can be determined. Hence, given the initial pose probability distribution, the robot's pose can always be estimated as the robot travels within its environment. The Bayes filter is hard to compute directly: the probability distribution is continuous, so storing it and evaluating the integral are both difficult. For these reasons particle filters are used, which are discussed in the following section.

3.4 Implementation of Particle Filters

The idea of the particle filter algorithm is to represent the distribution in Equation 2 by a set of n weighted samples distributed according to it:

    {x_i, p_i}_{i=1,...,n}    (10)

where x_i is a sample (or particle, which is a state) and the p_i are called the importance factors, which sum to one (making the set a probability distribution) and determine the weight of each sample.

Before the algorithm is presented, the initialisation of the samples is discussed with reference to each of the localisation problems mentioned in Section 2. In the position tracking problem the initial pose is known, so the initial distribution is typically a set of samples distributed in Gaussian fashion around the given initial pose. In the global localisation problem the initial distribution is typically a set of samples uniformly distributed over the state space (environment). The kidnapped robot problem is typically initialised in the same way as the global localisation problem, once the robot has determined it was kidnapped. The particle filter algorithm is generally performed as follows:

1. Sample n particles x_i from p(x_{t-1} | o_{t-1}, a_{t-2}, ..., a_0, o_0) according to the importance factors p_i.

2. The robot moves, acquiring relative position information a_{t-1}; the samples are moved according to a_{t-1} using the motion model.

3. The robot makes an observation, acquiring absolute position information o_t, which yields the importance factors: the weight of each sample x_i is calculated using the perceptual model.

4. The new importance factors are normalised so they sum to one.

5. Resample: draw new particles from the existing particle set according to the weights, replace the particle set with the new one, and go to step 2.

Figure 3 shows an image sequence of the particle filter in progress. As can be seen, the particle spread is reduced as the robot drives around and makes observations.
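The steps above can be sketched as one filter iteration in Python. This is a minimal sketch, not the project's actual implementation: `motion_model` and `perceptual_model` are placeholders for the models described in the following two sections.

```python
import random

def mcl_step(particles, weights, action, observation,
             motion_model, perceptual_model):
    """One particle filter iteration (steps 1-4 above; step 5,
    resampling, is step 1 of the next call).

    particles: list of poses, e.g. (x, y, theta) tuples
    weights:   importance factors, summing to one
    action:    relative position measurement a (odometry)
    observation: absolute position measurement o (sensor reading)
    motion_model(pose, action) -> new (noisy) pose
    perceptual_model(pose, observation) -> unnormalised weight
    """
    n = len(particles)
    # 1. Sample n particles according to the importance factors.
    sampled = random.choices(particles, weights=weights, k=n)
    # 2. Move each sample according to the odometry.
    moved = [motion_model(p, action) for p in sampled]
    # 3. Weight each sample against the observation.
    new_weights = [perceptual_model(p, observation) for p in moved]
    # 4. Normalise the importance factors so they sum to one.
    total = sum(new_weights) or 1.0
    return moved, [w / total for w in new_weights]
```

Repeated calls concentrate the particle set around poses that explain the observations, which is the convergence behaviour shown in the program output.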

3.5 Motion Model

The motion model provides the odometry information (translation, rotation) from the robot and comes directly from the robot's drive system. However, since this project is a complete simulation, this section outlines how the odometry information is modelled. Because the project investigates localisation, as opposed to its parent problem, robot navigation, the program is given a set of poses defining the robot's route within the map. The odometry information is then simulated via the vector difference between the current pose and the previous nth pose. No sensor is perfect, so errors need to be introduced; this project introduced two forms of error: resolution and random error. For example, with a translation resolution of 0.002 metres and a maximum random error of 0.2 metres, a true translation of 0.54321 metres would return a reading of 0.542 ± 0.2 metres. Once the odometry information is known, each particle is updated using the dead reckoning equation [Rekleitis, 2003]:

    x' = x + ρ cos(α)
    y' = y + ρ sin(α)    (11)
    θ' = θ + α

where ρ and α are the odometry translation and rotation respectively, x', y' and θ' is the new pose, and x, y and θ is the previously known pose.
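The error model and the update of Equation 11 might be sketched as follows. The resolution and error values are the ones quoted above; truncating to the resolution matches the 0.542 m example, though the report does not state whether it truncates or rounds.

```python
import math
import random

def simulated_odometry(true_translation, resolution=0.002, max_error=0.2):
    """Quantise the true translation to the sensor resolution
    (truncating, so 0.54321 m reads as 0.542 m) and add a
    uniform random error of up to +/- max_error."""
    quantised = math.floor(true_translation / resolution) * resolution
    return quantised + random.uniform(-max_error, max_error)

def dead_reckoning(pose, rho, alpha):
    """Equation 11: given odometry translation rho and rotation
    alpha, compute the new pose (x', y', theta') from the
    previously known pose (x, y, theta)."""
    x, y, theta = pose
    return (x + rho * math.cos(alpha),
            y + rho * math.sin(alpha),
            theta + alpha)
```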


3.6 Perceptual Model

The perceptual model weights a given pose through the use of a sensor, which means a unique perceptual model is required for each sensor; in this project only a ring sonar sensor was used. A sonar measures the line-of-sight distance to the nearest obstacle. A ring sonar sensor is simply a collection of N sonars, each angularly displaced by 2π/N rad, where N is the number of sonars used. Given a map of obstacles, one can model the sonar readings at any location and orientation within the map. This allows the robot to compare its current sonar readings with the modelled sonar readings of where it thinks it is. These comparisons allow each particle to be weighted via:

    ω_i = 1 − (1 / (Nμ)) Σ_{k=1}^{N} (a_k − b_k)²    (12)

where N is the number of sonars, a_k is the kth modelled sonar distance, b_k is the kth current sonar distance, and μ is the sonar reading distance limit. The map, which can be seen in the sequence in Figure 1, is a binary image where black indicates the walls; it is clear that, given a pose, the N obstacle distances can be computed. However, errors must also be introduced to these measurements: resolution, random error and the sonar reading distance limit. Since this project simulates MCL, the current sonar distances also have to be modelled, though this is no different from modelling the sonar distances at any given pose.
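The weighting comparison might be sketched as below. The normalisation by Nμ follows the reconstruction of Equation 12 above; clamping negative weights to zero is an implementation choice not stated in the text, added so the later normalisation step stays well behaved.

```python
def particle_weight(modelled, measured, mu):
    """Weight a particle by comparing its modelled sonar distances
    (a_k) against the current sonar readings (b_k), per Equation 12.
    `mu` is the sonar reading distance limit. Very poor matches
    would give a negative weight, so the result is clamped at zero
    (an assumption, not from the report)."""
    n = len(modelled)
    err = sum((a - b) ** 2 for a, b in zip(modelled, measured))
    return max(0.0, 1.0 - err / (n * mu))
```

A perfect match gives weight 1; the weight falls off with the mean squared difference between modelled and measured distances.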

3.7 Resampling

The resampling step of the particle filter can be implemented in a number of ways [Douc et al., 2005]; this project used one of the more common techniques, Sampling/Importance Resampling (SIR). In SIR the particles are separated by thresholding each particle's weight: if a particle's weight is greater than the threshold it is kept, but if it is less than the threshold it is removed from the set and one of the kept particles is duplicated. Since MCL is a Monte Carlo method, it is common practice to copy the kept particle with a very similar location and orientation rather than duplicating it exactly. Ideally the threshold would be determined at each resampling step; for the purposes of this project the most obvious threshold, (particle count)^{-1}, is used at every resampling step.

The lower the number of particles, the less computation is required, so a function has been implemented to determine the number of particles to use

function PostTasks(pose_xy_cov, sensor_weight, particle_count)
{
    if (pose_xy_cov < SMALL_XY_COV && sensor_weight < HIGH_WEIGHT) {
        /* confident pose but poor sensor match: the robot is lost */
        particle_count = PARTICLE_COUNT_MAX;
        InitialiseUniformPose(particle_count);
    } else if (pose_xy_cov < SMALL_XY_COV &&
               particle_count > PARTICLE_COUNT_MIN) {
        particle_count = DecreaseParticleCount(particle_count);
    } else if (particle_count < PARTICLE_COUNT_MAX) {
        particle_count = IncreaseParticleCount(particle_count);
    }
    return particle_count;
}

Figure 1: Resampling post tasks: is the robot lost? Determine the new particle count.


after each resampling step. The basic concept is that once the x, y covariance is small the robot must be confident of its pose estimate; in that case the particle count is reduced, otherwise it is increased. Increasing or decreasing the particle count can be done in a number of ways to improve performance; in this project the particle count is simply increased or decreased by 5% of the current number of particles. Even though the robot may be confident of its pose, it may be wrong, so the robot must test its confidence to ensure it is not lost. This is done using Equation 12, where a_k is the kth sonar distance modelled using the pose mean, once the robot is confident of its pose. If the resulting weight is not very high, the robot must be lost, so the particles are uniformly redistributed across the map. These resampling post tasks can be further understood from the algorithm in Figure 1.
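The SIR step with the (particle count)^{-1} threshold described above might be sketched as follows. The jitter magnitude is an illustrative value, not one given in the report.

```python
import random

def sir_resample(particles, weights, jitter=0.01):
    """Sampling/Importance Resampling as described above: particles
    with weight below the 1/n threshold are dropped; survivors are
    duplicated with a small perturbation to location and orientation
    rather than copied exactly."""
    n = len(particles)
    threshold = 1.0 / n
    kept = [p for p, w in zip(particles, weights) if w >= threshold]
    if not kept:
        # degenerate case (all weights below threshold): keep everything
        kept = list(particles)
    out = []
    while len(out) < n:
        x, y, theta = random.choice(kept)
        # duplicate with a very similar pose instead of an exact copy
        out.append((x + random.uniform(-jitter, jitter),
                    y + random.uniform(-jitter, jitter),
                    theta + random.uniform(-jitter, jitter)))
    return out
```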

4 Results

The localiser developed was tested for accuracy over four unique routes with different program variables: maximum particle count, number of sonars, sonar resolution and odometry resolution. Figure 1 shows an image sequence (left to right, top to bottom) of the program output for one such experiment. The blue dots¹ represent the particle poses. The red line indicates the true robot pose (an instance of the given route). Finally, the green lines represent the sonar readings from the robot's true pose, in this case a 24-sonar ring sensor. The sequence shows the expected result of the particles converging around the true robot pose as time increases.

The first experiments were to determine the number of sonars required and the maximum sensor resolutions that can be achieved. Figure 2 shows three sets of graphs. The first set, the entire left column, shows the error versus time using various sized ring sonar sensors tested on identical routes. The right column contains the two sets of resolution experiments: the top set varies the sonar resolution and the bottom set the odometry translation resolution. It is clear from these graphs that the ring sonar sensor must have at least 12 sonars. Furthermore, the sonar and odometry translation resolutions should be under 100 cm and 2 cm respectively. With this information the program was set with a 24-sonar ring sensor, 1 cm odometry translation resolution, 0.5 degree odometry orientation resolution and a 4 cm sonar resolution with a 4 metre range, which are reasonable specifications for these types of sensors. These settings were then used to test

¹ They are actually very small lines (vectors).


Table 1: Program output: the particles (blue) converge around the true robot pose (red). The green lines represent the sonar readings from the robot's true pose.


[Figure 2: error (metres) versus time, 2000 particles. Left column panels: 3, 6, 12 and 24 ring sonar sensors. Right column panels: 100 cm and 1 m sonar resolution; 2 cm and 100 cm odometry translation resolution.]

Table 2: Left column: one route using different numbers of sonars. Right column: one route using different odometry and sonar resolutions.

four unique routes within the map, each route being tested with 2000 and 1000 particles. Figure 3 shows these eight experiments. The left column shows the error versus time for 2000 particles and the right column the error versus time for 1000 particles. For the 2000 particle experiments, the total error always converges to zero. The individual translation and orientation errors are also shown, demonstrating that orientation affects the error the most, as mentioned in Section 3. For the 1000 particle experiments, only the total error is shown, as these graphs demonstrate the robot getting lost and then recovering once again. An animation of the program output for each route can be found at [Cole, 2006].

5 Future Work

Since MCL is still an active research area, there is much future work. This project used only a sonar sensor; however, any sensor, or combination of sensors, could be used. This would require a perceptual model for each sensor, and the fusion of these perceptual models would need to be researched. The algorithm is given a few parameters; it would be better if the algorithm determined the parameters at run time. For example, the program could use a 24-sonar ring sensor but, as the robot gains confidence in its pose, disable particular sonars. This would decrease the required processing² and therefore improve the real time response.

6 Conclusion

It is clear from the results that the algorithm works very well. However, because all the results are from a simulation, the algorithm may not work as well on a real robot in the real world in real time: even though the simulation models the sensors so that their readings are similar to the real world, the real world can be very surprising. Given the time frame of the project and the resources available, a fully functional simulation of MCL for sonar sensors is an achievement in itself.

References

² Battery power is also saved on a real robot.

[Figure 3: error (metres) versus time for four unique routes. Left column panels: 2000 particles, showing translation, orientation and total error. Right column panels: 1000 particles, showing total error.]

Table 3: Four unique routes within the map, each tested with 2000 and 1000 particles.

[Borenstein, 1995] J. Borenstein. Control and kinematic design of multi-degree-of-freedom robots with compliant linkage. IEEE Transactions on Robotics and Automation, 1995.

[Cole, 2006] Luke Cole. MCL sonar experiments, 2006. http://cole.homedns.org/fs.php?subject=thumbnails&dir=/my media/software/localisation/sonar/.

[Couch, 2001] L. Couch. Digital and Analog Communication Systems. Prentice Hall, 2001.

[Cox, 1991] I. Cox. Blanche: an experiment in guidance and navigation of an autonomous robot vehicle. IEEE Transactions on Robotics and Automation, 7(2):193–204, 1991.

[Douc et al., 2005] R. Douc, O. Cappe, and E. Moulines. Comparison of resampling schemes for particle filtering. In Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (ISPA), page 64, 2005.

[Fox et al., 1999] D. Fox, W. Burgard, and S. Thrun. Markov localization for mobile robots in dynamic environments. Journal of Artificial Intelligence Research, 11, 1999.

[Fox et al., 2000] D. Fox, S. Thrun, W. Burgard, and F. Dellaert. Sequential Monte Carlo Methods in Practice. Forthcoming, 2000.

[Leonard and Durrant-Whyte, 1991] J. Leonard and H. Durrant-Whyte. Mobile robot localization by tracking geometric beacons. IEEE Transactions on Robotics and Automation, 1991.

[Leonard and Durrant-Whyte, 1992] J. Leonard and H. Durrant-Whyte. Directed Sonar Sensing for Mobile Robot Navigation. Kluwer, Dordrecht, The Netherlands, 1992.

[Mifflin, 1995] H. Mifflin. The American Heritage Stedman's Medical Dictionary. Houghton Mifflin Company, 1995.

[Moravec, 1998] H. Moravec. When will the computer hardware match the human brain? Journal of Evolution and Technology, 1, 1998.

[Rekleitis, 2003] Ioannis Rekleitis. Cooperative Localization and Multi-Robot Exploration. School of Computer Science, McGill University, Montreal, Quebec, Canada, 2003.

[Shoval and Borenstein, 1995] S. Shoval and J. Borenstein. Measurement of angular position of a mobile robot using ultrasonic sensors. 1995.

[Singhal, 1997] A. Singhal. Issues in autonomous mobile robot navigation, May 1997. A survey paper towards partial fulfilment of MS degree requirements.

[Thrun et al., 2000] S. Thrun, D. Fox, W. Burgard, and F. Dellaert. Robust Monte Carlo localization for mobile robots. Artificial Intelligence, 128(1-2):99–141, 2000.

[Thrun, 2006] Sebastian Thrun, 2006. http://robots.stanford.edu/.

[Worsley, 1978] F. Worsley. Shackleton's Boat Journey. New York, 1978.

[Zelinsky, 2000] A. Zelinsky. The 21st century: the age of intelligent machines, May 2000. http://www.science.org.au/future/zelinsky.htm.

